\section{Introduction}
\label{}
A hot and dense partonic matter, thought to be a nearly perfect liquid
of quarks and gluons~\cite{Gyul}, has been created at the Relativistic
Heavy Ion Collider (RHIC) at Brookhaven National Laboratory
\cite{White-papers}. The study of the properties of this
matter is a topic of intense interest. One property of particular
interest at present is the exact value of the ratio of viscosity to entropy
density of the matter. To extract this value, one needs to compare
experimental data with relativistic viscous hydrodynamics
simulations~\cite{Roma1,Roma2,Song1,Song2,Chaud}.
Two different kinds of relativistic viscous hydrodynamics have been
developed so far. One is the so-called first-order relativistic
viscous hydrodynamics, first developed by Eckart and modified
by Landau and Lifshitz \cite{Eckart,Landau}; many calculations
have been done with it since then \cite{Teany}. But there are two problems in
these approaches: one is that dissipative fluctuations may propagate at
a speed larger than the speed of light, which leads to a causality
problem; the other is that the solutions may develop
instabilities \cite{Muronga}. These problems have been studied
extensively in Refs.~\cite{Hisc1,Koun}. The other kind is the second-order
theories, among which the Israel-Stewart theory is the most widely
used so far \cite{Isra}. However, before the Israel-Stewart theory is
applied to describe heavy-ion collisions, one should know whether it
is stable or not. Up to now, there have been several stability analyses
of this issue \cite{Hisc2,Deni,Deni2,Baier}. In
\cite{Hisc2,Deni,Deni2,Baier}, the authors used the plane-wave
perturbation method to study the stability of the theory around a
hydrostatic state and discussed the regime of validity of
hydrodynamics. In addition, in \cite{Deni2}, using the Lyapunov direct
method as in \cite{Koun}, the authors analyzed the stability of the
scaling solution with only bulk viscosity and presented the stable
regions of the scaling solution for both homogeneous and
inhomogeneous perturbations. Several groups have compared this
theory with experimental data and shown that it can be
used without problems when the shear and bulk viscosities
are small \cite{Roma1,Roma2,Song1,Song2,Chaud,Song3}.
It is well known that hydrodynamics fails to describe the HBT
results, i.e., the calculated ratio of the HBT radius in the outward direction
to that in the sideward direction is higher than the experimental
data \cite{Pratt}. A recent attempt to reconcile them is to take bulk
viscosity into account. But in \cite{Pratt,Pratt2,Li}, the authors
found that the HBT radii hardly change if the peak of the bulk viscosity
is small and the other parameters, such as the equation of state and the
initial condition, are kept unchanged. However, in \cite{Torri}
the authors analyzed the stability of the Navier-Stokes theory with a
bulk viscosity which has a large peak around $T_c$ and found that
there exist inhomogeneous modes which will tear the system
into droplets. If this is the case, it will help resolve the
interferometric data \cite{Torri2}. But there are some basic problems
with the Navier-Stokes theory, as mentioned above, and the authors did
not consider the effects of shear viscosity. In this
paper we use the same method to study whether this could happen in the
Israel-Stewart theory and what the role of the shear viscosity is.
\section{Israel-Stewart theory and linear perturbations}
The general hydrodynamic equations arise from the local conservation
of energy and momentum \cite{Kolb}
\begin{eqnarray}
\label{cons}
\partial_\mu T^{\mu \nu}(x)&=&0,
\end{eqnarray}
where the energy-momentum tensor without heat conduction is
decomposed into the following form \cite{Chaud}
\begin{eqnarray}
\label{tmunu}
T^{\mu\nu}&=& eu^{\mu}u^{\nu} - (p{+}\Pi)\Delta^{\mu \nu}
+ \pi^{\mu \nu},
\end{eqnarray}
where $e$ and $p$ are the local energy density and thermal
equilibrium pressure, and $u^{\mu}$ is the 4-velocity of the energy
flow which obeys $u^\mu u_\mu = 1$. $\Pi$ is the bulk viscous
pressure, and $\Delta^{\mu\nu}=g^{\mu\nu}-u^{\mu}u^{\nu}$ is
transverse to the flow velocity, that is $\Delta^{\mu\nu}u_{\nu}=0$.
$\pi^{\mu \nu}$ is the traceless shear viscous pressure tensor. With
Eq.~(\ref{cons}) and Eq.~(\ref{tmunu}), we can get the evolution
equations of the energy density and the 4-velocity of energy flow.
In the Israel-Stewart approach, the kinetic evolution equations of the
bulk pressure $\Pi$ and the traceless shear viscous tensor
$\pi^{\mu\nu}$ are \cite{Chaud}
\begin{eqnarray}
\label{Pi}
D{\Pi}&=&-\frac{1}{\tau_{\Pi}}\big(\Pi+\zeta \nabla{\cdot}u\big),
\end{eqnarray}
\begin{eqnarray}
D\pi^{\mu \nu} &=&-\frac{1}{\tau_{\pi}}\big(\pi^{\mu\nu}-2\eta\langle\nabla^\mu u^\nu\rangle\big)
\label{pi}
-\bigl(u^\mu\pi^{\nu\alpha} + u^\nu\pi^{\mu\alpha}\bigr)
Du_\alpha,
\end{eqnarray}
where $D$ is the time derivative in the local fluid rest frame and
fulfills $D=u^{\mu}\partial_{\mu}$. The angular bracket notation is
defined by $\langle\nabla^\mu u^\nu\rangle=\frac{1}{2}(\nabla^\mu
u^\nu{+}\nabla^\nu u^\mu) -
\frac{1}{3}(\nabla{\cdot}u)\Delta^{\mu\nu}$. $\eta$ and $\zeta$
denote the shear and bulk viscosity coefficients, respectively.
$\tau_{\pi}$ and $\tau_{\Pi}$ are the relaxation times for the shear
tensor and the bulk pressure, respectively. They can be related to
$\eta$ and $\zeta$ as follows \cite{Muronga}
\begin{eqnarray}
\label{rela}
\tau_{\pi}=2\eta\beta_2, \qquad \tau_{\Pi}=\zeta\beta_0,
\end{eqnarray}
where $\beta_2$ and $\beta_0$ are the relaxation coefficients that
need to be calculated from other theories.
When hydrodynamics is applied to describe heavy-ion collisions,
one usually uses the symmetry of the collision system to simplify
the evolution equations. Thus we have the scaling solution, the 1+1
solution and the 2+1 solution \cite{Muller,Roma1,Roma2,Song1}. Here
we focus on the 1+1 solution of the Israel-Stewart theory. For the 1+1
solution, the system has boost-invariant longitudinal expansion and
transverse expansion in one dimension. For this geometry, it is
convenient to work in the co-moving and radial coordinates
$(\tau,r,\phi,\eta)$. We consider a small perturbation on top of the
backgrounds $\epsilon_0(\tau), \pi^{\mu}_{\nu 0}(\tau), \Pi_0(\tau)$,
which evolve with time: \begin{eqnarray} \label{total-e}
\epsilon(\tau,r)=\epsilon_0(\tau)+\delta\epsilon(\tau,r), \end{eqnarray}
\begin{eqnarray} \label{total-v} v(\tau,r)=\delta v(\tau,r), \end{eqnarray}
\begin{eqnarray} \label{total-pi} \pi^{\mu}_{\nu}(\tau,r)=\pi^{\mu}_{\nu
0}(\tau)+\delta\pi^{\mu}_{\nu}(\tau,r), \end{eqnarray}
\begin{eqnarray} \label{total-Pi} \Pi(\tau,r)=\Pi_0(\tau)+\delta\Pi(\tau,r).
\end{eqnarray}
With Eqs.~(\ref{cons})--(\ref{pi}) and
Eqs.~(\ref{total-e})--(\ref{total-Pi}), the evolution equations of the
backgrounds can be written as follows:
\begin{eqnarray}
\label{back-e}
\partial_{\tau}\epsilon_0=-(\epsilon_0+p_0+\Pi_0)\frac{1}{\tau}
+\pi_{\eta 0}^{\eta}\frac{1}{\tau},
\end{eqnarray}
\begin{eqnarray}
\label{back-piee} \tau_{\pi 0}\partial_{\tau}\pi_{\eta
0}^{\eta}+\pi_{\eta 0}^{\eta} =\frac{4}{3}\frac{\eta_0}{\tau},
\end{eqnarray}
\begin{eqnarray}
\label{back-pirr} \tau_{\pi 0}\partial_{\tau}\pi_{r0}^{r}+\pi_{r0}^{r}
=-\frac{2}{3}\frac{\eta_0}{\tau},
\end{eqnarray}
\begin{eqnarray}
\label{back-Pi} \tau_{\Pi
0}\partial_{\tau}\Pi_0+\Pi_0=-\frac{\zeta_0}{\tau}.
\end{eqnarray}
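For concreteness, Eqs.~(\ref{back-e}), (\ref{back-piee}) and (\ref{back-Pi})
form a closed set of coupled ODEs in $\tau$ once an equation of state and the
transport coefficients are specified. The following is a minimal numerical
sketch (not the code used for the figures below), assuming a conformal
equation of state $p=e/3$, a hypothetical normalization $T=Ce^{1/4}$, and
the relaxation-time parametrizations introduced later in the Results section:
\begin{verbatim}
# Minimal sketch: integrate the background equations (back-e),
# (back-piee) and (back-Pi) as coupled ODEs in tau.  Assumptions:
# conformal EoS p = e/3, T = C*e**0.25 with hypothetical normalization
# C, constant eta/s and zeta/s, and the relaxation times
# tau_pi = 6*(eta/s)/T and tau_Pi = b*zeta/(e+p) used in the Results.
from scipy.integrate import solve_ivp

eta_over_s, zeta_over_s, b, C = 0.2, 0.1, 1.0, 1.0

def rhs(tau, y):
    e, pi_ee, Pi = y                 # energy density, pi^eta_eta0, Pi_0
    p = e / 3.0
    T = C * e**0.25
    s = (e + p) / T                  # entropy density at zero mu
    eta, zeta = eta_over_s * s, zeta_over_s * s
    tau_pi = 6.0 * eta_over_s / T
    tau_Pi = b * zeta / (e + p)
    de = -(e + p + Pi) / tau + pi_ee / tau             # Eq. (back-e)
    dpi = ((4.0 / 3.0) * eta / tau - pi_ee) / tau_pi   # Eq. (back-piee)
    dPi = (-zeta / tau - Pi) / tau_Pi                  # Eq. (back-Pi)
    return [de, dpi, dPi]

sol = solve_ivp(rhs, (0.6, 10.0), [10.0, 0.0, 0.0], rtol=1e-8)
\end{verbatim}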
We can also get the evolution equations of the perturbations
\begin{eqnarray}
\label{del-v} &&\!\!
[(\epsilon_0+p_0+\Pi_0-\pi_{r0}^r)\partial_{\tau}+\partial_{\tau}(p_0+\Pi_0)
-(\partial_{\tau}+\frac{1}{\tau})\pi_{r0}^r+\frac{1}{\tau}\pi_{\eta
0}^{\eta}]\delta v
\nonumber\\
&&~~~~~~~~~~~~~~~~~~~~~~~~~~+\partial_r(\delta
p+\delta\Pi)-(\partial_r
+\frac{2}{r})\delta\pi_{r}^{r}-\frac{1}{r}\delta\pi_{\eta}^{\eta}=0,
\end{eqnarray}
\begin{eqnarray}
\label{del-e} &&\!\!\!\!\!\!\!\!
[(\epsilon_0+p_0+\Pi_0)(\partial_r+\frac{1}{r})-\pi_{r0}^{r}(\partial_r-\frac{1}{r})
+\frac{1}{r}\pi_{\eta 0}^{\eta}]\delta
v+\partial_{\tau}\delta\epsilon
\nonumber\\
&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\frac{1}{\tau}(\delta\epsilon+\delta
p+\delta\Pi)-\frac{1}{\tau}\delta\pi_{\eta}^{\eta}=0,
\end{eqnarray}
\begin{eqnarray}
\label{del-piee} \tau_{\pi
0}\partial_{\tau}\delta\pi_{\eta}^{\eta}+\delta\pi_{\eta}^{\eta}=
-\frac{\delta\tau_{\pi}}{\tau_{\pi
0}}(\frac{4}{3}\frac{\eta_0}{\tau}-\pi_{\eta 0}^{\eta})
-\frac{2}{3}\eta_0(\partial_r\delta v+\frac{\delta v}{r})
+\frac{4}{3}\frac{\delta\eta}{\tau},
\end{eqnarray}
\begin{eqnarray}
\label{del-pirr} \tau_{\pi
}\partial_{\tau}\delta\pi_r^r+\delta\pi_r^r=
\frac{\delta\tau_{\pi}}{\tau_{\pi
0}}(\frac{2}{3}\frac{\eta_0}{\tau}+\pi_{r0}^r)
-\frac{2}{3}\eta_0(-2\partial_r\delta v+\frac{\delta v}{r})
-\frac{2}{3}\frac{\delta\eta}{\tau},
\end{eqnarray}
\begin{eqnarray}
\label{del-Pi} \tau_{\Pi
0}\partial_{\tau}\delta\Pi+\delta\Pi=\frac{\delta\tau_{\Pi}}{\tau_{\Pi
0}} (\Pi_0+\frac{\zeta_0}{\tau})-\zeta_0(\partial_r\delta
v+\frac{\delta v}{r}) -\frac{\delta\zeta}{\tau}.
\end{eqnarray}
In order to get rid of the space-like derivatives, we perform a
Hankel transform (Fourier-Bessel transform), which is suited to the
geometry used here \cite{Roma1}. After introducing \begin{eqnarray}
\label{delpirree}
\delta\tilde{\pi}=(\partial_r+\frac{2}{r})\delta\pi_r^r+\frac{1}{r}\delta\pi_{\eta}^{\eta}
\end{eqnarray} we do the following Hankel transforms, \begin{eqnarray} \label{Ha1}
\delta{v}(\tau,r)=\int^\infty_0{\rm
d}kJ_1(kr)k\delta\tilde{v}(\tau,k), \end{eqnarray} \begin{eqnarray} \label{Ha2}
\delta{\epsilon}(\tau,r)=\int^\infty_0{\rm
d}kJ_0(kr)k\delta\tilde{\epsilon}(\tau,k), \end{eqnarray} \begin{eqnarray} \label{Ha3}
\delta{{\pi}}(\tau,r)=\int^\infty_0{\rm
d}kJ_1(kr)k\delta\tilde{{\pi}}(\tau,k), \end{eqnarray} \begin{eqnarray} \label{Ha4}
\delta\pi^{\eta}_{\eta}(\tau,r)=\int^\infty_0{\rm
d}kJ_0(kr)k\delta\tilde{\pi}^{\eta}_{\eta}(\tau,k), \end{eqnarray} \begin{eqnarray}
\label{Ha5} \delta{\Pi}(\tau,r)=\int^\infty_0{\rm
d}kJ_0(kr)k\delta\tilde{\Pi}(\tau,k), \end{eqnarray} where $k$ is the wave
number: $k=0$ corresponds to a homogeneous perturbation, while
$k\neq 0$ corresponds to an inhomogeneous one. Its
range for a realistic QGP fluid has been roughly estimated in
\cite{Koun}. With Eqs.~(\ref{del-v})--(\ref{Ha5}), we can get the
evolution equations for $\delta\tilde{v}, \delta\tilde{\epsilon},
\delta\tilde{\pi}_{\eta}^{\eta}, \delta\tilde{\pi},
\delta\tilde{\Pi}$
\begin{eqnarray}
\label{Hdel-v}&&\!\!\!\!\!\!\!\!
[(\epsilon_0+p_0+\Pi_0-\pi_{r0}^r)\partial_{\tau}+\partial_{\tau}(p_0+\Pi_0)
-(\partial_{\tau}+\frac{1}{\tau})\pi_{r0}^r+\frac{1}{\tau}\pi_{\eta
0}^{\eta}]\delta\tilde{v}
\nonumber\\
&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-kc_s^2\delta\tilde{\epsilon}
-k\delta\tilde{\Pi}-\delta\tilde{\pi}=0,
\end{eqnarray}
\begin{eqnarray}
\label{Hdel-e} (\epsilon_0+p_0+\Pi_0+\frac{1}{2}\pi^{\eta}_{\eta
0})\delta\tilde{v}+
(\partial_{\tau}+\frac{1+c_s^2}{\tau})\delta\tilde{\epsilon}
+\frac{1}{\tau}\delta\tilde{\Pi}
-\frac{1}{\tau}\delta\tilde{\pi}^{\eta}_{\eta}=0,
\end{eqnarray}
\begin{eqnarray}
\label{Hdel-piee} &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\frac{2}{3}\frac{\eta_0}{\tau_{\pi 0}}k\delta\tilde{v}
+\frac{1}{\tau_{\pi 0}^2}(\frac{4}{3}\frac{\eta_0}{\tau}-\pi_{\eta
0}^{\eta})
(\frac{\partial\tau_{\pi}}{\partial\epsilon})_0\delta\tilde{\epsilon}
-\frac{4}{3}\frac{1}{\tau\tau_{\pi
0}}(\frac{\partial\eta}{\partial\epsilon})_0\delta\tilde{\epsilon}
\nonumber\\
&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+(\partial_{\tau}+\frac{1}{\tau_{\pi
0}})\delta\tilde{\pi}_{\eta}^{\eta}=0,
\end{eqnarray}
\begin{eqnarray}
\label{Hdel-pirree} &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\frac{4}{3}\frac{\eta_0}{\tau_{\pi 0}}k^2\delta\tilde{v}
-\frac{1}{2}\frac{1}{\tau_{\pi
0}^2}(\frac{4}{3}\frac{\eta_0}{\tau}-\pi_{\eta 0}^{\eta})
(\frac{\partial\tau_{\pi}}{\partial\epsilon})_0k\delta\tilde{\epsilon}
+\frac{2}{3}\frac{1}{\tau\tau_{\pi
0}}(\frac{\partial\eta}{\partial\epsilon})_0k\delta\tilde{\epsilon}
\nonumber\\
&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+(\partial_{\tau}
+\frac{1}{\tau_{\pi 0}})\delta\tilde{\pi}=0,
\end{eqnarray}
\begin{eqnarray}
\label{Hdel-Pi} \frac{\zeta_0}{\tau_{\Pi 0}}k\delta\tilde{v}
-\frac{1}{\tau_{\Pi 0}^2}(\frac{\zeta_0}{\tau}+\Pi_0)
(\frac{\partial\tau_{\Pi}}{\partial\epsilon})_0\delta\tilde{\epsilon}
+\frac{1}{\tau\tau_{\Pi
0}}(\frac{\partial\zeta}{\partial\epsilon})_0\delta\tilde{\epsilon}
+(\partial_{\tau}+\frac{1}{\tau_{\Pi 0}})\delta\tilde{\Pi}=0.
\end{eqnarray}
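Once the transformed amplitudes have been evolved, radial profiles can be
recovered from Eqs.~(\ref{Ha1})--(\ref{Ha5}) by truncating the $k$-integral
and applying a quadrature rule. A minimal sketch for Eq.~(\ref{Ha1}) follows
(the $k$-grid and transform data below are hypothetical):
\begin{verbatim}
# Minimal sketch: reconstruct delta v(tau, r) from delta v~(tau, k)
# via Eq. (Ha1), with a truncated k-grid and the trapezoidal rule.
import numpy as np
from scipy.special import j1
from scipy.integrate import trapezoid

def delta_v(r, k_grid, dv_tilde):
    integrand = j1(k_grid * r) * k_grid * dv_tilde
    return trapezoid(integrand, k_grid)

k = np.linspace(0.0, 30.0, 3001)   # hypothetical truncated k-grid
dv_tilde = np.exp(-k**2)           # hypothetical transform data
print(delta_v(1.0, k, dv_tilde))
\end{verbatim}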
\section{Results}
In order to obtain numerical results, the values of the viscosities and the
relaxation times are set up as described below. As for the shear
viscosity, strong-coupling theory as well as hydrodynamic and
transport models show that $\frac{\eta}{s}$ cannot be too large
\cite{ADS,Roma2,Xuzhe}. Here we use two different values,
$\frac{\eta}{s} = 0.02$ and 0.2, to see its effects. The relaxation
time of the shear viscosity is set to the Boltzmann gas result
$\tau_{\pi}=\frac{\eta}{s}\frac{6}{T}$ in our work. Although recent
results of SU(3) Yang-Mills theory show that the effects of a
potentially large bulk viscosity near $T_c$ are more subtle to
detect in spectral integrals \cite{Meyer} than previous work
suggested \cite{Meyer2}, there are still some possibilities that
$\frac{\zeta}{s}$ becomes large around $T_c$ \cite{PEAK}. So the
following parametrization, as in \cite{Torri}, is adopted to see the
effects of the bulk viscosity on the stability:
\begin{eqnarray} \label{zos} \zeta=s\left(z_{pQCD}+\frac{z_0}{\sqrt{2\pi\sigma}}
\exp\left[-\frac{t^2}{2\sigma^2}\right]\right), \end{eqnarray}
where $t = T-T_c$, $\sigma = 0.01 T_c$, $z_{pQCD}\sim 10^{-3}$ and
different values of $z_0$ denote different magnitudes of ${\zeta}/{s}$. For
the relaxation time of the bulk viscosity, a parametrization similar
to that of the relaxation time of the shear viscosity is employed, \begin{eqnarray}
\label{rela-pa} \tau_{\Pi}=b\frac{\zeta}{\epsilon+p}, \end{eqnarray} where
different values of $b$ are used to see the effect of the relaxation time of the bulk
viscosity.
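A minimal sketch of this parametrization and of Eq.~(\ref{rela-pa}) is given
below; the parameter values in the example are illustrative only:
\begin{verbatim}
# Minimal sketch of Eq. (zos) for zeta/s and Eq. (rela-pa) for tau_Pi.
import numpy as np

def zeta_over_s(T, Tc, z0, z_pQCD=1e-3):
    sigma = 0.01 * Tc
    t = T - Tc
    return (z_pQCD
            + z0 / np.sqrt(2.0 * np.pi * sigma)
            * np.exp(-t**2 / (2.0 * sigma**2)))

def tau_Pi(T, Tc, z0, b, e, p, s):
    return b * zeta_over_s(T, Tc, z0) * s / (e + p)   # Eq. (rela-pa)

print(zeta_over_s(T=0.17, Tc=0.17, z0=0.1 * 0.17))    # value at the peak
\end{verbatim}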
To connect to relativistic heavy-ion collisions, the equation of
state in \cite{EOS} is adopted and the initial temperature $T_0$ is
set to $0.34$ GeV. First, let us examine the effect of viscosity on
the evolution of the background. Fig.~1 shows the profiles of the background
temperature for different $\eta/s$ and for different $z_0$ and $b$,
corresponding to different magnitudes and relaxation times of the bulk
viscosity. Each plot shows results for two different values of $\eta/s$. From
left to right, the plots correspond to $z_0 = 0.0$ and $0.1~T_c$,
respectively. These plots show that the shear viscosity can slow
down the evolution of the source. We can also see that when $b$ is
large, which means that the relaxation time of the bulk viscosity is
large, the bulk viscosity hardly has any effect on the evolution of the
temperature, even when $\frac{\zeta}{s}$ has a large peak. But when
the relaxation time is small and the magnitude of the bulk viscosity is
larger, we can see that the system stays at a nearly constant
temperature, which is the same behavior as found in the Navier-Stokes
theory in \cite{Torri}. This effect is not hard to understand.
Because the role of the relaxation time is to delay the appearance
of viscous forces \cite{Torri2}, the bulk viscosity affects the
evolution of the source only after a time-scale $\tau_{\Pi}$. The
larger the relaxation time, the longer it takes for the bulk viscosity
to affect the evolution of the system. Meanwhile, the
system is still evolving and may cool down below $T_c$,
where the bulk viscosity is negligible. So if the relaxation time is
large enough and the bulk viscosity has a small width, the bulk
viscosity may have negligible effects on the evolution of the
background. This is consistent with the results in \cite{Li}, where
we found that the width of the bulk viscosity has a larger
effect on the evolution of the source than its magnitude.
Fig.~2 shows the profiles of the ratio of the bulk pressure to the
equilibrium pressure of the background for the same cases as in Fig.~1.
We can see that $\Pi/p$ has a peak when $\zeta/s$ has a peak
around $T_c$. The fact that $\Pi/p$ exceeds one means that the
state is far from equilibrium. It indicates that, when the peak of the
bulk viscosity is large and its relaxation time is small, the matter is
not only hydrodynamically unstable but also thermodynamically
unstable, as stated in \cite{Muller}. We can
also see that the shear viscosity decreases $\Pi/p$, which indicates
that the shear viscosity weakens the instability induced by the
bulk viscosity. This behavior is also found in the following analysis
of the linear perturbations.
Now we use the Lyapunov direct method to study the stability of the 1+1
solution of the Israel-Stewart theory. The evolution equations of the
perturbations, Eqs.~(\ref{Hdel-v})--(\ref{Hdel-Pi}), can be
rewritten in the following matrix form
\begin{eqnarray}
\partial_{\tau}{ \delta Y}={ A\delta Y},
\end{eqnarray}
where
\begin{eqnarray} { \delta Y}=\left(
\begin{array}{c}
\delta \tilde{v}\\
\delta \tilde{\epsilon}\\
\delta \tilde{\pi}_{\eta}^{\eta}\\
\delta \tilde{\pi}\\
\delta \tilde{\Pi}
\end{array}\right).
\end{eqnarray}
When one uses the Lyapunov direct method, a Lyapunov function
should be given first. The Lyapunov function must be positive
definite. If it is a monotonically decreasing function, then the
solution is stable; if it is a monotonically increasing function,
then the solution is unstable. A more detailed description of
this method can be found in \cite{Koun,Deni2}. Here we take the
Lyapunov function to be $V=\delta Y^T\delta Y$. Then the evolution
equation of the Lyapunov function $V$ is \begin{eqnarray}
\partial_{\tau}V=\delta Y^T(A^T+A)\delta Y.
\end{eqnarray} Since $A+A^T$ is symmetric and $V=\delta Y^T\delta Y$, bounding
this quadratic form by the extreme eigenvalues of $A+A^T$ gives \begin{eqnarray}
\lambda_{\rm min}V\leq\partial_{\tau}V\leq\lambda_{\rm max}V, \end{eqnarray} where
$\lambda_{\rm max}$ and $\lambda_{\rm min}$ are the largest and smallest
eigenvalues of ${A}+{A^T}$, respectively. It can be shown that
the solution is stable when $\lambda_{\rm max}\leq 0$ and is
unstable when $\lambda_{\rm min}\geq 0$. Figs.~3 and 4 show the
values of $\lambda_{\rm max}$ and $\lambda_{\rm min}$ for $k=0$ and
$k=3$, respectively. Each figure shows the same cases as described for
Fig.~1. We can see that neither the stable region nor the unstable
region can be determined, because $\lambda_{\rm max}$ is always
larger than zero and $\lambda_{\rm min}$ is always smaller than
zero. For the scaling solution, the unstable regions likewise cannot
be found, but the stable regions can be determined \cite{Deni2}.
These plots show that $\lambda_{\rm max}$ and $\lambda_{\rm min}$
change rapidly when the bulk viscosity is large and the
relaxation time is small. We can also see that the shear viscosity
delays this kind of rapid change and reduces the magnitude of
$\lambda_{\rm max}$ and $\lambda_{\rm min}$. This phenomenon is the
same for different $k$, but the magnitudes of $\lambda_{\rm max}$ and
$\lambda_{\rm min}$ are larger for larger $k$.
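The eigenvalue diagnostic used above is simple to state algorithmically: at
each $\tau$, form $A+A^T$ from the coefficient matrix $A$ of the linearized
system and read off its extreme eigenvalues. A minimal sketch follows (the
matrix below is only a placeholder; in practice $A$ is assembled from
Eqs.~(\ref{Hdel-v})--(\ref{Hdel-Pi})):
\begin{verbatim}
# Minimal sketch: extreme eigenvalues of A + A^T for the 5x5 system.
import numpy as np

def lyapunov_rates(A):
    w = np.linalg.eigvalsh(A + A.T)   # A + A^T is symmetric
    return w[0], w[-1]                # (lambda_min, lambda_max)

A = np.diag([-1.0, -0.5, -2.0, -1.5, -0.2])  # placeholder matrix only
lam_min, lam_max = lyapunov_rates(A)
# stable if lam_max <= 0; unstable if lam_min >= 0
\end{verbatim}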
We can see from Figs.~3 and 4 that the absolute values of
$\lambda_{\rm max}$ and $\lambda_{\rm min}$ increase rapidly, which
means that the growing and damping rates of the perturbations increase
rapidly due to the large peak and small relaxation time of the bulk
viscosity. Therefore the perturbations may rapidly grow to values
comparable with the background. They will then break local homogeneity
and play an important role in the subsequent evolution of the
system. The created inhomogeneities no longer interact with each other. It is
possible that isolated fragments will be created and move away from
each other. In \cite{Torri}, the authors argued that this may be a
mechanism by which the source is clusterized and then decouples as
fireballs. By adding a further free parameter, the cluster
size, to the system, this mechanism may solve the HBT
problem \cite{Torri2}. In the Israel-Stewart theory this phenomenon
may also happen when the peak of $\zeta/s$ is large, $\tau_\Pi$ is
small and $\eta/s$ is not too large.
\section{Conclusions}
In summary, the stability problem of the 1+1 solution of
Israel-Stewart theory is studied. Firstly, the evolution of the
temperature and the ratio of the bulk pressure over the equilibrium
pressure are studied. We find that both shear and bulk viscosity
slow down the evolution of temperature. And the ratio of bulk
pressure over the equilibrium pressure ($\Pi/p$) will exceed one
with a large peak of bulk viscosity, which indicates the state is
unstable. The shear viscosity reduces the magnitude of $\Pi/p$ to
weaken the effects of bulk viscosity. Then using Lyapunov direct
method, we can not determine the stable or unstable regions. We also
find the phenomenon which may drive the source to clusterize which
is similar to that in Navier-Stokes theory. However, this phenomenon
will happen only when the peak of bulk viscosity is large enough,
the relaxation time of the bulk viscosity is small and the shear
viscosity is not too large.
\vspace{0.5cm} Acknowledgments:
This work is supported by the
National Natural Science Foundation of China (Grant Nos. 10775167 and
10705044), the Knowledge Innovation Project of the Chinese Academy of
Sciences (Grant No. KJCX2-YW-A14), the Startup Foundation for the
CAS Presidential Scholarship Award of China (Grant No. 29010702), and the
Shanghai Development Foundation for Science and Technology under
Grant No. 09JC1416800.
\bibliographystyle{model1a-num-names}
\section*{Supplementary material}
Technical details and further measurement results are shown in the supplementary material.
\section*{Acknowledgements}
We want to thank our in-house workshop for the fabrication of the waveguides.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement n$^{\circ}$ 714235).
\section{Introduction}
The technique of decomposing finite-volume cusped hyperbolic 3-manifolds into ideal hyperbolic tetrahedra
was introduced by Thurston \cite{ThNotes} and has proved very useful for understanding these 3-manifolds.
For example, such triangulations provide an effective means of calculating geometric invariants and
of computing deformations of hyperbolic structures, as used in the computer programs SnapPea \cite{SnapPea}, Snap \cite{Snap} and SnapPy \cite{SnapPy}.
A (topological) {\em ideal triangulation} of a 3-manifold $M$ is a decomposition of $M$ into ideal tetrahedra,
that is, 3-simplices with their vertices removed, such that their faces are affinely glued in pairs.
A {\em geometric} ideal triangulation is an ideal triangulation where each tetrahedron can be assigned the shape of a positive volume
ideal hyperbolic tetrahedron, such that the tetrahedra glue together coherently to define a global hyperbolic structure which agrees with the complete hyperbolic structure on $M$. A natural question to ask is whether every cusped hyperbolic 3-manifold admits a geometric ideal triangulation. This question remains unanswered.
In this paper we focus on ideal triangulations of fibred cusped hyperbolic 3-manifolds, that is, cusped hyperbolic 3-manifolds of the form $M_\varphi = S \times [0,1]/ (x,0) \sim (\varphi(x), 1)$, where $S$ is a punctured surface and $\varphi : S \rightarrow S$ is a pseudo-Anosov homeomorphism. For each such $\varphi$, Agol gives a construction of an ideal triangulation of $M_{\varphi^{\circ}}$, where $\varphi^{\circ}$ is the restriction of $\varphi$ to the surface obtained by puncturing $S$ at the singular points of the invariant foliations (see \cite{MR2866919}.)
This triangulation is a layered triangulation of $M_{\varphi^\circ}$ and is canonical in the sense that it is uniquely determined by $\varphi$. Furthermore, the triangulation satisfies a combinatorial condition called \emph{veering} (see Section \ref{sec:defns} for the definition). In fact, this triangulation is uniquely characterised by the veering condition, that is, every veering ideal triangulation of $M_{\varphi^{\circ}}$ which is layered with respect to $\varphi^\circ$ is isomorphic to the veering triangulation produced by Agol's construction (see \cite[Proposition 4.2]{MR2866919}). Agol poses the question:
\begin{center}{\bf Question: } \emph{Are the veering triangulations coming from this construction geometric?}\end{center}
In \cite{MR2860987} and \cite{FGAng}, it is shown that veering triangulations admit strict angle structures, which is a necessary condition for an ideal triangulation to be geometric. It can be checked that the well studied monodromy (or Floyd-Hatcher) triangulations are veering, so that they correspond to the triangulations produced by Agol's construction in the case that $S$ is a once-punctured torus. These triangulations are known to be geometric (see \cite{MR2255497} or \cite{MR1988201}.) Many other examples of geometric veering triangulations were studied in \cite{MR2860987}.
In this paper we produce the first examples of {\em non-geometric} veering triangulations.
Currently, the smallest such example (in terms of the number of tetrahedra) known,
described in Section \ref{sec:nongeo}, has 13 tetrahedra. It seems unlikely that a counterexample would have been found without a computer search, and it is still something of a mystery why veering triangulations are so frequently geometric.
In Section \ref{chapter:implementation}, we describe a computer program that we implemented, which, given a pseudo-Anosov homeomorphism of an oriented surface (of genus $g \ge 0$ with $p > 0$ punctures) described as a composition of Dehn twists and half twists about adjacent punctures (see Figure \ref{fig:twist_curves}), automates Agol's construction, producing veering triangulation files which can be readily input into the computer program SnapPy. In Section \ref{sec:example} we apply the algorithm given in Section \ref{chapter:implementation} on an example.
We find examples of veering triangulations which SnapPy reports are non-geometric, see Section \ref{chapter:results} and tables of data given in Appendix \ref{app:tables}. In Section \ref{sec:nongeo}, we outline how we rigorously verified that one 13 tetrahedron veering triangulation is not geometric.
In \cite{MR2866919}, Agol briefly mentions how periodic splitting sequences of train tracks give rise to conjugacy invariants which solve the restricted conjugacy problem for pseudo-Anosov mapping classes. As part of our computer program we implemented an algorithm which uses the periodic splitting sequences of train tracks to determine whether or not two pseudo-Anosov mapping classes are conjugate in the mapping class group. This is described in Section \ref{sec:conjtest}.
The program Veering \cite{Veering} described in this paper and tables of results are freely available at \url{http://www.ms.unimelb.edu.au/~veering/}.
We thank Toby Hall for his helpful comments regarding some technical details of the computer program Trains \cite{Trains}, on which our program relies. This paper is primarily based on work done as part of a Master's thesis \cite{Issa} by the second author.
\section{Definitions and background}\label{sec:defns}
Let $S$ be an orientable surface of genus $g$ with $p$ punctures and $\chi(S) < 0$. A \emph{train track} $\tau \subset S$ is a finite $1$-complex with $C^1$ embedded edges, each vertex of which is locally modeled on a switch (see Figure \ref{fig:tt_switch_condition}) so that each vertex has a well defined $1$-dimensional tangent space and which satisfies the following geometry condition \cite{PENNER}. If $R$ is a complementary region of $\tau$ in $S$ then the double of $R$ along $\partial R$ with non-smooth points removed has negative Euler characteristic. Edges of a train track are called \emph{branches}, and vertices are called \emph{switches}.
A \emph{measured train track} is a train track $\tau$ together with a transverse measure $\mu$, which is a function assigning a positive weight to each edge of $\tau$ such that the switch condition holds, that is, at each switch the sum of the weights of edges on each side of the tangent space are equal (see Figure \ref{fig:tt_switch_condition}.) We sometimes refer to $\tau$ as a measured train track when the measure is understood from context. We also occasionally drop the adjective measured, when it is clear that the train track is measured. We will denote by $k \tau$ the measured train track with the same underlying train track $\tau$ but with measure scaled by $k \in \mathbb{R}_{>0}$. A train track is \emph{trivalent} if every switch has degree $3$. If $\tau$ is a trivalent train track and $e$ is an edge of $\tau$ then it has two ends. An end of $e$ is \emph{large} if it comes into a switch $s$ on the side of the tangent space at $s$ opposite the side with two incident branches, otherwise it is \emph{small}. The branch $e$ is \emph{large} if both of its ends are large, and similarly it is \emph{small} if both of its ends are small.
\begin{figure}[H]
\centerline{\includegraphics[width=200pt]{illustrations/tt_switch_condition.eps}}
\caption{Model of train track switch, where $k,m > 0$. Switch condition: the sum of the weights of edges to the left and to the right of $v$ are equal.}
\label{fig:tt_switch_condition}
\end{figure}
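The switch condition is straightforward to verify algorithmically. The
following is a minimal sketch; the data layout (a positive weight per branch
and, for each switch, the lists of branches on the two sides of its tangent
space) is our assumption, not the representation used by any particular
program:
\begin{verbatim}
# Minimal sketch: verify the switch condition of a measured train track.
def satisfies_switch_conditions(switches, mu, tol=1e-12):
    # switches: list of (left, right) lists of branch labels at a switch
    # mu: dict mapping branch label -> positive weight
    return all(abs(sum(mu[e] for e in left) - sum(mu[e] for e in right))
               <= tol for left, right in switches)

# One trivalent switch: the large end of 'e' meets the small ends of
# 'a' and 'b', so the condition is mu(e) = mu(a) + mu(b).
print(satisfies_switch_conditions([(["e"], ["a", "b"])],
                                  {"e": 3.0, "a": 1.0, "b": 2.0}))  # True
\end{verbatim}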
Let $\varphi : S \rightarrow S$ be a homeomorphism. By the Nielsen-Thurston classification \cite{FLP}, $\varphi$ is isotopic to a homeomorphism $\varphi'$ which is either periodic, reducible (setwise fixes a non-empty union of finitely many disjoint essential simple closed curves) or pseudo-Anosov. In the pseudo-Anosov case there exist singular transverse measured foliations $\mathcal{F}^{\pm}$ on $S$ such that $\varphi'(\mathcal{F}^+) = \lambda \mathcal{F}^+$ and $\varphi'(\mathcal{F}^-) = \lambda^{-1} \mathcal{F}^-$, where $\lambda > 1$ is the \emph{dilatation} of $\varphi'$. Away from a finite minimal set of points $P \subset S$, each foliation $\mathcal{F}^{\pm}$ gives a decomposition of $S\backslash P$ into a disjoint union of curves, called \emph{leaves}. A finite line segment lying on a leaf of $\mathcal{F}^-$ can be thought of as scaled by $\varphi'$ by a factor of $\lambda$. The set of points $P$ together with the punctures of $S$ are called \emph{singular points} of $\mathcal{F}^\pm$. Fix a complete finite area hyperbolic metric on $S$. By removing the singular leaves of $\mathcal{F}^{-}$ (resp. $\mathcal{F}^{+}$), i.e. leaves which have an endpoint at a singular point, then isotoping each of the remaining leaves of the foliation to complete geodesic representatives, and finally taking the closure of the resulting subset of $S$, one obtains a geodesic lamination $\mathcal{L}^s$ (resp. $\mathcal{L}^u$) \cite[Construction 1.68]{MR2327361}. The geodesic lamination $\mathcal{L}^s$ (resp. $\mathcal{L}^u$) also inherits a transverse measure from $\mathcal{F}^{+}$ (resp. $\mathcal{F}^{-}$) and is called the \emph{stable} (resp. \emph{unstable}) measured geodesic laminations for $\varphi'$, see \cite{CAS} and \cite{CBNotes} for an alternative approach.
\begin{mydef} Let $(\tau, \mu)$ be a measured train track on a surface $S$, and let $e$ be a large branch with neighbouring edges labelled as in the left of Figure \ref{fig:tt_split_move}. A \emph{split} at $e$ is a move producing the train track $(\tau', \mu')$ obtained from $(\tau, \mu)$ by splitting $e$ and inserting a new edge $e'$ in one of two possible ways depending on the weights of the neighbouring edges, see Figure \ref{fig:tt_split_move}. We use the notation $(\tau, \mu) \rightharpoonup_e (\tau', \mu')$ to denote that $(\tau', \mu')$ is obtained from $(\tau, \mu)$ by splitting $e$.
{\bf Note:}
\begin{enumerate}
\item The weights of all other edges are kept the same.
\item The train track $(\tau', \mu')$ is only well-defined up to isotopy.
\item We only define a split for $\mbox{max}(a, d) \neq \mbox{max}(b, c)$.
\item The inverse move which produces $(\tau, \mu)$ from $(\tau', \mu')$ is called a \emph{fold} at $e'$.
\end{enumerate}
\end{mydef}
\begin{mydef} Let $(\tau, \mu)$ be a measured train track on a surface $S$. A \emph{maximal split} of $(\tau, \mu)$ is a move which produces the train track $(\tau', \mu')$ obtained by splitting all of the edges of $\tau$ that have the maximum weight. This is denoted by $(\tau, \mu) \rightharpoonup (\tau', \mu')$. If $$(\tau_0, \mu_0) \rightharpoonup (\tau_1, \mu_1) \rightharpoonup \cdots \rightharpoonup (\tau_n, \mu_n)$$
is a sequence of $n \in \mathbb{Z}_{>0}$ maximal splits then we write $(\tau_0, \mu_0) \rightharpoonup^{n} (\tau_n, \mu_n)$. Note that in general a maximal split may split more than one edge.
\end{mydef}
\begin{figure}[H]
\centerline{\includegraphics[height=230pt]{illustrations/tt_split_move.eps}}
\caption{The two possibilities for a split of the large branch $e$. We use the label of an edge to represent its weight.}
\label{fig:tt_split_move}
\end{figure}
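The weight bookkeeping of a split fits in a few lines. The following minimal
sketch assumes the labelling of Figure \ref{fig:tt_split_move}, with weights
$a,b$ at one end of $e$ and $c,d$ at the other, arranged so that $(a,d)$ and
$(b,c)$ are the two diagonal pairs; under this assumption the switch
conditions force $a+b=c+d$, the new edge $e'$ carries weight $|a-c|=|d-b|$,
and which of the two splits occurs is decided by comparing $\max(a,d)$ with
$\max(b,c)$:
\begin{verbatim}
# Minimal sketch: weight bookkeeping of a split at a large branch e.
def split_weights(a, b, c, d):
    assert abs((a + b) - (c + d)) < 1e-12, "switch condition violated"
    if max(a, d) == max(b, c):
        raise ValueError("split undefined when max(a,d) == max(b,c)")
    direction = "ad" if max(a, d) > max(b, c) else "bc"
    return abs(a - c), direction   # weight of e' and split direction

print(split_weights(3.0, 1.0, 1.5, 2.5))   # -> (1.5, 'ad')
\end{verbatim}
A maximal split would apply this move simultaneously to every branch whose
weight equals the maximum over all branches of the track.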
\begin{mydef} Let $S$ be a punctured surface. Let $(\mathcal{L}, \lambda)$ be a measured geodesic lamination on $S$, and let $(\tau, \mu)$ be a measured train track on $S$. The measured lamination $\mathcal{L}$ is \emph{suited to} $\tau$ (we also say $\tau$ is suited to $\mathcal{L}$) if the following conditions are satisfied:
\begin{enumerate}[(i)]
\item There exists a differentiable map $f : S \rightarrow S$ homotopic to the identity such that $f(\mathcal{L}) = \tau$.
\item $f$ is non-singular on the tangent space to leaves of $\mathcal{L}$, that is, if $v \neq 0$ is a vector tangent to a leaf of $\mathcal{L}$ then $df(v)$ is a non-zero tangent vector to $\tau$.
\item The map $f$ respects the transverse measures, that is, if $p$ is a point in the interior of an edge $e$ of $\tau$ then $\lambda(f^{-1}(p)) = \mu(e)$.
\end{enumerate}
\end{mydef}
\begin{thm}[Theorem 3.5 of \cite{MR2866919}]\label{thm:splitting} Let $\varphi : S \rightarrow S$ be a pseudo-Anosov homeomorphism with dilatation $\lambda$ and stable measured geodesic lamination $(\mathcal{L}^s, \delta)$. If $(\mathcal{L}^s, \delta)$ is suited to the measured train track $(\tau, \mu)$ then there exist $n,m \in \mathbb{Z}_{>0}$ such that
$$(\tau, \mu) \rightharpoonup^n (\tau_n, \mu_n) \rightharpoonup^m (\tau_{n+m}, \mu_{n+m}),$$
and $\tau_{n+m} = \varphi(\tau_n)$ and $\mu_{n+m} = \lambda^{-1} \varphi(\mu_n)$, where if $e$ is an edge of $\tau_{n+m}$ then $\varphi(\mu_n)(e) := \mu_n(\varphi^{-1}(e))$.
\end{thm}
We call the sequence
$$(\tau_n, \mu_n) \rightharpoonup \cdots \rightharpoonup (\tau_{n+m}, \mu_{n+m}) \rightharpoonup \cdots, $$
of train tracks in Theorem \ref{thm:splitting} a \emph{periodic splitting sequence}, with respect to $\varphi$, as it is periodic modulo the action of the monodromy and scaling by the dilatation.
We briefly describe Agol's construction of the layered veering triangulation; see also Step 4 of Section \ref{sec:outline} and \cite{MR2866919} for details. With the notation as in Theorem \ref{thm:splitting}, let $\varphi^\circ : S^\circ \rightarrow S^\circ$ be the restriction of $\varphi$ to the surface $S^\circ$ obtained by puncturing $S$ at the singular points of the invariant foliations of $\varphi$. Then the complementary regions of each train track in the periodic splitting sequence are homeomorphic to once-punctured disks. There is an ideal triangulation $T_i$ of $S^\circ$ dual to the train track $\tau_i$, so that edges of $T_i$ are in bijection with branches of $\tau_i$. A split of a train track corresponds to a diagonal exchange of the dual triangulation. Hence, a maximal split corresponds to a sequence of diagonal exchanges interpolating between $T_i$ and $T_{i+1}$ (the veering triangulation is independent of the order of the diagonal exchanges.) Starting with the triangulation $T_n$ of $S^\circ$ we attach a tetrahedron for each diagonal exchange interpolating between the two triangulations. Finally, since $\tau_{n+m} = \varphi^\circ(\tau_n)$ (ignoring measures) we have $T_{n+m} = \varphi^\circ(T_n)$, so we can glue $T_n$ to $T_{n+m}$ to construct a veering triangulation of the mapping torus $M_{\varphi^\circ}$.
We now define veering, a combinatorial condition satisfied by the ideal triangulations produced by Agol's construction.
\begin{mydef}[Taut angle structure]\label{taut_angle_structure}
An {\bf angle-taut tetrahedron} is an ideal tetrahedron equipped with an assignment of angles taken from $\{0,\pi\}$ to its edges so that two opposite edges are assigned $\pi$ and the other four are assigned $0$. A {\bf taut angle structure} on $M$ is an assignment of angles taken from $\{0,\pi\}$ to the edges of each tetrahedron in $M,$ such that every tetrahedron is angle-taut and the sum of all angles around each edge in $M$ is $2\pi.$
\end{mydef}
\begin{mydef}[Taut structure]\label{taut_structure}
A {\bf taut tetrahedron} is a tetrahedron with a coorientation assigned to each face, such that precisely two faces are cooriented into the tetrahedron, and precisely two are cooriented outwards. Each edge of a taut tetrahedron is assigned an angle of either $\pi$ if the coorientations on the adjacent faces agree, or $0$ if they disagree. See Figure \ref{taut_ideal}(a) for the only possible configuration (up to symmetry). Then $\mathcal{T}$ is a {\bf taut ideal triangulation} of $M$ if there is a coorientation assigned to each ideal triangle, such that every ideal tetrahedron is taut, and the sum of all angles around each edge in $M$ is $2\pi$ (see Figure \ref{taut_ideal}(b)). This will also be called a {\bf taut structure} on $M.$
\end{mydef}
\begin{figure}[ht!]
\labellist
\pinlabel (a) at 60 220
\pinlabel (b) at 465 220
\small
\pinlabel 0 at 213 50
\pinlabel 0 at 208 238
\pinlabel 0 at 316 137
\pinlabel 0 at 99 155
\pinlabel $\pi$ at 200 100
\pinlabel $\pi$ at 203 160
\endlabellist
\centering
\includegraphics[width=0.7\textwidth]{diagrams/taut_ideal}
\caption{Conditions for a taut ideal triangulation.}
\label{taut_ideal}
\end{figure}
A taut ideal triangulation comes with a compatible taut angle structure, but not every taut angle structure arises from a taut structure.
Let $\Delta^3$ be the standard 3--simplex with a chosen orientation. Suppose the edges of $\Delta^3$ are labelled by $e,$ $e'$ and $e'',$ such that opposite edges have the same label and all three labels occur. Then the cyclic order of $e,$ $e'$ and $e''$ viewed from each vertex depends only on the orientation of the 3--simplex, i.e.\thinspace is independent of the choice of vertex. It follows that, up to orientation preserving symmetries, there are two possible labellings, and we fix one of these labellings as shown in Figure~\ref{veering_on_tetrahedron_d}.
\begin{mydef}[Veering triangulation]\label{veering_defn}
A {\bf veering tetrahedron} is an oriented angle-taut tetrahedron, where each edge with angle $0$ is coloured either red or blue (drawn dotted and dashed respectively), such that the cyclic order of the edges at each vertex takes the $\pi$ angle edge to a blue edge to a red edge. This is shown in Figure \ref{veering_on_tetrahedron_d}. We refer to the \underline{r}ed edges as {\bf right-veering} and the b\underline{l}ue edges as {\bf left-veering}. Colours assigned to the $\pi$ angle edges are irrelevant to the definition of a veering tetrahedron. A triangulation $\mathcal{T}$ with a taut angle structure is a {\bf veering triangulation} of $M$ if there is a colour assigned to each edge in the triangulation so that every tetrahedron is veering.
\end{mydef}
\begin{figure}[htb]
\labellist
\small
\pinlabel $e$ at 52 61
\pinlabel $e'$ at 52 95
\pinlabel $e''$ at 10 50
\endlabellist
\centering
\includegraphics[width=0.2\textwidth]{diagrams/veering_on_tetrahedron_d}
\caption{The canonical picture of a veering tetrahedron. The $0$ angles are at the four sides of the square, and the $\pi$ angles are the diagonals. We indicate the veering directions on the $0$ angle edges of a tetrahedron by colouring the edges. Note that this picture depends on a choice of orientation for the tetrahedron.}
\label{veering_on_tetrahedron_d}
\end{figure}
The definition of a veering triangulation given above matches the definition given in \cite{MR2860987}. This is slightly more general than the definition given by Agol, which requires that the angle-taut structure on the triangulation is promoted to a taut structure.
\section{Implementation}\label{chapter:implementation}
In this section we discuss a computer program we developed to construct examples of veering triangulations coming from Agol's construction.
Let $S$ be a surface of genus $g \ge 0$ with $p > 0$ punctures, where we label the punctures by integers $1,2,\ldots,p$. Let $\varphi : S \rightarrow S$ be a homeomorphism permuting the punctures, given by a composition $$\varphi = T_{n} \circ \cdots \circ T_{2} \circ T_{1},$$
where each of $T_1, \ldots, T_n$ is either a left or right Dehn twist in one of the curves shown in Figure \ref{fig:twist_curves}, or a half-twist permuting adjacent punctures $i$ and $i+1$, where $i \in \{1,2,\ldots,p-1\}$ (see Figure \ref{fig:half_twist}). The Dehn twists in curves shown in Figure \ref{fig:twist_curves} and half-twists in adjacent punctures generate the mapping class group of $S$ \cite{MR2850125}.
\begin{figure}[H]
\centerline{\includegraphics[width=\textwidth]{diagrams/twist_curves.eps}}
\caption{The orientation of $S$ is given by an outward normal vector field and the right hand rule. We consider a positive Dehn twist in a curve shown to be a left twist, that is, we twist to the left as we approach the curve from either side.}
\label{fig:twist_curves}
\end{figure}
\begin{figure}[H]
\centerline{\includegraphics[height=120pt]{diagrams/punc_perm.eps}}
\caption{A positive (clockwise) half twist $p_i : S \rightarrow S$ in punctures $i$ and $i+1$, supported in the disk $D$.}
\label{fig:half_twist}
\end{figure}
Let $G \subseteq S$ be a graph homotopy equivalent to $S$. Then $\varphi$ induces a homotopy equivalence $\mathfrak{g} : G \rightarrow G$ and conversely the isotopy class of $\varphi$ is uniquely determined by $\mathfrak{g}$. By a homotopy if necessary, we may assume that $\mathfrak{g}$ is a \emph{graph map}, that is, $\mathfrak{g}$ maps vertices to vertices and each oriented edge to an \emph{edge path}, which is an oriented path $e_1e_2\cdots e_k$, where $e_1,\ldots,e_k$, $k \ge 0$, are oriented edges and the terminal vertex of $e_i$ is equal to the initial vertex of $e_{i+1}$ for $i \in \{1,2,\ldots,k-1\}$. As the graph map $\mathfrak{g}$ is induced by $\varphi$, we say that $\mathfrak{g}$ is a graph map \emph{representing} $\varphi$. We compute such a graph map $\mathfrak{g}$ representing $\varphi$; this is described in more detail in Appendix \ref{sec:comp_graph_map}.
The graph map produced is input into the computer program Trains \cite{Trains}, written by Toby Hall, which is a software implementation of the Bestvina-Handel algorithm for punctured surfaces (see \cite{BH}.) The Bestvina-Handel algorithm will determine whether or not the homeomorphism represented by the graph map is isotopic to a pseudo-Anosov homeomorphism, and in the case that it is, it will produce a train track $\tau$ suited to the stable geodesic lamination. The program Trains provides the following combinatorial data of the train track:
\begin{enumerate}
\item The fat graph structure of $\tau$, i.e. the graph structure together with a cyclic order of incident edges around each vertex respecting the orientation of the surface.
\item The smoothing at each switch, i.e. the edges on each side of the tangent space.
\item A cycle of oriented edges of $\tau$ representing a loop on the boundary of each complementary region of $\tau \subset S$ containing a puncture.
\end{enumerate}
Since each complementary region is a disk or punctured disk, from the above combinatorial data alone we can reconstruct a surface $\Sigma$ diffeomorphic to $S$, with embedded train track $\tau_0$, so that $(\Sigma, \tau_0)$ is diffeomorphic to $(S, \tau)$. The above combinatorial data determines the embedding of the train track in the surface only up to a diffeomorphism of the surface fixing each puncture. That is, if two train tracks $\tau'$ and $\tau''$ on a surface $S$ have isomorphic combinatorial data then there exists a diffeomorphism $h : S \rightarrow S$ pointwise fixing the punctures of $S$, such that $h(\tau') = \tau''$ (see Proposition \ref{prop:ceqdiff} of Section \ref{combeqv}.)
Let $\tau$ be a train track suited to the stable geodesic lamination of $\varphi$, as given by the Bestvina-Handel algorithm. Let $M_{\varphi^{\circ}}$ be the mapping torus of $\varphi^{\circ}$, where $\varphi^{\circ} : S^\circ \rightarrow S^\circ$ is the restriction of $\varphi$ to the surface $S^\circ$ obtained by puncturing $S$ at the singular points of the invariant foliations. As in the Bestvina-Handel algorithm, Trains also outputs a $C^1$ graph map $\mathfrak{g} : \tau \rightarrow \tau$ representing $\varphi^\circ$, with the additional property that the measure on $\tau$ can be computed from $\mathfrak{g}$ (see Step 1 below.) We now outline the steps taken to algorithmically build the veering triangulation of $M_{\varphi^{\circ}}$, given only the combinatorial data of $\tau$ and the map $\mathfrak{g}$, as provided by Trains. Following the outline, we will describe Steps $2$ and $3$ in more detail.
\subsection{Algorithm outline}\label{sec:outline}
\begin{enumerate}
\item[Step 1:]{\bf Compute the transverse measure.} Use the map $\mathfrak{g}$ given by Trains to compute the dilatation $\lambda > 1$ and weights of branches of $\tau$ to some specified arbitrary precision, so that $\tau$ is now a measured train track suited to the stable geodesic lamination of $\varphi^{\circ}$. More precisely, let $e_1, e_2, \ldots, e_m$ be the set of edges of $\tau$, with each edge equipped with an arbitrarily chosen orientation. Let $M$ be the $m \times m$ matrix given by setting $M_{ij}$ to be the number of times either $e_j$ or $\overline{e}_j$ ($e_j$ with its orientation reversed) appears in the edge path $\mathfrak{g}(e_i)$. The dilatation $\lambda$ is the largest real eigenvalue of $M$. Let $v$ be an eigenvector of $M$ with strictly positive entries which spans the one dimensional eigenspace corresponding to $\lambda$. The transverse measure on $\tau$ is given by assigning the $i$th component of $v$ to be the weight of $e_i$, for $i \in \{1,2,\ldots,m\}$. The transverse measure is well defined up to scaling. See \cite{BH} for details. (A minimal numerical sketch of this step is given after this outline.)
\item[Step 2:]{\bf Modify $\tau$ so that it is a trivalent train track (and modify $\mathfrak{g}$ appropriately.)} Agol's construction begins with a trivalent train track suited to the stable geodesic lamination $\mathcal{L}^s$ of $\varphi$, however those produced by the Bestvina-Handel algorithm are generally not trivalent. We modify $\tau$ by applying a `combing' procedure at each switch of $\tau$ with degree greater than $3$ and removing degree $2$ vertices to produce a trivalent measured train track suited to $\mathcal{L}^s$, which we continue to denote by $\tau$.
\item[Step 3:]{\bf Detect periodicity of splitting sequence of train tracks.} In this step we perform maximal splits starting with $\tau$ to obtain a splitting sequence of train tracks until we determine that the sequence becomes periodic, that is $\varphi^\circ(\lambda^{-1}\tau_i) = \tau_j$ for some $i < j$. Periodicity requires us to check that $\varphi^\circ(\lambda^{-1}\tau_i)$ and $\tau_j$ are identical up to (ambient) isotopy in $S^\circ$. However, since we do not keep track of embeddings of train tracks in $S^\circ$ and therefore cannot directly determine whether two train tracks are isotopic in $S^\circ$, we instead use an indirect approach to detect periodicity. We introduce the notion of combinatorial equivalence of two given measured train tracks on the same surface, which is essentially equivalent to saying that the two tracks have isomorphic combinatorial data. Combinatorial equivalence is a necessary but insufficient condition for two train tracks to be isotopic but can be checked for algorithmically. Moreover, given a combinatorial equivalence between the two train tracks $\lambda^{-1}\tau_i$ and $\tau_j$, there exists a diffeomorphism $h : S^\circ \rightarrow S^\circ$ such that $h(\lambda^{-1}\tau_i) = \tau_j$ which realises the combinatorial equivalence. Then $h$ and $\varphi^\circ$ are isotopic in $S^\circ$ if and only if the induced maps $h_*, \varphi^\circ_* : \pi_1(S^\circ) \rightarrow \pi_1(S^\circ)$ are equal up to an inner automorphism. The map $\varphi^\circ_*$ can be computed from the map $\mathfrak{g}$ given by Trains, and $h_*$ is determined by the combinatorial equivalence alone. If $h$ and $\varphi^\circ$ are determined to be isotopic then $\varphi^\circ(\lambda^{-1}\tau_i) = h(\lambda^{-1}\tau_i) = \tau_j$ so that the sequence is periodic as required.
\item[Step 4:]{\bf Use the splitting sequence of train tracks to construct the veering triangulation.} We briefly describe the construction of the layered triangulation from the periodic splitting sequence of train tracks and combinatorial equivalence found in Step $3$, see \cite[\S 4]{MR2866919} for details. Assume we have found that $\varphi^\circ(\lambda^{-1}\tau_n) = \tau_{n+m}$, $n,m\in\mathbb{Z}_{>0}$. Recall that we add punctures to $S$ at the singular points of the transverse measured foliations of $\varphi$ to obtain a surface $S^\circ$ so that each complementary region (in $S^\circ$) of a train track in our splitting sequence is homeomorphic to a punctured disk. For a given train track there is a dual ideal triangulation of the surface $S^\circ$ so that edges of ideal triangles are in bijection with branches of the train track. A maximal split $\tau_i \rightharpoonup \tau_{i+1}$ corresponds to a sequence of diagonal exchanges, and for each diagonal exchange an ideal tetrahedron is attached.
Finally, we need to specify how the triangulation $T_n$ of $S^\circ$ dual to $\tau_n$ glues to the triangulation $T_{n+m}$ of $S^\circ$ dual to $\tau_{n+m}$, which is required in order to completely determine the tetrahedron face pairings. The combinatorial equivalence between $\lambda^{-1}\tau_n$ and $\tau_{n+m}$ from Step 3 induces a bijection between the edges of $\tau_n$ and $\tau_{n+m}$, which in turn induces a gluing of the ideal triangles in $T_n$ to those in $T_{n+m}$, as required.
\end{enumerate}
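The eigenvector computation of Step 1 amounts to a few lines of linear
algebra; the following is a minimal numerical sketch (a numpy rendering for
illustration, not the Veering program itself):
\begin{verbatim}
# Minimal sketch of Step 1: dilatation and transverse measure from the
# transition matrix M of the graph map g.
import numpy as np

def measure_from_transition_matrix(M):
    M = np.asarray(M, dtype=float)
    w, V = np.linalg.eig(M)
    i = int(np.argmax(w.real))             # Perron-Frobenius eigenvalue
    lam = w[i].real
    v = V[:, i].real
    v = v / v[int(np.argmax(np.abs(v)))]   # scale so entries are positive
    return lam, v

# Hypothetical two-edge example with M = [[1, 1], [1, 0]].
lam, v = measure_from_transition_matrix([[1, 1], [1, 0]])
print(lam)   # ~1.618..., the golden ratio
\end{verbatim}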
\subsection{Step 2}\label{subsec:step2} We now describe Step 2 in more detail. Let $v$ be a switch of $\tau$ with degree greater than $3$. Let $e_1, e_2$ be consecutive edges incident to $v$ on the same side of the tangent space of $\tau$ at $v$, and let $U \subseteq S$ be a small closed disk such that $v \in \mbox{Int}(U)$ (see Figure \ref{fig:combing}.) Identify the arcs $e_1 \cap U$ and $e_2 \cap U$ to produce a train track, obtained from $\tau$ by \emph{combing}. By repeatedly combing we can obtain a trivalent train track, which we will denote by $\tau'$. The new edge is assigned a weight equal to the sum of the weights of edges $e_1$ and $e_2$. By Proposition \ref{combing} below, $\mathcal{L}$ is suited to $\tau'$. (Further discussion of combing can be found on page 40 of \S 1.4 of \cite{PENNER}.)
Assume first that $\tau'$ is obtained from $\tau$ by combing a single edge as in Figure \ref{fig:combing}. We modify the graph map $\mathfrak{g}$ to obtain a graph map $\tau' \rightarrow \tau'$ which we will continue to denote by $\mathfrak{g}$. This graph map will be used to detect periodicity in the splitting sequence of train tracks in Step 3. Let $E$ be the new edge introduced by combing which we orient away from $v$. Assume that the other edges of $\tau'$ are labelled the same as those of $\tau$ and that $e_1, e_2$ are oriented away from the vertex common with $E$ in $\tau'$ as in Figure \ref{fig:combing}. We think of the old edge $e_i$ as now being $E e_i$, $i \in \{1,2\}$. Set $\mathfrak{g}(E)$ to be the empty edge path. For each remaining edge $e$, and for each $e_i \in \{e_1,e_2\}$, modify $\mathfrak{g}(e)$ by replacing each occurrence of $e_i$ in $\mathfrak{g}(e)$ with $E e_i$ and each occurrence of $\overline{e}_i$ with $\overline{e}_i \overline{E}$. If $\tau'$ is obtained by combing more than one edge of $\tau$ then each time we comb an edge we update $\mathfrak{g}$ as described. The resulting map $\mathfrak{g}$ is a graph map representing $\varphi^\circ$.
Assume that $v'$ is a degree $2$ vertex of $\tau$ with incident edges $e_3$ and $e_4$, oriented away from $v'$. We declare the vertex to be removed from $\tau$ so that the union of $e_3$ and $e_4$ form a new edge $E = \overline{e_3}e_4$. Then we modify the graph map $\mathfrak{g}$ so that $E$ maps to $\mathfrak{g}(\overline{e_3}e_4)$, and we remove all occurrences of $e_3$ and $\overline{e}_3$ in the image of each edge. The edge $e_3$ is thought of as contracted into a vertex and $e_4$ lengthened to form $E$.
\begin{figure}[H]
\centerline{\includegraphics[height=135pt]{illustrations/combing.eps}}
\caption{Combing a train track}
\label{fig:combing}
\end{figure}
\begin{prop}\label{combing} Let $\mathcal{L}$ be a lamination suited to a train track $\tau' \subseteq S$. Let $\tau''$ be obtained from $\tau'$ by combing the edges $e_1, e_2$ of $\tau'$ with respect to the closed disk $U \subseteq S$. Then $\mathcal{L}$ is suited to $\tau''$.
\end{prop}
\begin{proof}
By definition since $\mathcal{L}$ is suited to $\tau'$ there exists a differentiable map $f : S \rightarrow S$ homotopic to the identity such that $f(\mathcal{L}) = \tau'$ and $f$ is non-singular on the tangent space to leaves of $\mathcal{L}$. Let $h : S \rightarrow S$ be a differentiable map homotopic to the identity which, intuitively, folds together $e_1 \cap U$ and $e_2 \cap U$. Then $h \circ f : S \rightarrow S$ is a differentiable map homotopic to the identity such that $h\circ f(\mathcal{L}) = \tau''$ and is non-singular on the tangent space to leaves of $\mathcal{L}$. The measure is also appropriately preserved by $h \circ f$.
\end{proof}
\subsection{Step 3}\label{combeqv} We denote by $\varphi^\circ : S^\circ \rightarrow S^\circ$ the restriction of $\varphi$ to the surface $S^\circ$ obtained by puncturing $S$ at the singular points of the invariant foliations. In this section we outline an effective procedure to detect the periodic splitting sequence associated with $\varphi^\circ$, given only $\tau$ and $\mathfrak{g} : \tau \rightarrow \tau$ as produced by Trains.
\subsubsection{Combinatorial equivalence of train tracks.}
Recall that we would like to perform maximal splits beginning with the train track $\tau$, producing a sequence of measured train tracks
$$\tau \rightharpoonup \tau_1 \rightharpoonup \cdots \rightharpoonup \tau_n \rightharpoonup \cdots \rightharpoonup \tau_{n+m} \rightharpoonup \cdots, \quad n,m\in\mathbb{Z}_{>0},$$
until we find that the sequence becomes periodic, that is, $\lambda^{-1}\varphi^\circ(\tau_n) = \tau_{n+m}$ for some $n,m \in \mathbb{Z}_{>0}$.
However, we do not keep track of the embedding of train tracks in the surface $S$ so we can not directly compare train tracks up to isotopy in $S$. Despite this, we can determine if two measured train tracks $\tau', \tau''$ are \emph{combinatorially equivalent}, that is, there is a bijection between their vertices and oriented edges which:
\begin{enumerate}[(i)]
\item induces an isomorphism of the combinatorial graph structure of the train tracks,
\item preserves the clockwise cyclic order of oriented edges around each switch,
\item preserves the smoothing at each switch (i.e. the list of edges on each side of the tangent space),
\item maps edges to edges of the same weight, and
\item preserves complementary regions, that is, if $p$ is a puncture of $S$, then the clockwise sequence of edges which form the boundary of the complementary region of $\tau$ containing $p$ are mapped by the bijection to the clockwise sequence of edges which form the boundary of the complementary region of $\tau'$ containing $p$.
\end{enumerate}
See \cite[\S 3.15]{Mosher2} for further discussion on combinatorial equivalence (note however, that we require the additional property that a combinatorial equivalence respects weights of edges.) We write $\tau' \sim \tau''$ to denote combinatorial equivalence. A combinatorial equivalence between two train tracks $\tau'$ and $\tau''$ induces a $C^1$-diffeomorphism $\tau' \rightarrow \tau''$ respecting the bijection of edges and vertices. In fact we have the following proposition.
\begin{prop}\label{prop:ceqdiff} Let $\tau', \tau''$ be combinatorially equivalent measured train tracks on a punctured surface $S$ each of which fill the surface, that is, the complementary regions of each of the train tracks are homeomorphic to disks or once-punctured disks. Then there exists a diffeomorphism $\phi : S \rightarrow S$ such that $\phi(\tau') = \tau''$ which induces the combinatorial equivalence.
\end{prop}
See \cite[Proposition 3.15.1]{Mosher2}.
Note that train tracks output by the Bestvina-Handel algorithm always fill the surface, and performing a maximal split on a filling train track results in a filling train track.
\subsubsection{Compute graph maps $\mathfrak{g}_i : \tau_i \rightarrow \tau_i$ representing $\varphi^\circ$, for $i \in \mathbb{Z}_{\ge 0}$.}
Singular points of the invariant foliations of $\varphi$ are in bijection with the complementary regions of $\tau \subset S$. Thus, in $S^\circ$ all complementary regions of $\tau$ (and $\tau_i$ for $i\in\mathbb{Z}_{>0}$) are punctured disks. For $i \in \mathbb{Z}_{>0}$, the train track $\tau_i$ is homotopy equivalent to $S^\circ$ and $\varphi^\circ$ induces a homotopy equivalence $\tau_i \rightarrow \tau_i$. Each time we perform a maximal split we compute such a homotopy equivalence, in fact, a graph map $\mathfrak{g}_i : \tau_i \rightarrow \tau_i$ representing $\varphi^\circ$, which is later used to compute the map $\varphi^\circ_* : \pi_1(S^\circ) \rightarrow \pi_1(S^\circ)$ mentioned in the outline in Section \ref{sec:outline}. This is done inductively, starting with the graph map $\mathfrak{g}_0 := \mathfrak{g}$ computed in Step 2, which was obtained by modifying the graph map produced by the Bestvina-Handel algorithm. Let $i \in \mathbb{Z}_{\ge 0}$ and assume that $\tau_{i+1}$ is obtained from $\tau_i$ by splitting a single large branch $E$, as shown in Figure \ref{fig:tt_split_isotopy}. The edge $e_1$ of $\tau_{i+1}$ can be thought of as represented by $\overline{E}e_1$ in $\tau_{i}$. Similarly, the edge $e_4$ of $\tau_{i+1}$ can be thought of as represented by $Ee_4$ of $\tau_{i}$. All other edges of $\tau_{i+1}$ are thought of as the same as the corresponding edges of $\tau_{i}$. We also think of $e_1$ of $\tau_i$ as being represented by $Ee_1$ of $\tau_{i+1}$ (and analogously for $e_4$.) This gives rise to the following modifications.
First define $\mathfrak{g}'_i : \tau_{i+1} \rightarrow \tau_i$ by setting $\mathfrak{g}'_i(e_1) = \mathfrak{g}_i(\overline{E}e_1)$ and $\mathfrak{g}'_i(e_4) = \mathfrak{g}_i(E e_4)$ (provided that $e_1 \neq \overline{e_4}$, otherwise set $\mathfrak{g}'_i(e_1) = \mathfrak{g}_i(\overline{E}e_1\overline{E})$) and $\mathfrak{g}'_i(e) = \mathfrak{g}_i(e)$ for all other edges $e$ of $\tau_{i+1}$, where we label the oriented edges of $\tau_{i+1}$ the same way as $\tau_i$ for edges not shown in Figure \ref{fig:tt_split_isotopy} and a bar above an edge indicates the edge with its direction reversed. Finally, $\mathfrak{g}_{i+1} : \tau_{i+1} \rightarrow \tau_{i+1}$ is obtained from $\mathfrak{g}'_i$ as follows. If $e$ is an edge of $\tau_{i+1}$ we set $\mathfrak{g}_{i+1}(e)$ to be the edge path given by replacing each edge $e_1$ by $E e_1$, and $e_4$ by $\overline{E}e_4$ in the edge path $\mathfrak{g}'_i(e)$ (and similarly for $\overline{e_1}$ and $\overline{e_4}$.) The case where $E$ is split the other way is dealt with analogously. If $\tau_{i+1}$ is obtained from $\tau_i$ by splitting multiple large branches, then we perform the above procedure for each split in any order.
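This bookkeeping is mechanical and is conveniently expressed on edge paths. The Python sketch below carries out the two stages for the split of Figure \ref{fig:tt_split_isotopy} in the case $e_1 \neq \overline{e_4}$; an edge path is stored as a list of pairs $(\mathrm{name}, \pm 1)$, and all names are illustrative rather than drawn from an actual implementation.
\begin{verbatim}
def rev(path):                       # reverse of an edge path
    return [(n, -s) for n, s in reversed(path)]

def reduce_path(path):               # cancel backtracking e ebar
    out = []
    for x in path:
        if out and out[-1] == (x[0], -x[1]):
            out.pop()
        else:
            out.append(x)
    return out

def apply_map(g, path):              # image of an edge path under g
    out = []
    for n, s in path:
        out += g[n] if s > 0 else rev(g[n])
    return reduce_path(out)

def split_update(g, E, e1, e4):
    """g_{i+1} from g_i when the large branch E is split (case
    e1 != reverse of e4); g maps edge names to edge paths."""
    gp = dict(g)                                  # the map g'_i
    gp[e1] = apply_map(g, [(E, -1), (e1, +1)])    # g_i(Ebar e1)
    gp[e4] = apply_map(g, [(E, +1), (e4, +1)])    # g_i(E e4)
    sub = {(e1, +1): [(E, +1), (e1, +1)],         # e1 -> E e1
           (e1, -1): [(e1, -1), (E, -1)],
           (e4, +1): [(E, -1), (e4, +1)],         # e4 -> Ebar e4
           (e4, -1): [(e4, -1), (E, +1)]}
    return {e: reduce_path([y for x in p for y in sub.get(x, [x])])
            for e, p in gp.items()}
\end{verbatim}
For a multiple split one would call this once per large branch, in any order, as described above.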
\begin{remark} In general for an edge $e$ the edge path $\mathfrak{g}_i(e)$, $i\in\mathbb{Z}_{>0}$, may enter and exit a vertex via edges on the same side of the tangent space at a switch.\end{remark}
\begin{figure}[H]
\centerline{\includegraphics[width=\textwidth]{diagrams/tt_split_isotopy.eps}}
\caption{$\tau_i \rightharpoonup \tau_{i+1}$. By convention the new edge is also labelled $E$. The orientations of edges have been chosen arbitrarily.}
\label{fig:tt_split_isotopy}
\end{figure}
\subsubsection{Determine when the sequence of train tracks becomes periodic.}
For each train track $\tau_j$ computed, we enumerate all combinatorial equivalences $\frac{1}{\lambda}\tau_i \sim \tau_j$ with $i < j$. Note that in general two train tracks may be combinatorially equivalent in multiple distinct ways. Suppose that $\frac{1}{\lambda}\tau_i \sim \tau_j$ and fix a combinatorial equivalence. By Proposition \ref{prop:ceqdiff}, the combinatorial equivalence is induced by a homeomorphism $\psi : S^\circ \rightarrow S^\circ$ such that $\frac{1}{\lambda}\psi(\tau_i) = \tau_j$. We determine whether $\psi$ and $\varphi^\circ$ are isotopic as homeomorphisms of $S^\circ$ by determining whether they induce the same outer automorphism of $\pi_1(S^\circ)$.
For $k \in \mathbb{Z}_{>0}$, there exists a map $h_k : S^\circ \rightarrow S^\circ$ homotopic to the identity, mapping $\tau_{k+1}$ onto $\tau_k$, which folds at the small branches of $\tau_{k+1}$ that arose by the maximal splitting $\tau_k \rightharpoonup \tau_{k+1}$. Then $h_k$ restricts to a map $h_k : \tau_{k+1} \rightarrow \tau_k$. For example, if $\tau_k \rightharpoonup \tau_{k+1}$ is as in Figure \ref{fig:tt_split_isotopy} with $i = k$, then $h_k(e_1) = \overline{E}e_1$, $h_k(e_4) = Ee_4$ and all other edges are mapped to the corresponding edge of $\tau_k$. By composing the restrictions, we obtain a map $f := h_{i} \circ h_{i+1} \circ \cdots \circ h_{j-1} : \tau_{j} \rightarrow \tau_i$, which is induced by a map $S^\circ \rightarrow S^\circ$ homotopic to the identity.
Let $\mathfrak{g}_i' := f \circ \psi : \tau_i \rightarrow \tau_i$. Now $\mathfrak{g}_i'$ is a graph map representing $\psi$, and $\mathfrak{g}_i$ is a graph map representing $\varphi^\circ$, both in terms of the underlying train track $\tau_i$. Hence it suffices to determine whether they are homotopic graph maps. We do so by checking if they induce the same outer automorphism of $\pi_1(S^\circ) \cong \pi_1(\tau_i)$.
Let $v$ be a vertex of $\tau_i$. By a homotopy if necessary, we can assume that $\mathfrak{g}_i'$ and $\mathfrak{g}_i$ fix $v$. Then $\psi$ and $\varphi^\circ$ are isotopic in $S^\circ$ if and only if $(\mathfrak{g}_i')_*$ and $(\mathfrak{g}_i)_*$ are equal as outer automorphisms of the free group $\pi_1(\tau_i, v)$, that is, there exists $\alpha \in \pi_1(\tau_i, v)$ such that $(\mathfrak{g}_i')_*(\gamma) = \alpha \cdot (\mathfrak{g}_i)_*(\gamma) \cdot \alpha^{-1}$ for all $\gamma \in \pi_1(\tau_i, v)$. Noting that $\pi_1(\tau_i, v)$ is a free group on finitely many generators, we can algorithmically check this last condition as follows.
Let $F_n = \langle a_1, a_2, \ldots, a_n \rangle$ be the free group on $n > 1$ generators, and let $f, g : F_n \rightarrow F_n$ be group automorphisms. Our goal is to find $x \in F_n$ such that $f(a_i) = x g(a_i) x^{-1}$ for $i \in \{1,2,\ldots,n\}$, or show that such an $x$ does not exist. If there exists such an $x$, then the cyclically reduced parts of $f(a_i)$ and $g(a_i)$ agree up to cyclic permutation for $i \in \{1,2,\ldots,n\}$, so we can write $f(a_1) = x_1 g(a_1) x_1^{-1}$ and $f(a_2) = x_2 g(a_2) x_2^{-1}$, for some $x_1, x_2 \in F_n$. Since $f$ is an automorphism we know that $f(a_1),\ldots,f(a_n)$ forms a basis for $F_n$. Hence for $i \in \{1,2,\ldots,n\}$, the stabiliser of $f(a_i)$ under the conjugation action is $\{f(a_i)^k\, |\, k \in \mathbb{Z}\}$. We have $x^{-1} f(a_1) x = x_1^{-1} f(a_1) x_1$, so that $x x_1^{-1} = f(a_1)^k$ for some $k \in \mathbb{Z}$. Similarly, $x = f(a_2)^m x_2$ for some $m \in \mathbb{Z}$. Hence $f(a_2)^{-m} f(a_1)^k = x_2 x_1^{-1}$, and we try to solve this last equation for $k, m \in \mathbb{Z}$. Since $x_2 x_1^{-1}$ as a reduced word in the generators is of finite length, there are only finitely many possible choices of $k, m$ to check; otherwise, as a reduced word, $f(a_2)^{-m} f(a_1)^k$ would have length exceeding the length of $x_2 x_1^{-1}$. If a solution exists, it is unique: since $f(a_1), \ldots, f(a_n)$ forms a basis for $F_n$, the word $f(a_2)^{-m} f(a_1)^k$ is distinct for distinct pairs $(k,m)$. Finally, if we are able to find $k, m$ solving the equation, then letting $x = f(a_1)^k x_1$, we can check whether $f(a_i) = x g(a_i) x^{-1}$ for $i \in \{1,2,\ldots,n\}$, as required.
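The whole check is short to implement. In the Python sketch below, words in $F_n$ are strings over lower-case generators with upper-case inverses; for simplicity the search window for $(k,m)$ is a crude bound, for which the exact bound coming from the length of $x_2x_1^{-1}$, as argued above, could be substituted. The final print statement reproduces the comparison of $(\mathfrak{g}_1)_*$ and $(h\circ\psi_1)_*$ carried out in Section \ref{sec:example}.
\begin{verbatim}
def inv(w):  return w.swapcase()[::-1]         # inverse word

def red(w):                                    # free reduction
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def mult(*ws): return red("".join(ws))

def pw(w, n):  return red((w if n >= 0 else inv(w)) * abs(n))

def cyc_split(w):           # w = p c p^{-1}, c cyclically reduced
    p = ""
    while len(w) > 1 and w[0] == w[-1].swapcase():
        p, w = p + w[0], w[1:-1]
    return p, w

def solve_conj(w1, w2):
    """Some x with w1 = x w2 x^{-1}, or None if not conjugate."""
    p1, c1 = cyc_split(red(w1))
    p2, c2 = cyc_split(red(w2))
    for k in range(max(len(c2), 1)):           # match a cyclic rotation
        if c2[k:] + c2[:k] == c1:              # c1 = u^{-1} c2 u, u = c2[:k]
            return mult(p1, inv(c2[:k]), inv(p2))
    return None

def same_outer(f, g, gens):
    """Do the automorphisms f, g (dicts gen -> word) agree up to an
    inner automorphism of the free group on gens?"""
    x1 = solve_conj(f[gens[0]], g[gens[0]])
    x2 = solve_conj(f[gens[1]], g[gens[1]])
    if x1 is None or x2 is None:
        return False
    target = mult(x2, inv(x1))
    B = len(target) + 2                        # crude search window
    for k in range(-B, B + 1):
        for m in range(-B, B + 1):
            if mult(pw(f[gens[1]], -m), pw(f[gens[0]], k)) == target:
                x = mult(pw(f[gens[0]], k), x1)   # unique solving pair
                return all(red(f[a]) == mult(x, g[a], inv(x)) for a in gens)
    return False

# The two maps compared in the example below; conjugator x = a^{-1}.
print(same_outer({"a": "aba", "b": "ba"},
                 {"a": "aab", "b": "ab"}, "ab"))    # True
\end{verbatim}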
\section{Example}\label{sec:example}
Let $T$ be the once-punctured torus given by identifying opposite sides of the square with its vertices removed, as shown in Figure \ref{torus_twist_curves}. Let $f : T \rightarrow T$ be the homeomorphism given by a left Dehn twist in $c_2$, followed by a right Dehn twist in $c_1$, where $c_1$ and $c_2$ are the curves shown in Figure \ref{torus_twist_curves}.
\begin{figure}[H]
\centerline{\includegraphics[height=180pt]{diagrams/torus_twist_curves.eps}}
\caption{We orient the surface using an anticlockwise rotation direction.}
\label{torus_twist_curves}
\end{figure}
By applying the Bestvina-Handel algorithm we find that $f$ is a pseudo-Anosov homeomorphism with measured train track $\tau$ suited to the stable measured geodesic lamination as shown in Figure \ref{fig:torus_inv_tt}. In brief, we will convert $\tau$ into a trivalent train track. We then split the train track until we obtain the periodic splitting sequence, and construct the ideal triangulation of the mapping torus $M_f$.
\begin{figure}[H]
\centerline{\includegraphics[height=180pt]{diagrams/torus_tt_init.eps}}
\caption{Train track $\tau$, where $\phi = \frac{1}{2}(1+\sqrt{5})$ is the golden ratio.}
\label{fig:torus_inv_tt}
\end{figure}
\vspace{-3mm}
The initial train track map on $\tau$ is given by
\begin{eqnarray*}
\mathfrak{g} : \tau &\rightarrow& \tau \\
a &\mapsto& aba \\
b &\mapsto& ba
\end{eqnarray*}
The train track $\tau$ has one vertex of degree four. As in Section \ref{subsec:step2} we comb edges $a$ and $b$, introducing a new edge $c$ as shown in Figure \ref{fig:torus_trivalent_tt} below. We continue to denote the resulting trivalent train track by $\tau$. The weight of edge $c$ is the sum of the weights of edges $a$ and $b$.
\begin{figure}[H]
\centerline{\includegraphics[height=180pt]{diagrams/torus_trivalent_tt.eps}}
\caption{Trivalent weighted train track $\tau$.}
\label{fig:torus_trivalent_tt}
\end{figure}
The resulting map $\mathfrak{g}'$ on $\tau$ is given by
\begin{eqnarray*}
\mathfrak{g}' : \tau &\rightarrow& \tau \\
a &\mapsto& cacbca \\
b &\mapsto& cbca \\
c &\mapsto& \mbox{(empty path)}
\end{eqnarray*}
which is computed from $\mathfrak{g}$ by replacing $a$ by $ca$ and $b$ by $cb$ in the image of each edge, and mapping $c$ to the empty path ($c$ gets mapped into a vertex.) We will drop primes and denote $\mathfrak{g}'$ by $\mathfrak{g}$.
Let $\tau_0 = \tau$, as this will be our initial train track. We compute the image of $\tau_0$ after applying $f$, see Figure \ref{fig:torus_tt_image}. We will first detect the periodic splitting sequence by drawing pictures of train tracks on the surface keeping track of the embedding of the train tracks on the surface. We then execute our algorithm to detect periodicity combinatorially, which does not require train track embedding information, verifying that both methods agree.
\begin{figure}[H]
\centerline{\includegraphics[height=180pt]{diagrams/torus_tt_image.eps}}
\caption{$f(\tau_0)$ up to isotopy.}
\label{fig:torus_tt_image}
\end{figure}
In Figure \ref{fig:torus_tt_split}, we illustrate three maximal splits starting with $\tau_0$.
\begin{figure}[H]
\centerline{\includegraphics[width=380pt]{diagrams/torus_tt_split.eps}}
\caption{Splitting sequence. We have indicated the weight of the branch undergoing a split.}
\label{fig:torus_tt_split}
\end{figure}
We claim that $f(\lambda^{-1}\tau_1) = \tau_3$, that is, $f(\tau_1)$ and $\tau_3$ are isotopic in such a way that after scaling the weights of $\tau_3$ by the dilatation $\lambda = 1+\phi$, the branches of the train tracks which are identified have equal weights. We compute $f(\tau_1)$ by splitting $f(\tau_0)$ (Figure \ref{fig:torus_tt_image}), this is shown in Figure \ref{fig:torus_tt_isotopy}. We see that $f(\tau_1) = \tau_3$, given by $f(a) = c$, $f(b) = a$ and $f(c) = b$ and that the weights are preserved after scaling, e.g. $\lambda \mu_3(a) = \lambda (2-\phi) = 1$.
\begin{figure}[H]
\centerline{\includegraphics[width=380pt]{diagrams/torus_tt_isotopy.eps}}
\caption{Isotoping $\tau_3$ shows that $f(\tau_1) = \tau_3$. The dotted arrows are aids to visualise the isotopies.}
\label{fig:torus_tt_isotopy}
\end{figure}
We now illustrate how the periodic splitting sequence is detected combinatorially following Section \ref{combeqv}. We compute a homotopy equivalence $\mathfrak{g}_1 : \tau_1 \rightarrow \tau_1$ obtained from $\mathfrak{g}$. First we compute the intermediate function $\mathfrak{g}_1'$.
\begin{eqnarray*}
\mathfrak{g}_1' : \tau_1 &\rightarrow& \tau_0 \\
a &\mapsto& \mathfrak{g}(a) = cacbca \\
b &\mapsto& \mathfrak{g}(cbc) = cbca \\
c &\mapsto& \mathfrak{g}(c) = \mbox{(empty path)}.
\end{eqnarray*}
We now replace every occurrence of $b$ in the image of an edge with $\overline{c}b\overline{c}$.
\begin{eqnarray*}
\mathfrak{g}_1 : \tau_1 &\rightarrow& \tau_1 \\
a &\mapsto& cac(\overline{c}b\overline{c})ca \\
b &\mapsto& c(\overline{c}b\overline{c})ca \\
c &\mapsto& \mbox{(empty path)}.
\end{eqnarray*}
For convenience, we homotope $\mathfrak{g}_1$ to remove backtracking of the form $c\overline{c}$ or $\overline{c}c$ in the image of each edge, so that $\mathfrak{g}_1(a) = caba$ and $\mathfrak{g}_1(b) = ba$. There are two distinct combinatorial equivalences between $\tau_1$ and $\tau_3$, induced by homeomorphisms of $S$ which restrict to maps $\psi_1 : \tau_1 \rightarrow \tau_3$, given by $\psi_1(a) = c, \psi_1(b) = a, \psi_1(c) = b$, and $\psi_2 : \tau_1 \rightarrow \tau_3$, which induces a different bijection of the oriented edges.
We compute $h_1 : \tau_2 \rightarrow \tau_1$, $h_1(a) = a$, $h_1(b) = b$, $h_1(c) = aca$ and $h_2 : \tau_3 \rightarrow \tau_2$, $h_2(a) = bab$, $h_2(b) = b$, $h_2(c) = c$. Let $h = h_1 \circ h_2 : \tau_3 \rightarrow \tau_1$. Then $h\circ \psi_1 : \tau_1 \rightarrow \tau_1$, $a \mapsto aca$, $b \mapsto bab$, $c \mapsto b$.
Let $v \in \tau_1$ be the terminal vertex of $c$. Then $\mathfrak{g}_1$ does not fix $v$. We homotope $\mathfrak{g}_1$ along $c$, so that $\mathfrak{g}_1(a) = \overline{c}(caba)c = abac$, $\mathfrak{g}_1(b) = \overline{c}(ba)c$, and $\mathfrak{g}_1(c)$ is the empty path (notice that we have removed backtracking.) Then $\mathfrak{g}_1$ fixes $v$.
The fundamental group $\pi_1(\tau_1, v)$ is the free group generated by the two loops $A = ac$ and $B = \overline{c}b$. We check that $(\mathfrak{g}_1)_*$ and $(h\circ\psi_1)_*$ are equal as outer automorphisms of $\pi_1(\tau_1, v)$.
We have $(\mathfrak{g}_1)_* : \pi_1(\tau_1, v) \rightarrow \pi_1(\tau_1, v)$, $A = ac \mapsto abac = ABA$, $B = \overline{c}b \mapsto \overline{c}bac = BA$, and $(h\circ\psi_1)_* : \pi_1(\tau_1, v) \rightarrow \pi_1(\tau_1, v)$, $A = ac \mapsto acab = AAB$, $B = \overline{c}b \mapsto \overline{b}bab = AB$. We see that $(\mathfrak{g}_1)_*(\alpha) = A^{-1} (h\circ\psi_1)_*(\alpha) A$ for all $\alpha \in \pi_1(\tau_1, v)$, as required.
It may be checked that the combinatorial equivalence $\psi_2$ does not induce the correct outer automorphism of $\pi_1(\tau_1, v)$, but we do not show the details.
We use the periodic splitting sequence to construct the veering triangulation, see Figure \ref{fig:torus_tt_layers}. We draw the triangulation of the punctured torus $T$ dual to each train track in the periodic splitting sequence, to obtain ``layers'' of triangulations of $T$. Between every two layers a tetrahedron is inserted which interpolates between the layers. Layers $0$ and $2$ are then glued by the monodromy.
\begin{figure}[H]
\centerline{\includegraphics[width=\columnwidth]{diagrams/torus_tt_layers.eps}}
\caption{Triangulations of the punctured torus, dual to the train tracks of the splitting sequence. For aesthetic reasons we rearrange the triangles in layers 0 and 1 so that the diagonal of the parallelogram crosses the large branch of a train track.}
\label{fig:torus_tt_layers}
\end{figure}
Let $T_1$ and $T_2$ be tetrahedra $1$ and $2$, respectively.
Since the triangles of the front (back) face of $T_1$ ($T_2$) correspond to triangles of layer $1$, we obtain face pairings between the front face of $T_1$ and the back face of $T_2$. In order to completely specify all the tetrahedron face pairings it is necessary to pair the triangles of tetrahedron $1$ attached to layer $0$ to the triangles of tetrahedron $2$ attached to layer $2$.
The edges of triangles in layer $0$ are in one to one correspondence with oriented branches of $\tau_1$ which emanate from the unique vertex of $\tau_1$ interior to the triangle. Since we know that $f(\tau_1) = \tau_3$, with $f(a) = c$, $f(b) = a$ and $f(c) = b$ in agreement with $\psi_1$, we use this information to determine the remaining face pairings. Hence, we have that face $(102)$ of $T_1$ is paired with face $(103)$ of $T_2$, and face $(032)$ of $T_1$ is paired with face $(132)$ of $T_2$.
\begin{center}
\begin{table}[htbp]
\centering
\caption{Tetrahedron face pairings}
\begin{tabular}{|c|c|}
\hline
\mbox{Tetrahedron } 1 & \mbox{Tetrahedron } 2 \\ \hline
(103) & (203) \\
(321) & (021) \\
(102) & (103) \\
(032) & (132) \\
\hline
\end{tabular}
\end{table}
\end{center}
\section{Conjugacy testing}\label{sec:conjtest}
Given pseudo-Anosov homeomorphisms $\phi, \phi' : S \rightarrow S$ of a punctured surface $S$, in this section we describe an algorithm to effectively decide whether or not $\phi$ and $\phi'$ are conjugate in $\mbox{MCG}(S)$. We have implemented this algorithm as part of our computer program.
\begin{mydef} Let $S$ be a punctured surface. Let $\phi, \phi' : S \rightarrow S$ be pseudo-Anosov homeomorphisms with periodic splitting sequences
$$(\tau_0, \mu_0) \rightharpoonup^m (\tau_m, \mu_m) = \phi(\tau_0, \frac{1}{\lambda}\mu_0) \rightharpoonup \cdots$$
and
$$(\tau'_0, \mu'_0) \rightharpoonup^n (\tau'_n, \mu'_n) = \phi'(\tau'_0, \frac{1}{\lambda'}\mu'_0) \rightharpoonup \cdots$$
respectively, where $n,m \in \mathbb{Z}_{>0}$ and $\lambda, \lambda' \in \mathbb{R}_{>0}$ are the dilatations of $\phi$ and $\phi'$ respectively, as in Theorem \ref{thm:splitting}. We denote the periodic splitting sequences of $\phi, \phi'$ by $\mathcal{S}, \mathcal{S}'$ respectively. We say that $\mathcal{S}$ and $\mathcal{S}'$ are \emph{combinatorially isomorphic} if there exists a diffeomorphism which conjugates between them up to rescaling, that is, $n = m$ and there exists $h : S \rightarrow S$, $p,q \in \mathbb{Z}_{\ge 0}$ and $c \in \mathbb{R}_{>0}$ such that:
\begin{enumerate}[(i)]
\item $\phi' = h \circ \phi \circ h^{-1}$, and
\item $h(\tau_{p+i}, \mu_{p+i}) = (\tau'_{q+i}, c\mu'_{q+i})$, for all $i \in \mathbb{Z}_{\ge 0}$.
\end{enumerate}
\end{mydef}
\begin{lemma}[Corollary 3.4 of \cite{MR2866919}]\label{lem:comsplit} Let $\mathcal{L}$ be a measured geodesic lamination suited to train tracks $(\tau, \mu), (\tau', \mu')$. Then $(\tau, \mu)$ and $(\tau', \mu')$ eventually split to a common train track, that is, there exists $(\tau'', \mu'')$ and $n,m\in\mathbb{Z}_{\ge 0}$ such that $(\tau, \mu) \rightharpoonup^n (\tau'', \mu'')$ and $(\tau', \mu') \rightharpoonup^m (\tau'', \mu'')$.
\end{lemma}
\begin{thm}\label{thm:conj} Let $\phi, \phi' : S \rightarrow S$ be pseudo-Anosov homeomorphisms with periodic splitting sequences
$\mathcal{S}$ and $\mathcal{S}'$ respectively, as in Theorem \ref{thm:splitting}.
Then $\phi$ and $\phi'$ are conjugate in $\mbox{MCG}(S)$ if and only if $\mathcal{S}$ and $\mathcal{S}'$ are combinatorially isomorphic.
\end{thm}
\begin{proof} Assume that $\phi$ and $\phi'$ are conjugate, and let $h \in \mbox{MCG}(S)$ such that $\phi' = h \circ \phi \circ h^{-1}$.
Let the periodic splitting sequences $\mathcal{S}$ and $\mathcal{S}'$ be given by
$$(\tau_0, \mu_0) \rightharpoonup^m (\tau_m, \mu_m) = \phi(\tau_0, \frac{1}{\lambda}\mu_0) \rightharpoonup \cdots$$
and
$$(\tau'_0, \mu'_0) \rightharpoonup^n (\tau'_n, \mu'_n) = \phi'(\tau'_0, \frac{1}{\lambda'}\mu'_0) \rightharpoonup \cdots$$
respectively, where $n,m \in \mathbb{Z}_{>0}$ and $\lambda, \lambda' \in \mathbb{R}_{>0}$ are the dilatations of $\phi$ and $\phi'$ respectively.
Let $\mathcal{L}, \mathcal{L}'$ be the stable measured geodesic laminations of $\phi, \phi'$ which are suited to $(\tau_0, \mu_0), (\tau'_0, \mu'_0)$ respectively.
Then $(\tau'', \mu'') := h(\tau_0, \mu_0)$ is an invariant train track for $\phi'$ and there exists a stable measured geodesic lamination $\mathcal{L}''$ suited to $(\tau'', \mu'')$ (which is essentially given by straightening the leaves of $h(\mathcal{L})$.) By uniqueness of the stable lamination of $\phi'$ in $\mathcal{PML}(S)$, there exists $c \in \mathbb{R}_{>0}$ such that $\mathcal{L}' = c\,\mathcal{L}''$, where $c\,\mathcal{L}''$ denotes the measured geodesic lamination equal to $\mathcal{L}''$ as a lamination but with transverse measure scaled by $c$. Then $\mathcal{L}'$ is suited to $(\tau'', c\mu'')$ and $(\tau'_0, \mu'_0)$. Therefore by Lemma \ref{lem:comsplit}, we know that $(\tau'', c\mu'') = h(\tau_0, c\mu_0)$ and $(\tau'_0, \mu'_0)$ have a common split, that is, there exists $p,q \in \mathbb{Z}_{\ge 0}$ such that
$h(\tau_p, c\mu_p) = (\tau'_q, \mu'_q)$, which implies
$$h(\tau_{p+i}, c\mu_{p+i}) = (\tau'_{q+i}, \mu'_{q+i}) \mbox{ for }i\in\mathbb{Z}_{\ge 0}.$$
Thus, $\mathcal{S}$ and $\mathcal{S}'$ are combinatorially isomorphic.
We now prove the other direction. Assume that $\mathcal{S}$ and $\mathcal{S}'$ are combinatorially isomorphic. Then by definition there exists a diffeomorphism $h \in \mbox{MCG}(S)$ such that $\phi' = h \circ \phi \circ h^{-1}$, as required.
\end{proof}
We remark that Theorem \ref{thm:conj} is similar to \cite[Theorem 10.3.2]{Mosher2} of Lee Mosher.
\subsection{Algorithm for determining conjugacy in \texorpdfstring{$\mbox{MCG}(S)$}{MCG(S)}}
Proposition \ref{prop:ceqdiff} allows us to determine combinatorially whether or not two periodic splitting sequences are combinatorially isomorphic, which when combined with Theorem \ref{thm:conj} gives an effective algorithm for testing for conjugacy of pseudo-Anosov mapping classes in the mapping class group.
Let $\phi, \phi' : S \rightarrow S$ be pseudo-Anosov homeomorphisms with periodic splitting sequences
$$(\tau_0, \mu_0) \rightharpoonup^m (\tau_m, \mu_m) = \phi(\tau_0, \lambda^{-1}\mu_0) \rightharpoonup \cdots$$
and
$$(\tau'_0, \mu'_0) \rightharpoonup^n (\tau'_n, \mu'_n) = \phi'(\tau'_0, \lambda'^{-1}\mu'_0) \rightharpoonup \cdots$$
respectively, where $n,m \in \mathbb{Z}_{>0}$ and $\lambda, \lambda' > 1$ are the dilatations of $\phi$ and $\phi'$ respectively, as in Theorem \ref{thm:splitting}.
We determine whether or not $\phi$ and $\phi'$ are conjugate in $\mbox{MCG}(S)$ using the following procedure; a code sketch is given after the enumeration.
\begin{enumerate}[(i)]
\item If $n \neq m$ or $\lambda \neq \lambda'$ then $\phi$ and $\phi'$ are not conjugate in $\mbox{MCG}(S)$.
\item Otherwise, we enumerate all combinatorial equivalences (up to rescaling of measures) between $(\tau_i, \mu_i)$ and $(\tau'_j, \mu'_j)$, for $0 \le i < m = n$ and $0 \le j < n$, then check whether any combinatorial equivalence conjugates between $\phi$ and $\phi'$ (in fact one only needs to check this for $i = 0$ and $0 \le j < n$.)
More precisely, let $\Phi$ be a combinatorial equivalence between $(\tau_i, \mu_i)$ and $(\tau'_j, c\mu'_j)$, where $c \in \mathbb{R}_{>0}$, $0 \le i < n$ and $0 \le j < n$. Then $\Phi$ gives a bijection between the edges of the train tracks which we denote by
$$\Phi_0 : \mathcal{E}_{dir}(\tau_i) \rightarrow \mathcal{E}_{dir}(\tau'_j),$$
where the notation $\mathcal{E}_{dir}(\tau_i)$ means the set of directed edges of $\tau_i$.
Furthermore, by splitting the train tracks $(\tau_i, \mu_i)$ and $(\tau'_j, \mu'_j)$ simultaneously, keeping track of the bijection between the edges induced by $\Phi_0$, we obtain a combinatorial equivalence
$$\Phi_n : \mathcal{E}_{dir}(\tau_{i+n}) \rightarrow \mathcal{E}_{dir}(\tau'_{j+n}).$$
($\Phi_0$ and $\Phi_n$ should be thought of as representing the diffeomorphism induced by $\Phi$ given by Proposition \ref{prop:ceqdiff}, in terms of different underlying train tracks.)
The homeomorphisms $\phi$ and $\phi'$ induce combinatorial equivalences
$$\phi_{*} : \mathcal{E}_{dir}(\tau_i) \rightarrow \mathcal{E}_{dir}(\tau_{i+n}) \mbox{ and }\phi'_{*} : \mathcal{E}_{dir}(\tau'_j) \rightarrow \mathcal{E}_{dir}(\tau'_{j+n}),$$
respectively.
\begin{enumerate}[(a)]
\item If $\phi'_{*} \circ \Phi_0 = \Phi_n \circ \phi_{*}$ then we conclude that $\phi$ and $\phi'$ are conjugate in $\mbox{MCG}(S)$, as this implies $\phi' = h \circ \phi \circ h^{-1}$ where $h \in \mbox{MCG}(S)$ is the homeomorphism inducing the combinatorial equivalence $\Phi$ as in Proposition \ref{prop:ceqdiff}.
\item Otherwise, if for no combinatorial equivalence does (a) hold, then $\phi$ and $\phi'$ are not conjugate in $\mbox{MCG}(S)$.
\end{enumerate}
\end{enumerate}
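For concreteness, the procedure above can be sketched compactly in Python. In the sketch below the combinatorial-equivalence enumeration, the simultaneous-splitting push-forward, and the induced maps $\phi_*, \phi'_*$ are passed in as callables, since their implementations were described earlier; each equivalence is represented as a dictionary on oriented edges, and all names are illustrative.
\begin{verbatim}
def compose(f, g):                  # (g after f) on oriented edges
    return {e: g[f[e]] for e in f}

def conjugate_in_mcg(n, m, lam, lam_p, equivalences, push_forward,
                     phi_star, phi_p_star):
    """Steps (i)-(ii) of the conjugacy test.

    equivalences(j): iterator over combinatorial equivalences Phi_0
      between tau_0 and tau'_j (weights compared up to rescaling);
    push_forward(Phi_0): the equivalence Phi_n obtained by splitting
      both tracks simultaneously n times;
    phi_star(i), phi_p_star(j): the equivalences induced by phi, phi'.
    """
    if n != m or lam != lam_p:      # step (i); in practice compare the
        return False                # dilatations within a tolerance
    for j in range(n):              # i = 0 suffices, as noted above
        for Phi0 in equivalences(j):
            Phin = push_forward(Phi0)
            if compose(Phi0, phi_p_star(j)) == compose(phi_star(0), Phin):
                return True         # condition (a)
    return False                    # condition (b)
\end{verbatim}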
\begin{remark} Given a pseudo-Anosov homeomorphism $\phi$, with periodic splitting sequence of train tracks $(\tau_0, \mu_0) \rightharpoonup^m (\tau_m, \mu_m)$, the measures $\mu_0,\ldots,\mu_m$ are determined up to scaling by the (unmeasured) train tracks $\tau_0,\ldots,\tau_m$ together with the combinatorial equivalence given by $\phi(\lambda^{-1}\tau_0) = \tau_m$, where $\lambda$ is the dilatation of $\phi$. This follows from the argument given in the last paragraph of the proof of Proposition 4.2 in \cite{MR2866919}. Thus, in Step (ii) of our algorithm above, we may weaken our notion of combinatorial equivalence so that we no longer require that a combinatorial equivalence preserves weights of edges of train tracks. As long as the condition in (a) is satisfied the weights are guaranteed to agree. This allows us to avoid comparing weights of edges of train tracks.
Note, however, that in order to compute the train tracks $\tau_0,\ldots,\tau_m$ we need to be able to perform maximal splits, which requires that we identify all edges of a given train track which have maximal weight. In theory, this can be done exactly since the weights of the train tracks can be assumed to be algebraic numbers, as the weights of the initial train track in our algorithm are given by the entries of an eigenvector of an integer valued matrix and thus can be chosen to be algebraic numbers, and weights of subsequent train tracks are obtained from these by algebraic operations. In practice, in our program we only keep track of the weights of train tracks to a certain number of decimal places. Hence, if our program outputs that two homeomorphisms are conjugate then they are guaranteed to be, but if our program outputs that two homeomorphisms are not conjugate then there is a (small) chance that numerical errors occurred and that the homeomorphisms are in fact conjugate.
\end{remark}
\section{Results}\label{chapter:results}
We have included the beginnings of the longer tables of data which we have collected in Appendix \ref{app:tables}. A number of triangulations which SnapPy reports to be non-geometric were found; for the case of pseudo-Anosov homeomorphisms of a once-punctured genus 2 surface, a table of such examples is also given in Appendix \ref{app:tables}.
Many of the veering triangulations produced are relatively large, and could be simplified by SnapPy to much smaller triangulations, see Figure \ref{fig:tri_table}. Heuristically, triangulations of a fixed hyperbolic manifold with a relatively small number of tetrahedra have a better chance of being geometric, and SnapPy makes an effort to simplify triangulations of link complements that it produces \cite[Chapter 10]{MR2179010}.
\begin{figure}[H]
\centerline{\includegraphics[width=350pt]{illustrations/tetsize_graph.eps}}
\caption{Vertical axis: number of tetrahedra in the veering triangulation. Horizontal axis: number of tetrahedra after the veering triangulation is simplified by SnapPy. Red squares indicate veering triangulations that SnapPy reports are non-geometric.}
\label{fig:tri_table}
\end{figure}
\section{A non-geometric example}\label{sec:nongeo}
In this section we describe the simplest example that we have found of a triangulation
that is non-geometric as reported by SnapPy, and give a rigorous proof that it is indeed non-geometric. This is a 13-tetrahedron triangulation $\mathcal{T}$ coming from the pseudo-Anosov homeomorphism $\varphi = T_{c_1} \circ T_{b_2} \circ T_{a_1} \circ T_{a_1} \circ T_{a_1} \circ T_{b_1} \circ T_{a_1}$ of the once-punctured genus 2 surface, where $T_{\gamma}$ denotes a Dehn twist in the curve $\gamma$ of Figure \ref{fig:twist_curves}. We computed a measured train track $\tau$ suited to the stable geodesic lamination of $\varphi$ which is shown in Figure \ref{fig:nongeo_tt}, where the octagon is punctured at the vertices, and edge identifications are determined by matching arrows on edges.
\begin{minipage}[t]{0.75\textwidth}
\begin{figure}[H]
\includegraphics[height=200pt]{illustrations/nongeo_tt.eps}
\caption{Invariant train track $\tau$ obtained by performing the Bestvina-Handel algorithm.}
\label{fig:nongeo_tt}
\end{figure}
\end{minipage}
\begin{minipage}[t]{0.23\textwidth}
Approximate weights of branches:
$$\bordermatrix{ & \cr
j_1 & 1.34601 \cr
j_2 & 1.46574 \cr
j_3 & 1.30459 \cr
j_4 & 1.46574 \cr
a & 0.50717 \cr
b & 1.46574 \cr
c & 0.34601 \cr
d & 0.46574 \cr
x & 0.95857 \cr
y & 0.83885 \cr
z & 1.00000}.$$
\end{minipage}
A graph map representing $\varphi$ given by applying the Bestvina-Handel algorithm is given by $\mathfrak{g} : \tau \rightarrow \tau$:
\begin{multicols}{3}
\begin{eqnarray*}
a &\mapsto& d\ j_4\ b\ \overline{j_2}\ \overline{a}\ j_1\ \overline{z} \\
b &\mapsto& y\ j_3\ \overline{x}\ j_2\ \overline{b} \\
c &\mapsto& x \\
d &\mapsto& \overline{b}\ \overline{j_4}\ z
\end{eqnarray*}
\begin{eqnarray*}
x &\mapsto& z\ \overline{j_1}\ y\ j_3\ \overline{x}\ j_2\ \overline{b}\ \overline{j_4}\ z\ \overline{j_1}\ a \\
y &\mapsto& d\ j_4\ b \\
z &\mapsto& \overline{c} \\
\end{eqnarray*}
\begin{eqnarray*}
j_1 &\mapsto& j_3 \\
j_2 &\mapsto& j_4 \\
j_3 &\mapsto& \overline{j_2} \\
j_4 &\mapsto& \overline{j_1}
\end{eqnarray*}
\end{multicols}
The dilatation of $\varphi$ is $\lambda = 2.89005\ldots$, which is the largest real root of $x^4 - 2x^3 - 2x^2 - 2x + 1$. From the invariant train track we see that the invariant measured foliations have a $6$-prong singular point at the puncture, and that there are no other singular points. The mapping torus $M_\varphi$ is $1$-cusped and has hyperbolic volume $4.85117\ldots$. It is identified as manifold s479 in the SnapPy census of manifolds, which has a geometric triangulation of $M_\varphi$ by $6$ ideal tetrahedra. The triangulation of the boundary torus of $M_\varphi$ is shown in Figure \ref{fig:dtor_btorus}.
\begin{figure}[H]
\centerline{\includegraphics[width=\columnwidth]{illustrations/dtor_btorus}}
\caption{Induced triangulation of boundary torus of the triangulation $\mathcal{T}$. Arrows indicate edge identifications. SnapPy finds a solution to the {\em gluing and completeness equations} (see \cite{ThNotes}) where the shaded triangles correspond to the tetrahedron reported to be negatively oriented. The notation $6_3$ in a triangle means the truncated end of tetrahedron 6 at vertex 3.}
\label{fig:dtor_btorus}
\end{figure}
Generally, we cannot immediately rule out the possibility that SnapPy finds a non-geometric solution even though a geometric solution exists. We now outline how we rigorously verified that the triangulation $\mathcal{T}$ is non-geometric.
\begin{mydef} Let $\mathcal{T}$ be an ideal triangulation, with tetrahedron edge parameters given by a vector $\vec{z}$. The \emph{algebraic volume} $\mbox{vol}(\vec{z})$ of $\vec{z}$ is the sum of the signed volumes of hyperbolic tetrahedra with shapes given by $\vec{z}$, i.e. where a negatively oriented tetrahedron subtracts its volume from the sum. If it is clear which edge parameters we are referring to, we shall write $\mbox{vol}(\mathcal{T})$ for $\mbox{vol}(\vec{z})$.
\end{mydef}
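The algebraic volume is conveniently computed via the Bloch-Wigner dilogarithm $D(z) = \mathrm{Im}\,\mathrm{Li}_2(z) + \arg(1-z)\log|z|$, which gives the signed hyperbolic volume of the ideal tetrahedron of shape $z$ (negative when $\mathrm{Im}\, z < 0$). A minimal Python sketch using the mpmath library:
\begin{verbatim}
from mpmath import mp, polylog, arg, log, im

mp.dps = 30                       # working precision (decimal places)

def bloch_wigner(z):
    """D(z) = Im Li_2(z) + arg(1-z) log|z|: the signed volume of the
    ideal tetrahedron with shape parameter z."""
    z = mp.mpc(z)
    return im(polylog(2, z)) + arg(1 - z) * log(abs(z))

def algebraic_volume(shapes):
    """vol(z) for a vector of edge parameters; negatively oriented
    tetrahedra subtract their volume automatically."""
    return sum(bloch_wigner(z) for z in shapes)

# Sanity check: the regular ideal tetrahedron, z = exp(i pi/3),
# has volume 1.0149416...
print(algebraic_volume([mp.exp(mp.mpc(0, mp.pi / 3))]))
\end{verbatim}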
\begin{thm}\label{thm:nongeo} Let $\mathcal{T}$ be an ideal triangulation of a hyperbolic 3-manifold $M$. Then there exists at most one solution $\vec{z}$ to the {\em gluing and completeness equations} (see \cite{ThNotes}) for $\mathcal{T}$ such that
$$\mbox{vol}(\vec{z}) = \mbox{vol}(M),$$
where $\mbox{vol}(M)$ is the hyperbolic volume of $M$.
\end{thm}
Theorem \ref{thm:nongeo} is a corollary of \cite[Remark 4.1.20]{Stefano} and \cite[Theorem 5.4.1]{Stefano} (see the last dot point of page 6.)
\begin{corol}\label{nongeo} Let $\vec{z}$ be a solution to the gluing and completeness equations for an ideal triangulation $\mathcal{T}$ of a hyperbolic 3-manifold $M$ such that
\begin{enumerate}
\setlength{\itemsep}{1pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item $\mbox{vol}(\vec{z}) = \mbox{vol}(M)$, and
\item $\vec{z}$ is non-geometric, i.e. at least one of the edge parameters has non-positive imaginary part.
\end{enumerate}
\vspace{-2mm}
Then $\mathcal{T}$ is non-geometric, that is, there is no solution to the gluing and completeness equations with all tetrahedra positively oriented.
\end{corol}
\begin{proof} If $\vec{z'}$ is a geometric solution to the gluing and completeness equations, then $\mbox{vol}(\vec{z'}) = \mbox{vol}(M)$. Hence, by Theorem \ref{thm:nongeo} we have $\vec{z} = \vec{z'}$, contradicting that $\vec{z}$ is non-geometric.
\end{proof}
By Corollary \ref{nongeo}, in order to show that $\mathcal{T}$ is non-geometric it suffices to find a non-geometric solution to the gluing and completeness equations which has algebraic volume equal to the volume of $M$. We find an exact non-geometric solution to the gluing and completeness equations using the computer program Snap \cite{Snap}, then verify that it has algebraic volume equal to the volume of $M$.
If $\vec{z}$ consists of the edge parameters of a solution of the gluing and completeness equations then the numbers $z_i$ are in fact algebraic numbers. The computer program Snap attempts to find such a solution, expressing each edge parameter $z_i$ as a polynomial in $\tau$, where $\tau$ is an algebraic number generating the number field $\mathbb{Q}(\tau)$. In order to specify $\tau$ exactly, Snap provides the minimal polynomial $m(x)$ of $\tau$ and calculates $\tau$ to sufficiently many decimal places to uniquely specify it as the root of $m(x)$ closest to the decimal approximation. See \cite{MR1758805} for more information about Snap.
\begin{mydef}\cite[\S 3.1]{GenDef} Let $\mathcal{T}$ be a triangulation. A \emph{Pachner 2-3 move} is a move at a common face of two tetrahedra in $\mathcal{T}$ which produces the triangulation $\mathcal{T}'$, obtained by removing the face and inserting a dual edge, see Figure \ref{fig:pachner_moves}. If the tetrahedra in $\mathcal{T}$ are assigned edge parameters, then the three new tetrahedra in $\mathcal{T}'$ are assigned edge parameters as shown in Figure \ref{fig:pachner_moves}. A \emph{Pachner 3-2 move} is the reverse of a Pachner 2-3 move, that is, an edge with three surrounding tetrahedra is replaced by a dual face.
\end{mydef}
\begin{figure}[H]
\centerline{\includegraphics[height=180pt]{illustrations/pachner_edge_param.eps}}
\caption{Pachner 2-3 and 3-2 moves. On the left two tetrahedra have a common face and on the right three tetrahedra have a common edge, shown in green. The edge parameters are related by: $z_3 = w_2 w_3$ and $w_1 = z_1 z_2$, and similar relations at the other edges allow us to determine the shapes of each tetrahedron.}
\label{fig:pachner_moves}
\end{figure}
\begin{thm}\label{thm:volpres} Let $M$ be a hyperbolic 3-manifold and let $\mathcal{T}$ be an ideal triangulation of $M$. Let $\vec{z}$ be a solution to the gluing and completeness equations for $\mathcal{T}$, where we allow negatively oriented and flat tetrahedra, i.e. $z_i \in \mathbb{C}\backslash\{0,1\}$ for each edge parameter $z_i$. Let $\mathcal{T}'$ be the ideal triangulation obtained from $\mathcal{T}$ by a 2-3 or 3-2 Pachner move, and let $\vec{z'}$ be the corresponding edge parameters. If the edge parameters $\vec{z'}$ define non-degenerate tetrahedra, i.e. no edge parameter is equal to $0, 1$ or $\infty$, then the algebraic volumes of $\vec{z}$ and $\vec{z'}$ are equal.
\end{thm}
This theorem is a consequence of the ``five-term relation," an identity of the dilogarithm function, see paragraph 2 in the proof of Proposition 10.1 in \cite{MR1663915} for further discussion.
\begin{prop}\label{prop:nongeo} Let $\varphi : S \rightarrow S$ be the pseudo-Anosov homeomorphism $\varphi = T_{c_1} \circ T_{b_2} \circ T_{a_1} \circ T_{a_1} \circ T_{a_1} \circ T_{b_1} \circ T_{a_1}$ of the once-punctured genus 2 surface. Let $\mathcal{T}$ be the veering triangulation of the mapping torus $M_\varphi$ with respect to $\varphi$. Then $\mathcal{T}$ is non-geometric.
\end{prop}
We outline the steps we took to verify Proposition \ref{prop:nongeo}.
We found a sequence $$\mathcal{T} \stackrel{2-3}{\longrightarrow} \mathcal{T}_1 \stackrel{3-2}{\longrightarrow} \mathcal{T}_2 \stackrel{3-2}{\longrightarrow} \mathcal{T}_3 \stackrel{3-2}{\longrightarrow} \mathcal{T}_4,$$
of Pachner 2-3 and 3-2 moves starting with an exact non-geometric solution to Agol's triangulation $\mathcal{T}$, as given by Snap. We verified that the tetrahedron shapes of $\mathcal{T}_4$ are all positively oriented, so that $\mathcal{T}_4$ is geometric and thus $\mbox{vol}(\mathcal{T}_4)$ equals the hyperbolic volume of $M_\varphi$. Furthermore, for $i=1,2,3$, we checked that none of the tetrahedron shapes of $\mathcal{T}_i$ are degenerate. Hence, by Theorem \ref{thm:volpres}, $\mbox{vol}(\mathcal{T}) = \mbox{vol}(\mathcal{T}_4)$. Therefore $\mbox{vol}(\mathcal{T})$ equals the hyperbolic volume of $M_\varphi$. Thus, by Corollary \ref{nongeo}, Agol's triangulation $\mathcal{T}$ is non-geometric, as required.
\begin{remark} We expect that all the other examples in Tables \ref{table:nongeo1} and \ref{table:nongeo2} of Appendix \ref{app:tables} are also non-geometric. However, we have not shown this rigorously in the other cases.
\end{remark}
\section{Further work}
Although we have found that veering triangulations are not always geometric, we may ask:
{\bf Question:} Given a veering triangulation coming from Agol's construction, can we find positively-oriented hyperbolic ideal tetrahedra shapes for a (possibly incomplete) hyperbolic structure?
This is equivalent to finding a solution to the gluing equations where every edge parameter has positive imaginary part. Such a solution corresponds to a point in Dehn surgery space. Figure \ref{fig:dtor[5,4,5,5,5,2,3].tri} is obtained by using SnapPy to numerically solve the gluing and $(p,q)$-Dehn surgery equations and colouring each point $(p,q)$. A point is coloured green when a solution with all tetrahedra positively oriented is found, and blue when SnapPy finds a solution where at least one tetrahedron is negatively oriented. We see that even in this non-geometric example SnapPy finds positively-oriented hyperbolic ideal tetrahedra shapes for incomplete structures of the bundle. The complete hyperbolic structure corresponds to the point at infinity.
\begin{figure}[H]
\centerline{\includegraphics[height=250pt]{illustrations/dtor_5,4,5,5,5,2,3_.png}}
\caption{Dehn surgery space for $13$ tetrahedron non-geometric example.}
\label{fig:dtor[5,4,5,5,5,2,3].tri}
\end{figure}
The Diffusion Limited Aggregation (DLA) \cite{witten81} model has
been the focus of a great deal of research due both to the fractal
\cite{plischke84, coniglio90, somfai99} and multifractal \cite{amitrano89, jensen03, somfai04} properties of the clusters it produces,
and to its underlying mathematical connection to diverse problems
including solidification \cite{langer80, mullins63}, viscous fingering \cite{nittmann85} and electrodeposition
\cite{matsushita84, brady84}. Its key feature is that the surface irreversibly absorbs an
incident diffusive flux, and growth velocity is locally proportional
to that flux density.
The problem is mathematically ill-posed unless the growth is constrained
to remain smooth below some ``ultra-violet cut-off'' lengthscale, which
in most simulation studies has been supplied by a particle size or
lattice scale. Experimentally the cut-off scale can be more subtle,
for example in solidification regulated by surface tension it varies
with the local incident flux density raised to power $-m$, with $m=1/2$.
Interest has also focussed on the more general Dielectric Breakdown
Model \cite{niemeyer84} cases where growth velocity is proportional
to the incident diffusive flux raised to some power $\eta$. Tuning $\eta$ has been claimed to match the appearance of DBM growths (albeit in two dimensions) to that of real snowflakes \cite{nittmann87}. Recent
theory \cite{ball02prl, ball03pre} suggests quantitative equivalence classes exist in the $\eta$,$m$
plane, so that for example solidification should have a simpler-to-simulate
equivalent at fixed cut-off ($m=0$) at the computational expense
of non-trivial $\eta\ne1$.
A feature of real solidification patterns is that they macroscopically
strongly favour growth in specific directions, corresponding to microscopic
crystal lattice directions. The tendency of snow crystals to grow
six arms is well known, and lately this has been replicated in controlled
laboratory studies \cite{libbrecht99}. Cubic crystalline anisotropy also produces
striking anisotropic ``dendrite'' growth: succinonitrile is the classical
example \cite{huang81, corrigan99}, and lately colloidal crystal exemplars
have been observed in growth under microgravity \cite{russel97, zhu97}.
The manner in which surface tension and its anisotropy select the
morphology of growing tips has been the subject of intense analytical
study \cite{glicksman93}. Full numerical simulations of the continuum growth equations
have confirmed the theory and extended the spatial range out to growths
with significant side-branching \cite{kobayashi93, george02}, but none of these studies could claim
to reach the asymptotic regime of fractal growth.
Simple lattice and particle based simulations differ by having fixed
cut-off scale and lacking realistic local detail, but they can reveal
the limiting behaviour of highly branched growth. In two dimensions
a range of different angular anisotropies have been shown to be relevant
both by theory \cite{meakin87, ball85prl} and through simulations yielding self-similar
dendritic morphologies. Our principal objective in this paper is to
deliver the same level of understanding for three-dimensional simulations,
which have not been systematically explored in the literature to date.
We first introduce a new implementation of the DLA model that involves
enclosing the aggregates with a series of coverings, each made up
of a set of spheres, and show that it can successfully grow large,
three-dimensional DLA clusters. This algorithm entails no intrinsic
lattice or orientational bias, giving us a well posed isotropic reference.
We then show how anisotropy can be introduced to the algorithm by
confining growth to certain preferred directions, and combine this
with a noise reduction technique in which growth is only permitted
after $H\geq1$ random walkers have hit a growth site on the cluster.
We characterise growths using anisotropy functions which are sensitive
to the growth of fingers along the possible favoured directions. We
present a systematic comparison of the evolution of growths within
cubic symmetry, in particular the respective cases where growth is
favoured along the nearest neighbour directions of one of the simple
cubic, body-centred cubic and face-centred cubic lattices. We also
study growth with uniaxial bias, where growth in polar and planar directions is inequivalently favoured, particularly including the three-dimensional hexagonal lattice.
We show that SC and BCC aggregates approach universal, anisotropic
asymptotic states independent of the level of noise reduction, and
that the approach to each of these states follows a single mastercurve.
FCC anisotropy is much slower to emerge, and we show that while high
noise reduction clusters appear to approach an anisotropic fixed point
in the same fashion, the existence of a different fixed point(s) for
low noise reduction growths cannot be ruled out. For growth with uniaxial bias, we observe limiting polar-to-planar aspect ratios of the clusters which depend continuously on the level of input bias. Thus for the three dimensional hexagonal lattice there appears to be a tunable continuum of asymptotic states.
\section{Growth Algorithm}
The original DLA algorithm takes as its starting point a single fixed
seed particle. A random walker is released from some distance away
and diffuses freely until it hits the seed, at which point it sticks
irreversibly. Further particles are released one at a time and a fractal
cluster is formed. Early simulations were done on (mostly cubic) lattices,
since this reduced the computer run-time required, and cluster sizes
were limited to $N\simeq10^{4}$ particles.
Modern DLA simulations are performed off-lattice and use a number
of tricks to speed up the growth. Since a diffusing particle should
reach the aggregate from a random direction, each walker can be released
from a randomly chosen point on a sphere that just encloses the cluster.
When a walker is far from the cluster it is allowed to take larger
steps than when it is nearby, as long as it never takes a step larger
than the distance to the nearest point of the cluster. A major development
was the Brady-Ball algorithm \cite{ball85jphysa}, which involves covering
the cluster with a series of coarse ``mappings'', to give a
lower bound on the distance to the cluster without looking up the
position of every cluster particle. A further refinement was invented
by Tolman and Meakin \cite{tolman89}, whereby the coarse mappings cover the cluster in
a manner constrained to give a margin of safety: this enables much
simpler (e.g. spherical) moves to be taken. Cluster sizes of $N\simeq10^{7}$
are easily obtainable by these methods.
Our new algorithm is a fundamentally off-lattice and dimension independent
development of the Brady-Ball-Tolman-Meakin algorithm. We represent the cluster
in terms of a set of \emph{zeroth level} spheres, and we maintain
a hierarchy of coarser scale sphere coverings of these labelled by
higher levels. For simplicity of exposition, we describe the case
where the physical cluster particles are monodisperse, in which case
it is convenient to choose the radius $r_{0}$ of the zeroth level
spheres to correspond to the centre-to-centre distance between contacting
particles (``sticking diameter'').
Higher level coverings, $n>0$, each consist of a set of spheres of
radius $r_{n}$ such that every zeroth level sphere is \emph{safely}
covered, in the following sense: all points within distance $\phi r_{n}$
of (the surface of) every zeroth level sphere lie inside the covering.
Each covering is also \emph{simply} contained by all higher level
coverings. To make this structure easier to maintain we further required
that each zeroth level sphere was safely covered by a single (not
necessarily unique) sphere at all levels $n>0$. We chose the coverings
to have a geometric progression of size, with $r_{n}=\epsilon^{1-n}r_{1}$,
and terminated the hierarchy when safe covering of the whole cluster
was achieved by a single sphere.
Each sphere at level $n>0$ carries a full set of \emph{downlinks}.
These consist of a pointer to every ``child'' sphere at level
$n-1$ which overlaps the parent. In addition we gave each sphere
(below the highest level) one \emph{uplink}, pointing to one of its
parents; this is only required for the random walks (see later for
choice).
This construction gives an efficient method of generating moves for
our random walkers. At each step we need only determine the highest
level covering that the walker is outside to give a lower bound on
the walker's distance from the cluster. This in turn entails tracking
one (generally not unique) ``enclosing'' sphere which the walker
does lie inside at the next level up.
Given that the walker lies inside an enclosing sphere at level $m$
but outside the lower coverings, we first determine the nearest distance
$d$ from the walker to either the enclosing sphere or any of its
children. The walker can then make a spherically distributed move
of distance $d+\phi r_{m-1}$, because the nearest point of the cluster
must be at least this far away. If the walker has moved outside the
previous enclosing sphere, we follow uplinks until we find a new enclosing
sphere. We then recursively replace that sphere by any of its children
which enclose the walker, until a lowest level enclosing sphere is
found as required for the next move of the walker.
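The move-generation step can be summarised in a few lines. The following Python sketch assumes a sphere object carrying centre, radius, level, parent and children attributes (illustrative names, not the actual implementation), with $r[m]$ the covering radius at level $m$; the hit test against level-zero spheres is indicated only by a comment.
\begin{verbatim}
import math, random

class Sphere:
    def __init__(self, centre, radius, level, parent=None):
        self.centre, self.radius, self.level = centre, radius, level
        self.parent, self.children = parent, []

def inside(pos, s):
    return math.dist(pos, s.centre) < s.radius

def surface_dist(pos, s):            # distance to the *surface* of s
    return abs(math.dist(pos, s.centre) - s.radius)

def walker_step(pos, enclosing, phi, r):
    """One move: enclosing is a level-m sphere containing pos, with
    pos outside all lower-level coverings."""
    m = enclosing.level
    d = min(surface_dist(pos, s)
            for s in [enclosing] + enclosing.children)
    step = d + phi * r[m - 1]        # guaranteed free of the cluster
    u = [random.gauss(0.0, 1.0) for _ in pos]
    norm = math.sqrt(sum(x * x for x in u))
    pos = [p + step * x / norm for p, x in zip(pos, u)]
    while not inside(pos, enclosing):      # follow uplinks if we left
        enclosing = enclosing.parent
    descending = True                      # descend through children
    while descending:
        descending = False
        for child in enclosing.children:
            if child.level > 0 and inside(pos, child):
                enclosing, descending = child, True
                break
        # containment in a level-0 sphere would signal a hit (omitted)
    return pos, enclosing
\end{verbatim}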
Walkers are deemed to have hit the cluster when they find themselves
inside a cluster particle. They are constrained by a (very small)
minimum step size, typically $10^{-3}r_0$, so they can only ever trespass
this far into the cluster. They are then ``backed up'' to the cluster
perimeter and added to the aggregate.
As new particles are added to the cluster we must check that they
are safely covered at each level $n>0$. We start at the maximum level,
and create new maximum levels above it if required. Then we move down
levels to $n=1$ checking for safe coverage at each, noting the sphere
that provided this. A level $n$ sphere that safely covers our new
site will necessarily overlap that which did so at the previous level
$n+1$, so the search at each level can be restricted to the children
of the previous safe container. If none of these give safe coverage
we must add a new sphere at level $n$, ensuring that the integrity
of the data structure is maintained and all the required new links
are put in place.
The safe container at level $n+1$ is made the parent of the new sphere
at level $n$; this is the uplink used by our random walkers. The
new sphere could simply be centred on the particle we wish to add;
however, in an attempt to maximise the efficiency of our coverings
we offset the new sphere by a distance $\gamma r_{n}$ in the direction
of local growth. This offset is constrained by our safe coverage requirement
to obey $r_{0}+\gamma r_{n}+\phi r_{n}<r_{n}$, which is most severe
for $n=1$ leading to \begin{equation}
r_{1}(1-\gamma-\phi)>r_{0}.\label{r1constraint}\end{equation}
We must now find all the spheres which may need a downlink to or from
the new sphere. To facilitate this we impose that each level covering
is simply (but with no required margin) contained within those above
it. In terms of our parameters this requires $r_{n}(1+\gamma)<\phi r_{n+1}+r_{0}$ and
choosing a geometric progression of radii $r_{n}=r_{1}/\epsilon^{n-1}$
with $\epsilon<1$ and no limit on $n$ then requires\begin{equation}
1+\gamma<\frac{\phi}{\epsilon}.\label{constraint}\end{equation}
This constraint ensures that our new sphere is completely covered
by the level $n+2$ safe container, whose child list will hence contain
all the level $n+1$ spheres that need linking to the new sphere.
Similarly, our new sphere is also covered by the level $n+1$ safe
container, and so any level $n-1$ spheres to which the new sphere
needs downlinks are guaranteed to be children \emph{of the children}
of that safe container. Thus by remembering the spheres which provided
safe coverage at the previous two levels and selecting parameter values
subject to the constraints (\ref{r1constraint}) and (\ref{constraint})
we can insert all the necessary new links, and ensure the integrity
of our data structure remains intact as the cluster growth proceeds.
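A sketch of this downward pass follows, reusing the hypothetical Sphere class (and the math import) from the earlier sketch; since the final linking step only wires pointers among the candidate sets just described, it is indicated by a documented stub rather than spelled out.
\begin{verbatim}
def safely_covers(s, p, r0, phi):
    """All points within phi*r_n of the particle surface lie in s."""
    return math.dist(p, s.centre) + r0 + phi * s.radius <= s.radius

def add_particle(p, growth_dir, top, r, r0, phi, gamma):
    """Downward pass ensuring the particle at p is safely covered at
    every level n > 0 (creation of new top levels omitted)."""
    safe = {top.level: top}          # safe containers found so far
    for n in range(top.level - 1, 0, -1):
        parent = safe[n + 1]
        found = next((c for c in parent.children
                      if c.level == n and safely_covers(c, p, r0, phi)),
                     None)
        if found is None:            # add a new sphere, offset by
            centre = [x + gamma * r[n] * g   # gamma*r_n along growth
                      for x, g in zip(p, growth_dir)]
            found = Sphere(centre, r[n], n, parent=parent)
            link_new_sphere(found, safe)
        safe[n] = found

def link_new_sphere(s, safe):
    """Wire all down-links to/from the new level-n sphere s.  By
    constraints (1) and (2), the level n+1 spheres overlapping s are
    children of the safe container at level n+2, and the level n-1
    spheres overlapping s are children of children of the safe
    container at level n+1.  (Pointer bookkeeping omitted.)"""
    safe[s.level + 1].children.append(s)
\end{verbatim}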
Taking $r_{0}=1$ for convenience, a somewhat ad-hoc optimisation
scheme suggested the following parameters to minimise the run-time
of our program in three dimensions: $r_{1}=2.1$, $\gamma=0.29$, $\phi=0.4$ and $\epsilon=0.3$. We observe that the order of the algorithm is close to linear in $N$, consistent with the earlier discussion of Ball and Brady \cite{ball85jphysa}. Figure \ref{fig:3Dcluster} shows a large off-lattice DLA cluster grown in three dimensions by the new scheme and the convergence of measured fractal dimension
$D_f$ to a value $\sim2.5$, in good agreement with previous
simulations \cite{tolman89, bowler04}.
\begin{figure}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{lowres3Dclus.eps}}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{3Ddf.eps}}
\caption{\label{fig:3Dcluster}
(Color online) A three-dimensional DLA cluster grown using the new algorithm containing $N=10^6$ particles, and the fractal dimension $D_f$ plotted against the deposition radius $R_{dep}$ obtained by averaging over a sample of 100 such clusters. $D_f$ converges to a value of $\sim2.5$ in agreement with previous results.
}
\end{figure}
\section{Anisotropy and Noise Reduction}
We introduce anisotropy to our simulations by restricting growth to
a set of preferred directions, effectively growing our clusters on
a lattice. When a cluster site is ``grown'' (see following), we add
new ``sticky sites'' of prospective growth offset from the grown site
in each of our lattice directions. Sticky sites in turn grow when
they have accreted a set threshold number of walkers $H\geq1$. Requiring
$H>1$ walkers for growth gives better averaging over the diffusion
field, amounting to a noise reduction. Noise reduction has been widely
used for on-lattice planar DLA simulations \cite{ball02pre}, where it was found
to considerably accelerate the approach to asymptotic morphology.
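The growth rule itself is then a small amount of bookkeeping on top of the walker machinery. A minimal Python sketch, with all names hypothetical; the function choose\_site stands in for the diffusive-walker selection of the sticky site hit next:
\begin{verbatim}
def grow(seed, directions, H, N_target, choose_site):
    """Noise-reduced growth restricted to preferred directions.

    directions: local lattice offsets, e.g. the 6 SC, 8 BCC or
      12 FCC nearest-neighbour vectors;
    H: walker hits required before a sticky site grows."""
    cluster = {seed}
    sticky = {tuple(s + d for s, d in zip(seed, v)) for v in directions}
    hits = {}
    while len(cluster) < N_target:
        site = choose_site(sticky)        # site hit by the next walker
        hits[site] = hits.get(site, 0) + 1
        if hits[site] >= H:               # promote to a grown site
            sticky.discard(site)
            cluster.add(site)
            for v in directions:
                nb = tuple(x + d for x, d in zip(site, v))
                if nb not in cluster:
                    sticky.add(nb)
    return cluster
\end{verbatim}
In a real run choose\_site is implemented by the walker machinery of the previous section; replacing it by a uniformly random choice from the sticky set would give a noise-reduced Eden-type growth instead.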
We have grown aggregates favouring growth in the local simple, body-centred
and face-centred cubic lattice directions. To characterise the macroscopic
anisotropy of a resulting $N$-particle cluster we use functions $A_{K}=\frac{1}{N}\sum_{i=1}^{N}a_{K}(x_{i},y_{i},z_{i})$,
where $(x_{i},y_{i},z_{i})$ are the coordinates of the $i$th particle
relative to the seed of the growth, and $a_{K}$ is a function with
maxima in the appropriate lattice directions. We have constructed
$a_{K}$ out of angular harmonics of order $K$, with the appropriate
symmetry and minimal order to distinguish the different lattice responses
of study.
Growth biassed to the direction of simple cubic axes (relative to the cluster seed) is detected by using a harmonic of
order $4$,\[
a_{4}=\frac{5}{2r^{4}}\left(x^{4}+y^{4}+z^{4}\right)-\frac{3}{2},\]
where $r^{2}=x^{2}+y^{2}+z^{2}$ and the normalisation is chosen such
that $A_{4}=1$ for growth exactly along the lattice axes. Likewise growth along the nearest neighbour directions of an
FCC lattice gives $A_{6}=1$ based on
\begin{eqnarray}
\lefteqn{a_{6}=\frac{112}{13r^{6}}\bigg(x^{6}+y^{6}+z^{6}+} \nonumber\\
& &
\left.\frac{15}{4}\left(x^{4}y^{2}+x^{2}y^{4}+x^{4}z^{2}+x^{2}z^{4}+y^{4}z^{2}+y^{2}z^{4}\right)\right)-\frac{120}{13}. \nonumber
\end{eqnarray}
The combination of these two enables us to distinguish by sign growth
along SC, BCC or FCC directions as summarised in Table \ref{tab:Avalues},
where values given are for the extreme case of growth confined to
the corresponding nearest neighbour directions from the central seed.
\begin{table}
\caption{ \label{tab:Avalues}
Values of anisotropy functions $A_4$ and $A_6$ for growth along the nearest neighbour directions of simple, body-centred and face-centred cubic lattices.
}
\medskip
\begin{ruledtabular}
\begin{tabular}{cccc}
&
Simple Cubic &
Body-centred Cubic &
Face-centred Cubic \\ \hline
$A_4$ &
1 &
-2/3 &
-1/4 \\
$A_6$ &
-8/13 &
-128/117 &
1 \\
\end{tabular}
\end{ruledtabular}
\end{table}
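The normalisations in Table \ref{tab:Avalues} can be checked directly with a short numpy sketch; since $a_4$ and $a_6$ are scale invariant, unnormalised direction vectors suffice.
\begin{verbatim}
import numpy as np

def A4(xyz):
    x, y, z = xyz.T
    r2 = x**2 + y**2 + z**2
    return np.mean(2.5 * (x**4 + y**4 + z**4) / r2**2 - 1.5)

def A6(xyz):
    x, y, z = xyz.T
    r2 = x**2 + y**2 + z**2
    s6 = x**6 + y**6 + z**6
    s42 = (x**4*y**2 + x**2*y**4 + x**4*z**2 +
           x**2*z**4 + y**4*z**2 + y**2*z**4)
    return np.mean(112.0/13.0 * (s6 + 3.75 * s42) / r2**3 - 120.0/13.0)

# Nearest-neighbour directions of the three cubic lattices.
sc  = np.array([[1,0,0],[-1,0,0],[0,1,0],
                [0,-1,0],[0,0,1],[0,0,-1]], float)
bcc = np.array([[sx, sy, sz] for sx in (1,-1)
                for sy in (1,-1) for sz in (1,-1)], float)
fcc = np.array([[a, b, 0] for a in (1,-1) for b in (1,-1)] +
               [[a, 0, b] for a in (1,-1) for b in (1,-1)] +
               [[0, a, b] for a in (1,-1) for b in (1,-1)], float)

print(A4(sc),  A6(sc))    #  1,      -8/13
print(A4(bcc), A6(bcc))   # -2/3,  -128/117
print(A4(fcc), A6(fcc))   # -1/4,     1
\end{verbatim}
For a cluster, xyz would hold the particle coordinates relative to the seed.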
We have also grown aggregates favouring six planar and two polar directions
of growth. These growths have their polar directions inequivalent
(by any symmetry) to their planar ones so we were naturally led to
admit different values of noise reduction in the two classes of local
growth direction to tune their relative growth. We found the clearest
characterisation of the corresponding growth response of these clusters
simply by measuring their aspect ratios, which we calculate using
extremal radii. We define a cluster's aspect ratio as $z_{\text{max}}/x_{\text{max}}$,
or, in terms of crystallographic notation, $c/a$.
\section{Results}
We grew aggregates favouring SC, BCC and FCC lattice directions at several levels of noise reduction $H$ from $1$ to $100$,
and measured their response using our anisotropy functions $A_{K}$.
The clusters were grown to size $N=3.16\times10^{4}$ particles, where
a site was included in this tally only when it had been hit $H$ times.
Figure \ref{fig:SCBCCpics} shows example SC and BCC clusters grown at the highest level of noise reduction $H=100$. Both clusters have major arms in the appropriate lattice directions, and each arm exhibits secondary growth along the remaining favoured directions.
\begin{figure}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{lowresfaces.eps}}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{lowrescorners.eps}}
\caption{\label{fig:SCBCCpics}
(Color online) Anisotropic DLA clusters grown by the new method: \textbf{a)} simple cubic case, and \textbf{b)} body-centred cubic case. Each contains $3.16\times10^4$ sites, grown under noise reduction such that sites were grown after capturing $H=100$ walkers.
}
\end{figure}
The measured anisotropy $A_4$ for SC and BCC growths at various $H$ is shown in Figure \ref{fig:A4SCBCC}.
Both sets of clusters appear to approach universal asymptotic values
of $A_{4}$ independent of noise reduction: $A_{4}(\infty)\simeq0.65$
for SC growths and $A_{4}(\infty)\simeq-0.5$ for BCC growths. These
values can be approached from both above and below, depending on $H$.
\begin{figure}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{a4faces.eps}}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{a4corners.eps}}
\caption{\label{fig:A4SCBCC}
Anisotropy function $A_4$ evaluated for \textbf{a)} simple cubic and \textbf{b)} body-centred cubic growths at various levels of noise reduction $H$ as a function of the number of sites grown $N$. Each curve is based on the average over 10 clusters. Comparing to reference values in Table \ref{tab:Avalues}, these confirm quantitatively the visual impression from Figure \ref{fig:SCBCCpics} that the respective SC and BCC anisotropies are self-sustaining under growth.
}
\end{figure}
The consistency of the shapes of the anisotropy curves suggests that
for these types of growth there may exist ``mastercurves'' that embody
the evolution of $A_{4}$ towards its asymptotic value, as a function of rescaled $N$. All individual
curves, regardless of $H$, will lie somewhere on these mastercurves.
To test this hypothesis for each case we shifted the curves along
the $N$-axis by a factor $k(H)$ until, by eye, they appeared to
follow a single curve. Figures \ref{fig:masterSCBCC}(a) and (b)
show the results of this procedure for the SC and BCC growths, respectively.
For each case, we have used only the results for $N>10^{2}$ in order
to be sure of the correct general trend, and we could not use the very
low $H$ curves because they vary too little across the simulation
range to give sufficient vertical overlap.
The figure shows power law relationships between the noise reduction
$H$ and the shift factors $k(H)$ in both cases, further evidence
that this mastercurve approach correctly describes the evolution of
SC and BCC growths.
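Although the shifting was done by eye, the procedure is easy to automate. A
minimal sketch (assuming each anisotropy curve is stored as paired arrays of
$N$ and $A_{4}$ values, and that the shifted curve overlaps the reference
curve in $\log N$) finds $k(H)$ by least squares:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def shift_factor(N_ref, A_ref, N, A):
    # Find k minimising the mismatch between (k*N, A) and the
    # reference curve, comparing both on a common log(N) grid.
    logN_ref, logN = np.log(N_ref), np.log(N)
    def mismatch(logk):
        s = logN + logk
        lo, hi = max(s[0], logN_ref[0]), min(s[-1], logN_ref[-1])
        grid = np.linspace(lo, hi, 200)
        return np.mean((np.interp(grid, s, A)
                        - np.interp(grid, logN_ref, A_ref))**2)
    res = minimize_scalar(mismatch, bounds=(-5.0, 5.0),
                          method='bounded')
    return np.exp(res.x)
\end{verbatim}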
In the SC case, Figure \ref{fig:masterSCBCC}(a), the shifted curves for values of
$H$ from $3$ to $16$ are shown. For $H>16$, since the anisotropy
curves are very close to the asymptotic value of $A_{4}$ and are hence
very flat, this curve-shifting process fails. There presumably exists
some ideal noise reduction value $H^*$ for which the $A_{4}$ curve
will approach the asymptote most quickly, and we would of course expect
the power law scaling to break down as $H$ approaches this value.
The curves for very high values $H\geq28$ approach the asymptotic
value from above, and it should presumably be possible to map them
onto a second mastercurve. However, testing this systematically would require more data, at considerable computational cost.
For the BCC case, Figure \ref{fig:masterSCBCC}(b), the anisotropy is much slower
to emerge from the noise and the curves for all values of $H$ save
the very highest $H=100$ approach the asymptotic value from the same
direction, and the mastercurve includes all values of $H$ from 5
to 24. Above $H=24$, the procedure fails in the same fashion as the
SC case as $H^*$ is approached.
\begin{figure}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{shifta4faces.eps}}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{shifta4corners.eps}}
\caption{\label{fig:masterSCBCC}
(Color online) Mastercurves of the evolution of $A_4$ for \textbf{a)} simple cubic and \textbf{b)} body-centred cubic growths. These suggest universal approach to respective SC and BCC fixed points. The insets show how the shift factors $k$ applied to $N$ vary with noise reduction parameter $H$.
}
\end{figure}
Anisotropy curves $A_{6}$ for FCC growths are shown in Figure \ref{fig:anisFCC}(a),
and it is immediately apparent that their behaviour is not as straightforward
as in the SC and BCC cases. For high $H$ growths, $A_6$ appears to be increasing
in a fashion similar to that previously observed, suggesting the existence
of a fixed point of anisotropy for FCC growth at $A_{6}(\infty)\simeq0.48$.
All the curves approach this value from below, suggesting that the
FCC anisotropy is much slower to emerge from the noise than the SC
and BCC anisotropies. This seems reasonable given that
the FCC anisotropy has more competing ``arms'' than the other growths,
and we have verified that the $H=100$ clusters do indeed appear to
have a full set of 12 arms. A mastercurve for these higher values
of $H$ is shown in Figure \ref{fig:anisFCC}(b), and seems to describe
these results well.
\begin{figure}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{a6edges.eps}}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{shifta6edges.eps}}
\caption{\label{fig:anisFCC}
(Color online) \textbf{a)} Anisotropy function $A_6$ evaluated for FCC growths, based on the average over 10 clusters per curve. Only at higher noise reduction levels is there clear indication that the FCC anisotropy is sustained under growth.
\textbf{b)} Mastercurve for FCC growths at $H\ge 14$, which do appear to exhibit a common evolution, with the corresponding shift factors inset.
}
\end{figure}
For low noise reduction clusters ($H<8$), however, $A_{6}$ does not
increase over the course of the growth, and if anything appears to
be \emph{decreasing} at large $N$ towards a value of about zero,
suggesting the possible existence of another fixed point. Visualisations
of these low $H$ FCC clusters appeared to indicate some growth along
the BCC lattice directions; Figure \ref{fig:FCCpics} shows an example of this for a low noise reduction $H=6$ cluster, and for comparison a high noise reduction $H=100$ cluster exhibiting some growth in all 12 FCC lattice directions.
\begin{figure}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{lowresedgeslow.eps}}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{lowresedgeshigh.eps}}
\caption{\label{fig:FCCpics}
(Color online) FCC clusters grown by the new method: \textbf{a)} low noise reduction $H=6$, and \textbf{b)} high noise reduction $H=100$. Both clusters contain $3.16\times10^4$ sites grown. The low noise reduction case appears to show some growth bias to the BCC lattice directions (corners) as per BCC lattice growth in Figure \ref{fig:SCBCCpics}. The high noise reduction cluster exhibits growth in all twelve FCC lattice directions, which correspond to the mid-edges of the box drawn.
}
\end{figure}
We were hence led to apply the BCC anisotropy
function $A_{4}$ to the FCC aggregates, and for comparison $A_{6}$
was also evaluated for the BCC and SC clusters. Studies of two-dimensional
anisotropic DLA \cite{barker90} have had some success focussing
on the interplay between anisotropy and noise in the growth process, and in this spirit we measured $\sigma(R)/R$, where $R$ is the
deposition radius of a cluster and $\sigma(R)$ is the standard deviation
of this measurement. This quantity offers a simple measure of
fluctuations due to noise during cluster growth; $\sigma(R)/R$
is plotted against $A_{4}$ in Figure \ref{fig:fluc}(a) and against $A_{6}$ in
Figure \ref{fig:fluc}(b) for clusters of each type at various $H$.
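A sketch of this fluctuation measure, under the assumption that the
deposition radius of each arriving particle is recorded during growth:
\begin{verbatim}
import numpy as np

def radius_fluctuation(deposition_radii):
    # sigma(R)/R over the recorded deposition events
    R = np.asarray(deposition_radii)
    return R.std() / R.mean()
\end{verbatim}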
\begin{figure}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{fluca4.eps}}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{fluca6.eps}}
\caption{\label{fig:fluc}
(Color online) The relative fluctuation in cluster radius (at fixed $N$), plotted against each of the anisotropy measures $A_4$ and $A_6$ as clusters grow. Increasing $N$ corresponds to moving generally downwards in these plots, and the symbols are the same on both panels.
}
\end{figure}
Figure \ref{fig:fluc}(a) shows that for all clusters the noise decreases
reasonably monotonically as growth proceeds. SC and BCC growths for
all $H$ can be seen to converge towards their respective fixed point
values of approximately $0.65$ (SC) and $-0.5$ (BCC). The FCC growths
are all grouped around $A_{4}\simeq-0.2$, and this plot fails to
explain the behaviour of the low $H$ FCC clusters. However, Figure
\ref{fig:fluc}(b) gives us an idea of what may be happening: whereas
the higher $H$ growths head towards the same final value ($A_{6}\simeq0.5$),
the curves for the low $H$ FCC clusters have not turned towards this
value and appear to be following similar trajectories to the low $H$
BCC clusters. All the SC and BCC clusters appear to be approaching
common asymptotic values of $A_{6}$ of approximately $-0.2$ and $-0.35$
respectively.
Further evidence for this explanation of the FCC cluster behaviour
is given by plotting $A_{4}$ against $A_{6}$ for BCC and FCC growths in
Figure \ref{fig:acomp}(a). This clearly shows the BCC clusters evolving
(in a direction dependent on $H$) towards a fixed point. The high
$H$ FCC growths also head to their own fixed point, whereas the low
$H$ growths are moving in a different direction, towards the BCC
fixed point. The inset of Figure \ref{fig:acomp}(a) shows the position
of the SC growths in the $A_{4},A_{6}$ plane; they can also be seen
to approach a fixed point from different directions depending on $H$.
This information allows us to build what we believe to be a consistent
picture of the evolution of all three types of growth, interpreted in terms of how the parameters $A_4$ and $A_6$ evolve as a function of increasing lengthscale. This is shown in Figure \ref{fig:acomp}(b). Our anisotropy curves have shown the existence of stable fixed points, for each of SC, BCC and high noise reduction FCC clusters: assuming that the variables $A_4$ and $A_6$ capture the key distinction between the different anisotropies studied, these directly imply the three separate stable fixed points shown on Figure \ref{fig:acomp}(b). There is presumably an unstable fixed point located at $(0,0)$ in
the $A_{4},A_{6}$ plane corresponding to isotropic growth, and the
differing trajectories of the FCC clusters dependent on $H$ implies
the existence of another unstable fixed point to separate the two
behaviours. The measured directions of our data allow us to predict trajectories of growths with different starting points, with flow away from or towards each fixed point depending on its nature.
\begin{figure}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{acomp.eps}}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{schematic.eps}}
\caption{\label{fig:acomp}
(Color online) \textbf{a)} The evolution of clusters in the plane of $A_4$ and $A_6$ for BCC and FCC clusters at various noise reduction levels (always averaging over samples of 10), with arrows to indicate the direction of trajectories. The inset shows the well separated evolution of SC clusters in the same plane.
\textbf{b)} The same data can be interpreted as renormalisation flows for how effective parameters evolve as a function of lengthscale, leading to the inferred fixed points shown (crosses). It is an assumption here that the lowest relevant angular harmonics $A_4$, $A_6$ do capture the key distinction between the three different applied anisotropies. Bold arrows show observed evolution whereas gray arrows show the presumed flow from other starting points. The symbols on both panels are the same as those in Figure \ref{fig:fluc}.
}
\end{figure}
We next turn our attention to hexagonal growths. We define a parameter
$p=H_{\text{z}}/H_{\text{xy}}$, where $H_{\text{z}}$ is the
number of walker hits required to grow a site in either of the favoured
polar directions and $H_{\text{xy}}$ is the number of hits required
for growth in a favoured planar direction. For all growths $H_{\text{xy}}=100$
while $H_{\text{z}}$ was varied between simulations to give values
of $p$ ranging from $0.5$ to $4$. Low values of $p$ produced column-like
growths, while high $p$ resulted in virtually flat aggregates with
six arms in the plane. Example clusters are shown in Figure \ref{fig:hexpics}.
\begin{figure*}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{lowressnow1.eps}}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{lowressnow1.5.eps}}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{lowressnow4.eps}}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{lowressnow.eps}}
\caption{\label{fig:hexpics}
(Color online) Clusters grown favouring six planar directions and two polar directions. The parameter $p$ is a measure of the relative ease of planar growth compared with polar, with high values of $p$ favouring planar growth. Shown are \textbf{a)} $p=1$, \textbf{b)} $p=1.5$, and \textbf{c)} $p=4$; \textbf{d)} shows the $p=4$ cluster viewed from above, highlighting the complex six-armed morphology of these growths.
}
\end{figure*}
Aspect ratios of these hexagonal clusters are shown in
Figure \ref{fig:asp}(a). The results are from 5 clusters of size $N=10^{5}$
at each value of $p$; since a cluster possesses six planar and two polar arms, each cluster provides twelve measurements of an aspect ratio. Strikingly, the aspect ratios remain almost
constant for $N\geq10^{3}$, suggesting the existence of a continuous
spectrum of fixed points which depend on the input anisotropy. This
is contrary to a simple expectation of two fixed points, favouring
polar or planar growth respectively.
To investigate this interesting result further we tuned the polar
growth of some SC clusters in the same way. Since these clusters possess
fewer competing arms than the hexagonal growths, for the same size
$N$ they should be closer to their asymptotic states. The aspect
ratios of these clusters are shown in Figure \ref{fig:asp}(b), and they
appear very similar to the hexagonal results. They do, however, suggest a coherent
explanation. The $p=1$ clusters are of course just the standard
SC clusters investigated above, and since polar and planar lattice
directions are in this case equivalent, they unsurprisingly display
a constant aspect ratio of 1. This fixed point is presumably stable under growth.
The aspect ratios of the extreme cases $p=0.5$ and $p=4$ appear
to diverge at very large $N$, suggesting the existence of additional
fixed points at infinity and zero, corresponding to column-like or
flat growths respectively. The intermediate growths $p=1.5$ and $p=2$
approach constant aspect ratios which we interpret to be the fixed
point at 1 displaced by the input tuning of the growths. The striking
similarity between these results and those for the hexagonal clusters
leads to a similar explanation for their origin.
\begin{figure}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{avaspratnr100hex.eps}}
\resizebox{0.8\columnwidth}{!}{\includegraphics*{avaspratnr100faces.eps}}
\caption{\label{fig:asp}
Aspect ratios of \textbf{a)} hexagonal growths, and \textbf{b)} SC growths, both with tunable input polar anisotropy $p$. The data is from 5 clusters of each type for each value of $p$; using both polar arms and the 6 (hexagonal) or 4 (SC) planar arms means each cluster provides 12 or 8 measurements, respectively, of an aspect ratio at size $N$.
}
\end{figure}
\section{Conclusions}
We have utilised an efficient, dimension-independent numerical implementation
of the DLA model to explore the effects of several different lattice
anisotropies on three-dimensional aggregates.
For cubic anisotropies we used functions with maxima in the appropriate
lattice directions to characterise our aggregates. We have shown that
SC and BCC growths approach universal asymptotic states, independent
of the level of noise reduction, and that in each case the evolution
of the cluster anisotropy can be described by mastercurves.
For face-centred cubic anisotropy, high noise reduction clusters also
appear to approach a common asymptotic state. By evaluating the clusters'
anisotropic response in the BCC lattice directions we have shown that
lower noise reduction FCC growths appear to evolve towards the BCC fixed
point. The final appearance of these low noise reduction FCC clusters remains
uncertain.
We also studied hexagonal anisotropies with six favoured planar directions
and two favoured polar directions. We tuned our growths by varying
the criterion for growth in a polar direction, and somewhat surprisingly
found that the aspect ratios of our clusters appear to exhibit a continuous
spectrum of final states, dependent on the input tuning. These growths
bear a striking resemblance to snowcrystal morphologies.
Accurate simulation of real solidification patterns like snowcrystals
involves including the effect of a non-constant small-scale cutoff.
Theory \cite{ball02prl, ball03pre} suggests an equivalence between this and simple dielectric
breakdown model growth. A new method of realising DBM growth using
random walkers has been implemented in two dimensions \cite{somfai04},
and our new code is ideally suited to extending this investigation
to three dimensions, about which little or nothing is known. Combining
these advances with the anisotropic techniques described above it
seems feasible to develop fully self-consistent simulations of dendritic
solidification and related phenomena, and work on this task is underway.
\begin{acknowledgments}
This research has been supported by the EC under Contract No. HPMF-CT-2000-00800.
NRG was supported by an EPSRC CASE award. The computing facilities
were provided by the Centre for Scientific Computing of the University
of Warwick, with support from the JREI.
\end{acknowledgments}
\section{Introduction}
Biomolecular binding processes involve disrupting protein-water interactions
and replacing them with direct interactions between the binding partners.
Thus, protein-water interactions influence the thermodynamics and kinetics of protein interactions~\cite{Ashbaugh:2008:JACS,shea08,levy2006water,berne_rev09,Jamadagni:ARCB:2011,Abrams:2013:PSFB,Baron:2013aa,tiwary2015kinetics,tiwary2015role,bellissent2016water},
as well as the stability and phase behavior of protein solutions~\cite{Jaenicke:2000hb,Shire:JPS:2004,Shen:2006aa,Trevino:2008jh,Thirumalai:2012,Palmer:JPCL:2012,Ham:Angw:2014,Remsing:JPCB:2018}.
In this article, we characterize the
overall interactions between proteins and their hydration waters
by using specialized molecular simulations that employ an unfavorable potential
to displace water molecules from the vicinity of a protein.
Because displacing interfacial waters disrupts surface-water interactions,
the less favorable those interactions (e.g., for hydrophobic surfaces),
the easier it is to displace the interfacial waters~\cite{Patel:PNAS:2011,Patel:JPCB:2014,Xi:JCTC:2016}.
Indeed, both theory~\cite{LCW,LLCW,vaikuntanathan2016necessity,Xi:PNAS:2016}
and molecular simulations~\cite{Patel:JPCB:2010,Patel:JPCB:2012} have shown that the rare, low-density fluctuations,
which are accessed when interfacial waters are displaced,
are substantially more probable adjacent to a hydrophobic surface than at a hydrophilic surface.
Moreover, water molecules near a hydrophobic surface are susceptible to unfavorable perturbations,
and undergo a collective dewetting transition in response to such a perturbation~\cite{LCW,Patel:JPCB:2010,Patel:JPCB:2012,Xi:JCTC:2016}.
Proximity to such a dewetting transition is also reflected in other collective interfacial properties, such as
compressibility, transverse density correlations, and the distribution of water dipole orientations, among others
~\cite{LeeMR,LCW,mittal_pnas08,Sarupria:PRL:2009,Godawat:PNAS:2009,Patel:JPCB:2010,Willard:JPCB:2010,Patel:PNAS:2011,Patel:JPCB:2012,heyden2013spatial,sosso2016role,Shin:JPCB:2018}.
In contrast with simple hydrophobic or hydrophilic surfaces, proteins display nanoscopic chemical and topographical patterns,
which influence their interactions with water in non-trivial ways~\cite{Giovambattista:PNAS:2008,giovambattista2009enhanced,Acharya:Faraday:2010,daub2010influence,mittal2010interfacial,Wang19042011,fogarty2014water,Patel:JPCB:2014,harris2014effects,Xi:PNAS:2017,Shell:PNAS:2018,heyden:2018}.
By interrogating how protein hydration waters respond to an unfavorable potential,
here we find that the hydration shells of diverse proteins are also situated at the edge of a dewetting transition.
Such a resemblance of protein hydration shells to extended hydrophobic surfaces
appears to arise from the fact that --
even for protein surfaces that are enriched in polar and charged residues --
roughly half the surface consists of hydrophobic atoms.
Our findings, obtained by studying proteins across a broad range of sizes, chemistries, and functions,
suggest that susceptibility to unfavorable perturbations
is a common feature of soluble proteins with well-defined folded structures.
We also find that the strength of the unfavorable potential needed to trigger dewetting is roughly the same across all proteins,
and depends only on the width of the hydration shell we choose to perturb.
Our findings lay the groundwork for systematically disrupting protein-water interactions,
and uncovering regions of proteins that have the weakest (hydrophobic) and the strongest (hydrophilic) interactions with water.
A knowledge of the most hydrophobic protein regions
could enable the prediction of the interfaces through which proteins interact with one another~\cite{Bogan:JMB:1998,DeLano:2002aa,nooren2003diversity,White:2008aa}.
Similarly, uncovering the most hydrophilic protein patches could result in the discovery of novel super-hydrophilic chemical patterns~\cite{nonfouling}.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.8\textwidth]{fig/fig1}
\vspace{-0.3in}
\caption{
How interfacial waters respond to an unfavorable potential.
(a) Simulation snapshots of the CH$_3$-terminated SAM-water interface.
The SAM atoms are shown in space-fill representation;
the $N_v$ waters in the interfacial observation volume, $v$, are shown in the licorice representation, and the rest as lines.
The cylindrical $v$ is chosen to have a radius, $R_v = 2$~nm, and a width, $w = 0.3$~nm.
(b) In response to an unfavorable biasing potential, $\phi N_v$, the average number of interfacial waters, $ \langle N_v \rangle_{\phi} $,
decreases at both the hydrophobic CH$_3$-terminated and the hydrophilic OH-terminated SAM surfaces.
However, as the potential strength, $\phi$ is increased, $ \langle N_v \rangle_{\phi} $ near the hydrophilic SAM decreases gradually;
whereas, $ \langle N_v \rangle_{\phi} $ near the hydrophobic SAM decreases sharply~\cite{LCW,Patel:JPCB:2010,Patel:JPCB:2012,Xi:JCTC:2016}.
(c) These differences are also evident in the $\phi$-dependence of the susceptibility, $ \chi_v \equiv -\partial \langle N_v \rangle_{\phi} / \partial (\beta\phi)$,
which is roughly constant for the hydrophilic SAM, but shows a marked peak for the hydrophobic SAM.
Thus, interfacial waters at a hydrophobic surface are susceptible to unfavorable perturbations~\cite{LCW,LLCW,Patel:JPCB:2010,Patel:JPCB:2012}.
(d) Simulation snapshot of the ubiquitin protein,
highlighting the observation volume, $v$,
which contains waters in the first hydration shell of the protein.
Protein atoms are shown in space-fill representation, and colored according to their atom types (white = hydrophobic, blue = hydrophilic) following ref.~\cite{rossky};
waters in $v$ are shown in licorice representation, and the rest are shown as lines.
The well-characterized hydrophobic patch of ubiquitin, which mediates its interactions with other proteins,
is also shown~\cite{ubiq_recog}.
(e) The average number of protein hydration waters, $ \langle N_v \rangle_{\phi} $, decreases in a sigmoidal manner as $\phi$ is increased.
(f) The corresponding susceptibility, $ \chi_v $, displays a maximum, suggesting that protein hydration waters
are also susceptible to unfavorable perturbations, akin to those near hydrophobic surfaces.
\vspace{-0.25in}
}
\label{fig1}
\end{figure*}
\section{Results and Discussion}
\subsection{Water near uniform, flat surfaces}
To illustrate the molecular signatures of surface hydrophobicity,
we first review the contrasting behavior of water near CH$_3$-terminated (hydrophobic) and OH-terminated (hydrophilic)
self-assembled monolayer (SAM) surfaces.
The SAM surfaces are not only flat and uniform, and thereby considerably simpler than proteins,
but their hydrophobicity can also be defined unambiguously using a macroscopic measure, such as the water droplet contact angle.
We focus on water molecules in a cylindrical observation volume, $v$, at the SAM-water interface, as shown in Figure~\ref{fig1}a.
We choose a radius, $R_v = 2$~nm, and a width, $w = 0.3$~nm, for the cylindrical $v$;
with this choice, $v$ at either SAM surface contains an average of roughly 120 waters.
Following previous work~\cite{Patel:JSP:2011,Patel:JPCB:2012,Patel:JPCB:2014,Xi:JCTC:2016},
we then perturb the interfacial waters in $v$ by applying an unfavorable biasing potential, $\phi N_v$,
where $\phi$ represents the strength of the potential,
and $N_v$ is the number of coarse-grained waters in $v$;
a more precise definition of $N_v$ is included in the $\mathrm{Supporting~Information}$.
The potential imposes an energetic penalty that increases linearly with $N_v$,
so that
as $\phi$ is increased, waters are displaced from $v$, resulting in a decrease in the average water numbers, $ \langle N_v \rangle_{\phi} $, next to both SAM surfaces;
see Figure~\ref{fig1}b.
The decrease in $ \langle N_v \rangle_{\phi} $ with increasing $\phi$ is linear for the hydrophilic SAM.
In comparison, the corresponding $ \langle N_v \rangle_{\phi} $-values for the hydrophobic SAM are smaller for every $\phi$,
highlighting the relative ease of displacing waters.
Moreover, the decrease in $ \langle N_v \rangle_{\phi} $ with increasing $\phi$ is sharp (and sigmoidal)
near the hydrophobic surface rather than gradual (and linear) as it is near the hydrophilic surface.
This contrast can be seen even more clearly in Figure~\ref{fig1}c, which shows the susceptibility,
$ \chi_v \equiv -\partial \langle N_v \rangle_{\phi} / \partial (\beta\phi)$, as a function of $\phi$; here, $\beta = 1 / k_{\rm B}T $, $k_{\rm B}$ is the Boltzmann constant, and $T$ is the system temperature.
The susceptibility is nearly constant for the hydrophilic surface.
However, it shows a pronounced peak for the hydrophobic surface,
suggesting that a collective dewetting of the interfacial waters
can be triggered when a sufficiently strong unfavorable potential is applied.
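To make these relations concrete, note that the susceptibility is simply the
variance of $N_v$ in the biased ensemble,
$ \chi_v = \langle N_v^2 \rangle_{\phi} - \langle N_v \rangle_{\phi}^2$.
A minimal sketch of how $ \langle N_v \rangle_{\phi} $ and $ \chi_v $ could be
estimated by reweighting samples of $N_v$ follows; in practice, the rare
low-$N_v$ states require enhanced sampling (e.g., umbrella sampling), so
simple reweighting is shown only for illustration:
\begin{verbatim}
import numpy as np

def biased_moments(Nv, beta_phi):
    # Reweight unbiased samples of N_v to the ensemble biased by
    # phi*N_v; returns <N_v>_phi and the susceptibility
    # chi_v = -d<N_v>_phi/d(beta*phi) = Var_phi(N_v).
    Nv = np.asarray(Nv, dtype=float)
    w = np.exp(-beta_phi*(Nv - Nv.min()))  # stabilized weights
    w /= w.sum()
    mean = np.sum(w*Nv)
    chi = np.sum(w*(Nv - mean)**2)
    return mean, chi
\end{verbatim}
Scanning the potential strength and plotting the returned variance then
traces out curves like those in Figure~\ref{fig1}c.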
\subsection{Perturbing the protein hydration shell}
In contrast with the uniform SAM surfaces, proteins are heterogeneous, rugged, and amphiphilic.
Their surfaces tend to have polar and charged residues to ensure that the protein is soluble in water,
as well as hydrophobic residues to drive protein interactions.
Given the amphiphilicity of proteins, how might their hydration waters respond to an unfavorable potential?
Should we expect a protein's hydration waters to be displaced gradually, as at the hydrophilic SAM surface?
Or should we expect them to undergo collective dewetting, as at the hydrophobic SAM surface?
To address these questions, we first study ubiquitin, a highly-conserved protein involved in numerous signaling pathways, including protein degradation~\cite{ubiq_1}.
Although it is a relatively small protein (76 amino acid residues),
ubiquitin displays many of the characteristic features of a soluble globular protein,
including a stable folded structure, a chemically and topographically heterogeneous surface, and
interactions with diverse molecules that are crucial to its function (Figure~\ref{fig1}d)~\cite{ubiq_recog}.
Many of these interactions are mediated by a well-documented hydrophobic patch,
which is also shown in Figure~\ref{fig1}d.
To characterize the overall strength of protein-water interactions, we again apply a biasing potential, $\phi N_v$,
where $N_v$ is now the number of coarse-grained waters in the entire protein hydration shell, $v$.
The hydration shell, $v$, is defined as the union of spherical sub-volumes centered on all the protein heavy atoms,
with each sub-volume chosen to have the same radius, $R_v$.
Such a definition allows $v$ to capture the ruggedness of the underlying protein surface,
with the width of the hydration shell determined by $R_v$.
Here we choose $R_v=0.6$~nm so that only waters in the first hydration shell of the protein are included in $v$ (Figure~\ref{fig1}d).
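As a sketch of how $N_v$ could be counted for such a union-of-spheres volume
(using a $k$-d tree for the distance queries), consider the following; the
sharp count shown here stands in for the smoothed, coarse-grained $N_v$ that
is actually biased in our simulations (see the $\mathrm{Supporting~Information}$):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def waters_in_shell(heavy_atoms, water_oxygens, Rv=0.6):
    # Count water oxygens within Rv (nm) of any protein heavy
    # atom, i.e., inside the union of spheres defining v
    d, _ = cKDTree(heavy_atoms).query(water_oxygens, k=1)
    return int(np.count_nonzero(d <= Rv))
\end{verbatim}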
The decrease in the average number of ubiquitin hydration waters, $ \langle N_v \rangle_{\phi} $, in response to the unfavorable potential, $\phi N_v$, is shown in Figure~\ref{fig1}e.
Interestingly, $ \langle N_v \rangle_{\phi} $ displays a sigmoidal dependence on $\phi$, akin to that for the hydrophobic CH$_3$-terminated SAM surface (Figure~\ref{fig1}b).
Correspondingly, a clear peak is also observed in the susceptibility around $\phi^* \approx 2~ k_{\rm B}T $ (Figure~\ref{fig1}f).
Thus, the hydration shell of the inherently amphiphilic and incredibly complex surface of the ubiquitin protein dewets collectively in response to an unfavorable perturbation.
\begin{figure*}[thb]
\centering
\includegraphics[width=1.\textwidth]{fig/fig2-v6}
\vspace{-0.25in}
\caption{
The hydration shells of proteins with diverse sizes, shapes, chemical patterns, and functional roles are susceptible to an unfavorable potential.
For each protein, surface atoms of interest are shown in white (non-polar) or blue (polar or charged);
the rest are shown in gray.
Susceptibilities of the protein hydration waters, $ \chi_v \equiv -\partial \langle N_v \rangle_{\phi} / \partial (\beta\phi)$, to the biasing potential strength, $\phi$,
are also shown.
(a) Hydrophobin II is a small fungal protein that adsorbs to water-vapor interfaces via a large hydrophobic patch (top);
the rest of the protein is amphiphilic (bottom)~\cite{hfb_struct,hfb_rev}.
(b) The protein that makes up a structural sub-unit of the hepatitis B viral capsid is shown,
along with the amphiphilic interface through which two protein sub-units bind;
the hydrophobic patch on the binding interface is also highlighted.
(c) The signaling protein, MDM2, interacts with its binding partner, p53,
through a smaller interaction interface, resembling a crevice that is lined with hydrophobic residues.
(d) The enzyme malate dehydrogenase forms a homo-dimer through a binding interface,
which is sufficiently hydrophilic that it retains waters in the bound state;
those waters are shown in red/white.
(e) The bacterial RNase barnase employs five charged residues to bind with its inhibitor barstar in one of the most stable complexes known;
structural waters at the barnase-barstar binding interface are also shown.
(f) Lysozyme is a soluble protein with a highly amphiphilic surface that resembles a checker pattern.
For each of these diverse proteins, the susceptibility of waters in the entire protein hydration shell to the unfavorable potential,
$ \chi_v $, displays a marked peak.
Moreover, the peak occurs at roughly the same potential strength, $\phi^* \approx 2~ k_{\rm B}T $, for every protein.}
\vspace{-0.25in}
\label{fig2}
\end{figure*}
\subsection{How the hydration shells of diverse proteins respond to unfavorable potentials}
Is ubiquitin unique?
If not, how general is the susceptibility of protein hydration waters to an unfavorable potential?
To address this question, we studied six additional proteins,
spanning a range of sizes, chemical patterns, and functional roles;
see Figure~\ref{fig2} as well as Figure~S2 in the $\mathrm{Supporting~Information}$.
Given the importance of hydrophobic surface moieties in situating the interfacial waters at the edge of a dewetting transition,
we first considered other proteins with well-defined hydrophobic patches.
We begin with the fungal protein Hydrophobin II, which is highly surface-active, and is known to self-assemble at water-vapor interfaces~\cite{hakanpaa2006hydrophobin}.
Although Hydrophobin II is charge neutral overall, the protein surface displays 10 charged residues.
In Figure~\ref{fig2}a, we show both the hydrophobic face of Hydrophobin II, which is enriched in hydrophobic residues,
as well as the remainder of the protein, which has been shown to be super-hydrophilic~\cite{Patel:JPCB:2014}.
As with ubiquitin, the susceptibility, $ \chi_v $, for Hydrophobin II also displays a marked peak.
Next, we consider the human hepatitis B viral capsid protein, which has a net charge of -6, but displays an even larger hydrophobic patch than the one on Hydrophobin II (Figure~\ref{fig2}b).
That patch drives the binding of two capsid proteins to form a dimer that further assembles into a 240-protein capsid shell~\cite{capsid_struct}.
The viral capsid protein also displays a clear peak in $ \chi_v $.
Does the collective dewetting seen in the above proteins stem from the
presence of extended hydrophobic patches on their surfaces?
Although many proteins possess such patches, not all of them do;
instead, most protein surfaces display chemical patterns that are amphiphilic,
and feature only smaller hydrophobic regions.
The signaling protein MDM2 contains such a modest hydrophobic groove, which is nevertheless important from a functional standpoint;
it enables MDM2 to exercise control over cellular senescence by binding to the transactivation domain of the tumor-suppresor protein p53~\cite{mdm2_struct}.
As shown in Figure~\ref{fig2}c, MDM2 also displays a peak in susceptibility, $ \chi_v $.
Might the large or small, but well-defined hydrophobic patches on ubiquitin, Hydrophobin II, the capsid sub-unit, and MDM2
be responsible for rendering their hydration shell waters susceptible to unfavorable perturbation?
To address this question, we study proteins that are known for being anomalously hydrophilic or charged.
Malate dehydrogenase is a large hydrophilic protein with 61 charged surface residues.
The protein dimerizes into a metabolic enzyme, and plays an important role in the citric acid cycle.
The interface through which the protein monomers bind is fairly hydrophilic, featuring 5 charged residues and several other polar residues.
In fact, regions of the binding interface are so hydrophilic that they hold on to their waters even in the bound state.
In other words, the binding interface features structured waters that bridge the two interacting proteins;
such bridging waters, which are observed in the crystal structure of the malate dehydrogenase dimer,
are shown in Figure~\ref{fig2}d~\cite{bridging,mdh_struct}.
The proteins discussed previously, in contrast, feature binding interfaces that are entirely dry.
Interestingly, even for the largely hydrophilic malate dehydrogenase protein,
we observe a peak in susceptibility, $ \chi_v $, in response to an unfavorable perturbation (Figure~\ref{fig2}d).
Another fairly hydrophilic protein that features a charged interaction interface is barnase,
a bacterial RNase that interacts with its inhibitor, barstar, in one of the strongest known protein-protein interactions~\cite{barnase}.
The high-affinity sub-picomolar binding between barnase and barstar is facilitated by the formation of electrostatic contacts
between five positively charged residues on barnase and five negatively charged residues on barstar~\cite{barnase}.
Remarkably, a clear peak in susceptibility is also observed for barnase (Figure~\ref{fig2}e).
Finally, we study T4 lysozyme, a bacteriophage protein that catalyzes the hydrolysis of the peptidoglycan layer of bacterial cell walls~\cite{lysozyme_struct}, and has 45 charged surface residues with an overall charge of +9.
Lysozyme does not appear to participate in interactions with proteins other than its substrates,
or to possess a clear hydrophobic patch; rather, it displays a checkered pattern of hydrophobic and hydrophilic atoms (Figure~\ref{fig2}f).
As with all of the proteins studied, the hydration shell of lysozyme also displays a marked peak in susceptibility.
Our results, obtained across proteins with a diversity of sizes, biological functions, and surface chemistries, thus suggest that susceptibility to an unfavorable potential is a general property of protein hydration waters.
\subsection{Characterizing protein surfaces: residues vs atoms}
Although collective dewetting in response to an unfavorable potential may be expected for proteins that are fairly hydrophobic,
it is somewhat surprising that even the more hydrophilic and charged proteins display such behavior.
To better characterize the similarities and differences in the surface chemistries of the seven proteins discussed above,
we plot the fraction of their surface residues that are charged and hydrophobic in Figures~\ref{fig3}a and~\ref{fig3}b, respectively.
As expected, the more hydrophobic proteins have a smaller fraction of charged residues and a larger fraction of hydrophobic residues,
with the charge fraction ranging from 0.15 to 0.35, and the hydrophobic fraction varying from 0.5 to 0.2.
To interrogate whether the surface chemistries of these seven proteins are representative of the larger class of folded, globular proteins,
we additionally estimated these quantities for an expanded set containing a total of 20 proteins.
The results are included in Figure~S4 of the $\mathrm{Supporting~Information}$, and highlight that the characteristics of proteins studied here are indeed representative of typical proteins.
How then do we understand the sensitivity of such diverse protein hydration shells to unfavorable potentials,
and their resemblance to extended hydrophobic surfaces?
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{fig/fig3}
\caption{
An atom-centric view of diverse protein surfaces reveals that they are more hydrophobic
than anticipated by the corresponding residue-centric view.
(a) Fraction of protein surface residues that are charged.
(b) Fraction of surface residues that are hydrophobic.
(c) Fraction of protein surface atoms that are hydrophobic according to the Kapcha-Rossky classification~\cite{rossky}.
}
\label{fig3}
\end{figure}
To answer this question, we draw inspiration from work by Kapcha and Rossky, who highlighted that amino acid residues are not monolithic,
but are instead heterogeneous, and are composed of both hydrophobic and hydrophilic atoms~\cite{rossky}.
Kapcha and Rossky thus advocate adopting an atom-centric, rather than a residue-centric view of the protein surface.
They further suggest that an atom be classified as hydrophobic only if the magnitude of its partial charge (in units of the elementary charge) is less than 0.25 in the OPLS force field, and hydrophilic otherwise.
Following these authors, we plot the fraction of surface atoms (not residues) that are hydrophobic in Figure~\ref{fig3}c.
Interestingly, we find that the fraction of surface atoms that are hydrophobic is not only larger than the corresponding fraction of surface residues,
but that roughly half the protein surface (or more) consists of hydrophobic atoms.
Importantly, the fraction of surface atoms that are hydrophobic is uniformly high for all proteins studied here.
To better understand these results,
we analyzed the atomic composition of the surface residues;
as shown in Figure~S5 of the $\mathrm{Supporting~Information}$, roughly 80\% of atoms belonging to hydrophobic residues are hydrophobic,
but nearly 50\% of atoms belonging to hydrophilic (polar or charged) residues are also hydrophobic.
Thus, although polar and charged surface residues do not contribute as heavily to hydrophobic surface atoms,
they nevertheless have substantive contributions.
We note that following Kapcha and Rossky, we classify not just the protein heavy atoms, but also hydrogen atoms as being either hydrophobic or hydrophilic~\cite{rossky}.
If the protein hydrogen atoms are excluded from the analysis, the fraction of hydrophobic surface atoms is somewhat lower,
but remains close to 0.5 as shown in Figure~S4 of the $\mathrm{Supporting~Information}$.
Our results thus suggest that for a wide variety of proteins, roughly half the surface consists of hydrophobic atoms;
these atoms situate the protein hydration waters at the edge of a dewetting transition, making them particularly susceptible to unfavorable potentials.
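A minimal sketch of this atom-level bookkeeping follows; the surface mask
(e.g., obtained from a solvent-accessible surface area calculation) and the
array layout are our assumptions:
\begin{verbatim}
import numpy as np

def hydrophobic_atom_fraction(charges, is_surface, cutoff=0.25):
    # Kapcha-Rossky classification: an atom is hydrophobic if
    # |q| < cutoff (OPLS partial charges, in units of e)
    q = np.abs(np.asarray(charges)[np.asarray(is_surface)])
    return float(np.mean(q < cutoff))
\end{verbatim}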
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{fig/fig4}
\caption{
How the location of the peak in susceptibility depends on the width of the protein hydration shell, $v$.
(a) Susceptibility of the ubiquitin hydration waters to the biasing potential strength, $\phi$,
is shown for hydration shells with different widths;
as $R_v$ is increased, the peak in $ \chi_v $ shifts to lower $\phi$-values.
(b) The peak location, $\beta\phi^*$ as a function of $R_v^{-1}$ (symbols), and a linear fit to the data (line) are shown.
}
\label{fig4}
\end{figure}
\subsection{Strength of unfavorable potential needed to trigger dewetting}
Interestingly, not only do all of the proteins studied here show a peak in $ \chi_v $,
the location of the characteristic peak in susceptibility, $\phi^*$,
is observed to be roughly $2~ k_{\rm B}T $ in all cases (Figure \ref{fig2}).
To understand this observation, we first recognize that for $\phi>0$, the biasing potential, $\phi N_v$, favors configurations with lower $N_v$-values and thereby lower densities.
As a result, the biasing potential effectively lowers the pressure in the protein hydration shell to:
$P_{\rm eff} \approx P - \phi \rho_{\rm w}$, where $P$ is the system pressure and $\rho_{\rm w}$ is the number density of liquid water.
For a sufficiently large $\phi$, the tension (negative pressure) exerted on the protein hydration waters prompts the nucleation of vapor in certain regions of $v$.
The biasing potential thus stabilizes a water-vapor interface;
the pressure drop across this interface is related
to the corresponding interfacial tension, $\gamma$, and the mean interfacial curvature, $\bar{\kappa}$,
according to the Young-Laplace equation, $P - P_{\rm eff} = \gamma \bar{\kappa}$.
Moreover, because the biasing potential is only experienced by waters in $v$,
the interfacial curvature is determined by the radius, $R_v$, of the spherical sub-volumes defining $v$,
such that $\bar{\kappa} \propto 1/R_v$.
Thus, the biasing potential strength needed to trigger vapor nucleation ought to be: $\phi^* \approx (P - P_{\rm eff}) / \rho_{\rm w} \propto (\gamma / \rho_{\rm w}) (1/R_v)$.
By systematically varying $R_v$, and repeating our calculations for ubiquitin,
we find that as $R_v$ is increased, the peak in susceptibility indeed shifts to lower values (Figure~\ref{fig4}a).
Furthermore, as shown in Figure~\ref{fig4}b, $\phi^*$ also varies linearly with $1/R_v$ as predicted;
the best-fit line through the origin has a slope of 1.38~nm,
which is of the same order as $\beta \gamma / \rho_{\rm w} = 0.45$~nm,
estimated using $\rho_{\rm w}=33$~nm$^{-3}$ and $\gamma=60.2$~mJ/m$^2$ for SPC/E water.
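The comparison amounts to simple unit bookkeeping, sketched here at $T=300$~K:
\begin{verbatim}
kT = 4.14e-21       # J, at T = 300 K
gamma = 60.2e-3     # J/m^2, SPC/E liquid-vapor surface tension
rho_w = 33.0e27     # waters per m^3
print(gamma/(kT*rho_w)*1e9)   # beta*gamma/rho_w, ~0.44 nm
\end{verbatim}
which reproduces the $\approx 0.45$~nm quoted above to within rounding.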
\section{Conclusions and Outlook}
Proteins employ intricate topographical and chemical patterns,
which have evolved to facilitate their many biological functions.
Although the space of such patterns is immense,
it is likely constrained by common characteristics that
all proteins must possess in order to function properly.
For example, all proteins must have favorable interactions with water to be soluble,
which requires the presence of hydrophilic groups on their surfaces.
Conversely, protein surfaces must also feature hydrophobic regions that interact poorly with water,
and provide a driving force for proteins to interact with other molecules.
In this article, we shed light on how proteins accomplish these competing goals
by balancing their overall interactions with their hydration waters.
We show that roughly half the atoms on the protein surface are hydrophobic -- a fact that can be obfuscated by focusing on surface residues rather than surface atoms.
We also find that the hydration shells of diverse proteins
-- even those with highly charged surfaces and amphiphilic interaction interfaces --
are highly susceptible to an unfavorable potential.
Our results thus suggest that hydrophobic atoms on the protein surface situate its hydration waters at the edge of a dewetting transition,
which can be triggered by an unfavorable perturbation.
Consistent with our results, signatures of collective transitions have also been observed in studies of partially hydrated proteins.
For example, Cui {\it et al.} found that partially hydrated proteins undergo a percolation transition at a critical value of protein hydration~\cite{Cui:Protein:2014}.
Similarly, in studying the uptake of water from the vapor phase by proteins,
Debenedetti and co-workers found protein hydration to display hysteresis
-- a hallmark of collective transitions --
between the adsorption and desorption branches of the isotherm~\cite{Palmer:JPCL:2012,Kim_2015}.
These authors also found that polar and charged residues contributed to the collective wetting of a dry protein (and the associated hysteresis)~\cite{kim2017microscopic};
correspondingly, here we find that non-polar regions of the protein give rise to the collective dewetting of a hydrated protein.
We also find that the biasing potential strength needed to trigger dewetting is inversely proportional to the width of the hydration shell,
but does not depend meaningfully on the particular protein being perturbed.
Thus, our results not only suggest that susceptibility to an unfavorable perturbation is a general feature of the hydration shells of proteins,
but also highlight that the strength of the perturbation needed to trigger dewetting is remarkably similar across different proteins.
This finding suggests a near-universal calibration of the perturbation strength across diverse proteins, i.e., by considering $\phi$ relative to $\phi^*$.
It also establishes a framework for systematically classifying how favorable the interactions between water and different parts of the protein surface are.
In particular, we expect that locations on the protein surface that dewet at low $\phi$-values (relative to $\phi^*$)
will correspond to the most hydrophobic regions on the protein surface,
whereas regions that retain their waters even at high $\phi$-values will be the most hydrophilic.
Given the importance of hydrophobicity in driving protein interactions,
regions of the protein surface that dewet most readily may
correspond closely with patches on the protein that participate in interactions~\cite{Bogan:JMB:1998,keskin2005hot,White:2008aa,Abrams:2013:PSFB}.
We are investigating whether such hydrophobic protein regions could serve as predictors of protein interaction sites,
and plan to report our findings in a future study.
Similarly, identification of the most hydrophilic regions of the protein could facilitate the discovery of novel super-hydrophilic chemical patterns;
an understanding of what enables such patterns to have strong interactions with water could also serve as the basis for the rational design of protein non-fouling surfaces or surfaces that display super-oleophobicity underwater~\cite{nonfouling,superoleo}.
\begin{acknowledgement}
A.J.P. gratefully acknowledges financial support from the National Science Foundation (CBET 1652646, CHE 1665339, and UPENN MRSEC DMR 11-20901), and a fellowship from the Alfred P. Sloan Research Foundation.
N.B.R. was supported by the National Science Foundation grant CBET 1652646.
\end{acknowledgement}
\begin{suppinfo}
In the $\mathrm{Supporting~Information}$, we include details of our simulations,
information on enhanced sampling techniques that we use,
plots supporting Figure~\ref{fig2}, and
additional analysis pertaining to the diversity of the proteins studied here.
\end{suppinfo}
\providecommand{\latin}[1]{#1}
\makeatletter
\providecommand{\doi}
{\begingroup\let\do\@makeother\dospecials
\catcode`\{=1 \catcode`\}=2 \doi@aux}
\providecommand{\doi@aux}[1]{\endgroup\texttt{#1}}
\makeatother
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{70}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Ashbaugh and Hatch(2008)Ashbaugh, and Hatch]{Ashbaugh:2008:JACS}
Ashbaugh,~H.~S.; Hatch,~H.~W. Natively {{Unfolded Protein Stability}} as a
{{Coil}}-to-{{Globule Transition}} in {{Charge}}/{{Hydropathy Space}}.
\emph{J. Am. Chem. Soc.} \textbf{2008}, \emph{130}, 9536--9542\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Krone \latin{et~al.}(2008)Krone, Hua, Soto, Zhou, Berne, and
Shea]{shea08}
Krone,~M.~G.; Hua,~L.; Soto,~P.; Zhou,~R.; Berne,~B.~J.; Shea,~J.-E. Role of
{{Water}} in {{Mediating}} the {{Assembly}} of {{Alzheimer Amyloid}}-$\beta$
{{A$\beta$16}}-22 {{Protofilaments}}. \emph{J. Am. Chem. Soc.} \textbf{2008},
\emph{130}, 11066--11072\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Levy and Onuchic(2006)Levy, and Onuchic]{levy2006water}
Levy,~Y.; Onuchic,~J.~N. Water Mediation in Protein Folding and Molecular
Recognition. \emph{Annu. Rev. Biophys. Biomol. Struct.} \textbf{2006},
\emph{35}, 389--415\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Berne \latin{et~al.}(2009)Berne, Weeks, and Zhou]{berne_rev09}
Berne,~B.~J.; Weeks,~J.~D.; Zhou,~R. Dewetting and {{Hydrophobic Interaction}}
in {{Physical}} and {{Biological Systems}}. \emph{Annu. Rev. Phys. Chem.}
\textbf{2009}, \emph{60}, 85--103\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jamadagni \latin{et~al.}(2011)Jamadagni, Godawat, and
Garde]{Jamadagni:ARCB:2011}
Jamadagni,~S.~N.; Godawat,~R.; Garde,~S. Hydrophobicity of {{Proteins}} and
{{Interfaces}}: {{Insights}} from {{Density Fluctuations}}. \emph{Annu. Rev.
Chem. Biomol. Eng.} \textbf{2011}, \emph{2}, 147--171\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Vashisth and Abrams(2013)Vashisth, and Abrams]{Abrams:2013:PSFB}
Vashisth,~H.; Abrams,~C.~F. All-Atom Structural Models of Insulin Binding to
the Insulin Receptor in the Presence of a Tandem Hormone-Binding Element.
\emph{Proteins: Struct., Funct., Bioinf.} \textbf{2013}, \emph{81},
1017--1030\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Baron and McCammon(2013)Baron, and McCammon]{Baron:2013aa}
Baron,~R.; McCammon,~J.~A. Molecular Recognition and Ligand Association.
\emph{Annu. Rev. Phys. Chem.} \textbf{2013}, \emph{64}, 151--75\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Tiwary \latin{et~al.}(2015)Tiwary, Limongelli, Salvalaglio, and
Parrinello]{tiwary2015kinetics}
Tiwary,~P.; Limongelli,~V.; Salvalaglio,~M.; Parrinello,~M. Kinetics of
Protein--Ligand Unbinding: {{Predicting}} Pathways, Rates, and Rate-Limiting
Steps. \emph{Proc. Natl. Acad. Sci. U.S.A.} \textbf{2015}, \emph{112},
E386--E391\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Tiwary \latin{et~al.}(2015)Tiwary, Mondal, Morrone, and
Berne]{tiwary2015role}
Tiwary,~P.; Mondal,~J.; Morrone,~J.~A.; Berne,~B. Role of Water and Steric
Constraints in the Kinetics of Cavity--Ligand Unbinding. \emph{Proc. Natl.
Acad. Sci. U.S.A.} \textbf{2015}, \emph{112}, 12015--12019\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Bellissent-Funel} \latin{et~al.}(2016){Bellissent-Funel}, Hassanali,
Havenith, Henchman, Pohl, Sterpone, {van der Spoel}, Xu, and
Garcia]{bellissent2016water}
{Bellissent-Funel},~M.-C.; Hassanali,~A.; Havenith,~M.; Henchman,~R.; Pohl,~P.;
Sterpone,~F.; {van der Spoel},~D.; Xu,~Y.; Garcia,~A.~E. Water Determines the
Structure and Dynamics of Proteins. \emph{Chem. Rev.} \textbf{2016},
\emph{116}, 7673--7697\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jaenicke(2000)]{Jaenicke:2000hb}
Jaenicke,~R. Stability and Stabilization of Globular Proteins in Solution.
\emph{J. Biotechnol.} \textbf{2000}, \emph{79}, 193--203\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu \latin{et~al.}(2004)Liu, Shahrokh, and Jun]{Shire:JPS:2004}
Liu,~S. J.~S.; Shahrokh,~Z.; Jun, Challenges in the Development of High Protein
Concentration Formulations. \emph{J. Pharm. Sci.} \textbf{2004}, \emph{93},
1390--1402\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shen \latin{et~al.}(2006)Shen, Cheung, Errington, and
Truskett]{Shen:2006aa}
Shen,~V.~K.; Cheung,~J.~K.; Errington,~J.~R.; Truskett,~T.~M. Coarse-Grained
Strategy for Modeling Protein Stability in Concentrated Solutions. {{II}}:
Phase Behavior. \emph{Biophys. J.} \textbf{2006}, \emph{90}, 1949--60\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Trevino \latin{et~al.}(2008)Trevino, Scholtz, and
Pace]{Trevino:2008jh}
Trevino,~S.~R.; Scholtz,~J.~M.; Pace,~C.~N. Measuring and Increasing Protein
Solubility. \emph{J. Pharm. Sci.} \textbf{2008}, \relax
\mciteBstWouldAddEndPunctfalse
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Thirumalai \latin{et~al.}(2012)Thirumalai, Reddy, and
Straub]{Thirumalai:2012}
Thirumalai,~D.; Reddy,~G.; Straub,~J.~E. Role of {{Water}} in {{Protein
  Aggregation}} and {{Amyloid Polymorphism}}. \emph{Acc. Chem. Res.}
\textbf{2012}, \emph{45}, 83--92\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Palmer and Debenedetti(2012)Palmer, and Debenedetti]{Palmer:JPCL:2012}
Palmer,~J.~C.; Debenedetti,~P.~G. Computer {{Simulation}} of {{Water
  Sorption}} on {{Flexible Protein Crystals}}. \emph{J. Phys. Chem. Lett.}
  \textbf{2012}, \emph{3}, 2713--2718\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Chong and Ham(2014)Chong, and Ham]{Ham:Angw:2014}
Chong,~S.-H.; Ham,~S. Interaction with the {{Surrounding Water Plays}} a {{Key
Role}} in {{Determining}} the {{Aggregation Propensity}} of {{Proteins}}.
\emph{Angew. Chem., Int. Ed.} \textbf{2014}, \emph{53}, 3751--3751\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Remsing \latin{et~al.}(2018)Remsing, Xi, and Patel]{Remsing:JPCB:2018}
Remsing,~R.~C.; Xi,~E.; Patel,~A.~J. Protein {{Hydration Thermodynamics}}:
{{The Influence}} of {{Flexibility}} and {{Salt}} on {{Hydrophobin II
Hydration}}. \emph{J. Phys. Chem. B} \textbf{2018}, \emph{122},
3635--3646\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Patel \latin{et~al.}(2011)Patel, Varilly, Jamadagni, Acharya, Garde,
and Chandler]{Patel:PNAS:2011}
Patel,~A.~J.; Varilly,~P.; Jamadagni,~S.~N.; Acharya,~H.; Garde,~S.;
Chandler,~D. Extended Surfaces Modulate Hydrophobic Interactions of
Neighboring Solutes. \emph{Proc. Natl. Acad. Sci. U.S.A.} \textbf{2011},
\emph{108}, 17678--17683\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Patel and Garde(2014)Patel, and Garde]{Patel:JPCB:2014}
Patel,~A.~J.; Garde,~S. Efficient {{Method To Characterize}} the
{{Context}}-{{Dependent Hydrophobicity}} of {{Proteins}}. \emph{J. Phys.
Chem. B} \textbf{2014}, \emph{118}, 1564--1573\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Xi \latin{et~al.}(2016)Xi, Remsing, and Patel]{Xi:JCTC:2016}
Xi,~E.; Remsing,~R.~C.; Patel,~A.~J. Sparse {{Sampling}} of {{Water Density
Fluctuations}} in {{Interfacial Environments}}. \emph{J. Chem. Theory
Comput.} \textbf{2016}, \emph{12}, 706--713\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lum \latin{et~al.}(1999)Lum, Chandler, and Weeks]{LCW}
Lum,~K.; Chandler,~D.; Weeks,~J.~D. Hydrophobicity at {{Small}} and {{Large
Length Scales}}. \emph{J. Phys. Chem. B} \textbf{1999}, \emph{103},
4570--4577\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Varilly \latin{et~al.}(2011)Varilly, Patel, and Chandler]{LLCW}
Varilly,~P.; Patel,~A.~J.; Chandler,~D. An Improved Coarse-Grained Model of
Solvation and the Hydrophobic Effect. \emph{J. Chem. Phys.} \textbf{2011},
\emph{134}, 074109\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Vaikuntanathan \latin{et~al.}(2016)Vaikuntanathan, Rotskoff, Hudson,
and Geissler]{vaikuntanathan2016necessity}
Vaikuntanathan,~S.; Rotskoff,~G.; Hudson,~A.; Geissler,~P.~L. Necessity of
Capillary Modes in a Minimal Model of Nanoscale Hydrophobic Solvation.
\emph{Proc. Natl. Acad. Sci. U.S.A.} \textbf{2016}, \emph{113},
E2224--E2230\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Xi and Patel(2016)Xi, and Patel]{Xi:PNAS:2016}
Xi,~E.; Patel,~A.~J. The Hydrophobic Effect, and Fluctuations: {{The}} Long and
the Short of It. \emph{Proc. Natl. Acad. Sci. U.S.A.} \textbf{2016},
\emph{113}, 4549--4551\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Patel and Chandler(2010)Patel, and Chandler]{Patel:JPCB:2010}
Patel,~A.~J.; Chandler,~D. Fluctuations of {{Water}} near {{Extended
Hydrophobic}} and {{Hydrophilic Surfaces}}. \emph{J. Phys. Chem. B}
\textbf{2010}, \emph{114}, 1632--1637\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Patel \latin{et~al.}(2012)Patel, Varilly, Jamadagni, Hagan, Chandler,
and Garde]{Patel:JPCB:2012}
Patel,~A.~J.; Varilly,~P.; Jamadagni,~S.~N.; Hagan,~M.~F.; Chandler,~D.;
Garde,~S. Sitting at the {{Edge}}: {{How Biomolecules Use Hydrophobicity}} to
{{Tune Their Interactions}} and {{Function}}. \emph{J. Phys. Chem. B}
\textbf{2012}, \emph{116}, 2498--2503\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lee \latin{et~al.}(1984)Lee, McCammon, and Rossky]{LeeMR}
Lee,~C.~Y.; McCammon,~J.~A.; Rossky,~P.~J. The Structure of Liquid Water at an
Extended Hydrophobic Surface. \emph{J. Chem. Phys.} \textbf{1984}, \emph{80},
4448--4455\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Mittal and Hummer(2008)Mittal, and Hummer]{mittal_pnas08}
Mittal,~J.; Hummer,~G. Static and Dynamic Correlations in Water at Hydrophobic
Interfaces. \emph{Proc. Natl. Acad. Sci. U.S.A.} \textbf{2008}, \emph{105},
20130--20135\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sarupria and Garde(2009)Sarupria, and Garde]{Sarupria:PRL:2009}
Sarupria,~S.; Garde,~S. Quantifying {{Water Density Fluctuations}} and
{{Compressibility}} of {{Hydration Shells}} of {{Hydrophobic Solutes}} and
{{Proteins}}. \emph{Phys. Rev. Lett.} \textbf{2009}, \emph{103}, 037803\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Godawat \latin{et~al.}(2009)Godawat, Jamadagni, and
Garde]{Godawat:PNAS:2009}
Godawat,~R.; Jamadagni,~S.~N.; Garde,~S. Characterizing {{Hydrophobicity}} of
{{Interfaces}} by {{Using Cavity Formation}}, {{Solute Binding}}, and {{Water
Correlations}}. \emph{Proc. Natl. Acad. Sci. U.S.A.} \textbf{2009},
\emph{106}, 15119--15124\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Willard and Chandler(2010)Willard, and Chandler]{Willard:JPCB:2010}
Willard,~A.~P.; Chandler,~D. Instantaneous Liquid Interfaces. \emph{J. Phys.
Chem. B} \textbf{2010}, \emph{114}, 1954--1958\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Heyden and Tobias(2013)Heyden, and Tobias]{heyden2013spatial}
Heyden,~M.; Tobias,~D.~J. Spatial Dependence of Protein-Water Collective
Hydrogen-Bond Dynamics. \emph{Phys. Rev. Lett.} \textbf{2013}, \emph{111},
218101\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sosso \latin{et~al.}(2016)Sosso, Caravati, Rotskoff, Vaikuntanathan,
and Hassanali]{sosso2016role}
Sosso,~G.~C.; Caravati,~S.; Rotskoff,~G.; Vaikuntanathan,~S.; Hassanali,~A. On
the Role of Nonspherical Cavities in Short Length-Scale Density Fluctuations
in Water. \emph{J. Phys. Chem. A} \textbf{2016}, \emph{121}, 370--380\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shin and Willard(2018)Shin, and Willard]{Shin:JPCB:2018}
Shin,~S.; Willard,~A.~P. Water's Interfacial Hydrogen Bonding Structure Reveals
the Effective Strength of Surface-Water Interactions. \emph{J. Phys. Chem. B}
\textbf{2018}, \relax
\mciteBstWouldAddEndPunctfalse
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Giovambattista \latin{et~al.}(2008)Giovambattista, Lopez, Rossky, and
Debenedetti]{Giovambattista:PNAS:2008}
Giovambattista,~N.; Lopez,~C.~F.; Rossky,~P.~J.; Debenedetti,~P.~G.
Hydrophobicity of Protein Surfaces: {{Separating}} Geometry from Chemistry.
\emph{Proc. Natl. Acad. Sci. U.S.A.} \textbf{2008}, \emph{105},
2274--2279\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Giovambattista \latin{et~al.}(2009)Giovambattista, Debenedetti, and
Rossky]{giovambattista2009enhanced}
Giovambattista,~N.; Debenedetti,~P.~G.; Rossky,~P.~J. Enhanced Surface
Hydrophobicity by Coupling of Surface Polarity and Topography. \emph{Proc.
Natl. Acad. Sci. U.S.A.} \textbf{2009}, \emph{106}, 15181--15185\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Acharya \latin{et~al.}(2010)Acharya, Vembanur, Jamadagni, and
Garde]{Acharya:Faraday:2010}
Acharya,~H.; Vembanur,~S.; Jamadagni,~S.~N.; Garde,~S. Mapping Hydrophobicity
at the Nanoscale: {{Applications}} to Heterogeneous Surfaces and Proteins.
\emph{Faraday Discuss.} \textbf{2010}, \emph{146}, 353--365\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Daub \latin{et~al.}(2010)Daub, Wang, Kudesia, Bratko, and
Luzar]{daub2010influence}
Daub,~C.~D.; Wang,~J.; Kudesia,~S.; Bratko,~D.; Luzar,~A. The Influence of
Molecular-Scale Roughness on the Surface Spreading of an Aqueous Nanodrop.
\emph{Faraday Discuss.} \textbf{2010}, \emph{146}, 67--77\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Mittal and Hummer(2010)Mittal, and Hummer]{mittal2010interfacial}
Mittal,~J.; Hummer,~G. Interfacial Thermodynamics of Confined Water near
Molecularly Rough Surfaces. \emph{Faraday Discuss.} \textbf{2010},
\emph{146}, 341--352\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wang \latin{et~al.}(2011)Wang, Bratko, and Luzar]{Wang19042011}
Wang,~J.; Bratko,~D.; Luzar,~A. Probing Surface Tension Additivity on
Chemically Heterogeneous Surfaces by a Molecular Approach. \emph{Proc. Natl.
Acad. Sci. U.S.A.} \textbf{2011}, \emph{108}, 6374--6379\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Fogarty and Laage(2014)Fogarty, and Laage]{fogarty2014water}
Fogarty,~A.~C.; Laage,~D. Water Dynamics in Protein Hydration Shells: {{The}}
Molecular Origins of the Dynamical Perturbation. \emph{J. Phys. Chem. B}
\textbf{2014}, \emph{118}, 7715--7729\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Harris and Pettitt(2014)Harris, and Pettitt]{harris2014effects}
Harris,~R.~C.; Pettitt,~B.~M. Effects of Geometry and Chemistry on Hydrophobic
Solvation. \emph{Proc. Natl. Acad. Sci. U.S.A.} \textbf{2014}, \emph{111},
14681--14686\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Xi \latin{et~al.}(2017)Xi, Venkateshwaran, Li, Rego, Patel, and
Garde]{Xi:PNAS:2017}
Xi,~E.; Venkateshwaran,~V.; Li,~L.; Rego,~N.; Patel,~A.~J.; Garde,~S.
Hydrophobicity of Proteins and Nanostructured Solutes Is Governed by
Topographical and Chemical Context. \emph{Proc. Natl. Acad. Sci. U.S.A.}
\textbf{2017}, 201700092\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Monroe and Shell(2018)Monroe, and Shell]{Shell:PNAS:2018}
Monroe,~J.~I.; Shell,~M.~S. Computational Discovery of Chemically Patterned
Surfaces that Effect Unique Hydration Water Dynamics. \emph{Proc. Natl. Acad.
Sci. U.S.A.} \textbf{2018}, \emph{115}, 8093--8098\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Heyden()]{heyden:2018}
Heyden,~M. Disassembling solvation free energies into local
contributions---Toward a microscopic understanding of solvation processes.
\emph{Wiley Interdiscip. Rev.: Comput. Mol. Sci.} e1390\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bogan and Thorn(1998)Bogan, and Thorn]{Bogan:JMB:1998}
Bogan,~A.~A.; Thorn,~K.~S. Anatomy of Hot Spots in Protein Interfaces. \emph{J.
Mol. Biol.} \textbf{1998}, \emph{280}, 1--9\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[DeLano(2002)]{DeLano:2002aa}
DeLano,~W.~L. Unraveling Hot Spots in Binding Interfaces: Progress and
Challenges. \emph{Curr. Opin. Struct. Biol.} \textbf{2002}, \emph{12},
14--20\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Nooren and Thornton(2003)Nooren, and Thornton]{nooren2003diversity}
Nooren,~I.~M.; Thornton,~J.~M. Diversity of Protein--Protein Interactions.
\emph{EMBO J.} \textbf{2003}, \emph{22}, 3486--3492\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[White \latin{et~al.}(2008)White, Westwell, and Brahemi]{White:2008aa}
White,~A.~W.; Westwell,~A.~D.; Brahemi,~G. Protein-Protein Interactions as
Targets for Small-Molecule Therapeutics in Cancer. \emph{Expert Rev. Mol.
Med.} \textbf{2008}, \emph{10}, e8\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Chen \latin{et~al.}(2009)Chen, Cao, and Jiang]{nonfouling}
Chen,~S.; Cao,~Z.; Jiang,~S. Ultra-Low Fouling Peptide Surfaces Derived from
Natural Amino Acids. \emph{Biomaterials} \textbf{2009}, \emph{30},
5892--5896\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kapcha and Rossky(2014)Kapcha, and Rossky]{rossky}
Kapcha,~L.~H.; Rossky,~P.~J. A {{Simple Atomic}}-{{Level Hydrophobicity Scale
Reveals Protein Interfacial Structure}}. \emph{J. Mol. Biol.} \textbf{2014},
\emph{426}, 484--498\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Winget and Mayor(2010)Winget, and Mayor]{ubiq_recog}
Winget,~J.~M.; Mayor,~T. The {{Diversity}} of {{Ubiquitin Recognition}}: {{Hot
Spots}} and {{Varied Specificity}}. \emph{Mol. Cell} \textbf{2010},
\emph{38}, 627--635\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Patel \latin{et~al.}(2011)Patel, Varilly, Chandler, and
Garde]{Patel:JSP:2011}
Patel,~A.~J.; Varilly,~P.; Chandler,~D.; Garde,~S. Quantifying {{Density
Fluctuations}} in {{Volumes}} of {{All Shapes}} and {{Sizes Using Indirect
Umbrella Sampling}}. \emph{J. Stat. Phys.} \textbf{2011}, \emph{145},
265--275\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hershko and Ciechanover(1982)Hershko, and Ciechanover]{ubiq_1}
Hershko,~A.; Ciechanover,~A. Mechanisms of Intracellular Protein Breakdown.
\emph{Annu. Rev. Biochem.} \textbf{1982}, \emph{51}, 335--364\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hakanp\"a\"a \latin{et~al.}(2006)Hakanp\"a\"a, Linder, Popov, Schmidt,
and Rouvinen]{hfb_struct}
Hakanp\"a\"a,~J.; Linder,~M.; Popov,~A.; Schmidt,~A.; Rouvinen,~J. Hydrophobin
{{HFBII}} in {{Detail}}: {{Ultrahigh}}-{{Resolution Structure}} at 0.75
{{\AA}}. \emph{Acta Crystallogr., Sect. D: Biol. Crystallogr.} \textbf{2006},
\emph{62}, 356--367\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[W\"osten and {de Vocht}(2000)W\"osten, and {de Vocht}]{hfb_rev}
W\"osten,~H. A.~B.; {de Vocht},~M.~L. Hydrophobins, the {{Fungal Coat
Unravelled}}. \emph{Biochim. Biophys. Acta, Rev. Biomembr.} \textbf{2000},
\emph{1469}, 79--86\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hakanpaa \latin{et~al.}(2006)Hakanpaa, Linder, Popov, Schmidt, and
Rouvinen]{hakanpaa2006hydrophobin}
Hakanpaa,~J.; Linder,~M.; Popov,~A.; Schmidt,~A.; Rouvinen,~J. Hydrophobin
{{HFBII}} in Detail: Ultrahigh-Resolution Structure at 0.75 {{A}}. \emph{Acta
Crystallogr., Sect D: Biol. Crystallogr.} \textbf{2006}, \emph{62},
356--367\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wynne \latin{et~al.}(1999)Wynne, Crowther, and Leslie]{capsid_struct}
Wynne,~S.~A.; Crowther,~R.~A.; Leslie,~A.~G. The {{Crystal Structure}} of the
{{Human Hepatitis B Virus Capsid}}. \emph{Mol. Cell} \textbf{1999}, \emph{3},
771--780\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kussie \latin{et~al.}(1996)Kussie, Gorina, Marechal, Elenbaas, Moreau,
Levine, and Pavletich]{mdm2_struct}
Kussie,~P.~H.; Gorina,~S.; Marechal,~V.; Elenbaas,~B.; Moreau,~J.;
Levine,~A.~J.; Pavletich,~N.~P. Structure of the {{MDM2 Oncoprotein Bound}}
to the {{P53 Tumor Suppressor Transactivation Domain}}. \emph{Science}
\textbf{1996}, \emph{274}, 948--953\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rodier \latin{et~al.}(2005)Rodier, Bahadur, Chakrabarti, and
Janin]{bridging}
Rodier,~F.; Bahadur,~R.~P.; Chakrabarti,~P.; Janin,~J. Hydration of
{{Protein}}-{{Protein Interfaces}}. \emph{Proteins: Struct., Funct., Bioinf.}
\textbf{2005}, \emph{60}, 36--45\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zaitseva \latin{et~al.}(2009)Zaitseva, Meneely, and Lamb]{mdh_struct}
Zaitseva,~J.; Meneely,~K.~M.; Lamb,~A.~L. Structure of {{Escherichia Coli
Malate Dehydrogenase}} at 1.45 {{A Resolution}}. \emph{Acta Crystallogr.,
Sect. F: Struct. Biol. Cryst. Commun.} \textbf{2009}, \emph{65},
866--869\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Schreiber and Fersht(1995)Schreiber, and Fersht]{barnase}
Schreiber,~G.; Fersht,~A.~R. Energetics of {{Protein}}-{{Protein
Interactions}}: {{Analysis Of the Barnase}}-{{Barstar Interface}} by {{Single
Mutations}} and {{Double Mutant Cycles}}. \emph{J. Mol. Biol.} \textbf{1995},
\emph{248}, 478--486\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shoichet \latin{et~al.}(1995)Shoichet, Baase, Kuroki, and
Matthews]{lysozyme_struct}
Shoichet,~B.~K.; Baase,~W.~A.; Kuroki,~R.; Matthews,~B.~W. A {{Relationship}}
between {{Protein Stability}} and {{Protein Function}}. \emph{Proc. Natl.
Acad. Sci. U.S.A.} \textbf{1995}, \emph{92}, 452--456\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Cui \latin{et~al.}(2014)Cui, Ou, and Patel]{Cui:Protein:2014}
Cui,~D.; Ou,~S.; Patel,~S. Protein-Spanning Water Networks and Implications for
Prediction of Protein--Protein Interactions Mediated through Hydrophobic
Effects. \emph{Proteins: Struct., Funct., Bioinf.} \textbf{2014}, \emph{82},
3312--3326\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kim \latin{et~al.}(2015)Kim, Palmer, and Debenedetti]{Kim_2015}
Kim,~S.~B.; Palmer,~J.~C.; Debenedetti,~P.~G. A {{Computational Study}} of the
{{Effect}} of {{Matrix Structural Order}} on {{Water Sorption}} by
{{Trp}}-{{Cage Miniproteins}}. \emph{J. Phys. Chem. B} \textbf{2015},
\emph{119}, 1847--1856\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kim \latin{et~al.}(2017)Kim, Sparano, Singh, and
Debenedetti]{kim2017microscopic}
Kim,~S.~B.; Sparano,~E.~M.; Singh,~R.~S.; Debenedetti,~P.~G. Microscopic Origin
of Hysteresis in Water Sorption on Protein Matrices. \emph{J. Phys. Chem.
Lett.} \textbf{2017}, \emph{8}, 1185--1190\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Keskin \latin{et~al.}(2005)Keskin, Ma, and Nussinov]{keskin2005hot}
Keskin,~O.; Ma,~B.; Nussinov,~R. Hot Regions in Protein--Protein Interactions:
The Organization and Contribution of Structurally Conserved Hot Spot
Residues. \emph{J. Mol. Biol.} \textbf{2005}, \emph{345}, 1281--1294\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Si \latin{et~al.}(2018)Si, Dong, and Jiang]{superoleo}
Si,~Y.; Dong,~Z.; Jiang,~L. Bioinspired Designs of Superhydrophobic and
Superhydrophilic Materials. \emph{ACS Cent. Sci.} \textbf{2018}, \relax
\mciteBstWouldAddEndPunctfalse
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{}{\mcitedefaultseppunct}\relax
\unskip.}
\end{mcitethebibliography}
\end{document}
\section{Simulation Details}
All systems were simulated using version 4.5.3 of the GROMACS molecular dynamics package~\cite{gromacs}.
The leapfrog integrator~\cite{leapfrog} was used to integrate the equations of motion with a time-step of 2 fs.
The oxygen-hydrogen bonds in water were constrained using the SETTLE algorithm~\cite{settle},
and all other bonds to hydrogens were constrained using the LINCS algorithm~\cite{lincs}.
The SPC/E water model~\cite{spce} was used throughout,
and the proteins were simulated using the AMBER99SB force field~\cite{amber99sb}.
Short-range van der Waals and Coulombic interactions were truncated using a cut-off of 1.0 nm, and long-range electrostatics were calculated using the particle-mesh Ewald (PME) algorithm~\cite{PME}.
All simulations were performed in the canonical ensemble; the temperature was maintained at $T=300$~K using the stochastic velocity-rescale thermostat~\cite{vrescale}.
To ensure that water density fluctuations in the observation volume were not suppressed by our use of the canonical ensemble, we created a buffering water-vapor interface at the edge of the simulation box, as described elsewhere~\cite{Miller:PNAS:2007,Patel:PNAS:2011}.
The presence of a buffering interface effectively maintained the system at its coexistence pressure.
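For concreteness, these run parameters can be collected in a GROMACS parameter (\texttt{.mdp}) file. The short Python sketch below writes such a file; it is a minimal illustration rather than the exact input used in this work, and any option not named above (e.g., the thermostat coupling time \texttt{tau-t}) is an assumed placeholder.
\begin{verbatim}
# Sketch: write a GROMACS .mdp file collecting the run parameters
# stated above. Options not named in the text (e.g., tau-t) are
# assumed placeholder values, not those used in this work.
mdp_options = {
    "integrator":  "md",         # leapfrog integrator
    "dt":          0.002,        # 2 fs time step
    "constraints": "h-bonds",    # constrain bonds to hydrogens (LINCS)
    "rvdw":        1.0,          # van der Waals cut-off (nm)
    "rcoulomb":    1.0,          # short-range Coulomb cut-off (nm)
    "coulombtype": "PME",        # particle-mesh Ewald electrostatics
    "tcoupl":      "v-rescale",  # stochastic velocity-rescale thermostat
    "tc-grps":     "System",
    "tau-t":       0.5,          # assumed coupling time (ps)
    "ref-t":       300,          # temperature (K)
    "pcoupl":      "no",         # canonical (NVT) ensemble
}

with open("nvt.mdp", "w") as f:
    for key, value in mdp_options.items():
        f.write("%-12s = %s\n" % (key, value))
\end{verbatim}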
\subsection{Applying Unfavorable Biasing Potentials Using INDUS}
The GROMACS package was suitably modified to bias the coarse-grained water number, $N_v$, in observation volumes of interest using the Indirect Umbrella Sampling (INDUS) prescription~\cite{Patel:JSP:2011,Patel:JPCB:2014}.
The Gaussian coarse-graining function used in INDUS was parameterized with a standard deviation of $\sigma=0.01 \; \rm{nm}$ and a truncation length $r_c = 0.02 \; \rm{nm}$.
In the main text and in Figure~\ref{figS_n_v_phi}, we report averages in biased ensembles; the statistics were governed by the Hamiltonian: $\mathcal{H}_\phi = \mathcal{H}_0 + \phi N_v$, where $\mathcal{H}_0$ is the unbiased Hamiltonian and $\phi$ represents the strength of the biasing potential~\cite{Patel:JPCB:2014,Xi:JCTC:2016}.
The biased ensemble averages $\langle N_v \rangle_\phi$ and $\chi_v \equiv -\partial \langle N_v \rangle_\phi / \partial (\beta \phi) = \langle \delta N_v^2 \rangle_\phi$ were obtained either by sampling directly from biased ensembles with different $\phi$-values or by reweighting the underlying unbiased free energy landscape, $F_v(N)$.
In the latter case, $F_v(N)$ was obtained using umbrella sampling by performing a series of overlapping biased simulations that were analyzed using the Multistate Bennett Acceptance Ratio (MBAR)~\cite{wham,mbar} method.
All biased simulations were run for a total of $3$~ns; the first 500 ps were discarded for equilibration.
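To make the reweighting step explicit, the following Python sketch (ours, purely illustrative) computes $\langle N_v \rangle_\phi$ and $\chi_v$ from a given unbiased free energy profile; the arrays \texttt{N} and \texttt{F} are assumed to hold the water numbers and $\beta F_v(N)$ obtained from the MBAR analysis described above.
\begin{verbatim}
import numpy as np

def biased_moments(N, F, beta_phi):
    """Reweight an unbiased profile F = beta*F_v(N) to the biased
    ensemble with Hamiltonian H_0 + phi*N_v, and return
    <N_v>_phi and chi_v = <(delta N_v)^2>_phi."""
    log_w = -F - beta_phi * N     # log of unnormalized P_phi(N)
    log_w -= log_w.max()          # guard against overflow
    p = np.exp(log_w)
    p /= p.sum()
    mean = np.sum(p * N)
    chi = np.sum(p * (N - mean) ** 2)
    return mean, chi

# Usage: sweep beta*phi to trace out the sigmoidal <N_v>_phi and
# locate the susceptibility maximum.
# for bp in np.linspace(0.0, 5.0, 101):
#     print(bp, *biased_moments(N, F, bp))
\end{verbatim}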
\subsection{SAM Surfaces}
The parameterization and setup of the SAM surfaces has been described in detail elsewhere~\cite{Godawat:PNAS:2009,Sarupria:PRL:2009,Shenogina_PRL_2009}.
Briefly, both the hydrophilic and hydrophobic SAMs were composed of 10-carbon alkyl chains that were terminated by sulfurs on one end and either an OH- or CH$_3$- head group on the other.
Each SAM surface had a total of 224 chains; the sulfur atoms were arranged on a hexagonal lattice with a spacing of 0.5 nm between adjacent chains.
Both SAM surfaces had a cross-section of $7 \times 7$~nm in the $y$ and $z$ dimensions and a height of approximately 1.3 nm in the $x$ dimension.
The head groups of the SAM surfaces were solvated with a total of 6,464 water molecules; the corresponding water slab had a width of approximately 4 nm.
The simulation boxes were then extended in the $x$ direction to create a buffering liquid-vapor interface.
The solvated SAM surfaces were equilibrated for $1$~ns to produce the starting structures for the biased simulations.
A cylindrical observation volume, $v$, with a height of $0.3$ nm in the $x$ dimension and a radius of $2.0$ nm in the $y$-$z$ plane
was placed at the SAM-water interface.
The coarse-grained number of water molecules in $v$, $N_v$, was biased using a harmonic potential with a spring constant of $\kappa = 0.24$ and $0.34~\mathrm{kJ/mol}$ for the CH$_3$- and OH-terminated SAMs, respectively.
For each SAM, a total of 22 biased simulations were employed to sample the entire range of $N_v$ and to estimate $F_v(N)$.
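As an illustration of the umbrella setup, the sketch below tiles the accessible range of $N_v$ with harmonic windows; the equal spacing and the maximum $N_v$ value are our assumptions --- only $\kappa$ and the number of windows are stated above.
\begin{verbatim}
import numpy as np

kappa     = 0.24    # kJ/mol; CH3-terminated SAM (0.34 for OH)
n_windows = 22
N_max     = 130.0   # assumed unbiased <N_v>; not stated in the text

# Equally spaced window centers spanning [0, N_max]; in practice the
# centers may be adjusted so that adjacent windows overlap well.
N_stars = np.linspace(0.0, N_max, n_windows)

def umbrella_potential(N_v, N_star):
    """Harmonic bias U(N_v) = (kappa/2) * (N_v - N_star)^2."""
    return 0.5 * kappa * (N_v - N_star) ** 2
\end{verbatim}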
\subsection{Protein Systems}
The following crystal structures were used to prepare the seven proteins described in the main text: Ubiquitin (PDB: 1UBQ) \cite{ubiq_struct}, barnase (PDB: 1BRS) \cite{barnase_struct}, hepatitis B viral capsid (PDB: 1QGT) \cite{capsid_struct}, MDM2 (PDB: 1YCR) \cite{mdm2_struct}, bacteriophage T4 lysozyme (PDB: 253L) \cite{lysozyme_struct}, hydrophobin II (PDB: 2B97) \cite{hfb_struct}, and malate dehydrogenase (PDB: 3HHP) \cite{mdh_struct}.
All protein systems were prepared from their PDB structures by first removing crystallographic waters and co-solutes.
Each system was then placed in a cubic box; the side lengths were chosen so that all protein atoms were at least 1.6 nm from the box edges.
The proteins were solvated using the GROMACS `genbox' utility. Sodium or chloride counter ions were added, if needed, to maintain a net neutral charge.
The simulation boxes were extended in the $z$ direction to create a buffering liquid-vapor interface.
Water slabs were maintained at the centers of the simulation boxes by restraining the $z$ component of their centers of mass to a dummy atom in the center of the box using a harmonic restraint with a spring constant of $1000$~kJ/mol/nm$^2$.
The initial configurations for all protein systems were prepared using steepest-descent energy minimization followed by 1~ns of equilibration.
These configurations were used as the starting structures for the biased simulations.
The observation volume, $v$, was defined as the union of spherical sub-volumes centered on the initial positions of the protein heavy (non-hydrogen) atoms.
Each spherical sub-volume had the same radius, $R_v$, which was chosen to be $0.6$~nm for all proteins; for ubiquitin, $R_v$ was additionally varied systematically from $0.4$~nm to $1.0$~nm.
To ensure that the protein hydration shell continued to overlap with $v$ throughout the biased simulations,
the protein heavy atoms were position-restrained to their initial positions by applying a harmonic potential with a spring constant of $\kappa=1000$~kJ/mol/nm$^2$.
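The geometry of $v$ can be made concrete with a short sketch: a point lies in the union of spheres if and only if its distance to the nearest heavy atom is at most $R_v$. The Python fragment below (illustrative only; array names are ours) counts water oxygens inside $v$; the true INDUS order parameter additionally smooths this indicator with Gaussians.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def waters_in_union_volume(water_O, heavy_xyz, R_v=0.6):
    """Count water oxygens inside v, the union of spheres of radius
    R_v (nm) centered on the protein heavy atoms.

    water_O   : (n_w, 3) array of water oxygen positions (nm)
    heavy_xyz : (n_h, 3) array of heavy-atom positions (nm)
    """
    tree = cKDTree(heavy_xyz)
    d, _ = tree.query(water_O, k=1)  # nearest heavy-atom distance
    return int(np.sum(d <= R_v))
\end{verbatim}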
\section{Response of Protein Hydration Waters to an Unfavorable Potential}
\begin{figure*}[thb]
\centering
\includegraphics[width=0.8\textwidth]{fig/figS_n_v_phi}
\caption{
In Figure~2 of the main text, the dependence of the susceptibility, $\chi_v$, on the biasing potential strength, $\phi$, is shown for six proteins.
Here, the dependence of the average coarse-grained number of waters, $\langle N_v \rangle_\phi$, on $\phi$ is shown.
In each case, $\langle N_v \rangle_\phi$ displays a sigmoidal dependence on the biasing potential strength, $\phi$.
}
\label{figS_n_v_phi}
\end{figure*}
\section{Properties of the Protein Surfaces Studied}
Figure~\ref{figS1} highlights the differences in the surface chemistries of the seven protein systems described in the main text.
These proteins were chosen to span the broad range of surface chemistries representative of globular proteins.
\vspace{-0.6in}
\begin{figure*}[thb]
\centering
\includegraphics[width=0.7\textwidth]{fig/figS1}
\vspace{-0.6in}
\caption{
The seven proteins studied here display a range of surface chemistries.
The number of hydrophobic, hydrophilic, and total surface atoms for each protein are shown in panels (a), (b), and (c), respectively;
the corresponding number of surface residues are shown in panels (d), (e), and (f).
The total number of charged surface residues (positive or negative), the net charge, and the total dipole moment (in Debye units) of the proteins are shown in panels (g), (h), and (i), respectively.
}
\label{figS1}
\end{figure*}
\subsection{Determining Protein Surface Atoms, Heavy Atoms and Residues}
We identified protein surface atoms by their exposure to solvent.
To determine solvent exposure, the average number of water oxygens within 0.6 nm of each protein heavy atom was calculated from a 2.5 ns equilibrium simulation; those with 6 water oxygens or more were classified as surface heavy atoms.
Hydrogen atoms bonded to any surface heavy atoms were also considered to be surface atoms.
Moreover, any amino acid residue containing at least one surface atom was categorized as a surface residue.
Although many other reasonable ways of making the above classifications exist,
we do not believe that they are likely to influence our qualitative findings.
The total surface atoms and surface residues for the seven proteins
studied in the main text are shown in Figures~\ref{figS1}c and~\ref{figS1}f, respectively.
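A minimal sketch of this classification is given below (array names and the use of a k-d tree are ours); it assumes the heavy-atom positions are fixed, consistent with the position restraints described above.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def surface_heavy_atoms(water_frames, heavy_xyz,
                        cutoff=0.6, n_min=6):
    """Flag a heavy atom as a surface atom if, on average over the
    trajectory, at least n_min water oxygens lie within `cutoff`
    (nm) of it.

    water_frames : list of (n_w, 3) water-oxygen coordinate arrays
    heavy_xyz    : (n_h, 3) heavy-atom coordinates (assumed fixed)
    """
    counts = np.zeros(len(heavy_xyz))
    for frame in water_frames:
        tree = cKDTree(frame)
        counts += [len(tree.query_ball_point(x, cutoff))
                   for x in heavy_xyz]
    counts /= len(water_frames)
    return counts >= n_min   # boolean mask over heavy atoms
\end{verbatim}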
\begin{figure*}[thb]
\centering
\includegraphics[width=0.8\textwidth]{fig/figS2}
\caption{
The seven proteins studied span a wide range of sizes that is representative of globular proteins.
Results for the studied systems are shown in color, whereas those for 13 additional proteins are shown in gray,
and are included for comparison.
The number of protein surface atoms, surface residues, and surface heavy atoms (non-hydrogen atoms) are shown in panels (a), (b), and (c), respectively.
Panels (d) and (e) illustrate that the number of surface atoms varies linearly with the number of surface residues and the number of surface heavy atoms, respectively, highlighting that all three quantities contain the same information about the relative sizes of the proteins.
}
\label{figS2}
\end{figure*}
\subsection{Chemical Classification of Protein Surface Atoms and Residues}
The protein surface atoms were classified as either hydrophobic or hydrophilic according to the atom-wise hydropathy scale developed by Kapcha and Rossky~\cite{rossky}.
The number of protein surface atoms that are hydrophobic and hydrophilic are shown in Figures~\ref{figS1}a and~\ref{figS1}b, respectively, for the seven proteins studied in the main text.
Amongst the surface residues, the following were classified as hydrophobic: alanine, valine, leucine, isoleucine, phenylalanine, proline, and tryptophan.
The remaining residues were classified as hydrophilic.
Out of the hydrophilic residues, those with a net charge of -1 or +1 were classified as charged; these residues include aspartate, glutamate, arginine, lysine, and protonated histidine.
A pH of 7 was assumed, and protonation states were assigned by the GROMACS utility `pdb2gmx';
protonation states of histidine residues were assigned according to their positions in the crystal structure.
The total number of hydrophobic, hydrophilic, and charged residues for the seven proteins
studied in the main text are shown in Figures~\ref{figS1}d,~\ref{figS1}e, and~\ref{figS1}g, respectively.
The net charge on the protein surface was determined by summing the charges of all surface residues that are charged, and is shown in Figure~\ref{figS1}h.
Protein dipole moments were calculated from the crystal structures using the Protein Dipole Moments server (http://bip.weizmann.ac.il/dipol)~\cite{dipole_server}, and are shown in Debye units in Figure \ref{figS1}i.
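The bookkeeping behind Figure~\ref{figS1} can be summarized in a few lines of Python (a sketch under stated assumptions: \texttt{atom\_is\_phobic} stands in for the per-atom Kapcha--Rossky labels, and \texttt{HIP} is the Amber name for protonated histidine):
\begin{verbatim}
HYDROPHOBIC_RES = {"ALA", "VAL", "LEU", "ILE", "PHE", "PRO", "TRP"}
CHARGE = {"ASP": -1, "GLU": -1, "ARG": +1, "LYS": +1, "HIP": +1}

def surface_summary(surface_atoms, atom_is_phobic, surface_res):
    """Tally surface chemistry. `atom_is_phobic` is a placeholder
    dict (atom index -> bool) encoding the Kapcha-Rossky scale;
    `surface_res` is a list of three-letter residue names."""
    n_phob_atoms = sum(atom_is_phobic[i] for i in surface_atoms)
    n_phil_atoms = len(surface_atoms) - n_phob_atoms
    n_phob_res   = sum(r in HYDROPHOBIC_RES for r in surface_res)
    n_charged    = sum(r in CHARGE for r in surface_res)
    net_charge   = sum(CHARGE.get(r, 0) for r in surface_res)
    return (n_phob_atoms, n_phil_atoms,
            n_phob_res, n_charged, net_charge)
\end{verbatim}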
\subsection{Comparison with an Expanded Protein Library}
To determine whether the range of surface chemistries for the seven proteins studied here is characteristic of globular proteins, we examined the surface properties of an expanded library containing 13 additional proteins.
These proteins all participate in protein-protein intermolecular interactions;
otherwise, they were chosen without regard to their surface chemistries
and should represent the typical diversity of protein surface chemistries.
The PDB IDs of the additional proteins are: 1AUO \cite{xtal_1auo}, 1CMB \cite{xtal_1cmb}, 1GVP \cite{xtal_1gvp}, 1HJR \cite{xtal_1hjr}, 1MSB \cite{xtal_1msb}, 1PP2 \cite{xtal_1pp2}, 1UTG \cite{xtal_1utg}, 1WR1 \cite{xtal_1wr1}, 2K6D \cite{xtal_2k6d}, 2QHO \cite{xtal_2qho}, 2RSP \cite{xtal_2rsp}, 2TSC \cite{xtal_2tsc}, and 2Z59 \cite{xtal_2z59}.
These systems span a range of sizes; the number of surface atoms, surface residues, and surface heavy atoms for proteins in this expanded library are shown in Figures~\ref{figS2}a-c.
Trends in protein sizes were also observed to be consistent across the three metrics; see Figures~\ref{figS2}d,e.
\begin{figure*}[thb]
\centering
\includegraphics[width=0.8\textwidth]{fig/figS3}
\caption{
The hydrophobicity of a protein surface depends on whether entire residues or individual atoms are considered.
The fraction of surface residues that are charged (a) or hydrophobic (b) varies substantially across different proteins; roughly 15 to 35\% of the surface residues are charged, and 25 to 50\% of the surface residues are hydrophobic.
The fraction of surface heavy atoms that are hydrophobic (c), or the fraction of all surface atoms that are hydrophobic (d)
are not only higher than the fraction of surface residues that are hydrophobic, but are also remarkably similar across all the proteins;
the hydrophobic fraction of surface heavy atoms ranges from roughly 45 to 50\%,
and that of all surface atoms from 55 to 65\%.
The color scheme is the same as in Figure~\ref{figS2}.
}
\label{figS3}
\end{figure*}
\subsection{Residue- vs Atom-centric Characterization of Protein Surfaces}
In Figure 3 of the main text, we showed that the fraction of hydrophobic surface atoms is larger than the fraction of hydrophobic surface residues. Additionally, the per-atom hydrophobic composition is remarkably similar across the proteins studied, while the per-residue composition is not.
As shown in Figure~\ref{figS3}, these trends are also observed in the expanded library of proteins.
Figures~\ref{figS3}a and~\ref{figS3}b show the fraction of surface residues that are either charged or hydrophobic, respectively;
both fractions vary by nearly 20--25\% across the proteins considered.
However, the variation in the fraction of surface atoms (or heavy atoms) that are hydrophobic is only about 5--10\% (Figures~\ref{figS3}c,d).
Moreover, the fraction of surface atoms that are hydrophobic is greater than 50\% for all protein systems considered, and is consistently greater than the fraction of surface residues that are hydrophobic.
\begin{figure*}[thb]
\centering
\includegraphics[width=0.7\textwidth]{fig/fig_S_frac_from_phob}
\caption{
The differences in the hydrophobic fraction of surface residues and that of surface atoms can be understood by considering the atomic composition of the surface residues.
The fraction of atoms from either hydrophobic (a) or hydrophilic (b) surface residues that are hydrophobic.
As expected, hydrophobic residues are composed primarily of hydrophobic atoms; roughly 80\% of their atoms are hydrophobic.
However, nearly 50\% of the atoms belonging to hydrophilic surface residues are also hydrophobic.
Thus, hydrophilic residues contribute substantially to the overall hydrophobicity of the protein surface.
Consequently, as we consider proteins with surfaces that have a decreasing fraction of hydrophobic residues and an increasing fraction of hydrophilic (and charged) residues,
hydrophobic residues contribute less (c) and hydrophilic residues contribute more (d) to the overall fraction of hydrophobic surface atoms.
The color scheme is the same as in Figure~\ref{figS2}.
}
\label{figS4}
\end{figure*}
To better understand the discrepancy between the fraction of hydrophobic surface atoms and hydrophobic surface residues,
we examined the atomic composition of the surface residues.
Figure \ref{figS4}a shows that nearly 80\% of the surface atoms belonging to hydrophobic residues are themselves hydrophobic;
hydrophobic residues are composed mostly of hydrophobic atoms (the remaining 20\% are primarily backbone atoms).
Surprisingly, nearly 50\% of the surface atoms that belong to hydrophilic residues are also hydrophobic, as shown in Figure \ref{figS4}b.
This observation helps explain why seemingly hydrophilic proteins
with a large fraction of hydrophilic and/or charged surface residues
contain a large fraction of surface hydrophobic atoms.
The fractions of the hydrophobic surface atoms that belong to either hydrophobic or hydrophilic surface residues are shown in Figures~\ref{figS4}c,d, and highlight that
as proteins become seemingly more hydrophilic (i.e., have a larger fraction of surface residues that are hydrophilic),
their hydrophilic residues contribute an increasingly larger fraction of the hydrophobic atoms on the protein surface.
\providecommand{\latin}[1]{#1}
\makeatletter
\providecommand{\doi}
{\begingroup\let\do\@makeother\dospecials
\catcode`\{=1 \catcode`\}=2 \doi@aux}
\providecommand{\doi@aux}[1]{\endgroup\texttt{#1}}
\makeatother
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{41}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Hess \latin{et~al.}(2008)Hess, Kutzner, {van der Spoel}, and
Lindahl]{gromacs}
Hess,~B.; Kutzner,~C.; {van der Spoel},~D.; Lindahl,~E. {{GROMACS}} 4:
{{Algorithms}} for {{Highly Efficient}}, {{Load}}-{{Balanced}}, and
{{Scalable Molecular Simulation}}. \emph{J. Chem. Theory Comput.}
\textbf{2008}, \emph{4}, 435--447\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hockney \latin{et~al.}(1974)Hockney, Goel, and Eastwood]{leapfrog}
Hockney,~R.~W.; Goel,~S.~P.; Eastwood,~J.~W. Quiet High-Resolution Computer
Models of a Plasma. \emph{J. Comput. Phys.} \textbf{1974}, \emph{14},
148--158\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Miyamoto and Kollman(1992)Miyamoto, and Kollman]{settle}
Miyamoto,~S.; Kollman,~P.~A. Settle: {{An}} Analytical Version of the {{SHAKE}}
and {{RATTLE}} Algorithm for Rigid Water Models. \emph{J. Comput. Chem.}
\textbf{1992}, \emph{13}, 952--962\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hess \latin{et~al.}(1997)Hess, Bekker, Berendsen, and Fraaije]{lincs}
Hess,~B.; Bekker,~H.; Berendsen,~H. J.~C.; Fraaije,~J. G. E.~M. {{LINCS}}:
{{A}} Linear Constraint Solver for Molecular Simulations. \emph{J. Comput.
Chem.} \textbf{1997}, \emph{18}, 1463--1472\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Berendsen \latin{et~al.}(1987)Berendsen, Grigera, and Straatsma]{spce}
Berendsen,~H. J.~C.; Grigera,~J.~R.; Straatsma,~T.~P. The Missing Term in
Effective Pair Potentials. \emph{J. Phys. Chem.} \textbf{1987}, \emph{91},
6269--6271\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hornak \latin{et~al.}(2006)Hornak, Abel, Okur, Strockbine, Roitberg,
and Simmerling]{amber99sb}
Hornak,~V.; Abel,~R.; Okur,~A.; Strockbine,~B.; Roitberg,~A.; Simmerling,~C.
Comparison of Multiple {{Amber}} Force Fields and Development of Improved
Protein Backbone Parameters. \emph{Proteins: Struct., Funct., Bioinf.}
\textbf{2006}, \emph{65}, 712--725\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Darden \latin{et~al.}(1993)Darden, York, and Pedersen]{PME}
Darden,~T.; York,~D.; Pedersen,~L. Particle Mesh {{Ewald}}: {{An
N}}$\cdot$log({{N}}) Method for {{Ewald}} Sums in Large Systems. \emph{J.
Chem. Phys.} \textbf{1993}, \emph{98}, 10089--10092\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bussi \latin{et~al.}(2007)Bussi, Donadio, and Parrinello]{vrescale}
Bussi,~G.; Donadio,~D.; Parrinello,~M. Canonical Sampling through Velocity
Rescaling. \emph{J. Chem. Phys.} \textbf{2007}, \emph{126}, 014101\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Miller \latin{et~al.}(2007)Miller, Vanden-Eijnden, and
Chandler]{Miller:PNAS:2007}
Miller,~T.~F.; Vanden-Eijnden,~E.; Chandler,~D. Solvent Coarse-Graining and the
String Method Applied to the Hydrophobic Collapse of a Hydrated Chain.
\emph{Proc. Natl. Acad. Sci. U.S.A.} \textbf{2007}, \emph{104},
14559--14564\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Patel \latin{et~al.}(2011)Patel, Varilly, Jamadagni, Acharya, Garde,
and Chandler]{Patel:PNAS:2011}
Patel,~A.~J.; Varilly,~P.; Jamadagni,~S.~N.; Acharya,~H.; Garde,~S.;
Chandler,~D. Extended Surfaces Modulate Hydrophobic Interactions of
Neighboring Solutes. \emph{Proc. Natl. Acad. Sci. U.S.A.} \textbf{2011},
\emph{108}, 17678--17683\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Patel \latin{et~al.}(2011)Patel, Varilly, Chandler, and
Garde]{Patel:JSP:2011}
Patel,~A.~J.; Varilly,~P.; Chandler,~D.; Garde,~S. Quantifying {{Density
Fluctuations}} in {{Volumes}} of {{All Shapes}} and {{Sizes Using Indirect
Umbrella Sampling}}. \emph{J. Stat. Phys.} \textbf{2011}, \emph{145},
265--275\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Patel and Garde(2014)Patel, and Garde]{Patel:JPCB:2014}
Patel,~A.~J.; Garde,~S. Efficient {{Method To Characterize}} the
{{Context}}-{{Dependent Hydrophobicity}} of {{Proteins}}. \emph{J. Phys.
Chem. B} \textbf{2014}, \emph{118}, 1564--1573\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Xi \latin{et~al.}(2016)Xi, Remsing, and Patel]{Xi:JCTC:2016}
Xi,~E.; Remsing,~R.~C.; Patel,~A.~J. Sparse {{Sampling}} of {{Water Density
Fluctuations}} in {{Interfacial Environments}}. \emph{J. Chem. Theory
Comput.} \textbf{2016}, \emph{12}, 706--713\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kumar \latin{et~al.}(1992)Kumar, Rosenberg, Bouzida, Swendsen, and
Kollman]{wham}
Kumar,~S.; Rosenberg,~J.~M.; Bouzida,~D.; Swendsen,~R.~H.; Kollman,~P.~A.
{{THE}} Weighted Histogram Analysis Method for Free-Energy Calculations on
Biomolecules. {{I}}. {{The}} Method. \emph{J. Comput. Chem.} \textbf{1992},
\emph{13}, 1011--1021\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shirts and Chodera(2008)Shirts, and Chodera]{mbar}
Shirts,~M.~R.; Chodera,~J.~D. Statistically Optimal Analysis of Samples from
Multiple Equilibrium States. \emph{J. Chem. Phys.} \textbf{2008},
\emph{129}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Godawat \latin{et~al.}(2009)Godawat, Jamadagni, and
Garde]{Godawat:PNAS:2009}
Godawat,~R.; Jamadagni,~S.~N.; Garde,~S. Characterizing {{Hydrophobicity}} of
{{Interfaces}} by {{Using Cavity Formation}}, {{Solute Binding}}, and {{Water
Correlations}}. \emph{Proc. Natl. Acad. Sci. U.S.A.} \textbf{2009},
\emph{106}, 15119--15124\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sarupria and Garde(2009)Sarupria, and Garde]{Sarupria:PRL:2009}
Sarupria,~S.; Garde,~S. Quantifying {{Water Density Fluctuations}} and
{{Compressibility}} of {{Hydration Shells}} of {{Hydrophobic Solutes}} and
{{Proteins}}. \emph{Phys. Rev. Lett.} \textbf{2009}, \emph{103}, 037803\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shenogina \latin{et~al.}(2009)Shenogina, Godawat, Keblinski, and
Garde]{Shenogina_PRL_2009}
Shenogina,~N.; Godawat,~R.; Keblinski,~P.; Garde,~S. How {{Wetting}} and
{{Adhesion Affect Thermal Conductance}} of a {{Range}} of {{Hydrophobic}} to
{{Hydrophilic Aqueous Interfaces}}. \emph{Phys. Rev. Lett.} \textbf{2009},
\emph{102}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Vijay-Kumar} \latin{et~al.}(1987){Vijay-Kumar}, Bugg, and
Cook]{ubiq_struct}
{Vijay-Kumar},~S.; Bugg,~C.~E.; Cook,~W.~J. Structure of {{Ubiquitin Refined}}
at 1.8 {{A Resolution}}. \emph{J. Mol. Biol.} \textbf{1987}, \emph{194},
531--544\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Buckle \latin{et~al.}(1994)Buckle, Schreiber, and
Fersht]{barnase_struct}
Buckle,~A.~M.; Schreiber,~G.; Fersht,~A.~R. Protein-{{Protein Recognition}}:
{{Crystal Structural Analysis}} of a {{Barnase}}-{{Barstar Complex}} at
2.0-{{A Resolution}}. \emph{Biochemistry} \textbf{1994}, \emph{33},
8878--8889\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wynne \latin{et~al.}(1999)Wynne, Crowther, and Leslie]{capsid_struct}
Wynne,~S.~A.; Crowther,~R.~A.; Leslie,~A.~G. The {{Crystal Structure}} of the
{{Human Hepatitis B Virus Capsid}}. \emph{Mol. Cell} \textbf{1999}, \emph{3},
771--780\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kussie \latin{et~al.}(1996)Kussie, Gorina, Marechal, Elenbaas, Moreau,
Levine, and Pavletich]{mdm2_struct}
Kussie,~P.~H.; Gorina,~S.; Marechal,~V.; Elenbaas,~B.; Moreau,~J.;
Levine,~A.~J.; Pavletich,~N.~P. Structure of the {{MDM2 Oncoprotein Bound}}
to the {{P53 Tumor Suppressor Transactivation Domain}}. \emph{Science}
\textbf{1996}, \emph{274}, 948--953\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shoichet \latin{et~al.}(1995)Shoichet, Baase, Kuroki, and
Matthews]{lysozyme_struct}
Shoichet,~B.~K.; Baase,~W.~A.; Kuroki,~R.; Matthews,~B.~W. A {{Relationship}}
between {{Protein Stability}} and {{Protein Function}}. \emph{Proc. Natl.
Acad. Sci. U.S.A.} \textbf{1995}, \emph{92}, 452--456\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hakanp\"a\"a \latin{et~al.}(2006)Hakanp\"a\"a, Linder, Popov, Schmidt,
and Rouvinen]{hfb_struct}
Hakanp\"a\"a,~J.; Linder,~M.; Popov,~A.; Schmidt,~A.; Rouvinen,~J. Hydrophobin
{{HFBII}} in {{Detail}}: {{Ultrahigh}}-{{Resolution Structure}} at 0.75
{{\AA}}. \emph{Acta Crystallogr., Sect. D: Biol. Crystallogr.} \textbf{2006},
\emph{62}, 356--367\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zaitseva \latin{et~al.}(2009)Zaitseva, Meneely, and Lamb]{mdh_struct}
Zaitseva,~J.; Meneely,~K.~M.; Lamb,~A.~L. Structure of {{Escherichia Coli
Malate Dehydrogenase}} at 1.45 {{A Resolution}}. \emph{Acta Crystallogr.,
Sect. F: Struct. Biol. Cryst. Commun.} \textbf{2009}, \emph{65},
866--869\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kapcha and Rossky(2014)Kapcha, and Rossky]{rossky}
Kapcha,~L.~H.; Rossky,~P.~J. A {{Simple Atomic}}-{{Level Hydrophobicity Scale
Reveals Protein Interfacial Structure}}. \emph{J. Mol. Biol.} \textbf{2014},
\emph{426}, 484--498\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Felder \latin{et~al.}(2007)Felder, Prilusky, Silman, and
Sussman]{dipole_server}
Felder,~C.~E.; Prilusky,~J.; Silman,~I.; Sussman,~J.~L. A Server and Database
for Dipole Moments of Proteins. \emph{Nucleic Acids Res.} \textbf{2007},
\emph{35}, W512--W521\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kim \latin{et~al.}(1997)Kim, Song, Shin, Hwang, Choe, Yoo, and
Suh]{xtal_1auo}
Kim,~K.~K.; Song,~H.~K.; Shin,~D.~H.; Hwang,~K.~Y.; Choe,~S.; Yoo,~O.~J.;
Suh,~S.~W. Crystal Structure of Carboxylesterase from {{Pseudomonas}}
Fluorescens, an Alpha/Beta Hydrolase with Broad Substrate Specificity.
\emph{Structure (Cambridge, MA, U.S.)} \textbf{1997}, \emph{5},
1571--1584\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rafferty \latin{et~al.}(1989)Rafferty, Somers, {Saint-Girons}, and
Phillips]{xtal_1cmb}
Rafferty,~J.~B.; Somers,~W.~S.; {Saint-Girons},~I.; Phillips,~S.~E.
Three-Dimensional Crystal Structures of {{Escherichia}} Coli Met Repressor
with and without Corepressor. \emph{Nature} \textbf{1989}, \emph{341},
705--710\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Su \latin{et~al.}(1997)Su, Gao, Zhang, Terwilliger, and
Wang]{xtal_1gvp}
Su,~S.; Gao,~Y.~G.; Zhang,~H.; Terwilliger,~T.~C.; Wang,~A.~H. Analyses of the
Stability and Function of Three Surface Mutants ({{R82C}}, {{K69H}}, and
{{L32R}}) of the Gene {{V}} Protein from {{Ff}} Phage by {{X}}-Ray
Crystallography. \emph{Protein Sci.} \textbf{1997}, \emph{6}, 771--780\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ariyoshi \latin{et~al.}(1994)Ariyoshi, Vassylyev, Iwasaki, Nakamura,
Shinagawa, and Morikawa]{xtal_1hjr}
Ariyoshi,~M.; Vassylyev,~D.~G.; Iwasaki,~H.; Nakamura,~H.; Shinagawa,~H.;
Morikawa,~K. Atomic Structure of the {{RuvC}} Resolvase: {{A}} Holliday
Junction-Specific Endonuclease from {{E}}. Coli. \emph{Cell} \textbf{1994},
\emph{78}, 1063--1072\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Weis \latin{et~al.}(1991)Weis, Kahn, Fourme, Drickamer, and
Hendrickson]{xtal_1msb}
Weis,~W.~I.; Kahn,~R.; Fourme,~R.; Drickamer,~K.; Hendrickson,~W.~A. Structure
of the Calcium-Dependent Lectin Domain from a Rat Mannose-Binding Protein
Determined by {{MAD}} Phasing. \emph{Science} \textbf{1991}, \emph{254},
1608--1615\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Brunie \latin{et~al.}(1985)Brunie, Bolin, Gewirth, and
Sigler]{xtal_1pp2}
Brunie,~S.; Bolin,~J.; Gewirth,~D.; Sigler,~P.~B. The Refined Crystal Structure
of Dimeric Phospholipase {{A2}} at 2.5 {{A}}. {{Access}} to a Shielded
Catalytic Center. \emph{J. Biol. Chem.} \textbf{1985},
\emph{260}, 9742--9749\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Morize \latin{et~al.}(1987)Morize, Surcouf, Vaney, Epelboin, Buehner,
Fridlansky, Milgrom, and Mornon]{xtal_1utg}
Morize,~I.; Surcouf,~E.; Vaney,~M.~C.; Epelboin,~Y.; Buehner,~M.;
Fridlansky,~F.; Milgrom,~E.; Mornon,~J.~P. Refinement of the {{C222}}(1)
Crystal Form of Oxidized Uteroglobin at 1.34 {{A}} Resolution. \emph{J. Mol.
Biol.} \textbf{1987}, \emph{194}, 725--739\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ohno \latin{et~al.}(2005)Ohno, Jee, Fujiwara, Tenno, Goda, Tochio,
Kobayashi, Hiroaki, and Shirakawa]{xtal_1wr1}
Ohno,~A.; Jee,~J.; Fujiwara,~K.; Tenno,~T.; Goda,~N.; Tochio,~H.;
Kobayashi,~H.; Hiroaki,~H.; Shirakawa,~M. Structure of the {{UBA}} Domain of
{{Dsk2p}} in Complex with Ubiquitin Molecular Determinants for Ubiquitin
Recognition. \emph{Structure (Cambridge, MA, U.S.)} \textbf{2005}, \emph{13},
521--532\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bezsonova \latin{et~al.}(2008)Bezsonova, Bruce, Wiesner, Lin, Rotin,
and {Forman-Kay}]{xtal_2k6d}
Bezsonova,~I.; Bruce,~M.~C.; Wiesner,~S.; Lin,~H.; Rotin,~D.;
{Forman-Kay},~J.~D. Interactions between the Three {{CIN85 SH3}} Domains and
Ubiquitin: Implications for {{CIN85}} Ubiquitination. \emph{Biochemistry}
\textbf{2008}, \emph{47}, 8937--8949\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kozlov \latin{et~al.}(2007)Kozlov, Nguyen, Lin, De~Crescenzo, Park,
and Gehring]{xtal_2qho}
Kozlov,~G.; Nguyen,~L.; Lin,~T.; De~Crescenzo,~G.; Park,~M.; Gehring,~K.
Structural Basis of Ubiquitin Recognition by the Ubiquitin-Associated
({{UBA}}) Domain of the Ubiquitin Ligase {{EDD}}. \emph{J. Biol. Chem.}
\textbf{2007}, \emph{282}, 35787--35795\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jask\'olski \latin{et~al.}(1990)Jask\'olski, Miller, Rao, Leis, and
Wlodawer]{xtal_2rsp}
Jask\'olski,~M.; Miller,~M.; Rao,~J.~K.; Leis,~J.; Wlodawer,~A. Structure of
the Aspartic Protease from {{Rous}} Sarcoma Retrovirus Refined at 2-{{A}}
Resolution. \emph{Biochemistry} \textbf{1990}, \emph{29}, 5889--5898\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Montfort \latin{et~al.}(1990)Montfort, Perry, Fauman, {Finer-Moore},
Maley, Hardy, Maley, and Stroud]{xtal_2tsc}
Montfort,~W.~R.; Perry,~K.~M.; Fauman,~E.; {Finer-Moore},~J.~S.; Maley,~G.~F.;
Hardy,~L.; Maley,~F.; Stroud,~R.~M. Structure, Multiple Site Binding, and
Segmental Accommodation in Thymidylate Synthase on Binding {{dUMP}} and an
Anti-Folate. \emph{Biochemistry} \textbf{1990}, \emph{29}, 6964--6977\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Schreiner \latin{et~al.}(2008)Schreiner, Chen, Husnjak, Randles,
Zhang, Elsasser, Finley, Dikic, Walters, and Groll]{xtal_2z59}
Schreiner,~P.; Chen,~X.; Husnjak,~K.; Randles,~L.; Zhang,~N.; Elsasser,~S.;
Finley,~D.; Dikic,~I.; Walters,~K.~J.; Groll,~M. Ubiquitin Docking at the
Proteasome through a Novel Pleckstrin-Homology Domain Interaction.
\emph{Nature} \textbf{2008}, \emph{453}, 548--552\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\end{mcitethebibliography}
\end{document}
\section{Introduction}
In 1918, Emmy Noether published a paper that strongly influenced
the physics of the 20th century \cite{MR2761345}. She proved
a theorem asserting that if the Lagrangian is invariant
under changes in the coordinate system, then there exists a conserved
quantity along all the Euler--Lagrange extremals. Over the years,
this result has been studied by many authors and generalized in different directions:
see \cite{Bartos,Gastao:PhD:thesis,GF:JMAA:07,GF2010,MR2323264,book:frac,NataliaNoether,ejc}
and references therein. In particular, in the recent paper \cite{GF2012}, Noether's theorem was formulated
for variational problems with delayed arguments. The result is important
because problems with delays play a crucial role in the modeling
of real-life phenomena in various fields of applications \cite{GoKeMa}.
In order to prove Noether's theorem with delays, it was assumed that
admissible functions are $\mathcal{C}^2$-smooth and that Noether's conserved quantity holds
along all $\mathcal{C}^2$-extremals of the Euler--Lagrange equations with time delay \cite{GF2012}.
Here we remark that when one extends Noether's theorem to the biggest class
for which one can derive the Euler--Lagrange equations, i.e., for Lipschitz continuous functions,
then one can find Lipschitz Euler--Lagrange extremals that fail to satisfy
the Noether conserved quantity established in \cite{GF2012}
(see a simple example in Section~\ref{sec:example}).
We show that to formulate Noether's theorem with time delays
for nonsmooth functions, it is enough to restrict the set of
delayed Euler--Lagrange extremals to those that satisfy the
delayed DuBois--Reymond condition. Moreover,
we prove that this result can be generalized
to higher-order variational problems.
The text is organized as follows. In Section~\ref{sec:prelim}
the fundamental problem of variational calculus with delayed arguments is formulated
and a short review of the results for $\mathcal{C}^2$-smooth admissible functions is given.
In Section~\ref{sec:example} we show, through an example, that nonsmooth
Euler--Lagrange delayed extremals may fail to satisfy Noether's constants of motion \cite{GF2012}.
The main contributions of the paper appear in Sections~\ref{sec:MR} and \ref{sec:MRHO}:
we prove a Noether symmetry theorem with time delay for Lipschitz functions (Theorem~\ref{theo:tnnd}),
Euler--Lagrange and DuBois--Reymond optimality type conditions
for nonsmooth higher-order variational problems with delayed arguments
(Theorems~\ref{Thm:ELdeordm} and~\ref{theo:cDRifm}, respectively),
and a delayed higher-order Noether's symmetry theorem (Theorem~\ref{thm:Noether}).
\section{Preliminaries}
\label{sec:prelim}
In this section we review necessary results on the calculus of variations with time delay.
For more on variational problems with delayed arguments we refer the reader to
\cite{Basin:book,Bok,GoKeMa,DH:1968,Kra,Kharatishvili,Ros}.
The fundamental problem consists of minimizing a functional
\begin{equation}
\label{Pe}
J^{\tau}[q(\cdot)] = \int_{t_{1}}^{t_{2}}
L\left(t,q(t),\dot{q}(t),q(t-\tau),\dot{q}(t-\tau)\right) dt
\end{equation}
subject to boundary conditions
\begin{equation}
\label{Pe2}
q(t)=\delta(t)~\textnormal{ for }~t\in[t_{1}-\tau,t_{1}]~\textnormal{ and }~q(t_2)=q_{t_2}.
\end{equation}
We assume that the Lagrangian $L :[t_1,t_2] \times \mathbb{R}^{4n} \rightarrow \mathbb{R}$, $n\in\mathbb{N}$,
is a $\mathcal{C}^{2}$-function with respect to all its arguments,
admissible functions $q(\cdot)$ are $\mathcal{C}^2$-smooth,
$t_{1}< t_{2}$ are fixed in $\mathbb{R}$,
$\tau$ is a given positive real number such that $\tau<t_{2}-t_{1}$,
and $\delta$ is a given piecewise smooth function on $[t_1-\tau,t_1]$.
Throughout the text, $\partial_{i}L$ denotes the partial derivative of $L$
with respect to its $i$th argument, $i=1,\dots,5$. For convenience of notation,
we introduce the operator $[\cdot]_{\tau}$ defined by
$$
[q]_{\tau}(t)=(t,q(t),\dot{q}(t),q(t-\tau),\dot{q}(t-\tau)).
$$
The next theorem gives a necessary optimality condition of Euler--Lagrange type
for \eqref{Pe}--\eqref{Pe2}.
\begin{theorem}[Euler--Lagrange equations with time delay \cite{DH:1968}]
\label{th:EL1}
If $q(\cdot)\in\mathcal{C}^2$ is a minimizer for problem \eqref{Pe}--\eqref{Pe2},
then $q(\cdot)$ satisfies the following Euler--Lagrange equations with time delay:
\begin{equation}
\label{EL1}
\begin{cases}
\frac{d}{dt}\left\{\partial_{3}L[q]_{\tau}(t)+
\partial_{5}L[q]_{\tau}(t+\tau)\right\}
=\partial_{2}L[q]_{\tau}(t)+\partial_{4}L[q]_{\tau}(t+\tau),
\quad t_{1}\leq t\leq t_{2}-\tau,\\
\frac{d}{dt}\partial_{3}L[q]_{\tau}(t) =\partial_{2}L[q]_{\tau}(t),
\quad t_{2}-\tau\leq t\leq t_{2}.
\end{cases}
\end{equation}
\end{theorem}
\begin{remark}
\label{re:EL}
If one extends the set of admissible functions in problem
\eqref{Pe}--\eqref{Pe2} to the class of Lipschitz continuous functions,
then the Euler--Lagrange equations \eqref{EL1} remain valid.
This result is obtained from our Corollary~\ref{cor:16}
by choosing $m = 1$.
\end{remark}
\begin{definition}[Extremals]
\label{def:scale:ext}
The solutions $q(\cdot)$ of the Euler--Lagrange equations
\eqref{EL1} with time delay are called extremals.
\end{definition}
\begin{definition}[Invariance of \eqref{Pe}]
\label{def:invnd}
Consider the following $s$-parameter group of infinitesimal transformations:
\begin{equation}
\label{eq:tinf}
\begin{cases}
\bar{t} = t + s\eta(t,q) + o(s) \, ,\\
\bar{q}(t) = q(t) + s\xi(t,q) + o(s),
\end{cases}
\end{equation}
where $\eta\in \mathcal{C}^1(\mathbb{R}^{n+1},\mathbb{R})$
and $\xi\in \mathcal{C}^1(\mathbb{R}^{n+1},\mathbb{R}^n)$.
Functional \eqref{Pe} is said to be invariant
under \eqref{eq:tinf} if
\begin{multline*}
\label{eq:invnd}
0 = \frac{d}{ds}
\int_{\bar{t}(I)} L\left(t+s\eta(t,q(t))+o(s),q(t)+s\xi(t,q(t))+o(s),
\frac{\dot{q}(t)+s\dot{\xi}(t,q(t))}{1+s\dot{\eta}(t,q(t))},\right.\\
\left.q(t-\tau)+s\xi(t-\tau,q(t-\tau))+o(s),\frac{\dot{q}(t-\tau)
+s\dot{\xi}(t-\tau,q(t-\tau))}{1+s\dot{\eta}(t-\tau,q(t-\tau))}\right)
(1+s\dot{\eta}(t,q(t))) dt \Biggl.\Biggr|_{s=0}
\end{multline*}
for any subinterval $I \subseteq [t_1,t_2]$.
\end{definition}
\begin{definition}[Constant of motion/conservation law with time delay]
\label{def:leicond}
We say that a quantity
$C(t,t+\tau,q(t),q(t-\tau),q(t+\tau),\dot{q}(t),\dot{q}(t-\tau),\dot{q}(t+\tau))$ is
a constant of motion with time delay $\tau$ if
\begin{equation}
\label{eq:conslaw:td1}
\frac{d}{dt} C(t,t+\tau,q(t),q(t-\tau),q(t+\tau),\dot{q}(t),\dot{q}(t-\tau),\dot{q}(t+\tau))= 0
\end{equation}
along all the extremals $q(\cdot)$ (\textrm{cf.} Definition~\ref{def:scale:ext}).
The equality \eqref{eq:conslaw:td1} is then a conservation law with time delay.
\end{definition}
The next theorem extends the DuBois--Reymond necessary
optimality condition
to problems of the calculus of variations with time delay.
\begin{theorem}[DuBois--Reymond necessary conditions with time delay \cite{GF2012}]
\label{theo:cdrnd}
If $q(\cdot)\in\mathcal{C}^2$ is an extremal of functional
\eqref{Pe} subject to \eqref{Pe2}, then the following conditions are satisfied:
\begin{equation}
\label{eq:cdrnd}
\begin{cases}
\frac{d}{dt}\left\{L[q]_{\tau}(t)-\dot{q}(t)\cdot(\partial_{3} L[q]_{\tau}(t)
+\partial_{5} L[q]_{\tau}(t+\tau))\right\} = \partial_{1} L[q]_{\tau}(t),\quad
t_1\leq t\leq t_{2}-\tau,\\
\frac{d}{dt}\left\{L[q]_{\tau}(t)
-\dot{q}(t)\cdot\partial_{3} L[q]_{\tau}(t)\right\}
=\partial_{1} L[q]_{\tau}(t),\quad t_2-\tau\leq t\leq t_{2}\,.
\end{cases}
\end{equation}
\end{theorem}
\begin{remark}
If we assume that admissible functions in problem \eqref{Pe}--\eqref{Pe2}
are Lipschitz continuous, then one can show that the DuBois--Reymond
necessary conditions with time delay \eqref{eq:cdrnd}
are still valid (cf. Corollary~\ref{cor:DR:m1}).
\end{remark}
Theorem~\ref{theo:tnnd1} establishes an extension of Noether's
theorem to problems of the calculus of variations with time delay.
\begin{theorem}[Noether's symmetry theorem with time delay \cite{GF2012}]
\label{theo:tnnd1}
If functional \eqref{Pe} is invariant in the
sense of Definition~\ref{def:invnd}, then the quantity
$C(t,t+\tau,q(t),q(t-\tau),q(t+\tau),\dot{q}(t),\dot{q}(t-\tau),\dot{q}(t+\tau))$
defined by
\begin{multline}
\label{eq:tnnd1}
\left(\partial_{3} L[q]_{\tau}(t)
+\partial_{5} L[q]_{\tau}(t+\tau)\right)\cdot\xi(t,q(t))\\
+\Bigl(L[q]_{\tau}(t)-\dot{q}(t)\cdot(\partial_{3} L[q]_{\tau}(t)
+\partial_{5} L[q]_{\tau}(t+\tau))\Bigr)\eta(t,q(t))
\end{multline}
for $t_1\leq t\leq t_{2}-\tau$ and by
\begin{equation}
\label{eq:tnnd2}
\partial_{3} L[q]_{\tau}(t)\cdot\xi(t,q(t))
+\Bigl(L[q]_{\tau}(t)-\dot{q}(t)\cdot\partial_{3} L[q]_{\tau}(t)\Bigr)\eta(t,q(t))
\end{equation}
for $t_2-\tau\leq t\leq t_{2}\,,$ is a constant of motion with time delay
(\textrm{cf.} Definition~\ref{def:leicond}).
\end{theorem}
\section{Nonsmooth Euler--Lagrange extremals may fail
to satisfy Noether's conservation laws with time delay}
\label{sec:example}
Consider the problem of the calculus of variations with time delay
\begin{equation}
\label{eq:ex}
\begin{gathered}
J^{1}[q(\cdot)]=\int_0^3\left(\dot{q}(t) + \dot{q}(t-1) \right)^2dt \longrightarrow \min,\\
q(t)=-t \, ,~-1\leq t\leq 0, \quad q(3)=1,
\end{gathered}
\end{equation}
in the class of functions $q(\cdot)\in Lip\left([-1,3];\mathbb{R}\right)$.
From Theorem~\ref{th:EL1} (see Remark~\ref{re:EL}),
one obtains that any solution to problem \eqref{eq:ex} must satisfy
\begin{equation}
\label{eq:ex:EL1}
2\dot{q}(t)+\dot{q}(t-1)+\dot{q}(t+1)=c_1,
\quad 0\leq t\leq 2,
\end{equation}
\begin{equation}
\label{eq:ex:EL2}
\dot{q}(t)+\dot{q}(t-1)=c_2,
\quad 2\leq t\leq 3,
\end{equation}
where $c_1$ and $c_2$ are constants. Because functional $J^{1}$
of problem \eqref{eq:ex} is autonomous, we have invariance, in the sense of Definition~\ref{def:invnd},
with $\eta\equiv 1$ and $\xi\equiv 0$. Simple calculations show that Noether's constant of motion
with time delay \eqref{eq:tnnd1}--\eqref{eq:tnnd2} coincides with the DuBois--Reymond condition \eqref{eq:cdrnd}:
\begin{equation}
\label{eq:ex:DBR1}
\left(\dot{q}(t)+\dot{q}(t-1)\right)^2-2\dot{q}(t)\left(2\dot{q}(t)
+\dot{q}(t-1)+\dot{q}(t+1)\right)=c_3,\quad 0\leq t\leq 2,
\end{equation}
\begin{equation}
\label{eq:ex:DBR2}
\dot{q}(t)^2-\dot{q}(t-1)^2=c_4,\quad 2\leq t\leq 3,
\end{equation}
where $c_3$ and $c_4$ are constants. One can easily check that the function
\begin{equation}
\label{ex:sol}
q(t)=
\begin{cases}
-t & ~\textnormal{for}~ -1< t\leq 0\\
t & ~\textnormal{for}~ 0< t\leq 2\\
-t+4 & ~\textnormal{for}~ 2< t\leq 3
\end{cases}
\end{equation}
satisfies \eqref{eq:ex:EL1}--\eqref{eq:ex:EL2} with $c_1=2$ and $c_2=0$,
but does not satisfy \eqref{eq:ex:DBR1}--\eqref{eq:ex:DBR2}:
for $0< t\leq 1$ constant $c_3$ should be $-4$ and for $1< t\leq 2$ constant $c_3$ should be $0$.
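Indeed, the derivative of \eqref{ex:sol} is $\dot{q}(t)=-1$ for $-1<t\leq 0$, $\dot{q}(t)=1$ for $0<t\leq 2$,
and $\dot{q}(t)=-1$ for $2<t\leq 3$, so that the left-hand side of \eqref{eq:ex:DBR1} equals
$(1-1)^2-2\cdot 1\cdot(2-1+1)=-4$ on $0<t\leq 1$ and $(1+1)^2-2\cdot 1\cdot(2+1-1)=0$ on $1<t\leq 2$.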
We conclude that nonsmooth solutions of the Euler--Lagrange equations \eqref{EL1} do not, in general, preserve
Noether's quantity defined by \eqref{eq:tnnd1}--\eqref{eq:tnnd2}, and one needs
to restrict the set of Euler--Lagrange extremals. In Section~\ref{sec:MR} we show that
it is enough to restrict the Euler--Lagrange extremals to those that satisfy
the DuBois--Reymond necessary condition \eqref{eq:cdrnd}.
\section{Noether's theorem with time delay for Lipschitz functions}
\label{sec:MR}
The notion of invariance given in Definition~\ref{def:invnd} can be extended up to
an exact differential.
\begin{definition}[Invariance up to a gauge-term]
\label{def:invndLIP}
We say that functional \eqref{Pe} is invariant under the
$s$-parameter group of infinitesimal transformations \eqref{eq:tinf}
up to the gauge-term $\Phi$ if
\begin{multline}
\label{eq:invndLIP}
\int_{I} \dot{\Phi}[q]_{\tau}(t)dt = \frac{d}{ds}
\int_{\bar{t}(I)} L\left(t+s\eta(t,q(t))+o(s),q(t)+s\xi(t,q(t))+o(s),
\frac{\dot{q}(t)+s\dot{\xi}(t,q(t))}{1+s\dot{\eta}(t,q(t))},\right.\\ \left.
q(t-\tau)+s\xi(t-\tau,q(t-\tau))+o(s),\frac{\dot{q}(t-\tau)
+s\dot{\xi}(t-\tau,q(t-\tau))}{1+s\dot{\eta}(t-\tau,q(t-\tau))}\right)
(1+s\dot{\eta}(t,q(t))) dt\Biggr|_{s=0}
\end{multline}
for any subinterval $I \subseteq [t_1,t_2]$ and for all
$q(\cdot)\in Lip\left([t_1-\tau,t_2];\mathbb{R}^n\right)$.
\end{definition}
\begin{lemma}[Necessary condition of invariance]
\label{thm:CNSI:SCV}
If functional \eqref{Pe} is invariant up to $\Phi$ in the sense
of Definition~\ref{def:invndLIP}, then
\begin{multline}
\label{eq:cnsind1}
\int_{t_1}^{t_2-\tau}\Bigl[-\dot{\Phi}[q]_{\tau}(t)+\partial_{1}
L[q]_{\tau}(t)\eta(t,q) +\left(\partial_{2}
L[q]_{\tau}(t)+\partial_4 L[q]_{\tau}(t+\tau)\right)\cdot\xi(t,q)\\
+\left(\partial_{3}L[q]_{\tau}(t)
+\partial_5L[q]_{\tau}(t+\tau)\right)\cdot\left(\dot{\xi}(t,q)
-\dot{q}(t)\dot{\eta}(t,q)\right)
+ L[q]_{\tau}(t)\dot{\eta}(t,q)\Bigr]dt = 0
\end{multline}
for $t_1\leq t\leq t_{2}-\tau$ and
\begin{multline}
\label{eq:cnsind2}
\int_{t_2-\tau}^{t_2}\Bigl[-\dot{\Phi}[q]_{\tau}(t)
+\partial_{1}L[q]_{\tau}(t)\eta(t,q)
+\partial_{2}L[q]_{\tau}(t)\cdot\xi(t,q)\\
+\partial_{3}L[q]_{\tau}(t)\cdot\left(\dot{\xi}(t,q)
-\dot{q}(t)\dot{\eta}(t,q)\right)+L[q]_{\tau}(t)\dot{\eta}(t,q)\Bigr]dt =0
\end{multline}
for $t_2-\tau\leq t\leq t_{2}$.
\end{lemma}
\begin{proof}
Without loss of generality, we take $I=[t_1,t_2]$.
Then, \eqref{eq:invndLIP} is equivalent to
\begin{equation}
\label{eq:cnsind3}
\begin{split}
\int_{t_1}^{t_2} \Bigl[ &-\dot{\Phi}[q]_{\tau}(t)
+\partial_{1}L[q]_{\tau}(t)\eta(t,q)
+\partial_{2}L[q]_{\tau}(t)\cdot\xi(t,q)\\
&+\partial_{3}L[q]_{\tau}(t)\cdot\left(\dot{\xi}(t,q)
-\dot{q}(t)\dot{\eta}(t,q)\right)+L[q]_{\tau}(t)\dot{\eta}(t,q)\Bigr]dt\\
&+\int_{t_1}^{t_2}\Bigl[\partial_{4}
L[q]_{\tau}(t)\cdot\xi(t-\tau,q(t-\tau))\\
&+\partial_{5}L[q]_{\tau}(t)\cdot\left(\dot{\xi}(t-\tau,q(t-\tau))
-\dot{q}(t-\tau)\dot{\eta}(t-\tau,q(t-\tau))\right)\Bigr]dt= 0.
\end{split}
\end{equation}
Performing a linear change of variables $t=\sigma+\tau$ in the last integral
of \eqref{eq:cnsind3}, and keeping in mind that $L[q]_{\tau}(t)\equiv 0$ on
$[t_1-\tau,t_1]$, equation \eqref{eq:cnsind3} becomes
\begin{equation}
\label{eq:cnsind}
\begin{split}
\int_{t_1}^{t_2-\tau}\Bigl[&-\dot{\Phi}[q]_{\tau}(t)+\partial_{1}
L[q]_{\tau}(t)\eta(t,q) +\left(\partial_{2}
L[q]_{\tau}(t)+\partial_4 L[q]_{\tau}(t+\tau)\right)\cdot\xi(t,q)\\
&+\left(\partial_{3}L[q]_{\tau}(t)+\partial_5
L[q]_{\tau}(t+\tau)\right)\cdot\left(\dot{\xi}(t,q)
-\dot{q}(t)\dot{\eta}(t,q)\right)
+L[q]_{\tau}(t)\dot{\eta}(t,q)\Bigr]dt\\
&+ \int_{t_2-\tau}^{t_2}\Bigl[-\dot{\Phi}[q]_{\tau}(t)+\partial_{1}
L[q]_{\tau}(t)\eta(t,q) +\partial_{2}
L[q]_{\tau}(t)\cdot\xi(t,q)\\
&+\partial_{3}L[q]_{\tau}(t)\cdot\left(\dot{\xi}(t,q)
-\dot{q}(t)\dot{\eta}(t,q)\right)+L[q]_{\tau}(t)\dot{\eta}(t,q)\Bigr]dt = 0.
\end{split}
\end{equation}
Taking into consideration that \eqref{eq:cnsind} holds for an arbitrary subinterval
$I \subseteq [t_1,t_2]$, equations \eqref{eq:cnsind1} and \eqref{eq:cnsind2} hold.
\end{proof}
\begin{theorem}[Noether's symmetry theorem with time delay for Lipschitz functions]
\label{theo:tnnd}
If functional \eqref{Pe} is invariant up to $\Phi$ in the
sense of Definition~\ref{def:invndLIP}, then the quantity
$C(t,t+\tau,q(t),q(t-\tau),q(t+\tau),\dot{q}(t),\dot{q}(t-\tau),\dot{q}(t+\tau))$
defined by
\begin{multline}
\label{eq:tnnd}
-\Phi[q]_{\tau}(t)+\left(\partial_{3} L[q]_{\tau}(t)
+\partial_{5} L[q]_{\tau}(t+\tau)\right)\cdot\xi(t,q(t))\\
+\Bigl(L[q]_{\tau}(t)-\dot{q}(t)\cdot(\partial_{3} L[q]_{\tau}(t)
+\partial_{5} L[q]_{\tau}(t+\tau))\Bigr)\eta(t,q(t))
\end{multline}
for $t_1\leq t\leq t_{2}-\tau$ and by
\begin{equation}
\label{eq:Noeth}
-\Phi[q]_{\tau}(t)+\partial_{3} L[q]_{\tau}(t)\cdot\xi(t,q(t))
+\Bigl(L[q]_{\tau}(t)-\dot{q}(t)\cdot\partial_{3} L[q]_{\tau}(t)\Bigr)\eta(t,q(t))
\end{equation}
for $t_2-\tau\leq t\leq t_{2}\,,$ is a constant of motion with time delay
along any $q(\cdot)\in Lip\left([t_1-\tau,t_2];\mathbb{R}^n\right)$ satisfying both \eqref{EL1} and
\eqref{eq:cdrnd}, i.e., along any Lipschitz Euler--Lagrange extremal that is also a Lipschitz
DuBois--Reymond extremal.
\end{theorem}
\begin{proof}
We prove the theorem in the interval $t_1\leq t\leq t_{2}-\tau$.
The proof is similar for the interval $t_2-\tau\leq t\leq t_{2}$.
Noether's constant of motion with time delay \eqref{eq:tnnd} follows by
using in the interval $t_1\leq t\leq t_{2}-\tau$ the
DuBois--Reymond condition with time delay \eqref{eq:cdrnd}
and the Euler--Lagrange equation with time delay \eqref{EL1}
into the necessary condition of invariance \eqref{eq:cnsind1}:
\begin{equation*}
\begin{split}
0&=\int_{t_1}^{t_2-\tau}\Bigl[-\dot{\Phi}[q]_{\tau}(t)+\partial_{1}
L[q]_{\tau}(t)\eta(t,q)+\left(\partial_{2}
L[q]_{\tau}(t)+\partial_4 L[q]_{\tau}(t+\tau)\right)\cdot\xi(t,q)\\
&\quad +\left(\partial_{3}L[q]_{\tau}(t)+\partial_5
L[q]_{\tau}(t+\tau)\right)\cdot\left(\dot{\xi}(t,q)
-\dot{q}(t)\dot{\eta}(t,q)\right)+L[q]_{\tau}(t)\dot{\eta}(t,q)\Bigr]dt\\
&= \int_{t_1}^{t_2-\tau}\Bigl[-\frac{d}{dt}\Phi[q]_{\tau}(t)+\frac{d}{dt}\left\{\partial_{3}
L[q]_{\tau}(t)+\partial_5L[q]_{\tau}(t+\tau)\right\}\cdot\xi(t,q)\\
&\quad +\left(\partial_{3}
L[q]_{\tau}(t)+\partial_5L[q]_{\tau}(t+\tau)\right)\cdot\dot{\xi}(t,q)\\
&\quad +\frac{d}{dt}\left\{L[q]_{\tau}(t)-\dot{q}(t)\cdot(\partial_{3} L[q]_{\tau}(t)
+\partial_{5} L[q]_{\tau}(t+\tau))\right\}\eta(t,q)\\
&\quad +\left\{L[q]_{\tau}(t)-\dot{q}(t)\cdot(\partial_{3} L[q]_{\tau}(t)
+\partial_{5} L[q]_{\tau}(t+\tau))\right\}\dot{\eta}(t,q)\Bigr]dt,
\end{split}
\end{equation*}
that is,
\begin{multline}
\label{eq:cnsind11}
\int_{t_1}^{t_2-\tau}\frac{d}{dt}\Bigl[-\Phi[q]_{\tau}(t)
+\left(\partial_{3} L[q]_{\tau}(t)
+\partial_{5} L[q]_{\tau}(t+\tau)\right)\cdot\xi(t,q(t))\\
+\Bigl(L[q]_{\tau}(t)-\dot{q}(t)\cdot(\partial_{3} L[q]_{\tau}(t)
+\partial_{5} L[q]_{\tau}(t+\tau))\Bigr)\eta(t,q(t))\Bigr]dt = 0.
\end{multline}
Taking into consideration that \eqref{eq:cnsind11} holds for
any subinterval $I\subseteq [t_1,t_2]$, we conclude that
\begin{multline*}
-\Phi[q]_{\tau}(t)+\left(\partial_{3} L[q]_{\tau}(t)
+\partial_{5} L[q]_{\tau}(t+\tau)\right)\cdot\xi(t,q(t))\\
+\Bigl(L[q]_{\tau}(t)-\dot{q}(t)\cdot(\partial_{3} L[q]_{\tau}(t)
+\partial_{5} L[q]_{\tau}(t+\tau))\Bigr)\eta(t,q(t))=\text{constant}.
\end{multline*}
\end{proof}
\begin{example}
Consider problem \eqref{eq:ex}. The function
$q(\cdot)\in Lip\left([-1,3];\mathbb{R}\right)$ defined by
\begin{equation}
\label{eq:ext:ex:22b}
q(t)=
\begin{cases}
-t & ~\textnormal{for}~ -1< t\leq 0\\
t & ~\textnormal{for}~ 0< t\leq 1\\
-t+2 & ~\textnormal{for}~ 1< t\leq 2\\
t-2 & ~\textnormal{for}~ 2< t\leq 3
\end{cases}
\end{equation}
is an Euler--Lagrange extremal, i.e., satisfies \eqref{eq:ex:EL1}--\eqref{eq:ex:EL2}, but,
in contrast with \eqref{ex:sol}, is also a DuBois--Reymond extremal, i.e.,
satisfies \eqref{eq:ex:DBR1}--\eqref{eq:ex:DBR2}. Theorem~\ref{theo:tnnd} asserts the validity
of Noether's constant of motion, which is here easily verified:
\eqref{eq:tnnd}--\eqref{eq:Noeth} holds along \eqref{eq:ext:ex:22b}
with $\Phi\equiv 0$, $\eta\equiv 1$, and $\xi\equiv 0$.
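Indeed, along \eqref{eq:ext:ex:22b} one has $\dot{q}(t)+\dot{q}(t-1)\equiv 0$, so that
$L[q]_{\tau}(t)$, $\partial_{3}L[q]_{\tau}(t)$ and $\partial_{5}L[q]_{\tau}(t+\tau)$
all vanish, and the quantity \eqref{eq:tnnd}--\eqref{eq:Noeth} is identically zero, hence constant.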
\end{example}
\section{Nonsmooth higher-order Noether's theorem for problems of the calculus of variations with time delay}
\label{sec:MRHO}
Let $\mathbb{W}^{k,p}$, $k\geq 1$, $1\leq p \leq \infty$, denote the
class of functions that are absolutely continuous with their derivatives
up to order $k-1$, the $k$th derivative belonging to $L^p$. With this notation,
the class $Lip$ of Lipschitz functions is represented by $\mathbb{W}^{1,\infty}$.
We now extend previous results to problems with higher-order derivatives.
\subsection{Higher-order Euler--Lagrange and DuBois--Reymond optimality conditions with time delay}
Let $m\in\mathbb{N}$ and $q^{(i)}(t)$ denote the $i$th derivative of $q(t)$, $i=0,\dots,m$,
with $q^{(0)}(t)=q(t)$. For simplicity of notation, we introduce
the operator $[\cdot]^m_{\tau}$ by
$$
[q]^m_{\tau}(t) := \Bigl(t,q(t),\dot{q}(t),
\ldots,q^{(m)}(t),q(t-\tau),\dot{q}(t-\tau),
\ldots,q^{(m)}(t-\tau)\Bigr).
$$
Consider the following higher-order variational problem with time delay:
to minimize
\begin{equation}
\label{Pm}
J^{\tau}_{m}[q(\cdot)] =\int_{t_1}^{t_2}
L[q]^m_{\tau}(t)
dt
\end{equation}
subject to the boundary conditions \eqref{Pe2} and
$q^{(i)}(t_2)=q_{t_2}^i,~i=1,\dots,m-1$.
The Lagrangian $L :[t_1,t_2] \times \mathbb{R}^{2 n (m+1)} \rightarrow \mathbb{R}$
is assumed to be a $\mathcal{C}^{m+1}$-function with respect to all its arguments,
admissible functions $q(\cdot)$ are assumed to be $\mathbb{W}^{m,\infty}$,
$t_{1}< t_{2}$ are fixed in $\mathbb{R}$,
$\tau$ is a given positive real number such that $\tau<t_{2}-t_{1}$,
and $q_{t_2}^i$ are given vectors in $\mathbb{R}^n$, $i=1,\dots,m-1$.
\begin{remark}
When $m=1$ functional \eqref{Pm} reduces to \eqref{Pe},
i.e., $J^{\tau}_{1} = J^{\tau}$.
\end{remark}
A variation of $q\in \mathbb{W}^{m,\infty}\left([t_1-\tau,t_2],
\mathbb{R}^{n}\right)$ is another function in the set
$\mathbb{W}^{m,\infty}\left([t_1-\tau,t_2], \mathbb{R}^{n}\right)$ of
the form $q+ \varepsilon h$, with $h \in \mathbb{W}^{m,\infty}\left([t_1-\tau,t_2],
\mathbb{R}^{n}\right)$, such that $h^{(i)}(t_2)=0,\,i=0,\ldots,m-1$, $h(t)=0$
for $t \in[t_{1}-\tau,t_{1}]$, and $\varepsilon$ a sufficiently small real number.
\begin{definition}[Extremal of \eqref{Pm}]
\label{df1}
We say that $q$ is an extremal of the delayed functional
\eqref{Pm} if for any $h(\cdot)\in \mathbb{W}^{m,\infty}\left([t_1-\tau,t_2],
\mathbb{R}^{n}\right)$ such that $h^{(i)}(t_2)=0,\,i=0,\ldots,m-1$,
and $h(t)=0$, $t \in[t_{1}-\tau,t_{1}]$, the following equation holds:
\begin{equation*}
\frac{d}{d\varepsilon}\left.J_m^{\tau}[q + \varepsilon
h]\right|_{\varepsilon = 0}=0.
\end{equation*}
\end{definition}
\begin{theorem}[Higher-order Euler--Lagrange equations with time delay in integral form]
\label{Thm:ELdeordm}
If $q(\cdot)\in\mathbb{W}^{m,\infty}\left([t_1-\tau,t_2],
\mathbb{R}^{n}\right)$ is an extremal of functional \eqref{Pm},
then $q(\cdot)$ satisfies the following
higher-order Euler--Lagrange integral equations with time delay:
\begin{multline}
\label{eq:ELdeordmInt}
\sum_{i=0}^{m}(-1)^{m-i-1}\Biggl(\underbrace{\int_{t_2-\tau}^{t}
\int_{t_2-\tau}^{s_1}\dots \int_{t_2-\tau}^{s_{m-i-1}}}_{m-i~\textnormal{times}}
\Bigl(\partial_{i+2}
L[q]^m_{\tau}(s_{m-i})\\
+\partial_{i+m+3} L[q]^m_{\tau}(s_{m-i}+\tau)\Bigr)ds_{m-i}\dots ds_2 ds_1\Biggr)=p(t)
\end{multline}
for $t_1\leq t\leq t_{2}-\tau$ and
\begin{equation}
\label{eq:ELdeordmInt1}
\sum_{i=0}^{m}(-1)^{m-i-1}\Biggl(\underbrace{\int_{t_2-\tau}^{t}
\int_{t_2-\tau}^{s_1}\dots
\int_{t_2-\tau}^{s_{m-i-1}}}_{m-i~\textnormal{times}}\Bigl(\partial_{i+2}
L[q]^m_{\tau}(s_{m-i})\Bigr)ds_{m-i}\dots ds_2 ds_1\Biggr) =p(t)
\end{equation}
for $t_{2}-\tau\leq t \leq t_{2}$, where $p(t)$ is a polynomial of order $m-1$, i.e.,
$p(t)=c_0+c_1t+\dots +c_{m-1}t^{m-1}$
for some constants $c_i\in\mathbb{R}$, $i=0,\dots,m-1$.
\end{theorem}
\begin{proof}
Assume that $q(\cdot)\in\mathbb{W}^{m,\infty}\left([t_1-\tau,t_2],
\mathbb{R}^{n}\right)$ is an extremal of functional \eqref{Pm}.
According to Definition~\ref{df1}, for any $h(\cdot)\in \mathbb{W}^{m,\infty}\left([t_1-\tau,t_2],
\mathbb{R}^{n}\right)$ such that
$h^{(i)}(t_2)=0,\,i=0,\ldots,m-1$, and $h(t)=0$, $t \in[t_{1}-\tau,t_{1}]$, we have
\begin{equation}
\label{pel}
\int_{t_1}^{t_2}\left(\sum_{i=0}^{m}\partial_{i+2}
L[q]^m_{\tau}(t)\cdot h^{(i)}(t)+\sum_{i=0}^{m}\partial_{i+m+3}
L[q]^m_{\tau}(t)\cdot h^{(i)}(t-\tau)\right)dt = 0.
\end{equation}
Performing the linear change of variables $t=\sigma+\tau$ in the last term of integral
\eqref{pel}, and using the fact that $h(t)=0$ if $t \in[t_{1}-\tau,t_{1}]$, \eqref{pel} becomes
\begin{equation}
\label{pel1}
\int_{t_1}^{t_2}\left(\sum_{i=0}^{m}\partial_{i+2}
L[q]^m_{\tau}(t)\cdot h^{(i)}(t)\right)dt
+\int_{t_1}^{t_2-\tau}\left(\sum_{i=0}^{m}\partial_{i+m+3}
L[q]^m_{\tau}(t+\tau)\cdot h^{(i)}(t)\right)dt = 0.
\end{equation}
By repeated integration by parts one has
\begin{multline}
\label{eq:identity1}
\sum\limits_{i=0}^{m} \int_{t_1}^{t_2} \partial_{i+2}L[q]^m_{\tau}(t)\cdot h^{(i)}(t) dt\\
=\sum\limits_{i=0}^{m}\Biggl\{\Biggl[\sum\limits_{j=1}^{m-i}(-1)^{j+1}h^{(i+j-1)}(t)
\cdot\Biggl(\underbrace{\int_{t_2-\tau}^{t}\int_{t_2-\tau}^{s_1}
\dots \int_{t_2-\tau}^{s_{j-1}}}_{j~\textnormal{times}}\Bigl(\partial_{i+2}
L[q]^m_{\tau}(s_j)\Bigr)ds_{j}\dots ds_2 ds_1\Biggr)\Biggr]_{t_1}^{t_2}\\
+(-1)^{i}\int_{t_1}^{t_2}h^{(m)}(t)
\cdot\Biggl( \underbrace{\int_{t_2-\tau}^{t}\int_{t_2-\tau}^{s_1}\dots
\int_{t_2-\tau}^{s_{m-i-1}}}_{m-i~\textnormal{times}}\Bigl(\partial_{i+2}
L[q]^m_{\tau}(s_{m-i})\Bigr)ds_{m-i} \dots ds_{2}ds_1\Biggr)dt\Biggr\}
\end{multline}
and
\begin{multline}
\label{eq:identity2}
\sum\limits_{i=0}^{m}\int_{t_1}^{t_2-\tau}
\partial_{i+m+3}L[q]^m_{\tau}(t+\tau)\cdot h^{(i)}(t) dt\\
=\sum\limits_{i=0}^{m}\Biggl\{\Biggl[\sum\limits_{j=1}^{m-i}(-1)^{j+1}h^{(i+j-1)}(t)
\cdot\Biggl(\underbrace{\int_{t_2-\tau}^{t}\int_{t_2-\tau}^{s_1}
\dots \int_{t_2-\tau}^{s_{j-1}}}_{j~\textnormal{times}}\Bigl(\partial_{i+m+3}
L[q]^m_{\tau}(s_j+\tau)\Bigr)ds_{j}\dots ds_2 ds_1\Biggr)\Biggr]_{t_1}^{t_2-\tau}\\
+(-1)^{i}\int_{t_1}^{t_2-\tau}h^{(m)}(t)
\cdot\Biggl( \underbrace{\int_{t_2-\tau}^{t}\int_{t_2-\tau}^{s_1}
\dots \int_{t_2-\tau}^{s_{m-i-1}}}_{m-i~\textnormal{times}}\Bigl(\partial_{i+m+3}
L[q]^m_{\tau}(s_{m-i}+\tau)\Bigr)ds_{m-i} \dots ds_{2}ds_1\Biggr)dt\Biggr\}.
\end{multline}
Because $h^{(i)}(t_2)=0,\,i=0,\ldots,m-1$, and $h(t)=0$, $t \in[t_{1}-\tau,t_{1}]$,
the terms without integral sign in the right-hand sides of identities \eqref{eq:identity1}
and \eqref{eq:identity2} vanish. Therefore, equation \eqref{pel1} becomes
\begin{multline}
\label{eq:identity3}
0=\int_{t_1}^{t_2-\tau}h^{(m)}(t)
\cdot\Biggl[\sum\limits_{i=0}^{m}(-1)^{i}
\Biggl( \underbrace{\int_{t_2-\tau}^{t}\int_{t_2-\tau}^{s_1}
\dots \int_{t_2-\tau}^{s_{m-i-1}}}_{m-i~\textnormal{times}}\Bigl(\partial_{i+2}
L[q]^m_{\tau}(s_{m-i})\\
+\partial_{i+m+3}
L[q]^m_{\tau}(s_{m-i}+\tau)\Bigr)ds_{m-i} \dots ds_{2}ds_1\Biggr)\Biggr]dt\\
+\int_{t_2-\tau}^{t_2}h^{(m)}(t)
\cdot\Biggl[\sum\limits_{i=0}^{m}(-1)^{i}
\Biggl( \underbrace{\int_{t_2-\tau}^{t}\int_{t_2-\tau}^{s_1}
\dots \int_{t_2-\tau}^{s_{m-i-1}}}_{m-i~\textnormal{times}}\Bigl(\partial_{i+2}
L[q]^m_{\tau}(s_{m-i})\Bigr)ds_{m-i} \dots ds_{2}ds_1\Biggr)\Biggr]dt.
\end{multline}
For $i=0,\dots,m$ we define functions
\begin{equation*}
\varphi_i (t)=
\begin{cases}
\partial_{i+2}L[q]^m_{\tau}(t)+\partial_{i+m+3}L[q]^m_{\tau}(t+\tau)
& ~\textnormal{for}~ t_1\leq t\leq t_2-\tau\\
\partial_{i+2}L[q]^m_{\tau}(t) & ~\textnormal{for}~ t_2-\tau\leq t\leq t_2.
\end{cases}
\end{equation*}
Then one can write equation \eqref{eq:identity3} as follows:
\begin{equation*}
0=\int_{t_1}^{t_2}h^{(m)}(t)
\cdot\Biggl[\sum\limits_{i=0}^{m}(-1)^{i}
\Biggl( \underbrace{\int_{t_2-\tau}^{t}\int_{t_2-\tau}^{s_1}
\dots \int_{t_2-\tau}^{s_{m-i-1}}}_{m-i~\textnormal{times}}
\Bigl(\varphi_i (s_{m-i})\Bigr)ds_{m-i} \dots ds_{2}ds_1\Biggr)\Biggr]dt.
\end{equation*}
Applying the higher-order DuBois--Reymond lemma \cite{Jost:book,Troutman:book},
one arrives at \eqref{eq:ELdeordmInt} and \eqref{eq:ELdeordmInt1}.
\end{proof}
\begin{corollary}[Higher-order Euler--Lagrange equations with time delay in differential form]
\label{cor:16}
If $q(\cdot)\in\mathbb{W}^{m,\infty}\left([t_1-\tau,t_2], \mathbb{R}^{n}\right)$
is an extremal of functional \eqref{Pm}, then
\begin{equation}
\label{eq:ELdeordm}
\sum_{i=0}^{m}(-1)^{i}\frac{d^{i}}{dt^{i}}\Bigl(\partial_{i+2}
L[q]^m_{\tau}(t)+\partial_{i+m+3}
L[q]^m_{\tau}(t+\tau)\Bigr)=0
\end{equation}
for $t_1\leq t\leq t_{2}-\tau$ and
\begin{equation}
\label{eq:ELdeordm1}
\sum_{i=0}^{m}(-1)^{i}\frac{d^{i}}{dt^{i}}\partial_{i+2}
L[q]^m_{\tau}(t)=0
\end{equation}
for $t_{2}-\tau\leq t \leq t_{2}$.
\end{corollary}
\begin{proof}
We obtain \eqref{eq:ELdeordm} and \eqref{eq:ELdeordm1} applying the derivative of order
$m$ to \eqref{eq:ELdeordmInt} and \eqref{eq:ELdeordmInt1}, respectively.
\end{proof}
\begin{remark}
If $m=1$, then the higher-order Euler--Lagrange equations
\eqref{eq:ELdeordm}--\eqref{eq:ELdeordm1} reduce to \eqref{EL1}.
\end{remark}
Associated to a given function $q(\cdot)\in\mathbb{W}^{m,\infty}\left([t_1-\tau,t_2],
\mathbb{R}^{n}\right)$, it is convenient to introduce
the following quantities (\textrm{cf.} \cite{Torres:proper}):
\begin{equation}
\label{eq:eqprin}
\psi^{j}_1=\sum_{i=0}^{m-j}(-1)^{i}\frac{d^{i}}{dt^{i}}\Bigl(\partial_{i+j+2}
L[q]^m_{\tau}(t)+\partial_{i+j+m+3}
L[q]^m_{\tau}(t+\tau)\Bigr)
\end{equation}
for $t_1\leq t\leq t_{2}-\tau$, and
\begin{equation}
\label{eq:eqprin11}
\psi^{j}_2=\sum_{i=0}^{m-j}(-1)^{i}\frac{d^{i}}{dt^{i}}\partial_{i+j+2}
L[q]^m_{\tau}(t)
\end{equation}
for $t_{2}-\tau\leq t\leq t_{2}$, where $j=0,\ldots,m$.
These operators are useful for our purposes because of the following properties:
\begin{equation}
\label{eq:eqprin1}
\frac{d}{dt}\psi^{j}_1=\partial_{j+1}L[q]^m_{\tau}(t)
+\partial_{j+m+2}L[q]^m_{\tau}(t+\tau)-\psi^{j-1}_1
\end{equation}
for $t_1\leq t\leq t_{2}-\tau$, and
\begin{equation*}
\frac{d}{dt}\psi^{j}_2=\partial_{j+1}L[q]^m_{\tau}(t)
-\psi^{j-1}_2
\end{equation*}
for $t_{2}-\tau\leq t\leq t_{2}$, where $j=1,\ldots,m$.
We are now in a position to prove a higher-order DuBois--Reymond
optimality condition for problems with time delay.
\begin{theorem}[Higher-order delayed DuBois--Reymond condition]
\label{theo:cDRifm}
If $q(\cdot)\in\mathbb{W}^{m,\infty}\left([t_1-\tau,t_2], \mathbb{R}^{n}\right)$
is an extremal of functional \eqref{Pm}, then
\begin{equation}
\label{eq:DBRordm}
\frac{d}{dt}\left(L[q]^m_{\tau}(t)
-\sum_{j=1}^{m}\psi^{j}_1\cdot q^{(j)}(t)\right)
=\partial_{1}
L[q]^m_{\tau}(t)
\end{equation}
for $t_1\leq t\leq t_{2}-\tau$ and
\begin{equation}
\label{eq:DBRordm:2}
\frac{d}{dt}\left(L[q]^m_{\tau}(t)
-\sum_{j=1}^{m}\psi^{j}_2\cdot q^{(j)}(t)\right)
=\partial_{1}
L[q]^m_{\tau}(t)
\end{equation}
for $t_{2}-\tau\leq t\leq t_{2}$, where $\psi^{j}_1$ is given
by \eqref{eq:eqprin} and $\psi^{j}_2$ by \eqref{eq:eqprin11}.
\end{theorem}
\begin{proof}
We prove the theorem in the interval $t_{1}\leq t\leq t_{2}-\tau$.
The proof is similar for $t_{2}-\tau\leq t\leq t_{2}$.
We derive equation \eqref{eq:DBRordm} as follows:
\begin{multline}
\label{pr}
\int_{t_1}^{t_2}\frac{d}{dt}\left(L[q]^m_{\tau}(t)
-\sum_{j=1}^{m}\psi^{j}_1\cdot q^{(j)}(t)\right)dt\\
=\int_{t_1}^{t_2}\left(\partial_1 L[q]^{m}_{\tau}(t)+\sum_{j=0}^{m}
\partial_{j+2} L[q]^m_{\tau}(t)\cdot q^{(j+1)}(t)
-\sum_{j=1}^{m}\left(\dot{\psi}^{j}_1\cdot
q^{(j)}(t)+\psi^{j}_1\cdot q^{(j+1)}(t)\right)\right)dt\\
+\int_{t_1}^{t_2}\sum_{j=0}^{m}
\partial_{j+m+3} L[q]^m_{\tau}(t)\cdot q^{(j+1)}(t-\tau)dt.
\end{multline}
From \eqref{eq:eqprin1} and by performing a linear change
of variables $t=\sigma+\tau$ in the last integral
of \eqref{pr}, in the interval where $t_{1}\leq t\leq t_{2}-\tau$,
the equation \eqref{pr} becomes
\begin{multline}
\label{pr2}
\int_{t_1}^{t_2-\tau}\frac{d}{dt}\left(L[q]^m_{\tau}(t)
-\sum_{j=1}^{m}\psi^{j}_1\cdot q^{(j)}(t)\right)dt
=\int_{t_1}^{t_2-\tau}\left[\partial_1 L[q]_{\tau}^m(t)+\sum_{j=0}^{m}
\partial_{j+2} L[q]^m_{\tau}(t)\cdot q^{(j+1)}(t)\right.\\ \left.
-\sum_{j=1}^{m}\left(\left(\partial_{j+1}L[q]^m_{\tau}(t)
+\partial_{j+m+2}L[q]^m_{\tau}(t+\tau)-\psi^{j-1}_1\right)\cdot
q^{(j)}(t)+\psi^{j}_1\cdot q^{(j+1)}(t)\right)\right.\\
\left. +\sum_{j=0}^{m} \partial_{j+m+3} L[q]^m_{\tau}(t+\tau)
\cdot q^{(j+1)}(t)\right]dt.
\end{multline}
We now simplify the second term on the right-hand side of \eqref{pr2}:
\begin{multline}
\label{pr3}
\sum_{j=1}^{m}\left(\left(\partial_{j+1}L[q]^m_{\tau}(t)
+\partial_{j+m+2}L[q]^m_{\tau}(t+\tau)-\psi^{j-1}_1\right)\cdot
q^{(j)}(t)+\psi^{j}_1\cdot q^{(j+1)}(t)\right)\\
=\sum_{j=0}^{m-1}\Bigl(\left(\partial_{j+2}L[q]^m_{\tau}(t)
+\partial_{j+m+3}L[q]^m_{\tau}(t+\tau)-\psi^{j}_1\right)
\cdot q^{(j+1)}(t)+\psi^{j+1}_1\cdot q^{(j+2)}(t)\Bigr)\\
=\sum_{j=0}^{m-1}\left[\left(\partial_{j+2}L[q]^m_{\tau}(t)
+\partial_{j+m+3}L[q]^m_{\tau}(t+\tau)\right)\cdot q^{(j+1)}(t)\right]
-\psi^{0}_1\cdot \dot{q}(t)+\psi^{m}_1\cdot q^{(m+1)}(t).
\end{multline}
Substituting \eqref{pr3} into \eqref{pr2} and using the higher-order Euler--Lagrange
equations with time delay \eqref{eq:ELdeordm}, and since, by definition,
$\psi^{m}_1=\partial_{m+2}L[q]^m_{\tau}(t)+\partial_{2m+3}L[q]^m_{\tau}(t+\tau)$
and
\begin{equation*}
\psi^0_1=
\sum_{i=0}^{m}(-1)^{i}\frac{d^{i}}{dt^{i}}\Bigl(\partial_{i+2}
L[q]^m_{\tau}(t)+\partial_{i+m+3}
L[q]^m_{\tau}(t+\tau)\Bigr)=0\, ,
\end{equation*}
we obtain the intended result, that is,
\begin{multline*}
\int_{t_1}^{t_2-\tau}\frac{d}{dt}\left(L[q]^m_{\tau}(t)
-\sum_{j=1}^{m}\psi^{j}_1\cdot q^{(j)}(t)\right)dt\\
=\int_{t_1}^{t_2-\tau}\left[\partial_1 L[q]_{\tau}^m(t)
+\left(\partial_{m+2}L[q]^m_{\tau}(t)+\partial_{2m+3}
L[q]^m_{\tau}(t+\tau)\right)\cdot q^{(m+1)}(t)\right.\\
\left. +\psi^{0}_1\cdot
\dot{q}(t)-\psi^{m}_1\cdot q^{(m+1)}(t)\right]dt
=\int_{t_1}^{t_2-\tau}\partial_1 L[q]_{\tau}^m(t)dt.
\end{multline*}
\end{proof}
In the particular case when $m=1$, we obtain from Theorem~\ref{theo:cDRifm}
an extension of Theorem~\ref{theo:cdrnd} to the class of Lipschitz functions.
\begin{corollary}[Nonsmooth DuBois--Reymond conditions]
\label{cor:DR:m1}
If $q(\cdot)\in Lip\left([t_1-\tau,t_2];\mathbb{R}^n\right)$
is an extremal of functional \eqref{Pe}, then the DuBois--Reymond
conditions with time delay \eqref{eq:cdrnd} hold true.
\end{corollary}
\begin{proof}
For $m=1$, condition \eqref{eq:DBRordm} reduces to
\begin{equation}
\label{eq:DBRordm111}
\frac{d}{dt}\left(L[q]_{\tau}(t)
-\psi^{1}_1\cdot \dot{q}(t)\right)
=\partial_{1} L[q]_{\tau}(t)
\end{equation}
for $t_1\leq t\leq t_{2}-\tau$, and \eqref{eq:DBRordm:2} to
\begin{equation}
\label{eq:DBRordm121}
\frac{d}{dt}\left(L[q]_{\tau}(t)
-\psi^{1}_2\cdot \dot{q}(t)\right)
=\partial_{1} L[q]_{\tau}(t)
\end{equation}
for $t_{2}-\tau\leq t\leq t_{2}$.
Keeping in mind \eqref{eq:eqprin} and \eqref{eq:eqprin11},
we obtain
\begin{equation}
\label{eq:DBRord3}
\psi^{1}_1=\partial_3 L[q]_{\tau}(t)+\partial_5 L[q]_{\tau}(t+\tau)
\end{equation}
and
\begin{equation}
\label{eq:DBRord4}
\psi^{1}_2=\partial_3 L[q]_{\tau}(t).
\end{equation}
One finds the intended equalities \eqref{eq:cdrnd} by substituting the quantities
\eqref{eq:DBRord3} and \eqref{eq:DBRord4} into \eqref{eq:DBRordm111}
and \eqref{eq:DBRordm121}, respectively.
\end{proof}
\subsection{Higher-order Noether's symmetry theorem with time delay}
We now extend the Noether-type theorem proved in Section~\ref{sec:MR}
to delayed variational problems with higher-order derivatives.
\begin{definition}[Invariance of \eqref{Pm} up to a gauge-term]
\label{def:invaifm}
Consider the $s$-parameter group of infinitesimal transformations \eqref{eq:tinf}.
Functional \eqref{Pm} is invariant under \eqref{eq:tinf}
up to the gauge-term $\Phi$ if
\begin{multline}
\label{eq:invndm}
\int_{I} \dot{\Phi}[q]^m_{\tau}(t)dt
= \frac{d}{ds} \int_{\bar{t}(I)} L\left(\bar{t},\bar{q}(\bar{t}),
{\bar{q}}'(\bar{t}),\ldots,\bar{q}^{(m)}(\bar{t}),\right.\\
\left.\bar{q}(\bar{t}-\tau),{\bar{q}}'(\bar{t}-\tau),
\ldots,\bar{q}^{(m)}(\bar{t}-\tau)\right)
\left.(1+s\dot{\eta}(t,q(t)))\,dt\right|_{s=0}
\end{multline}
for any subinterval $I \subseteq [t_1,t_2]$ and for all
$q(\cdot)\in \mathbb{W}^{m,\infty}\left([t_1-\tau,t_2], \mathbb{R}^{n}\right)$.
\end{definition}
\begin{remark}
Expressions $\dot{\Phi}$ and $\bar{q}^{(i)}$ in equation \eqref{eq:invndm},
$i=1,\ldots,m$, are interpreted as
\begin{equation}
\label{eq:invifm1}
\dot{\Phi}=\frac{d}{dt}\Phi\,\,,\quad\bar{q}'
=\frac{d\bar{q}}{d\bar{t}}=\frac{\frac{d\bar{q}}{dt}}{\frac{d\bar{t}}{dt}}\,\,,
\quad \bar{q}^{(i)}=\frac{d^{i}\bar{q}}{d\bar{t}^{i}}=
\frac{\frac{d}{dt}\left(\frac{d^{i-1}}{d\bar{t}^{i-1}}\bar{q}\right)}{\frac{d\bar{t}}{dt}},\,
i=2,\ldots,m.
\end{equation}
\end{remark}
The next lemma gives a necessary condition of invariance for functional \eqref{Pm}.
\begin{lemma}[Necessary condition of invariance for \eqref{Pm}]
\label{thm:cnsi}
If functional \eqref{Pm} is invariant up to the gauge-term $\Phi$ under
the $s$-parameter group of infinitesimal transformations \eqref{eq:tinf}, then
\begin{multline}
\label{eq:cnsiifm}
\int_{t_1}^{t_2-\tau}\left[-\dot{\Phi}[q]^m_{\tau}(t)+\partial_{1}
L[q]^m_{\tau}(t)\eta(t,q)+ L[q]^m_{\tau}(t)
\dot{\eta}(t,q)\right.\\ \left.
+\sum_{i=0}^{m}\left(\partial_{i+2}
L[q]^m_{\tau}(t)+\partial_{i+m+3}
L[q]^m_{\tau}(t+\tau)\right)\cdot
\rho^{i}(t) \right]dt =0
\end{multline}
for $t_1\leq t\leq t_{2}-\tau$ and
\begin{equation}
\label{eq:cnsiifm11}
\int_{t_2-\tau}^{t_2}\left[-\dot{\Phi}[q]^m_{\tau}(t)+\partial_{1}
L[q]^m_{\tau}(t)\eta(t,q)+ L[q]^m_{\tau}(t)
\dot{\eta}(t,q) + \sum_{i=0}^{m}\partial_{i+2}
L[q]^m_{\tau}(t)\cdot \rho^{i}(t)\right]dt =0
\end{equation}
for $t_2-\tau\leq t\leq t_{2}$, where
\begin{equation}
\label{eq:cnsiifm1}
\begin{cases}
\rho^{0}(t)=\xi(t,q) \, , \\
\rho^{i}(t)=\frac{d}{dt}\left(\rho^{i-1}(t)\right)-q^{(i)}(t)\dot{\eta}(t,q)\, ,
\quad i=1,\ldots,m.
\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
Without loss of generality, we take $I=[t_1,t_2]$.
Then, \eqref{eq:invndm} is equivalent to
\begin{multline}\label{mm}
\int_{t_1}^{t_2}\left[-\dot{\Phi}[q]^m_{\tau}(t)+\partial_{1}L[q]^m_{\tau}(t)\eta(t,q)
+\sum_{i=0}^{m}\partial_{i+2} L[q]^m_{\tau}(t)\cdot\frac{\partial}{\partial s}
\left.\left(\frac{d^{i}\bar{q}}{d\bar{t}^{i}}\right)\right|_{s=0}\right.\\ \left.
+\sum_{i=0}^{m}\partial_{i+m+3} L[q]^m_{\tau}(t)\cdot\frac{\partial}{\partial s}
\left.\left(\frac{d^{i}\bar{q}(\bar{t}-\tau)}{d(\bar{t}-\tau)^{i}}\right)\right|_{s=0}
+L[q]^m_{\tau}(t) \dot{\eta}(t,q) \right]dt =0.
\end{multline}
Using the fact that \eqref{eq:invifm1} implies
\begin{equation*}
\frac{\partial}{\partial s}\left.\left(\frac{d\bar{q}(\bar{t})}{d\bar{t}}\right)\right|_{s=0}
=\dot{\xi}(t,q)-\dot{q}\dot{\eta}(t,q) \, ,
\end{equation*}
\begin{equation*}
\frac{\partial}{\partial s}\left.\left(
\frac{d^{i}\bar{q}(\bar{t})}{d\bar{t}^{i}}\right)\right|_{s=0}
=\frac{d}{dt}\left[\frac{\partial}{\partial s}
\left.\left(\frac{d^{i-1}\bar{q}(\bar{t})}{d\bar{t}^{i-1}}\right)\right|_{s=0}\right]
-q^{(i)}(t)\dot{\eta}(t,q)\, , \quad i=2,\ldots,m,
\end{equation*}
then equation \eqref{mm} becomes
\begin{multline}
\label{mm1}
\int_{t_1}^{t_2}\left[-\dot{\Phi}[q]^m_{\tau}(t)+\partial_{1}
L[q]^m_{\tau}(t)\eta(t,q)+ L[q]^m_{\tau}(t)
\dot{\eta}(t,q)\right.\\ \left.
+\sum_{i=0}^{m}\partial_{i+2}
L[q]^m_{\tau}(t)\cdot\rho^{i}(t)+\sum_{i=0}^{m}\partial_{i+m+3}
L[q]^m_{\tau}(t)\cdot
\rho^{i}(t-\tau)
\right]dt =0.
\end{multline}
Performing the linear change of variables $t=\sigma+\tau$ in the last integral
of \eqref{mm1}, and keeping in mind that $L[q]^m_{\tau}(t)\equiv 0$ on
$[t_1-\tau,t_1]$, equation \eqref{mm1} becomes
\begin{multline}
\label{mm2}
\int_{t_1}^{t_2-\tau}\left[-\dot{\Phi}[q]^m_{\tau}(t)+\partial_{1}
L[q]^m_{\tau}(t)\eta(t,q)+ L[q]^m_{\tau}(t)
\dot{\eta}(t,q)\right.\\ \left.
+\sum_{i=0}^{m}\left(\partial_{i+2}
L[q]^m_{\tau}(t)+\partial_{i+m+3}
L[q]^m_{\tau}(t+\tau)\right)\cdot
\rho^{i}(t) \right]dt \\
+ \int_{t_2-\tau}^{t_2}\left[-\dot{\Phi}[q]^m_{\tau}(t)+\partial_{1}
L[q]^m_{\tau}(t)\eta(t,q)+ L[q]^m_{\tau}(t)
\dot{\eta}(t,q) +\sum_{i=0}^{m}\partial_{i+2} L[q]^m_{\tau}(t)\cdot
\rho^{i}(t)\right]dt =0.
\end{multline}
Equations \eqref{eq:cnsiifm} and \eqref{eq:cnsiifm11} follow from the fact
that \eqref{mm2} holds for an arbitrary $I \subseteq [t_1,t_2]$.
\end{proof}
\begin{definition}[Higher-order constant of motion/conservation law with time delay]
\label{def:leicond2}
A quantity
\begin{multline*}
C\{q\}_{\tau}^{m}(t) := C\Bigl(t,t+\tau,q(t),\dot{q}(t),
\ldots,q^{(m)}(t),q(t-\tau),\dot{q}(t-\tau),
\ldots,q^{(m)}(t-\tau),\\q(t+\tau),\dot{q}(t+\tau),
\ldots,q^{(m)}(t+\tau)\Bigr)
\end{multline*}
is a higher-order constant of motion with time delay $\tau$ if
\begin{equation}
\label{eq:conslaw:td}
\frac{d}{dt} C\{q\}_{\tau}^{m}(t) = 0,
\end{equation}
$t\in [t_1,t_2]$, along any
$q(\cdot)\in \mathbb{W}^{m,\infty}\left([t_1-\tau,t_2], \mathbb{R}^{n}\right)$
satisfying both Theorem~\ref{Thm:ELdeordm} and Theorem~\ref{theo:cDRifm}.
The equality \eqref{eq:conslaw:td} is then said
to be a higher-order conservation law with time delay.
\end{definition}
\begin{theorem}[Higher-order Noether's symmetry theorem with time delay]
\label{thm:Noether}
If functional \eqref{Pm} is invariant up to the gauge-term $\Phi$
in the sense of Definition~\ref{def:invaifm},
then the quantity $C\{q\}_{\tau}^{m}(t)$ defined by
\begin{equation}
\label{eq:TeNetm}
\sum_{j=1}^{m}\psi^{j}_1\cdot
\rho^{j-1}(t)+\left(L[q]^m_{\tau}(t)
-\sum_{j=1}^{m}\psi^{j}_1\cdot q^{(j)}(t)\right)\eta(t,q)
-\Phi[q]^m_{\tau}(t)
\end{equation}
for $t_1\leq t\leq t_{2}-\tau$ and by
\begin{equation*}
\label{eq:TeNetm1}
\sum_{j=1}^{m}\psi^{j}_2\cdot
\rho^{j-1}(t)+\left(L[q]^m_{\tau}(t)
-\sum_{j=1}^{m}\psi^{j}_2\cdot q^{(j)}(t)\right)\eta(t,q)
-\Phi[q]^m_{\tau}(t)
\end{equation*}
for $t_2-\tau\leq t\leq t_{2}$, is a higher-order constant of motion with time delay
(\textrm{cf.} Definition~\ref{def:leicond2}), where $\psi^{j}_1$ and $\psi^{j}_2$
are given by \eqref{eq:eqprin} and \eqref{eq:eqprin11}, respectively.
\end{theorem}
\begin{proof}
We prove the theorem in the interval $t_1\leq t\leq t_{2}-\tau$.
The proof is similar in the interval $t_2-\tau\leq t\leq t_{2}$.
Equation \eqref{eq:TeNetm} follows by direct calculations:
\begin{equation}
\label{eq:TeNetm2}
\begin{split}
0&=\int_{t_1}^{t_2-\tau}\frac{d}{dt}\left[\psi_1^1\cdot\rho^0+\sum_{j=2}^{m}\psi^{j}_1
\cdot \rho^{j-1}(t) +\left(L[q]^m_{\tau}(t)
-\sum_{j=1}^{m}\psi^{j}_1\cdot q^{(j)}(t)\right)\eta(t,q)
-\Phi[q]^m_{\tau}(t)\right]dt\\
&=\int_{t_1}^{t_2-\tau}\left[-\dot{\Phi}[q]^m_{\tau}(t)+\rho^0(t)\cdot\frac{d}{dt}\psi^1_1
+\psi^1_1\cdot\frac{d}{dt}\rho^0(t)
+\sum_{j=2}^{m}\left(\rho^{j-1}(t)\cdot\frac{d}{dt}\psi^{j}_1
+\psi^{j}_1\cdot\frac{d}{dt}\rho^{j-1}(t)\right)\right.\\
& \qquad \left.+\eta(t,q)\frac{d}{dt}\left(L[q]^m_{\tau}(t)-\sum_{j=1}^{m}\psi_1^{j}
\cdot q^{(j)}(t)\right)+\left(L[q]^m_{\tau}(t)-\sum_{j=1}^{m}\psi_1^{j}
\cdot q^{(j)}(t)\right)\dot{\eta}(t,q)\right]dt.
\end{split}
\end{equation}
Using the Euler--Lagrange equation \eqref{eq:ELdeordm}, the
DuBois--Reymond condition \eqref{eq:DBRordm}, and relations
\eqref{eq:eqprin1} and \eqref{eq:cnsiifm1} in \eqref{eq:TeNetm2},
we obtain:
\begin{multline}
\label{eq:dems}
\int_{t_1}^{t_2-\tau}\left[-\dot{\Phi}[q]^m_{\tau}(t)
+\left(\partial_{2} L[q]^m_{\tau}(t)+\partial_{m+3}
L[q]^m_{\tau}(t+\tau)\right)\cdot\xi(t,q)+\psi^{1}_1
\cdot(\rho^1(t)+\dot{q}(t)\dot{\eta}(t,q))\right. \\
\left. +\sum_{j=2}^{m}\left[\left(\partial_{j+1}
L[q]^m_{\tau}(t)+\partial_{j+m+2}
L[q]^m_{\tau}(t+\tau)-\psi_1^{j-1}\right)\cdot\rho^{j-1}(t)+
\psi_1^{j}\cdot\left(\rho^{j}(t)+q^{(j)}(t)
\dot{\eta}(t,q)\right)\right]\right.\\ \left.
+\partial_{1} L[q]^m_{\tau}(t)\eta(t,q)
+\left(L[q]^m_{\tau}(t)-\sum_{j=1}^{m}\psi_1^{j}
\cdot q^{(j)}(t)\right)\dot{\eta}(t,q)\right]dt\\
=\int_{t_1}^{t_2-\tau}\Bigl[
\partial_{1} L [q]^m_{\tau}(t)\eta(t,q)+L[q]^m_{\tau}(t)\dot{\eta}(t,q)
+\left(\partial_{2} L[q]^m_{\tau}(t)+\partial_{m+3}
L[q]^m_{\tau}(t+\tau)\right)\cdot\xi(t,q)\\
+\psi_1^{1} \cdot(\rho^1(t)+\dot{q}(t)\dot{\eta}(t,q))-\psi_1^1\cdot\rho^1(t)
-\psi_1^1\cdot\dot{q}(t)\dot{\eta}(t,q)+\psi_1^m\cdot\rho^m(t)\\
+\sum_{j=2}^{m}\left(\partial_{j+1} L[q]^m_{\tau}(t)+\partial_{j+m+2}
L[q]^m_{\tau}(t+\tau)\right)\cdot\rho^{j-1}(t)-\dot{\Phi}[q]^m_{\tau}(t)\Bigr]dt = 0.
\end{multline}
Simplification of \eqref{eq:dems} leads precisely to the necessary
condition of invariance \eqref{eq:cnsiifm}, which completes the proof.
\end{proof}
\section*{Acknowledgements}
This work was supported by FEDER funds through
COMPETE --- Operational Programme Factors of Competitiveness
(``Programa Operacional Factores de Competitividade'')
and by Portuguese funds through the
Center for Research and Development
in Mathematics and Applications (University of Aveiro)
and the Portuguese Foundation for Science and Technology
(``FCT --- Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia''),
within project PEst-C/MAT/UI4106/2011
with COMPETE number FCOMP-01-0124-FEDER-022690.
Frederico was also supported by FCT through the
post-doc fellowship SFRH/BPD/51455/2011,
program ``Ci\^{e}ncia Global'', Odzijewicz and Torres
by EU funding under the 7th Framework Programme
FP7-PEOPLE-2010-ITN, grant agreement number 264735-SADCO.
\section{Introduction}
Planetary nebulae (PNe) are formed from highly evolved low- and intermediate-mass stars, which are in the pre-white dwarf stage. The ages of PN central stars range from 0.1 to about 9 Gyr (Allen et al. 1998). The chemical abundances in the nebulae are typical of the moment when the star was born, except for some elements like He, N, C, and possibly O, which have been processed in the stellar core and partially dredged up to the surface through several dredge-up events. Thus, the youngest PNe show O, Ne, Ar, S, and other $\alpha$-element abundances similar to those of the present interstellar medium, while the older objects show abundances typical of an older stellar population. The galactic kinematics also differs, in the sense that the young objects belong to the thin disk, while the oldest PNe belong to the galactic halo and appear as high-velocity objects. In general, most PNe belong to the intermediate disk population. See Peimbert (1978; 1990) for a thorough review on these subjects.
Considering the above, Peimbert (1978) classified the galactic PNe into four types, according to their chemical composition and kinematics (see also Peimbert \& Serrano 1980, and Peimbert \& Torres-Peimbert 1983). According to Peimbert (1990), who presents a more refined classification, the main characteristics of the different types are the following. Type I PNe are He- and N-rich objects (He/H$\geq$0.125, N/O$\geq$0.5); the initial masses of their central stars are in the range 2--8 M$_\odot$, the nebulae have, in general, bipolar morphologies, and they belong to the young population. Type II PNe, representing the majority of the known PN sample, are of intermediate population; they show no particular He and N enrichment, the initial masses of their central stars are smaller than 2 M$_\odot$, and they show peculiar velocities lower than 60 km s$^{-1}$. Type III PNe are similar to Type II's, but their peculiar velocities are larger than 60 km s$^{-1}$ and they probably have distances to the galactic plane larger than 1 kpc. Finally, Type IV PNe are defined as those extreme population II objects that belong to the galactic halo; there are only a few objects in this category and they show very low metallicities and high peculiar velocities (see e.g., Howard et al. 1997). Peimbert's classification has been revised by several authors; it is worth mentioning, for instance, that Kingsburgh \& Barlow (1994) proposed to classify as Type I those objects with N/O $\geq$0.8. In \S 4 we will use some of the criteria given above to classify our objects.
\par
Among the central stars of PNe there is a particular group which presents important atmospheric instabilities and large mass-loss rates. Their spectra are similar to the ones shown by massive Wolf-Rayet stars of the C series and are classified in a similar way, but with the nomenclature [WC] or [WO]. PNe with this type of star represent no more than 15\% of the known sample, and in the following we call them [WR]PNe. The [WR] central stars are H-deficient and their atmospheres show He, C, and O, a composition typical of the zone where nucleosynthesis took place (e.g., Koesterke 2001). Many works have been devoted to analyzing these special central stars and their surrounding nebulae (e.g., G\'orny \& Stasi\'nska 1995; Pe\~na et al. 2001; Gesicki et al. 2006; G\'orny et al. 2009; Garc{\'\i}a-Rojas et al. 2009, 2012; DePew et al. 2011, among others).
In this paper we analyze high spectral resolution data obtained with the 2.1-m telescope and the Echelle REOSC spectrograph of the Observatorio Astron\'omico Nacional San Pedro M\'artir (OAN-SPM), M\'exico, and with the 6.5-m Clay Telescope equipped with the double echelle spectrograph MIKE, at Las Campanas Observatory (LCO), Chile,
in order to study the galactic kinematical behavior of nearly a hundred PNe. Of these, a significant number are [WR]PNe,
and the rest are ionized by normal central stars or by weak emission line stars (wels).
In Section 2 we present the PN sample, the observations, and the Galactic distribution of the objects; in \S 3, the heliocentric, circular, and peculiar velocities are calculated for the subsample with available distances. The distribution of objects among the different Peimbert types is presented and discussed in \S 4, and our conclusions can be found in \S 5.
\section{The sample: Observations, data analysis, and the Galactic distribution of the objects}
The log of our observations, for the whole sample, is presented in Table 1, where we list the observatory and the observing date for each object. The sample collected at the OAN-SPM consists of 56 PNe observed from 1995 to 2001, while the sample from LCO (Clay telescope) consists of 25 objects (9 are in common with the OAN-SPM sample), observed during runs in 2006, 2009 and 2010.
For the SPM objects, the echelle spectrograph REOSC was used at high resolution (Levine \& Chakrabarty 1994). The observed wavelength range covers from about 3600 to 6900 \AA. The description of the observations as well as the data reduction procedure can be found in Pe\~na et al. (2001) and Medina et al. (2006). In those works, the data of a sample of objects from Table 1 (mainly [WR]PNe) were used to derive and analyze the physical conditions, chemical abundances, and expansion velocities of the nebulae. In this paper we use the data presented in those works (notice that several nebulae were observed more than once; here we use only one spectrum per object for the radial velocity measurements), together with data for other objects (processed in the same way), to derive radial velocities of the nebulae. The spectral resolution of these spectra is 0.2 to 0.3 \AA~per pixel, which allows us to determine radial velocities with a precision of about 12 to 19 km s$^{-1}$.
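These figures follow from the Doppler relation $\Delta v = c\,\Delta\lambda/\lambda$: for example, 0.2~\AA~at [OIII] $\lambda$5007 corresponds to $\sim$12 km s$^{-1}$, while 0.3~\AA~at H$\beta$ corresponds to $\sim$19 km s$^{-1}$.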
\begin{table*}
\caption{ Log of Observations$^{(1)}$}\label{log-observations}\centering
\begin{tabular}{llcrlllcl}
\toprule
PN G &name & obs & obs date$^{(2)}$ & &PN G & name & obs & obs date$^{(2)}$\\
\midrule
001.5$-$06.7 & SwSt1* & LCO & 08/09/09 & & 103.7+00.4 & M2-52* & SPM & 02/11/00\\
001.5$-$06.7 & ~~'' & SPM & 05/08/97 & & 104.4$-$01.6 & M2-53* & SPM & 02/11/00\\
002.2$-$09.4 & Cn1-5* & LCO & 09/09/09 & & 108.4$-$76.1 & BoBn1* & SPM & 27/08/01\\
002.2$-$09.4 &~~ '' & SPM & 17/06/96 & & 111.8$-$02.8 & Hb12* & SPM & 25/08/01\\
002.4+05.8 & NGC6369* & LCO & 05/06/10 & & 118.0$-$08.6 & Vy1-1* & SPM & 02/11/00\\
002.4+05.8 & ~~'' & SPM & 15/06/96 & &118.8$-$74.7 & NGC246* & SPM & 13/12/98\\
003.1+02.9 & Hb4* & LCO & 05/06/10 & &119.6$-$06.7 & Hu1-1* & SPM & 26/08/01\\
003.1+02.9 & ~~'' & SPM & 14/06/96 & &120.0+09.8 & NGC40* & SPM & 13/12/98\\
003.9-14.9 & Hb7* & SPM & 26/08/01 & & 130.2+01.3 & IC1747* & SPM & 14/12/98\\
004.9+04.9 & M1-25 & LCO & 05/06/10 & &130.3$-$11.7 & M1-1* & SPM & 02/11/00\\
004.9+04.9 & ~~'' & SPM & 17/06/96 & &130.9$-$10.5 & NGC650-51* & SPM & 26/08/01\\
006.8+04.1 & M3-15* & LCO & 08/09/09 & &133.1$-$08.6 & M1-2* & SPM & 27/08/01\\
006.8+04.1 & ~~''& SPM & 17/06/96 & &144.5+06.5 & NGC1501 & SPM & 14/12/98\\
009.4$-$05.5 & NGC6629* & SPM & 05/08/97 & &146.7+07.6 & M4-18* & SPM & 14/12/98\\
010.8$-$01.8 & NGC6578 & SPM & 14/06/96 & &159.0$-$15.1 & IC351* & SPM & 05/10/99\\
011.7$-$00.6 & NGC6567* & SPM & 05/08/97 & &161.2$-$14.8 & IC2003* & SPM & 05/10/99\\
011.9+04.2 & M1-32* & LCO & 05/06/10 & &166.1+10.4 & IC2149* & SPM & 14/12/98\\
011.9+04.2 & ~~'' & SPM & 14/06/96 & &194.2+02.5 & J900* & SPM & 02/11/00\\
012.2+04.9 & PM1-188 & SPM & 04/08/97 & &197.8+17.3 & NGC2392* & SPM & 13/12/98\\
017.9$-$04.8 & M3-30* & SPM & 17/06/96 & &243.3$-$01.0 & NGC2452* & SPM & 13/12/98\\
019.4$-$05.3 & M1-61 & LCO & 05/06/10 & & 278.1$-$05.9 & NGC2867 & LCO & 08/05/06\\
019.4$-$05.3 & ~~'' & SPM & 25/08/01 & & 278.8+04.9 & PB6 & LCO & 10/05/06\\
019.7$-$04.5 & M1-60 & SPM & 25/08/01 & &285.4+01.5 & Pe1-1 & LCO & 04/06/10\\
025.8$-$17.9 & NGC6818* & SPM & 27/08/01 & &286.3+02.8 & He2-55 & LCO & 10/05/06\\
027.6+04.2 & M2-43 & SPM & 17/06/96 & &291.3$-$26.2 & Vo1 & LCO & 08/05/06\\
029.2$-$05.9 & NGC6751* & LCO & 09/09/09 & &292.4+04.1 & PB8 & LCO & 09/05/06\\
037.7$-$34.5 & NGC7009* & SPM & 02/11/01 & &294.1+43.6 & NGC4361* & SPM & 17/06/96\\
042.5$-$14.5 & NGC6852* & SPM & 27/08/01 & &300.7$-$02.0 & He2-86 & LCO & 05/06/10\\
046.4$-$04.1 & NGC6803* & SPM & 26/08/01 & & 307.2$-$03.4 & NGC5189* & LCO & 09/05/06\\
048.7+01.9 & He2-429 & SPM & 05/10/99 & &309.0$-$04.2 & He2-99 & LCO & 09/05/06\\
051.9$-$03.8 & M1-73 & SPM & 26/08/01 & &321.0+03.9 & He2-113 & LCO & 09/05/06\\
054.1$-$12.1 & NGC6891* & SPM & 25/08/01 & &327.1$-$02.2 & He2-142 & LCO & 09/05/06\\
058.3$-$10.9 & IC4997* & SPM & 25/08/01 & &332.9$-$09.9 & CPD-56 & LCO & 10/05/06\\
061.4$-$09.5 & NGC6905* & SPM & 14/06/96 & &336.2$-$06.9 & PC14 & LCO & 05/06/10\\
064.7+05.0 & BD+30 3639* & SPM & 05/10/99 & &337.4+01.6 & Pe1-7 & LCO & 04/06/10\\
086.5$-$08.8 & Hu1-2* & SPM & 25/08/01 & &355.2$-$02.5 & H1-29 & SPM & 26/08/01\\
089.0+00.3 & NGC7026* & SPM & 14/12/98 & &355.9$-$04.2 & M1-30* & LCO & 05/06/10\\
089.8$-$05.1 & IC5117* & SPM & 25/08/01 & &355.9$-$04.2 & ~~'' & SPM & 27/08/01\\
096.3+02.3 & K3-61* & SPM & 05/10/99 & &356.2$-$04.4 & Cn2-1* & SPM & 05/08/97\\
096.4+29.9 & NGC6543* & SPM & 15/06/96 & &358.3$-$21.6 & IC1297 & LCO & 08/09/09\\
100.6$-$05.4 & IC5217* & SPM & 05/10/99 & & & & &\\
\bottomrule
\multicolumn{9}{l}{$^{(1)}$The observatory and observing dates are indicated.}\\
\multicolumn{9}{l}{~~~~Observations at LCO were obtained with MIKE, and at SPM, with the echelle REOSC.}\\
\multicolumn{9}{l}{~~~~Objects with * are part of the sample downloaded from the SPM Kinematical Catalogue.}\\
\multicolumn{9}{l}{$^{(2)}$Observing date in dd/mm/yy.}
\end{tabular}
\end{table*}
The data for the LCO objects were obtained with the double echelle Magellan Inamori Kyocera spectrograph, MIKE (Bernstein et al. 2003). A full description of the observations and data reduction procedures is presented by Garc{\'\i}a-Rojas et al. (2009) and Garc{\'\i}a-Rojas et al. (2012), who have used the data of thirteen of these objects to analyze the physical conditions and chemical behavior of the nebulae. In this case the spectral resolution is better than 0.17 \AA~in the blue (about 10.8 km s$^{-1}$) and 0.23 \AA~in the red (about 12.8 km s$^{-1}$).
In addition, we have collected data from the SPM Kinematic Catalogue of Galactic Planetary Nebulae (L\'opez et al. 2012), which provides spatially resolved, long-slit echelle spectra for about 600 galactic PNe. Position-velocity images in H$\alpha$, [NII]$\lambda\lambda$ 6548, 6583, and [OIII]$\lambda$5007, obtained with different slit positions across the nebulae, are presented for each object. From this catalogue, we downloaded all the available spectra of [WR]PNe (with the slit passing through the center of the nebula), in order to measure their systemic heliocentric radial velocities. In total we found 54 objects. Of these, 48 are in common with the SPM and LCO objects (they are marked with * in Table 1). The additional 6 objects, not observed at SPM or LCO, are PN G009.8-04.6, PN G068.3-02.7, PN G081.2-14.9, PN G189.1+19.8, PN G208.5+33.2, and PN G307.5-04.9.
Our final sample, with measured radial velocities, consists of 78 objects, of which 44 are [WR]PNe, 3 are [WC]-PG1159 objects (considered as one group in the following), and 16 PNe are ionized by wels (hereafter WLPNe). The remaining 15 PNe contain a normal or a PG1159 central star. The [WR]PN sample represents 43\% of the total sample of known [WR]PNe, which presently amounts to 103 objects (DePew et al. 2011). Medina et al. (2006) showed that, regarding the expansion velocities, WLPNe behave similarly to normal PNe. Gesicki et al. (2006) and other authors have found that there are noticeable differences between [WR]PNe and WLPNe, so in the following we will consider WLPNe and normal PNe as one group, apart from the [WR]PN group. This will be further discussed in \S 2.1 and \S 4.
Since we are analyzing the distribution and galactic kinematics of [WR]PNe, we have included in our sample 5 additional [WR]PNe (PN G020.9-01.1, PN G274.3+09.1, PN G306.422.4-00.1, PN G309.1-04.3, and PN G322.4-00.1)
for which we found distances (given by Stanghellini \& Haywood 2010) but not velocities. They will be used for analyzing the galactic distribution of [WR]PNe.
The main characteristics of all the analyzed objects are presented in Table 4, where we include, in column 1, the name corresponding to the Strasbourg-ESO Catalogue of Galactic Planetary Nebulae by Acker et al. (1992); in column 2, the common name; and in column 3, the spectral classification of the central star: [WC\#] or [WO\#] for Wolf-Rayet central stars, `wels' for weak emission line stars, and `pn' for normal stars. The [WR] and `wels' classifications were adopted from Acker \& Neiner (2003) and references therein, and from Todt et al. (2010) for the case of PN G292.4+04.1 (PB\,8). Columns 4 and 5 of Table 4 show the galactocentric distances of the objects and their errors, as given in the work by Stanghellini \& Haywood (2010).
\begin{figure}[!h]
\begin{center}
\includegraphics [scale=0.34]{WRPN.eps}
\includegraphics [scale=0.34]{wels_normal.eps}
\caption{Distribution of objects at different heights, $z$, above the galactic plane. (a) [WR]PNe (46 objects); (b) objects ionized by wels (histogram in gray) and by normal central stars (histogram in black), 31 objects in total. The halo PN G108.4$-$76.1 (BoBn\,1), which has $z$ = 17 kpc, is not represented here.}
\end{center}
\end{figure}
\subsection{Distribution of [WR]PNe and WLPNe, relative to the galactic disk }
Figure 1 presents the distribution of objects ([WR]PNe, WLPNe, and normal PNe) as a function of height above the galactic disk, {\it z} (pc). Heights were obtained by assuming the heliocentric distances given by Stanghellini \& Haywood (2010, distances were found for 78 objects) and taking into account the galactic coordinates of the objects. The heights are listed in column 6 of Table 4. In the graph for [WR]PNe (Fig. 1a, which also includes three [WC]-PG1159 stars) it is evident that most of the objects (32 of 46) belong to a thin disk with heights lower than 400 pc, while in the graph for WLPNe and normal PNe (Fig. 1b) the great majority of the objects (25 of 31) present heights above the galactic plane of up to 800 pc. It is very interesting to notice that WLPNe (gray histogram in Fig. 1b) do not show any particular concentration towards the thin disk. Twelve of sixteen WLPNe have heights up to 800 pc and the other four objects are located at larger $z$, very similar to the distribution shown by PNe with normal central stars. Although there are only 16 WLPNe in our sample, this result indicates that these objects are distributed differently than [WR]PNe.
Thus, regarding their position in the Galaxy, [WR]PNe seem to belong to a population located in a thinner disk than PNe ionized by wels and normal central stars, indicating that the progenitor stars of [WR]PNe would be younger and of larger initial masses. In our [WR]PN sample there are only 4 objects with $z >$ 1 kpc; they are PN G358.3-21.6 (IC\,1297) with $z$= 1.827 kpc, PN G161.2-14.8 (IC\,2003) with $z$= 1.21 kpc, PN G146.7+07.6 (M\,4-18) with $z$=1.185 kpc, and PN G118.0$-$08.6 (Vy\,1-1) with $z$= 1.07 kpc.
The galactic distribution of [WR]PNe was analyzed by Acker et al. (1996). They found that the fraction of [WR]PNe with galactic latitude $|b| < 7^{\rm o}$ is similar to the fraction for normal PNe, thus concluding that both distributions are equal. However, they did not consider the distances and heights above the galactic plane, which is probably the reason for the difference with our results.
\section{Heliocentric, circular and peculiar radial velocities}
For the objects observed at the OAN-SPM and LCO, radial velocities were measured from the most intense (not saturated) spectral lines, such as [OIII]$\lambda\lambda$ 5007, 4959, H$\gamma$, H$\beta$, H$\alpha$, and [NII]$\lambda\lambda$6548, 6583, by using the task {\it splot} of IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under contract with the National Science Foundation.}.
For the OAN-SPM data we found that the red zone of our spectra (wavelengths longer than 6000 \AA) was not properly calibrated in wavelength, giving results with large discrepancies relative to the blue zone; therefore we used only the blue lines [OIII]$\lambda\lambda$ 5007, 4959, H$\gamma$, and H$\beta$ to determine the radial velocities of these objects.
The data from LCO have better resolution and the blue and red lines show almost no discrepancy, so radial velocities were measured for all the lines (and their results are preferred over those of the OAN-SPM and the SPM Kinematical Catalogue).
The finally adopted radial velocity in each case corresponds to the average obtained from the lines used, and the errors were calculated as the mean quadratic error, $\Delta {\rm V} = \left(\Sigma_i(V_i - V)^2 / (n(n-1))\right)^{1/2}$, where $V_i$ is the velocity for each line, $V$ is the average velocity, and $n$ is the number of lines considered. These errors represent the internal consistency of our spectra and not necessarily the true uncertainty in the determined velocity (see \S 3.1).
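As an illustration, the averaging and its internal error can be computed as in the following Python sketch (the line velocities shown are hypothetical values, for illustration only):
\begin{verbatim}
import math

# velocities (km/s) measured from individual lines -- illustrative values
v_lines = [23.4, 24.1, 22.8, 23.9]

n = len(v_lines)
v_mean = sum(v_lines) / n
# mean quadratic error: (sum_i (V_i - V)^2 / (n(n-1)))^(1/2)
dv = math.sqrt(sum((v - v_mean) ** 2 for v in v_lines) / (n * (n - 1)))
\end{verbatim}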
Afterwards we used the IRAF routine {\it rvcorrect} to determine, for each object, the heliocentric radial velocity and its error. The results are presented in columns 7 and 8 of Table 4. In these columns, the data from LCO are boldfaced.
As said before, we selected the galactic disk [WR]PNe, appearing in the Kinematical Catalogue of SPM (L\'opez et al. 2012) and determined the heliocentric radial velocities of these objects by measuring the systemic velocity of the nebula from the position-velocity diagram obtained when the slit was positioned through the center of the object. We measured all the available lines for each object ([OIII]$\lambda$5007, H$\alpha$, and [NII]$\lambda\lambda$ 6548,6583). Such velocities, V$_{\rm Cat}$, are listed in column 9 of Table 4. Notice that for a few objects this is the only available velocity.
\subsection{Analysis of velocities}
\begin{figure}[!th]
\begin{center}
\includegraphics [scale=0.4]{LCOCat_2.eps}
\includegraphics [scale=0.4]{LCODAZ_2.eps}
\includegraphics[scale=0.4]{SPMCat_2.eps}
\includegraphics[scale=0.4]{SPMDAZ_2.eps}
\caption{Comparison of heliocentric radial velocities from different observations. Top: V$_{\rm LCO}$ vs. velocities from the SPM Kinematical Catalog, V$_{\rm Cat}$, and V$_{\rm LCO}$ vs. Durand et al. (DAZ, 1998) data. Bottom: V$_{\rm SPM}$ vs. V$_{\rm Cat}$ and V$_{\rm SPM}$ vs. Durand et al. velocities. The solid lines represent the linear fits to the data and they are discussed in the text. The dashes lines represent a 1:1 relation (45$^{\rm o}$ slope lines). }
\end{center}
\end{figure}
An adequate way of estimating the uncertainties in an observed quantity is by comparing independent measurements of the quantity. For the heliocentric velocities of our objects we have, in most of the cases, two or three independent observations that can be compared, and thus the uncertainties can be estimated.
Figure 2 shows a comparison of our heliocentric radial velocities (V$_{\rm LCO}$ and V$_{\rm SPM}$) with the values derived from the SPM Kinematical Catalogue, (V$_{\rm Cat}$), and with values from the literature (Durand et al. 1998, V$_{\rm DAZ}$). A 45$^{\rm o}$ slope line has been included in all the graphs for comparison. It is evident that the velocities from LCO and from the Kinematical Catalogue (Fig. 2 up, left), which are the ones with the best spectral resolution, are very well correlated; the linear correlation is V$_{\rm LCO}$ = 0.978 V$_{\rm Cat} -$3.543 km s$^{-1}$, with a correlation coefficient r$^2$= 0.994. The dispersion of the differences (V$_{\rm LCO}$ - V$_{\rm Cat}$) is 3.8 km s$^{-1}$. For the velocities from SPM we found a good correlation with those from the Kinematical Catalogue (Fig. 2 bottom, left), except for a few objects showing large differences. The linear fit is V$_{\rm SPM}$ = 1.072 V$_{\rm Cat} -$3.967, with r$^2$=0.948. The dispersion of the differences (V$_{\rm SPM}$ - V$_{\rm Cat}$) is 12.9 km s$^{-1}$.
The comparison of V$_{\rm LCO}$ vs. V$_{\rm DAZ}$ (Fig. 2 up, right) is also good with only one object showing a discrepancy as large as 40 km s$^{-1}$. The fit is V$_{\rm LCO}$ = 0.927V$_{\rm DAZ} -$0.719 with correlation coefficient r$^2$= 0.942. The dispersion of the differences (V$_{\rm LCO}$ - V$_{\rm DAZ}$) is 8.8 km s$^{-1}$. The correlation is worse for V$_{\rm SPM}$ vs. Durand et al. data (Fig. 2 bottom, right) that has a fit V$_{\rm SPM}$ = 1.028 V$_{\rm DAZ} -$6.382, with correlation coefficient r$^2$= 0.890. The dispersion of the differences is, in this case, 18.0 km s$^{-1}$.
Durand et al. data correspond to a compilation of results from different authors in the literature and, according to Durand et al., most of their sample has velocity uncertainties better than about 20 km s$^{-1}$.
In all the graphs it is apparent that the 1:1 correlation and the linear fits are very similar, showing the good quality of the different estimates of velocities.
Considering the above discussion, we have decided to adopt, when possible, the velocities obtained from LCO with an uncertainty of $\pm$3.8 km s$^{-1}$; if these are not available, we have adopted the velocities from the SPM Kinematical Catalogue with the same uncertainty and, as a third choice, we have used the velocities obtained with the Echelle REOSC from SPM, assuming an uncertainty of $\pm$12.9 km s$^{-1}$.
\subsection{Circular velocities}
From the galactocentric distance of the objects, R$_{\rm G}$, as given by Stanghellini \& Haywood (2010), we determined their circular radial velocities following the expression:
\begin{eqnarray}
{\rm V}_{\rm circ}= -{\rm u_\sun \, cos\,l~cos\,b -v_\sun \,sin\,l ~cos\,b } \nonumber \\
- {\rm w_\sun \,sin\,b - 2 A ( R_G - R_\sun) \, sin\,l~cos\,b} \nonumber \\
+ {\frac {\rm A_2} {2} } \, {\rm (R_ G}- {\rm R_\sun)^2~sin\,l ~cos\,b +K.}
\end{eqnarray}
\noindent This equation is the expansion to second order of the circular radial velocity as a function of R$_{\rm G}$. The expansion to second order allows us to use the kinematics of objects located further from the Sun. In Eq. 1, u$_\sun$, v$_\sun$ and w$_\sun$ are the radial, azimuthal and vertical components of the solar motion relative to the Local Standard of Rest.
R$_\sun$ is the distance of the Sun to the Galactic center and `l' and `b' are the galactic longitude and latitude of the object.
The fourth term on the right-hand side of Eq. 1 represents the usual differential galactic rotation to first order, A being the Oort constant. In the second row, A$_2$ is the second-order coefficient of the derivative of the rotation speed with respect to R$_{\rm G}$, and K is the K-term of local galactic expansion.
To calculate the radial circular velocities from Eq. 1, we used the standard values for the kinematical parameters that are listed in Table 2.
\setcounter{table}{1}
\begin{table}
\begin{center}
\caption{Kinematical parameters for Eq. 1}
\begin{tabular}{lll}
\toprule
parameter & value (units)& ref.$^{(1)}$\\
\midrule
$u_\odot$ & 10.4 (km s$^{-1}$) & 1\\
$v_\odot$ & 14.8 (km s$^{-1}$) & 1\\
$w_\odot$ & 7.3 (km s$^{-1}$) & 1\\
$R_\odot$ & 8.0$\pm$0.5 (kpc)& 2\\
$A$ & 14.4$\pm$1.2 (km s$^{-1}$ kpc$^{-1}$)& 3\\
$A_{2}$ & 13.0$\pm$0.9 (km s$^{-1}$ kpc$^{-2}$) & 4\\
$K$ & 5.1$\pm$2.8 (km s$^{-1}$) & 4\\
\bottomrule
\multicolumn{3}{l}{$^{(1)}$ \small 1. Mihalas \& Routly 1968; 2. Reid 1993;}\\
\multicolumn{3}{l}{ \small 3. Kerr \& Lynden-Bell 1986; 4. Durand et al. 1998}\\
\end{tabular}
\end{center}
\end{table}
Values for the derived V$_{\rm circ}$ are listed in column 11 of Table 4. The errors for V$_{\rm circ}$ (columns 12 and 13) were computed by considering the errors in the galactocentric distances.
After calculating V$_{\rm circ}$ we determined the peculiar velocity for each object as:
\centerline{ ${\rm V_{pec} = V_{ hel} - V_{circ}}$.}
The resulting values and the corresponding errors are listed in columns 14, 15, and 16 of Table 4. It is important to notice that the errors in distances are, in general, the most important uncertainties in computing V$_{\rm pec}$.
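As a sketch of this procedure, the following Python fragment evaluates Eq. 1 with the kinematical parameters of Table 2 and then the peculiar velocity (the function names are ours; angles in degrees, distances in kpc):
\begin{verbatim}
import math

# Kinematical parameters from Table 2
U0, V0, W0 = 10.4, 14.8, 7.3   # solar motion (km/s)
R0 = 8.0                       # solar galactocentric distance (kpc)
A, A2, K = 14.4, 13.0, 5.1     # Oort constant, 2nd-order term, K-term

def v_circ(l_deg, b_deg, r_g):
    """Circular radial velocity, Eq. 1."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    slcb = math.sin(l) * math.cos(b)
    return (-U0 * math.cos(l) * math.cos(b) - V0 * slcb
            - W0 * math.sin(b) - 2.0 * A * (r_g - R0) * slcb
            + 0.5 * A2 * (r_g - R0) ** 2 * slcb + K)

def v_pec(v_hel, l_deg, b_deg, r_g):
    """Peculiar velocity V_pec = V_hel - V_circ."""
    return v_hel - v_circ(l_deg, b_deg, r_g)
\end{verbatim}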
\section{Stellar populations of PNe}
\label{Description}
We have classified our sample of PNe using the criteria of Peimbert's classification scheme defined in the Introduction: Type I are the PNe with He/H$>$0.125 and N/O$>$0.5, Type II show no He or N enrichment and have V$_{\rm pec} \leq$ 60 km s$^{-1}$, and Type III have V$_{\rm pec} >$ 60 km s$^{-1}$. The chemical compositions of our objects (important for the Type I classification) were adopted from the works by Garc{\'\i}a-Rojas et al. (2009; 2013 in preparation), Pe\~na et al. (2001), or from the literature. For each Type I object we list the relevant abundance ratios (N/O and He/H) in Table 3, where the source of the abundances is identified.
Peimbert's type for each object is shown in column 17 of Table 4. In Table 5 we present a summary of the number of [WR]PNe, WLPNe and normal PNe of each Type. In the next subsections a discussion of the different Peimbert-type samples is given.
\begin{table*}[tH]
\begin{center}
\caption{Peimbert's Type I PNe ([WR]PNe are boldfaced).}
\label{modcel}
\begin{tabular}{lrrrrrrrrrr}
\toprule
&object &V$_{\rm pec}$& N/O & He/H & ref$^{(1)}$ \\
\midrule
{\bf 001.5$-$06.7}& {\bf SwSt\,1}&$-$17.0 &0.80 &0.040& PSM01\\
{\bf 002.2$-$09.4}&\textbf{Cn\,1-5}& $-$37.9 & 0.83 & 0.158&G-R12\\
{\bf 003.1+02.9}&\textbf{Hb\,4}&$-$75.9& 0.71& 0.115& G-R12\\
{\bf 011.9+04.2}&\textbf{M\,1-32}&$-$132.3 & 0.51& 0.126& G-R12\\
{\bf 064.7+05.0}&\textbf{BD+30 3036}&$-$40.5 &1.25& ---& PSM01\\
086.5$-$08.8&Hu\,1-2&19.0 & 1.15& 0.151 & PLT-P95 \\
{\bf 089.0+00.3}&\textbf{NGC\,7026}&$-$24.8 & 1.01 & 0.124& PSM01\\
103.7+00.4&M\,2-52&$-$32.3 & 2.3 & 0.165& PM02\\
{\bf 278.8+04.9}&\textbf{PB\,6}&25.1 & 1.48 & 0.19 &G-R09\\
{\bf 300.7$-$02.0}&\textbf{He\,2-86}&5.9 &0.72&0.123 & G-R12 \\
{\bf 307.2$-$03.4}&\textbf{NGC\,5189}&$-$16.9 & 0.68 & 0.123 & KB94\\
{\bf 307.5$-$04.9} & \textbf{MyCn\,18} & $-$34.6 & 1.04 & 0.095 & KB94\\
\bottomrule
\multicolumn{6}{l}{\small {$^{(1)}$ PSM01: Pe\~na et al. 2001; G-R12: Garc{\'\i}a-Rojas et al. 2013 in prep;}}\\
\multicolumn{6}{l}{\hskip 0.45cm {\small PLT-P95: Peimbert et al. 1995; PM02: Pe\~na \& Medina 2001;}}\\
\multicolumn{6}{l}{\hskip 0.45cm {\small G-R09: Garc{\'\i}a-Rojas et al. 2009; KB94: Kingsburgh \& Barlow 1994}}\\
\end{tabular}
\end{center}
\end{table*}
\subsection{Type I objects}
As expected, Type I PNe show, in general, low peculiar velocities (lower than 50 km s$^{-1}$). This agrees with this kind of PN supposedly being produced by the most massive progenitor stars (M$_{\rm i} \geq $ 2.0 M$_\sun$, Peimbert \& Serrano 1980).
In our [WR]PN+[WR]-PG1159 sample there are 10 Type I PNe, and among them two objects with high peculiar velocities: PN G003.1+02.9 (Hb\,4) with V$_{\rm pec} = -$75.9 km s$^{-1}$, and PN G011.9+04.2 (M\,1-32) with V$_{\rm pec}=-$132.3 km s$^{-1}$. The standard criteria for considering an object as belonging to the bulge are: galactic position within 10$^{\rm o}$ of the galactic center, radius lower than 20$''$, and radio flux at 5 GHz smaller than 100 mJy (Stasi\'nska \& Tylenda 1994; Cavichia et al. 2011). Hb\,4 is located within 10$^{\rm o}$ of the galactic center, has a radius of 2.5$''$, and its flux at 5 GHz (6 cm) is 166 mJy; thus it fulfills all the criteria except that it is brighter at 5 GHz. Its galactocentric distance is about 2.9$\pm$1.0 kpc, so it could be part of the bulge, despite its relatively strong 5 GHz flux.
M\,1-32 is a peculiar object. Its chemical abundances (N/O= 0.51 and He/H = 0.126) locate it marginally among Type I PNe. If the Kingsburgh \& Barlow (1994) criteria were adopted, it would not be a Type I PN, and its peculiar velocity would locate it among Type III's, even though its height above the galactic plane is only 0.35 kpc.
The internal kinematics of M\,1-32 is peculiar; Medina et al. (2006) found high-velocity wings in the nebular lines, which are also apparent in Fig. 1 of Garc{\'\i}a-Rojas et al. (2012). Recently Akras \& L\'opez (2012) analyzed the velocity field of this object and confirmed the high-velocity wings, which they interpret as collimated bipolar outflows reaching velocities of $\pm$200 km s$^{-1}$. These authors consider M\,1-32 a bulge object. Its angular diameter is 7.6$''$ and its observed flux at 5 GHz is 61 mJy, so these two criteria for being a bulge PN are fulfilled. However, it is located more than 10$^{\rm o}$ from the galactic center, at a galactocentric distance of about 3.46$\pm$0.84 kpc. With these characteristics M\,1-32 could be at the border of the bulge, and its high V$_{\rm pec}$ could be a consequence of its location in the Galaxy.
Regarding WLPNe, it is important to remark that none of the objects in our sample is a Type I PN. This fact was already mentioned by Pe\~na et al. (2003) for a smaller sample, and also by Fogel et al. (2003), who analyzed a sample of 42 WLPNe and found no Type I among them.
Our sample includes only a small number (12) of normal PNe chosen randomly (11 objects belong to the disk and one is in the galactic halo), which is by no means representative of the total galactic PNe. In this short sample we have 2 Type I PNe (17\%), a fraction similar to what is found in larger samples (e.g., Peimbert 1990).
\subsection{Type II objects}
Among the [WR]PNe and [WC]-PG1159 PNe, there are 23 objects (51\% of the sample) classified as Type II. This also occurs in normal PNe, where the Type II objects are the majority (see e.g., Peimbert 1990; Stanghellini \& Haywood 2010). Among our normal PN and WLPN samples, there are 6 and 7 Type II PNe, respectively. These PNe belong to the disk intermediate population, their progenitors had initial masses lower than 2 M$_\odot$, and in consequence they would be older than Types I.
\subsection{Type III objects}
Interestingly, there are five [WR]PNe among the Type III objects which, as we said, have V$_{\rm pec}$ larger than 60 km s$^{-1}$ and are the oldest PNe of the disk population, belonging probably to the thick disk. The 5 objects represent 11.6\% of our [WR] sample; they are PN G002.4+05.8 (NGC\,6369, [WC]4, V$_{\rm pec}=-89.6$ km s$^{-1}$), PN G029.3$-$05.9 (NGC\,6751, [WC]4, V$_{\rm pec}=-62.5$ km s$^{-1}$), PN G096.3+02.3 (K\,3-61, [WC]4-5, V$_{\rm pec}=-79.1$ km s$^{-1}$), PN G146.7+07.6 (M\,4-18, [WC]11, V$_{\rm pec}=-160.8$ km s$^{-1}$), and PN G336.2$-$06.9 (PC\,14, [WO]4, V$_{\rm pec}=-69.2$ km s$^{-1}$). Finding Type III PNe among the [WR]PNe is peculiar because in \S 2.1 we have shown that [WR]PNe are in general closer to the galactic plane than normal PNe, which would indicate that [WR]PNe are young objects. Among the few objects with {\it z} larger than 800 pc (see Fig. 1) only PN G146.7+07.6 (with {\it z}=1.19 kpc) belongs to the Type III group.
One plausible explanation for these unexpected Type III [WR]PNe could be errors in the adopted distances. For the objects with V$_{\rm pec}$ slightly above 60 km s$^{-1}$, the large V$_{\rm pec}$ error bars can move them into Type II's. However, at least three objects would remain as genuine Type III: PN G002.4+05.8, PN G146.7+07.6, and PN G336.2$-$06.9. The first one, NGC\,6369, is a well-known extended nearby PN. Stanghellini \& Haywood (2010) attribute to it a heliocentric distance of 1.089 kpc, very similar to the distance of 0.92 kpc given by Zhang (1995) (with Zhang's distance we derive V$_{\rm pec}$=$-$89.1 km s$^{-1}$); therefore, errors in the distance do not seem to be an explanation. Possibly the central star of NGC\,6369 is an old low-mass star belonging to the thick disk, despite its [WR] condition and its low height above the galactic plane.
For the case of PN G146.7+07.6 (M\,4-18) Stanghellini \& Haywood (2010) locate it at a heliocentric distance of 8.96 kpc and at a galactocentric distance of 16.17 kpc, therefore we derived a peculiar velocity of $-$160.8 km s$^{-1}$. If the heliocentric distance given by Zhang (1995) of 6.85 kpc is adopted, the galactocentric distance is 14.2 kpc, and the peculiar velocity results to be $-$91.5 km s$^{-1}$, that is still high, classifying M\,4-18 as a Type III PN anyway. The distance given by Zhang (1995) is similar to the one derived independently by De Marco \& Crowther (1999). In order to choose the more appropriate distance we can calculate the H$\beta$ luminosity of this object, by assuming the observed log(F(H$\beta$))= $-$11.89 (Acker et al. 1992) and the different heliocentric distances. When the distance by Stanghellini \& Haywood is used, we get log (L(H$\beta$)/L$_\odot$) = 0.50 that is slightly high for a PN (see Fig. 5a by Pe\~na et al. 2007), while using the distance by Zhang (1995) produces a log(L(H$\beta$)/L$_\odot$) = 0.27, typical of a not too-bright PN. Thus, we consider that the distance by Zhang (1995) is more adequate for this [WR]PN, although it is still a Type III object and its central star would be a low mass object belonging to the thick disk.
For the case of PN G336.2$-$06.9 (PC\,14), the Stanghellini \& Haywood heliocentric distance is 6.147 kpc, with a galactocentric distance of 3.451 kpc. This provides V$_{\rm pec}$= $-$69.2 km s$^{-1}$. If the distance of 5.11 kpc by Zhang (1995) is used, V$_{\rm pec}$ results to be $-$51.3 km s$^{-1}$ and the object would be a Type II.
Therefore in our [WR]PN sample, only NGC\,6369 and M\,4-18 would be bonafide Type III PNe.
Regarding non-[WR]PNe, we found three normal and seven WLPNe among Type III PNe which is an adequate number for normal objects and a little too high for WLPNe. As our numbers are small, in this case our results are only tentative.
\subsection{The bulge objects}
Apart from the two possible bulge objects discussed in \S 4.1, there are several PNe in our sample, for which we have determined heliocentric radial velocities, that have been classified as bulge PNe by some authors (G\'orny et al. 2009; Cavichia et al. 2011). As these objects do not rotate with the inner disk, it does not make much sense to calculate their circular velocities; we have done it for completeness. Although the number of bulge objects in our sample is very small, it is interesting to analyze their heliocentric velocities in comparison with other samples of bulge PNe. We have PN G004.9+04.9 (M\,1-25) with V$_{\rm hel} = 10.3$ km s$^{-1}$, PN G006.8+04.1 (M\,3-15) with V$_{\rm hel}= 96.9$ km s$^{-1}$, PN G009.8$-$04.6 (H\,1-67) with V$_{\rm hel}= -15.0$ km s$^{-1}$, PN G355.2$-$02.5 (H\,1-29) with V$_{\rm hel}= -27.7$ km s$^{-1}$, PN G355.9$-$04.2 (M\,1-30) with V$_{\rm hel}= -117.0$ km s$^{-1}$, and PN G356.2$-$04.4 (Cn\,2-1) with V$_{\rm hel}=-169.0$ km s$^{-1}$. The heliocentric velocities of these objects are mainly positive when the galactic longitude $l$ is between 0$^{\rm o}$ and 10$^{\rm o}$, and negative when $l$ is between 0$^{\rm o}$ and $-$10$^{\rm o}$.
In this sense they behave similarly to the sample discussed by Durand et al. (1998, see their Fig. 6), who have interpreted this behavior as rotation of the bulge.
\subsection{ Special cases}
Among our PNe there are two objects with a very large V$_{\rm pec}$.
One is PN G108.4-76.1 (BoBn\,1), a very interesting PN with a strange chemical composition as it shows Ne/O larger than 1 (Pe\~na et al. 1993; Otsuka et al. 2010), identified as belonging to the galactic halo at a very large height above the plane (17 kpc). Zijlstra et al. (2006) have argued that this object could be part of the Sagittarius Dwarf Spheroidal galaxy, which is supported by the PN galactic position and its large V$_{\rm pec}$.
Another PN with an extremely high V$_{\rm pec}$ ($-$412.0 km s$^{-1}$) is PN G111.8$-$02.8 (Hb\,12), but this could be due to a possibly erroneous heliocentric distance of 14.25 kpc (galactocentric distance 18.47 kpc) given by Stanghellini \& Haywood (2010). The heliocentric distance of 8.11 kpc given by Zhang (1995) locates the object much closer, and its galactocentric distance results to be 13.34 kpc. With this, we derive a much more reasonable V$_{\rm pec}$ of $-$32.9 km s$^{-1}$. In addition, if we calculate the total H$\beta$ luminosity of this object (taking the apparent H$\beta$ flux from Acker et al. 1992, log(F(H$\beta$)) = $-$10.98) for a distance of 14.25 kpc, we get log(L(H$\beta$)/L$_\odot$) = 1.82, which is too high for a PN and more typical of a compact HII region (see again Fig. 5a by Pe\~na et al. 2007). Assuming the distance by Zhang (1995), we obtain log(L(H$\beta$)/L$_\odot$) = 1.33, which is an adequate H$\beta$ luminosity for a bright PN. Therefore we consider that the distance by Stanghellini \& Haywood (2010) might be in error.
\setcounter{table}{4}
\begin{table}[tH]
\begin{center}
\caption{Distribution of objects in Peimbert's types}
\label{types}
\begin{tabular}{lcccc}
\toprule
&Ty I& Ty II & Ty III & bulge \\
\midrule
{[WR]}PN$^{(1)}$ & 10 & 23 & 5 & 5 \\
WLPN & 0 & 7 & 7 & 2\\
PN & 2 & 6 &3 & -- \\
\bottomrule
\multicolumn{5}{l}{\small {$^{(1)}$ [WR]PNe and [WR]-PG1159 included.}}\\
\multicolumn{5}{l}{\small {$^{(1)}$ The 9 missing [WR]PNe do not have velocities }}\\
\multicolumn{5}{l}{\small \hskip 0.4cm or distances.}\\
\end{tabular}
\end{center}
\end{table}
\section{Conclusions}
-- From high-quality, high-spectral-resolution spectra we have determined the heliocentric radial velocities of a sample of [WR]PNe (43 objects, representing 42\% of the total known sample), a sample of WLPNe, and PNe with normal central stars. These data, together with distances obtained from the literature, allowed us to determine the galactic kinematics of the objects. The circular and peculiar radial velocities were computed. We found that these quantities are largely affected by errors in the assumed distances; for some objects where too large V$_{\rm pec}$ values are obtained, this can be attributed to poorly determined distances.
-- We have found that most of the analyzed [WR]PNe are located in the galactic disk and they are more concentrated towards the thin disk (height smaller than about 400 pc from the disk) than the WLPNe and PNe with normal central star, most of which are distributed up to 800 pc from the disk.
-- According to their chemical composition and peculiar velocities, we have classified the studied sample into Peimbert's types. Nine [WR]PNe (21\%) and one [WC]-PG1159 PN have been classified as Peimbert Type I (they are N- and He-rich) and would have progenitor stars with initial masses larger than 2 M$_\odot$; therefore they would be young objects (ages between 0.1 and 1 Gyr). This is confirmed by their kinematics, as all of them (except two bulge objects) show V$_{\rm pec}$ smaller than 50 km s$^{-1}$.
-- The [WR]PNe with V$_{\rm pec} \leq$ 60 km s$^{-1}$ and no particular He or N enrichment, amounting to 23 objects (51\% of the sample), are catalogued as Peimbert Type II. They belong to the disk population and would be of intermediate age. Interestingly, there are five [WR]PNe with V$_{\rm pec}$ larger than 60 km s$^{-1}$ which are classified as Type III PNe. Although some of them could be of Type II considering the uncertainties in their distances, two objects appear as genuine Type III [WR]PNe. With this classification, these objects would belong to the thick disk, with an old low-mass central star, indicating that the [WR] phenomenon can also occur in less massive and old progenitors.
-- In our sample of 16 WLPNe, none is a Peimbert Type I, a result also found by other authors in the analysis of larger samples (e.g., Fogel et al. 2003). Our WLPNe are distributed in 7 Type II, 7 Type III, and 2 bulge objects. With these characteristics, WLPNe are objects belonging to the intermediate and old disk population, with progenitors of low initial masses. It is thus corroborated that [WR]PNe and WLPNe are unrelated objects.
-- We have obtained the radial velocity of the [WR] PN G332.9$-$09.9 (He\,3-1333, whose central star, CPD-56 8032, has a [WC]10 spectral type), which had not been reported previously. Its heliocentric distance is not reported in the literature, therefore its peculiar velocity cannot be calculated.
\acknowledgments
J.S. R.-G. acknowledges a scholarship from CONACyT-M\'exico. J. G.-R. received financial support from the Spanish Ministerio de Educaci\'on y Ciencia (MEC), under project AYA2007-63030. This work received financial support from DGAPA-UNAM (PAPIIT project IN105511).
\section{Introduction}
The non-homogeneity of faults, their complex formation mechanisms, and their important role in oil and gas development make the prediction of faulted reservoirs a long-standing subject in hydrocarbon exploration, and fault detection is its key step. The main methods of fault detection include well-log methods and seismic methods; our study focuses on fault detection from seismic data.
Before deep learning was widely used, researchers relied on traditional geological methods for fault detection. The first to be applied was the theory of anisotropy: Crampin put forward many new insights on fault anisotropy \cite{1}; Rüger proposed an approximation formula and verified its good adaptability in weakly anisotropic media \cite{2}, and proposed AVO (Amplitude Variation with Offset) gradient inversion to calculate fault parameters \cite{3}. However, anisotropy, as a basic property of faults, is easily contaminated by noise in seismic data, so the detection accuracy of these methods is very low. Bahorich proposed coherence technology for the interpretation and detection of seismic faults \cite{4}, which highlights the discontinuity of faults by computing cross-correlation coefficients between seismic traces, but it performs poorly on data with strong coherent noise, especially for small faults. Subsequently, Marfurt et al. proposed the second-generation coherence technology, which improved noise robustness at the cost of resolution \cite{5}. The third-generation coherence technology provides high-resolution detection in noisy data by computing the eigenvalues of a covariance matrix, but it does not work well in some special geological settings \cite{6} (such as the flanks of salt domes).
Pedersen et al. applied the ant colony algorithm to fault detection \cite{7}; ant tracking highlights fault lines and filters out irrelevant noise and non-fault responses. D. C. Sun et al. combined spectral decomposition with the ant colony algorithm \cite{8}, and A. Aqrawi used an improved 3D Sobel filter together with the ant colony algorithm to detect small faults \cite{9}. However, ant-colony results on 3D seismic data are often disorganized, both on horizon slices and on sections, and the delineation of deep faults does not match the distribution of the seismic event axes, leading to poor performance in practice. There are several other fault detection algorithms. Saito and Hayashi used frequency-domain Stoneley waves to detect faults \cite{10}; F. Admasu et al. proposed an active contour algorithm for semi-automatic fault tracing \cite{11}; Priezzhev and Scollard proposed detecting faults through orthogonal decomposition of seismic data \cite{12}; Hale detected faults in three steps: computing 3D fault-likelihood images, extracting fault surfaces, and estimating fault throws \cite{13}; W. Zhen et al. proposed an interactive fault detection algorithm based on the Hough transform and vector tracking \cite{14}; Wu and Fomel proposed extracting optimal surfaces from maximal fault attributes and using these surfaces in a voting scheme to detect faults \cite{15}. Nevertheless, neither traditional geological methods nor the digital image processing techniques built on them can cope with the high noise level and strong interference in seismic data.
As early as 2005, K. M. Tingdahl and M. de Rooij implemented an algorithm that detects faults using multiple seismic attributes and a BP neural network \cite{16}, but its performance was limited by the neural network theory and hardware of the time. With the development of deep learning in recent years, several studies have introduced convolutional neural networks into seismic fault detection \cite{17,18,19,20}. These methods regard fault detection as an image segmentation task from computer vision, classifying seismic image voxels into two categories (fault and non-fault); doing so, however, discards the 3D spatial morphology of the fault, so the segmented faults are discontinuous. Guitton proposed fault segmentation with 3D convolutions \cite{21}, but the stacked network structure cannot effectively extract the spatial information of seismic data. Moreover, annotating 3D data is extremely laborious and requires expert experience, so Guitton used the results of the algorithm of Hale \cite{13} as training labels; this may cause the model to merely reproduce the detection pattern of \cite{13}, and its performance is bounded by the quality of those labels. Wu et al. trained a 3D U-Net model on synthetic seismic data \cite{22,23}. Synthetic data avoids the problems of manual labeling, but in many cases it does not generalize to real seismic data. We verified the work of Wu et al.: on the real data we use, the model trained on synthetic data still produces a large amount of noise in its predictions (see Figures \ref{fig5}, \ref{fig7}), which makes it difficult to apply in petroleum exploration, where geological structures are highly variable.
In summary, 3D fault segmentation still faces two major problems. First, complex geological conditions and acquisition equipment give the raw seismic data a low signal-to-noise ratio, so the detection results of traditional geological methods and of machine learning methods contain a large amount of noise. Second, 3D seismic data cannot be labeled directly; the workload of labeling 2D slices and then assembling 3D labels is huge and requires expert experience, and wrong or missing labels degrade the segmentation performance of the model.
The method proposed in this paper can train a model that accurately segments 3D seismic data from only a small number of labeled 2D seismic slices.
In general, we improve the standard U-Net by adding an attention module that can be actively supervised during training, and we propose two new loss functions, so that a 3D fault segmentation network can be trained using only a few 2D images.
We draw on ideas from references \cite{24,25}. Reference \cite{24} uses an attention gate for medical image segmentation, which makes the model's attention coefficients focus on the local regions of interest during training, thereby filtering noise effectively; in the work of \cite{22}, a 3D medical segmentation model is trained with sparsely annotated data. However, seismic data differ from medical data. In medical images, the pixels of the target region cluster into a 2D area, whereas the fault pixels in a seismic image are arranged along lines and are locally one-dimensional, which makes faults difficult for an attention mechanism to capture. Moreover, seismic data are more complex and contain more noise, the proportion between fault and non-fault regions is severely imbalanced, and fault voxels make up only a tiny fraction of the volume, which makes it harder to propagate the effective gradient (the gradient contributed by fault voxels) when training on sparse data.
The contribution of our work can be summarized as follows:
(1) Based on the characteristics of seismic faults, we propose an attention module that can be actively supervised during training (Active Attention Module, AAM) without requiring additional annotations. It makes the model pay more attention to the fault region, thereby suppressing noise effectively. In addition, this module can be treated as intermediate supervision, providing more effective gradients for training.
(2) We propose new binary cross-entropy and smooth $L_1$ loss functions for seismic fault segmentation ($\lambda$-BCE and $\lambda$-smooth $L_1$) that allow the 3D convolutional neural network to be trained with only a small amount of labeled real 2D data.
(3) This allows geologists and oil and gas prospectors to label only a small fraction of the 2D slices (as little as 3.3\% of the full labeling) in the seismic data to obtain an accurate 3D fault segmentation model for all seismic data of similar geological type.
\section{Approach}
\subsection{Active Attention Module}
The AAM is embedded in the 3D U-Net model to suppress the large amount of noise in seismic data, make the model focus on the fault region, and provide more effective gradients for model training.
This module obtains the linear projections $\omega_l F_l$ and $\omega_h F_h$ from the low-level detail features $F_l$ and the high-level semantic features $F_h$ through $1\times1$ convolutions, then combines them into a single channel and normalizes with a sigmoid. The whole process is expressed as formula \ref{attetion},
\begin{equation}
\Theta=\mbox{Sigmoid}(\omega_s^{T}\,\mbox{ReLU}(\omega_lF_l+\omega_hF_h))\label{attetion}
\end{equation}
where $\omega_l$ and $\omega_h$ are differentiable, so in formula \ref{attetion} the term $\omega_lF_l+\omega_hF_h$ (denoted as $D$) can be interpreted as the difference between the low-level and the high-level features. As the network deepens, the features extracted by the deeper layers tend more and more toward the ground truth \cite{25}; therefore $D$ carries the signal response of the ground truth. We believe this is the main mechanism of the Attention Gate in reference \cite{24}. The weighted and normalized $D$ gives the Attention Map $\Theta$ that we need.
In U-Net, one of the reasons to concatenate $F_l$ and $F_h$ is to let the segmentation result absorb more detailed cues from $F_l$, but $F_l$ also includes a lot of noise. Multiplying $F_l$ by $\Theta$ before concatenation introduces the details around the suspected ground-truth region while suppressing noise, so that the model pays more attention to the fault region. However, in seismic fault segmentation it is very difficult for the Attention Map to emerge automatically during training, because the ground-truth fault pixels are arranged along lines: after repeated convolutions, it is hard for $D$ to capture the difference between high- and low-level fault features. We therefore propose a method that generates an Attention Map from the label data to supervise $\Theta$.
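A minimal PyTorch sketch of this gating is given below (the class name and channel arguments are hypothetical, not the exact implementation):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActiveAttentionModule(nn.Module):
    """Attention gate of Eq. 1: Theta = Sigmoid(w_s ReLU(w_l F_l + w_h F_h))."""
    def __init__(self, ch_low, ch_high, ch_mid):
        super().__init__()
        self.w_l = nn.Conv3d(ch_low, ch_mid, kernel_size=1)   # projects F_l
        self.w_h = nn.Conv3d(ch_high, ch_mid, kernel_size=1)  # projects F_h
        self.w_s = nn.Conv3d(ch_mid, 1, kernel_size=1)        # single channel

    def forward(self, f_low, f_high):
        # bring the high-level features to the low-level resolution if needed
        if f_high.shape[2:] != f_low.shape[2:]:
            f_high = F.interpolate(f_high, size=f_low.shape[2:],
                                   mode='trilinear', align_corners=False)
        d = torch.relu(self.w_l(f_low) + self.w_h(f_high))
        theta = torch.sigmoid(self.w_s(d))  # Attention Map, supervised by the heatmap
        return f_low * theta, theta         # gated F_l for concatenation, Theta for the loss
\end{verbatim}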
We hope that the Attention Map extracted by the attention mechanism effectively suppresses the non-fault-region features $F_l^r$ in $F_l$ while retaining the fault-region features $F_l^t$; the idealized Attention Map $\Theta$ is then expressed as,
\begin{equation}
\begin{split}
\lim\limits_{\mbox{pos}(f_{l,i}) \to \mbox{pos}(F_l^r)}\theta_i=0\\
\lim\limits_{\mbox{pos}(f_{l,i}) \to \mbox{pos}(F_l^t)}\theta_i=1
\end{split}
\end{equation}
where $f_{l,i}$ denotes a single feature vector in the feature region, $\mbox{pos}(x)$ denotes the coordinates (or coordinate clusters) of a feature in Euclidean space, and $\theta_i\in\Theta$; that is, the weight response rises from 0 toward 1 as the Euclidean distance to the fault region decreases. We use a Gaussian function to model this trend. Assuming $\mbox{pos}(\theta^t)\in\{\mbox{pos}(F_l^t)\}$, the process can be expressed by formula \ref{att_fumula},
\begin{equation}
\Theta(\theta^t)=\mbox{exp}\left(-\frac{\Arrowvert\mbox{pos}(\theta^t)- \textbf{\mbox{x}}_{w,h,d}\Arrowvert^2_2}{\sigma^2}\right)\label{att_fumula}
\end{equation}
In the labeled data, all the variables in formula \ref{att_fumula} are known; the generated heatmap is shown in Figure \ref{fig1}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{1.png}
\centering\caption{(a) shows the trend of $\theta_i$ around $\mbox{pos}(\theta^t)$; (b) shows the heatmap generated when each pixel in the ground truth is replaced by the response pattern in (a).}
\label{fig1}
\end{figure}
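A possible implementation of this heatmap generation, assuming a binary 3D fault label and a hypothetical width parameter $\sigma$, is:
\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def attention_heatmap(label, sigma=3.0):
    """Supervision heatmap of Eq. 3 from a binary fault label.

    Each voxel receives exp(-d^2 / sigma^2), where d is its Euclidean
    distance to the nearest fault voxel (fault voxels themselves get 1).
    """
    # distance_transform_edt measures the distance to the nearest zero
    # voxel, so invert the label: fault voxels -> 0, background -> 1
    dist = distance_transform_edt(1 - label.astype(np.uint8))
    return np.exp(-(dist ** 2) / (sigma ** 2))
\end{verbatim}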
We use the generated heatmap to supervise $\Theta$ through the smooth $L_1$ loss \cite{26},
\begin{equation}
\mathcal{L}_{sL_1}(\theta_i,\theta_i^{gt})=\sum_{i\in\{0,1,2...whd\}}\mbox{smooth}_{L_1}(\theta_i-\theta_i^{gt})
\end{equation}
where,
\begin{equation}
\mbox{smooth}_{L_1}(x)=\begin{cases}
0.5x^2 \ &\mbox{if}|x|<1\\
|x|-0.5 \ &\mbox{otherwise}
\end{cases}
\end{equation}
The reason for using the smooth $L_1$ loss is that, at the initial stage of training, when the model cannot yet extract enough features, a large difference between the predicted value and the ground truth would otherwise produce a high gradient and make training unstable, while in later stages, when the difference between prediction and ground truth is very small, it still provides a stable gradient.
The overall structure of the model is shown in Figure \ref{fig2}.
\begin{figure*}[ht]
\includegraphics[scale=0.25]{2.png}
\centering\caption{The model adds an attention module to the basic U-Net framework to suppress the noise introduced when fusing features. Active supervision is used in the attention module to ensure that it extracts the effective region near the fault. The labels in the figure are sparse; the detailed procedure is described in the next section.}
\label{fig2}
\end{figure*}
In subsequent experiments, we found that this module not only suppresses noise but can also be regarded as an intermediate supervision mechanism that provides more effective gradients for the model.
It effectively prevents the appearance of numerous holes in the segmentation result when the labels are very sparse (Figure \ref{fig7}), and on data with little noise (the synthetic data, Table \ref{t1}) it significantly improves the quantitative metrics of the model.
\subsection{Learning 3D segmentation from few 2D labeled seismic data slices}
3D seismic data require a large amount of accurate labeling. Labeling seismic data is difficult and requires expert experience, so the cost is very high; moreover, the process is subjective, and wrong or missing labels can mislead the backward pass. In this paper, a method for learning 3D segmentation from a small amount of 2D data is proposed, and its effectiveness is demonstrated both theoretically and experimentally.
U-Net can be divided into two parts, the backbone and the prediction layer. The backbone extracts features; the prediction layer is a single convolution layer. Let $\Gamma$ denote the convolution kernel of this layer; its shape is $(C_1,k,k,k,C_0)$, where $C_1$ is the number of channels of the preceding layer, $k=1$ is the kernel size, and $C_0=1$ is the number of kernels, so $C_1\times k\times k\times k\times C_0=C_1$ and $\Gamma$ is a vector of length $C_1$. As shown in Figure \ref{fig3} (drawn in 2D for convenience of presentation), the last feature map shares the single set of convolution weights $\Gamma$, and $\Gamma$ slides over the last feature map to obtain the final prediction, which can be expressed as formula \ref{f6}.
\begin{equation}
prediction = \mbox{sigmoid}(\{\sum_{l=1}^{C_1}\gamma_la^1_l,\sum_{l=1}^{C_1}\gamma_la^2_l,\ldots,\sum_{l=1}^{C_1}\gamma_la^{whd}_l\})\label{f6}
\end{equation}
where $a_l^i$ represents the value of each element of the last feature map and $\gamma_l\in\Gamma$. The label in Figure \ref{fig3} is sparse, i.e., only the red part is labeled. Our method computes, in the backward pass, only the gradient caused by the labeled region. Since the last feature map shares the convolution weights $\Gamma$, excluding the unlabeled voxels still yields an effective gradient in the backward pass. The main procedure is as follows.
Let $S_p$ be the number of positive voxel samples in the ground truth and $S_f$ the number of negative samples. Denote $\mbox{sigmoid}(\sum_{l=1}^{C_1}\gamma_la_l^i)$ as $x_i$ and the ground truth as $y_i$, and compute the cost with the binary cross-entropy loss,
\begin{equation}
\mathcal{L}_{\mbox{bce}}(x_i,y_i)=-\sum_{i\in\{0,1,2,...whd\}}\left[y_i\mbox{log}x_i+(1-y_i)\mbox{log}(1-x_i)\right]
\end{equation}
Then the gradient generated by each voxel is $\eta\frac{\partial\mathcal{L}_{bce}}{\partial x_i}$, where $\eta$ is the learning rate. We now compute a class-balancing weight from the numbers of positive and negative voxel samples in the label, and the gradient propagated to the next layer is expressed as \ref{f8}.
\begin{equation}
grad = \frac{\eta}{S_p+S_f}\sum_{i\in\{0,1,2,...whd\}}\lambda_i\frac{\partial\mathcal{L}_{\mbox{bce}}}{\partial x_i}\label{f8}
\end{equation}
where,
\begin{equation}
\lambda_i=\begin{cases}
\frac{S_f}{S_p}\ &\mbox{if} \ Positive\\
\ 1 \ &\mbox{if} \ Negative\\
\ 0 \ &\mbox{if} \ Nonlabelled
\end{cases}
\end{equation}
$\lambda_i$ is the backward gradient coefficient, so it is equivalent to acting on the loss function, which yields the $\lambda$-BCE loss function.
\begin{equation}
\mathcal{L}_{\lambda-\mbox{bce}}(x_i,y_i)= -\sum_{i\in\{0,1,2,...whd\}}\lambda_i\left[y_i\mbox{log}x_i+(1-y_i)\mbox{log}(1-x_i)\right]
\end{equation}
In the same way, we get $\lambda$-smooth $L_1$ loss function.
\begin{equation}
\mathcal{L}_{\lambda- sL_1}(\theta_i,\theta_i^{gt})= \sum_{i\in\{0,1,2...whd\}}\lambda_i\,\mbox{smooth}_{L_1}(\theta_i-\theta_i^{gt})
\end{equation}
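A masked PyTorch implementation of the two losses might look as follows (a sketch under the assumption that \texttt{mask} is 1 on labeled slices and 0 elsewhere; the function names are ours):
\begin{verbatim}
import torch
import torch.nn.functional as F

def lambda_weights(label, mask):
    """Per-voxel lambda of Eq. 9: S_f/S_p for positives, 1 for
    negatives, 0 for unlabeled voxels (mask == 0)."""
    s_p = (label * mask).sum().clamp(min=1.0)          # positive samples S_p
    s_f = ((1.0 - label) * mask).sum().clamp(min=1.0)  # negative samples S_f
    lam = torch.where(label > 0.5, s_f / s_p, torch.ones_like(label))
    return lam * mask                                  # zero out unlabeled voxels

def lambda_bce(pred, label, mask):
    lam = lambda_weights(label, mask)
    bce = F.binary_cross_entropy(pred, label, reduction='none')
    return (lam * bce).sum() / mask.sum().clamp(min=1.0)

def lambda_smooth_l1(theta, heatmap, label, mask):
    lam = lambda_weights(label, mask)
    sl1 = F.smooth_l1_loss(theta, heatmap, reduction='none')
    return (lam * sl1).sum() / mask.sum().clamp(min=1.0)
\end{verbatim}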
\begin{figure}[ht]
\centering \includegraphics[scale=0.2]{3.png}
\caption{The elements on the last feature map share a set of weights $\Gamma$, which allows effective gradients to be obtained by training with only the labeled voxels in the label.}
\label{fig3}
\end{figure}
In practice, we sample 2D slices at equal intervals along the iline and xline directions of the seismic volume and label them, then assemble the labeled 2D slices into a grid. Finally, the seismic volume is divided into $64\times64\times64$ tensors, and Adam is used for training \cite{27}. A sketch of the resulting supervision mask is given below.
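The sparse supervision mask can be built directly from the sampling interval; one simple sketch (the axis ordering is an assumption) is:
\begin{verbatim}
import numpy as np

def slice_mask(shape, iline_step=30):
    """Mark the labeled 2D slices inside a seismic volume.

    Here axis 0 is assumed to be the iline direction; only voxels on the
    sampled iline sections contribute to the loss (lambda_i = 0 elsewhere).
    """
    mask = np.zeros(shape, dtype=np.float32)
    mask[::iline_step, :, :] = 1.0  # one labeled iline slice every `iline_step` frames
    return mask
\end{verbatim}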
\section{Experiment}
\subsection{Illustration of the experiment}
Our real data come from the Shengli Oilfield Branch of Sinopec and are used mainly for the qualitative experiments. We also use the synthetic data released by Wu \cite{22}, mainly for quantitative analysis: accurate labels are almost impossible to produce for real seismic data, which hampers quantitative evaluation of model performance, whereas the label information of the synthetic data is exactly correct.
According to the $\lambda$-BCE and $\lambda$-smooth $L_1$ loss functions, each sample can participate in training as long as at least one of its 2D cross-sections is labeled.
In order to find the most efficient labeling strategy, and thus save labeling cost and improve the efficiency of geological professionals as much as possible, we compared six labeling modes, as shown in Figure \ref{fig4}.
\begin{figure}[htb]
\includegraphics[scale=0.23]{4.png}
\centering\caption{Modes A, B, C, and D label only iline slices; Modes E and F label both crossline and iline slices. In the qualitative and ablation experiments we used these six labeling modes and compared them in order to find the most efficient one.}
\label{fig4}
\end{figure}
\subsection{Qualitative experiment}
We have data from two work areas; one is used as the training set and the other as the test set. The training set is annotated as shown in Figure 4 and divided into 5000 samples of size $64\times64\times64$. The experiments used two NVIDIA Tesla P100 16GB GPUs ($2\times16$ GB of memory); training ran for 35 epochs with a batch size of 32.
The segmentation effect of the model on the test set is shown in Figure \ref{fig5}.
\begin{figure*}[htb]
\centering\includegraphics[scale=0.332]{5.png}
\centering\caption{Qualitative segmentation results. For visualization, the original data use pseudo-color. Our method achieves high performance with few labels and segments the faults clearly and accurately.}
\label{fig5}
\end{figure*}
\subsubsection{The most efficient model training way}
Figure \ref{fig5} shows the strong performance of our method on real data, where it suppresses most of the noise. The experiments show that training with volume samples in which only one slice is labeled is enough to segment 3D seismic data, which validates the theory of the previous section.
We observe that in Modes B--F, adding labeled slices does not significantly improve segmentation performance. On the contrary, in Modes E and F, which are labeled in two directions, the segmentation is not ideal. Although Modes D and E both label only 6 slices, Mode D performs better than Mode E, and even better than Mode F with its 12 labeled slices (Figures \ref{fig5}, \ref{fig6}). The difference between E and D is that E annotates both iline and crossline, while D annotates only iline.
\begin{figure*}[htb]
\centering\includegraphics[scale=0.5]{6.png}
\centering\caption{Here (a) is iline, (b) is crossline, and (c) is tline. The segmentation of Mode D is clearly better than that of Mode E in all three directions, which shows that labeling only iline slices is the most efficient choice for segmenting real seismic data.}
\label{fig6}
\end{figure*}
The main reason for this phenomenon is that the strike of the faults is often perpendicular to the iline direction, so faults observed and marked on iline sections appear as straight lines and are easier to identify during labeling.
Observed from the crossline direction, the arrangement of the faults is often chaotic and hard to discern, which leads to many missed and wrong labels that, in turn, mislead backpropagation.
In addition, because the weights of a 3D convolution kernel are not symmetric, convolving the seismic data from different directions gives different results, and labeling slices in only one direction would cause severe overfitting in that direction.
Therefore, we rotate the data randomly during training, which lets the convolution kernels fully learn the spatial characteristics of the data; the effectiveness of random rotation is verified in the subsequent ablation experiments, and a sketch of this augmentation follows this paragraph.
This is also why the model obtained by labeling only one direction of the real data performs well.
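One simple way to implement this augmentation, assuming rotations by multiples of 90 degrees of the $64^3$ cubes, is:
\begin{verbatim}
import numpy as np

def random_rotate(volume, label, mask):
    """Rotate the cube by a random multiple of 90 degrees about a random
    pair of axes, so the asymmetric 3D kernels see fault planes in every
    orientation; volume, label and mask must be rotated consistently."""
    k = np.random.randint(4)
    axes = [(0, 1), (0, 2), (1, 2)][np.random.randint(3)]
    return (np.rot90(volume, k, axes).copy(),
            np.rot90(label, k, axes).copy(),
            np.rot90(mask, k, axes).copy())
\end{verbatim}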
The method of Wu et al. (training only on synthetic data) achieves advanced performance on some data sets \cite{22}, but it does not transfer well to our real data, where it shows a lot of messy noise. This confirms that it is difficult for a model trained only on synthetic data to detect faults accurately in real data.
Our preliminary conclusions on the most efficient procedure are as follows: (1) label only iline slices of the training data; (2) label at least one slice every 30 frames; (3) rotate the data randomly during training.
\subsubsection{Active Attention Module}
AAM has a powerful ability to deal with noise: it uses the generated heatmap to suppress the low-level noise introduced during feature fusion. In addition, AAM acts as an intermediate supervision mechanism that provides more effective gradients.
\begin{figure*}[htb]
\centering\includegraphics[scale=0.4]{7.png}
\centering\caption{Here (a) is tline, (b) is crossline, and (c) is iline. When training with Mode B (only 2 slices), the inference results obtained without AAM contain more noise, which shows AAM's excellent ability to suppress noise.
In addition, the segmentation obtained without AAM is very rough, with small holes in the segmented faults. We attribute this to the scarcity of effective gradients when few slices are used for training; AAM provides additional gradients.}
\label{fig7}
\end{figure*}
In Figure \ref{fig7}, the inference results of the model trained only on synthetic data contain a lot of noise, even classifying geological texture as faults, which again shows how difficult it is to transfer a model trained on synthetic data to real data.
The inference results without AAM are noisier. In addition, Figure \ref{fig7} shows that when we train with Mode B (only 2 slices), omitting AAM can leave many small holes in the inference results.
When we increase the labeled slices to 3, or further increase the weight of the fault class during training, this phenomenon may disappear, but not reliably; it never appeared in the model using AAM.
This is because AAM provides an intermediate supervision mechanism that allows the model to obtain more effective gradients during training; Wei et al. discussed the mechanism of intermediate supervision in reference \cite{25}.
\subsection{Quantitative experiment}
\subsubsection{Ablation experiment}
Because this experiment requires numerically accurate evaluation, it uses the synthetic dataset exclusively.
We split each $128\times128\times128$ volume released by Wu into 8 volumes of size $64\times64\times64$; in addition, to preserve the continuity of the data, we also downsample (resize) each original volume to $64\times64\times64$, giving $220\times(8+1)=1980$ samples, 300 of which are randomly sampled as the test set. We process the labels with the six modes shown in Figure 4 plus the original labeling (All Label), for a total of seven label modes in training.
In the experiments we found that randomly rotating the samples is a very effective data augmentation that is often neglected in 3D tasks, so we added it to the ablation. The ablation experiment therefore contains two variables: whether AAM is added, and whether the sample cube is rotated during training.
We use IOU (Intersection Over Union) as the performance evaluation metric, expressed by formula \ref{iou}.
\begin{equation}
IOU=\frac{TP}{FP+TP+FN}\label{iou}
\end{equation}
Here TP (True Positive) counts voxels classified as positive that are in fact positive; FP (False Positive) counts voxels classified as positive that are in fact negative; and FN (False Negative) counts voxels classified as negative that are in fact positive.
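For binary volumes the metric reduces to a few lines of Python:
\begin{verbatim}
import numpy as np

def iou(pred, truth, thresh=0.5):
    """IOU = TP / (TP + FP + FN) for a predicted probability volume
    and a binary ground-truth volume."""
    p = pred > thresh
    t = truth > 0.5
    inter = np.logical_and(p, t).sum()   # TP
    union = np.logical_or(p, t).sum()    # TP + FP + FN
    return inter / max(union, 1)
\end{verbatim}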
\begin{table*}[htb]
\caption{Ablation Experiment}
\label{t1}
\centering
\begin{tabular}{ccccccccc}
\hline
\textbf{\textit{AAM}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}Random\\ Rotation\end{tabular}}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}All Label\\ 64,64 slices\end{tabular}}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}Mode F\\ 6,6 slices\end{tabular}}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}Mode E\\ 3,3 slices\end{tabular}}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}Mode D\\ 6 slices\end{tabular}}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}Mode C\\ 3 slices\end{tabular}}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}Mode B\\ 2 slices\end{tabular}}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}Mode A\\ 1 Slice\end{tabular}}} \\ \hline
& & 69.72 & 63.04 & 65.59 & 60.48 & 55.10 & 37.44 & 33.60 \\
\textbf{$\surd$} & & 70.10 & 69.15 & 67.66 & 61.34 & 58.64 & 39.48 & 36.56 \\
& \textbf{$\surd$} & 71.43 & 70.22 & 68.19 & 66.88 & 64.67 & 64.17 & 59.59 \\
\textbf{$\surd$} & \textbf{$\surd$} & \textbf{72.18} & \textbf{71.69} & \textbf{70.01} & \textbf{70.92} & \textbf{69.65} & \textbf{68.83} & \textbf{64.41} \\ \hline
\end{tabular}%
\end{table*}
Figure \ref{fig8} shows the loss curves obtained by evaluating the model on the validation set every 200 steps during training.
Table \ref{t1} shows that AAM not only removes noise visibly in the qualitative experiments but also significantly improves the quantitative metrics. Although the synthetic data contain little noise, using AAM still improves the performance of the model.
It is worth noting that rotating the samples during training plays a decisive role in model performance; in particular, when samples are labeled in only one direction, omitting rotation degrades the model considerably.
In addition, there is no significant difference between the quantitative metrics of All Label and Modes B--F.
This is also reflected in Figure 5: a large additional labeling effort does not significantly improve the performance of the model and, in real scenarios, may even have the opposite effect (through wrong and missing labels).
\begin{figure}[htb]
\centering\includegraphics[scale=0.42]{8.png}
\centering\caption{The curves have been smoothed for easier reading. (a) shows the convergence of the models using AAM and random rotation of the samples: when more than one slice is labeled, the convergence of the model is similar across modes.
Table 1 likewise shows no significant difference between the quantitative results (IOU) of Modes B--F and All Label.
(b) shows the ablation experiment in Mode D (6 slices, iline only): the performance of the model without rotation quickly saturates and then gradually declines (overfitting), while using AAM makes the model converge faster and reach a higher ceiling.}
\label{fig8}
\end{figure}
In the quantitative experiments, labeling both directions of the sample also performed well. This is because the labels in the synthetic data are objective and accurate, so the large numbers of incorrect labels that arise when labeling the crosslines of real data are not an issue here.
We verified that, for seismic data from a given work area or instrument, labeling one iline every 30 frames is enough for the model to achieve very good segmentation performance.
Next, we use cross-validation to further confirm this conclusion.
\subsubsection{Cross validation}
This experiment uses K-fold cross-validation with K=5: the 1980 samples are divided evenly and at random into 5 sub-samples; a single sub-sample is held out as the validation set, and the other 4 sub-samples are used for training. The cross-validation is repeated 5 times so that each sub-sample is validated exactly once. The advantage of this method is that randomly drawn sub-samples are repeatedly used for both training and validation, and every sample is validated once. All training data use Mode B (2 slices labeled, i.e., one labeled slice every 30 frames).
The cross-validation experiment uses five metrics: Precision, Recall, IOU, Dice, and Hausdorff Distance.
Precision and Recall are common metrics in machine learning; IOU and Dice are metrics sensitive to the segmentation area; and Hausdorff Distance is a metric sensitive to the segmentation boundary. For further analysis of these five metrics, see reference \cite{28}.
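The fold construction can be sketched as follows (an illustrative snippet using scikit-learn; the index bookkeeping is hypothetical and stands in for our actual data loader):
\begin{verbatim}
import numpy as np
from sklearn.model_selection import KFold

sample_ids = np.arange(1980)   # indices of the 1980 sample cubes
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for k, (train_idx, val_idx) in enumerate(kf.split(sample_ids), 1):
    # 4/5 of the cubes (1584) train and 1/5 (396) validate, so every
    # cube is used for validation exactly once over the 5 folds.
    print("set %d: train=%d, val=%d" % (k, len(train_idx), len(val_idx)))
\end{verbatim}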
\begin{table}[htb]
\caption{Cross Validation for Two Slices Labeled}
\label{t2}
\centering
\begin{tabular}{cccccc}
\hline
& \textbf{\textit{Precision}} & \textbf{\textit{\ Recall\ }} & \textbf{\textit{\ IOU\ }} & \textbf{\textit{\ Dice\ }} & \textbf{\textit{Hausdorff}} \\ \hline
\textbf{\textit{set 1}} & 76.58 & 89.30 & 66.86 & 80.14 & 62.84 \\
\textbf{\textit{set 2}} & 76.07 & 88.47 & 65.97 & 79.50 & 66.43 \\
\textbf{\textit{set 3}} & 75.88 & 91.40 & 66.51 & 79.88 & 69.29 \\
\textbf{\textit{set 4}} & 75.74 & 89.17 & 65.74 & 79.33 & 65.80 \\
\textbf{\textit{set 5}} & 77.06 & 90.25 & 67.78 & 80.80 & 68.16 \\
\textbf{\textit{Mean}} & 76.27 & 89.72 & 66.57 & 79.93 & 66.50 \\ \hline
\end{tabular}%
\end{table}
The cross-validation results in Table \ref{t2} show that training with only two labeled slices per sample yields a very stable and usable model.
The high Recall and IOU indicate that the model detects almost all faults.
Precision is slightly lower than Recall because the fault annotations are very narrow while the detected faults are somewhat wider, which slightly inflates FP.
The width of the detected faults can be controlled by adjusting the $\lambda$ coefficient of the positive samples in the loss function.
The Hausdorff distance remaining stable in the 60-70 range indicates that the model processes boundaries and noise consistently across folds.
\section{Conclusion}
With our method, we have identified the most efficient labeling scheme: only one slice needs to be labeled every 30 frames. The experiments show that redundant labels do not significantly improve segmentation performance. Although we used only 3.3\% of the total labels, we still achieved state-of-the-art segmentation performance. This is a leap forward for fault detection in seismic data. This work enables deep learning models to be quickly migrated to seismic data acquired from different work areas, which greatly improves the efficiency of geologists and petroleum exploration workers. We will explore more efficient models or methods in future work.
\section*{Acknowledgments}
The authors are very indebted to the anonymous referees for their critical comments and suggestions for the improvement of this paper. This work was also supported by grants from the National Natural Science Foundation of China (Major Program, No.51991365).
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Chondrites, the most primitive known meteorites, are the witnesses to the birth of our
solar system.
Their parent bodies, which are asteroids, formed some 4.57 billion years ago in the region
roughly 2-3 AU from the Sun
and have suffered relatively little alteration since then (Wadhwa \& Russell 2000).
As such, they record conditions (chemistry, pressure, temperature) in the solar nebula.
Chondrites are the key to understanding our solar system's birth and, by extension, the
processes in protoplanetary disks where planets are forming today.
From a petrological standpoint, most chondrites are analogous to conglomerates of
igneous spheres.
Chondrites are remarkable for containing calcium-rich, aluminum-rich inclusions (CAIs),
the oldest solids formed in the solar system.
The formation of some CAIs has been dated very precisely: Pb-Pb dating of CAIs in the CV3
chondrite NWA 2364 reveals an age of $4568.67 \pm 0.17$ Myr (Bouvier \& Wadhwa 2009).
The majority of CAIs (all but the Fluffy Type A, and related objects) experienced
some degree of melting while floating freely in the solar nebula (Connolly et al.\ 2006).
Type B CAIs, in particular, were heated to high temperatures, followed by cooling over
periods of hours (at rates of $\approx \, 5 \, {\rm K} \, {\rm hr}^{-1}$; Stolper 1982;
Stolper \& Paque 1986).
Also found in abundance within chondrites are sub-millimeter- to millimeter-sized
(mostly ferromagnesian) igneous spheres, called chondrules.
Chondrules formed at most $2-3 \, {\rm Myr}$ after CAIs (Amelin et al. 2002;
Kita et al. 2005; Russell et al. 2006; Wadhwa et al. 2007; Connelly et al. 2008), as
melt droplets that were heated to high temperatures while they were independent, free-floating
objects in the early solar nebula, after which they cooled over periods of hours.
Chondrules and CAIs together indicate that the components of chondrites were exposed to
widespread, energetic, transient heating events.
Unraveling the unusual process that melted chondrules and CAIs is fundamental to understanding
the evolution of protoplanetary disks.
That the mechanism was intermittent and transient follows directly from the inferred timescales
for heating and cooling, which are hours or less.
The widespread nature of the mechanism is inferred from the fact that chondrules make up
as much as $\approx$ 80\% of the volume of ordinary chondrites (Grossman 1988).
The energies involved are staggering.
It is estimated that the current mass of chondrules in the asteroid belt is $\sim 10^{24} \, {\rm g}$
(Levy 1988).
The energy required to heat rock 1000 K and then melt it typically exceeds
$3 \times 10^{10} \, {\rm erg} \, {\rm g}^{-1}$, so at a minimum $3 \times 10^{34}$ ergs were
required to melt the existing chondrules.
For every gram of chondrules in the present-day asteroid belt, though, there were originally perhaps
300 grams, subsequently lost as the asteroid belt was depleted by orbital resonances (Weidenschilling 1977a;
Bottke et al.\ 2005; Weidenschilling et al. 2001).
As well, for every gram of rock in the solar nebula there was an associated 200 grams of gas
(Lodders 2003).
The energy to raise gas 1000 K in temperature exceeds $3 \times 10^{10} \, {\rm erg} \, {\rm g}^{-1}$
and thus far outweighs the energy needed to melt chondrules.
If chondrules were melted in the solar nebula and were thermally coupled to gas, the energy
required to heat the gas, along with all the chondrules inferred to have
originally been there, exceeded $2 \times 10^{39} \, {\rm erg}$.
All in all, a remarkable fraction ($> 1\%$) of the gravitational potential energy of the disk mass
from 2 to 3 AU was involved in heating the gas during chondrule formation.
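These order-of-magnitude figures are easily reproduced; the short script below (our illustration, using only the masses and specific energies quoted above) verifies them:
\begin{verbatim}
M_chon = 1e24           # g, present-day chondrule mass in the belt
e_melt = 3e10           # erg/g, to heat rock by 1000 K and melt it
print("melt existing chondrules: %.0e erg" % (M_chon * e_melt))  # 3e+34

M_orig = 300 * M_chon   # g, original chondrule mass before depletion
M_gas  = 200 * M_orig   # g, associated nebular gas (gas/rock ~ 200)
e_gas  = 3e10           # erg/g, to raise the gas 1000 K
E_tot  = M_gas * e_gas + M_orig * e_melt
print("heat gas + original chondrules: %.0e erg" % E_tot)        # 2e+39
\end{verbatim}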
In doing so, this mysterious mechanism did more than merely melt chondrules and CAIs: it left
clues to its nature in the manner in which chondrules and CAIs were melted, cooled and recrystallized.
The differences in igneous textures of chondrules and (type B) CAIs, combined with the elemental
fractionations within crystals, provide constraints on their thermal histories (Connolly et al. 2006),
and therefore provide constraints on the type of transient heating events that melted them.
Since Sorby (1877) first recognized the need to explain the igneous textures of chondrules,
numerous mechanisms for melting of chondrules and CAIs have been proposed.
Some of the more favored mechanisms include interaction with the early active Sun, through jets
(Liffman \& Brown 1995, 1996) or magnetic flares (Shu et al. 1996, 1997, 2001); melting
by lightning (Pilipp et al.\ 1998; Desch \& Cuzzi 2000); melting by planetesimal impacts
(Merrill 1920; Urey \& Craig 1953; Urey 1967; Sanders 1996; Lugmair \& Shukolyukov 2001); and also
passage of solids through nebular shocks (
Wood 1963; Hood \& Horanyi 1991, 1993; Hewins 1997; Connolly \& Love 1998; Hood 1998; Jones et al. 2000;
Iida et al.\ 2001; Desch \& Connolly 2002; Ciesla \& Hood 2002; Connolly \& Desch 2004; Desch et al.\ 2005;
Connolly et al. 2006; Miura \& Nakamoto 2006; Morris \& Desch 2010).
Of the proposed transient heating mechanisms, the two that have received the most attention
and which have been modeled in the most detail have been the nebular shock model and the so-called
``X-wind model" of Shu et al. (1996, 1997, 2001).
The nebular shock model hypothesizes that chondrule precursors were overtaken by shocks passing through
the gas of the solar nebula disk at about the present-day location of the chondrules, the asteroid
belt, 2-3 AU from the Sun.
The source of these shocks may have been X-ray flares, gravitational instabilities, or bow shocks
driven by planetesimals on eccentric orbits (see Desch et al.\ 2005).
Chondrules would be melted by the friction of the supersonic gas streaming past them, thermal exchange
with the shocked, compressed gas, as well as by absorption of radiation from other heated chondrules.
CAI precursors presumably formed in a hotter portion of the nebula but could have been melted by
shocks as well.
The X-wind model hypothesizes that solid material was transported to $< 0.1$ AU from the Sun, formed
chondrule and CAI precursors there, were melted, and then were transported back to 2-3 AU.
Additional constraints on processes acting at the birth of the solar system arise from isotopic
studies of meteorites, which reveal the presence of short-lived radionuclides (SLRs) in the solar nebula,
radioactive isotopes with half-lives of millions of years or less.
Although these isotopes have long since decayed, their one-time presence is inferred from excesses in
their decay products that correlate with the parent elements.
For example, the one-time presence of ${}^{26}{\rm Al}$, which decays to ${}^{26}{\rm Mg}$ with a
half-life of 0.71 Myr, is inferred by analyzing several minerals within a given inclusion, and finding
excesses in the ratio ${}^{26}{\rm Mg} / {}^{24}{\rm Mg}$ that correlate with the elemental ratio
${}^{27}{\rm Al} / {}^{24}{\rm Mg}$.
The excesses are due to ${}^{26}{\rm Al}$ decay, so the proportionality between the ratios above yields
the value of ${}^{26}{\rm Al} / {}^{27}{\rm Al}$ when the inclusion crystallized (achieved isotopic
closure).
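In standard notation, the measured ratios in cogenetic minerals define a fossil isochron,
\begin{equation}
\frac{{}^{26}{\rm Mg}}{{}^{24}{\rm Mg}} =
\left( \frac{{}^{26}{\rm Mg}}{{}^{24}{\rm Mg}} \right)_{0} +
\left( \frac{{}^{26}{\rm Al}}{{}^{27}{\rm Al}} \right)_{0} \,
\frac{{}^{27}{\rm Al}}{{}^{24}{\rm Mg}},
\end{equation}
whose slope is the ${}^{26}{\rm Al} / {}^{27}{\rm Al}$ ratio at the time of isotopic closure.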
In this way Lee et al.\ (1976) inferred an initial abundance
${}^{26}{\rm Al} / {}^{27}{\rm Al} \approx 5 \times 10^{-5}$ in CAIs from the carbonaceous chondrite
Allende.
Likewise several more SLRs have been inferred to exist, including such key isotopes as:
${}^{60}{\rm Fe}$ (Tachibana \& Huss 2003), with a half-life $t_{1/2} = 2.62$ Myr (Rugel et al.\ 2009);
${}^{10}{\rm Be}$ (McKeegan et al.\ 2000), with $t_{1/2} = 1.5$ Myr;
and ${}^{36}{\rm Cl}$ (Lin et al.\ 2005), with $t_{1/2} = 0.36$ Myr.
The origins of these SLRs are debated, as reviewed by Wadhwa et al.\ (2007).
The consensus model, at least for the majority of SLRs, hypothesizes an origin in a nearby core-collapse
supernova, either just before or during the formation of the solar system.
Supernova material may have been injected into the Sun's molecular cloud core (Cameron \& Truran 1977;
Vanhala \& Boss 2002), or may have been injected into the Sun's protoplanetary disk (Chevalier 2000;
Ouellette et al.\ 2005, 2007).
Indeed, the abundance of ${}^{60}{\rm Fe}$ is inconsistent with all models for its origin that do not
involve nearby, recent supernovae in the Sun's star-forming environment (Wadhwa et al.\ 2007).
On the other hand, ${}^{10}{\rm Be}$ is not formed significantly in supernovae, and must have an
origin distinct from ${}^{60}{\rm Fe}$; this interpretation is supported by the observed decoupling
of these two SLRs in meteorites (Marhas et al.\ 2002).
Desch et al.\ (2004) point out that the abundance of Galactic cosmic rays (GCRs) that are themselves
${}^{10}{\rm Be}$ nuclei is much higher than the ratio in the solar nebula, and that GCRs trapped
in the Sun's collapsing molecular cloud core will easily lead to the observed meteoritic abundance
of ${}^{10}{\rm Be}$.
We discuss this model in somewhat more detail in \S 6.3.
An alternative model for the origins of the SLRs is that they were created when energetic
($> \, {\rm MeV} \, {\rm nucleon}^{-1}$) ions collided with nuclei of rock-forming elements brought
$< 0.1$ AU from the Sun, in the context of the X-wind model (Gounelle et al.\ 2001).
If this were true, a supernova source for the SLRs would not be demanded (except for ${}^{60}{\rm Fe}$).
Unraveling the origins of the SLRs has obvious, fundamental implications for where the Sun formed.
The formation of chondrules and CAIs, and the origins of the SLRs, place important constraints on
the place of the Sun's origin, the presence of supernovae in its birth environment, and for processes in
its protoplanetary disk.
These issues apply more broadly to protostars forming today, and bear on the likelihood of Earth-forming
planets.
The X-wind model claims to explain chondrule and CAI formation, and the origins of the SLRs, in a
unified model.
The purpose of this paper is to critically examine the X-wind model.
In \S 2, we first review the meteoritic constraints on the formation of chondrules and CAIs and on the
origins of SLRs.
We include petrologic constraints arising from the CAI {\it Inti} found in the {\it STARDUST} sample
return (Zolensky et al.\ 2006).
The X-wind model itself is reviewed in \S 3.
In \S 4 we discuss internal inconsistencies within the X-wind model, and in \S 5 we compare its
predictions about chondrule and CAI formation and SLR production against the meteoritic constraints.
We discuss alternative hypotheses to the X-wind model in \S 6.
In \S 7 we draw conclusions about the viability of the X-wind model.
\section{Meteoritic Constraints}
Isotopic and petrologic studies of chondrules and CAIs have yielded a wealth of
constraints about how these particles formed, and then were melted.
Here we review the constraints that all models for the formation of chondrules and CAIs must satisfy.
For further descriptions of these constraints, the reader is referred to reviews
by Jones et al.\ (2000), Connolly \& Desch (2004), Desch et al.\ (2005), Connolly et al.\ (2006),
MacPherson (2003), and Ebel (2006).
We also review the meteoritic evidence for SLRs and their possible origins.
For further details, the reader is referred to Goswami \& Vanhala (2000), McKeegan \& Davis (2003),
Gounelle (2006), and Wadhwa et al.\ (2007).
\subsection{Chondrule Formation}
The most important constraints on chondrule formation come from experimental constraints
on their thermal histories.
Chondrules are the result of melting and recrystallization of precursor assemblages,
and constraints exist on the initial temperature of chondrule precursors, their peak
temperatures and the time spent at these temperatures, as well as the cooling rates
from the peak and during crystallization.
Here we highlight the main constraints only; the reader is referred to reviews on chondrule
thermal histories by Desch \& Connolly (2002), Connolly \& Desch (2004), Desch et al.\ (2005),
and Hewins et al.\ (2005), and references therein.
The initial temperatures of the chondrule precursors are generally held to be $< 650$ K,
the condensation temperature of S (at least in a solar-composition gas: Lodders 2003),
because chondrules contain primary S that was not lost during chondrule formation
(Rubin 1999; Jones et al. 2000; Tachibana \& Huss 2005; Zanda 2004).
Chondrules could not have spent more than a few hours at temperatures higher than
$650 - 1200$ K, depending on pressure (Hewins et al.\ 1996; Connolly \& Love 1998;
Rubin 1999; Jones et al.\ 2000; Lauretta et al.\ 2001; Tachibana \& Huss 2005).
The majority of chondrules experienced peak temperatures in the range of 1770 - 2120 K
for several seconds to minutes (Lofgren \& Lanier 1990; Radomsky \& Hewins 1990; Hewins
\& Connolly 1996; Lofgren 1996; Hewins 1997; Connolly et al.\ 1998; Connolly \& Love 1998;
Jones et al.\ 2000; Connolly \& Desch 2004; Hewins et al.\ 2005; Lauretta et al.\ 2006),
although the peak temperatures of barred olivine chondrules may have been as high as 2200 K
(Connolly et al.\ 1998).
Approximately 15\% of chondrules in ordinary chondrites contain relict grains (Jones 1996),
whose survival depends on the time a chondrule spends at the peak temperature
(Lofgren 1996; Connolly \& Desch 2004; Hewins et al.\ 2005).
On this basis, chondrules spent only tens of seconds to several minutes at their peak
temperatures (Connolly et al.\ 2006).
Likewise, retention of Na and S demands chondrules cooled from their peak temperatures at
rates $\sim 5000 \, {\rm K} \, {\rm hr}^{-1}$, or several hundred K in a few minutes
(Yu et al.\ 1995; Yu \& Hewins 1998).
The textures of different chondrule textural types are reproduced experimentally only
by certain cooling rates through the crystallization temperature range (roughly
1400 - 1800 K for common chondrule compositions).
In ordinary chondrites, 84\% of chondrules are porphyritic, with many euhedral crystals
(Gooding \& Keil 1981). These are reproduced by cooling rates
$\approx 5 - 1000 \, {\rm K} \, {\rm hr}^{-1}$ (Jones \& Lofgren 1993; Desch \& Connolly 2002).
Barred olivine textures, with many parallel laths of olivine, make up 4\% of ordinary
chondrite chondrules (Gooding \& Keil 1981), and require cooling rates
$\approx 250 - 5000 \, {\rm K} \, {\rm hr}^{-1}$ (see Desch \& Connolly 2002 and references therein).
Finally, radial pyroxene textures, with a few crystals radiating from a single nucleation
site, account for 8\% of ordinary chondrite chondrules (Gooding \& Keil 1981).
These textures probably require destruction of relict grains and production of a supercooled
liquid (Connolly et al.\ 2006), and can be reproduced by cooling rates in the range
$5 - 3000 \, {\rm K} \, {\rm hr}^{-1}$ (Lofgren \& Russell 1986).
Other chondrule textures exist, such as glassy chondrules that presumably cooled even faster
than these, but the salient point is that most chondrules were heated to temperatures
$> 1800 - 2000 \, {\rm K}$ for minutes only, cooling at $\sim 5000 \, {\rm K} \, {\rm hr}^{-1}$,
then cooled at slower rates $10^2 - 10^3 \, {\rm K} \, {\rm hr}^{-1}$ through their crystallization
temperatures $1400 - 1800 \, {\rm K}$.
Besides these constraints on chondrule thermal histories during the chondrule-forming event,
other constraints restrict the timing of chondrule formation.
Chondrules contain relict grains, including unmelted fragments of large particles.
The texture, chemistry, and oxygen isotopic composition of relict grains indicates that they are
fragments of chondrules, formed in previous generations.
This signifies that the event that melted chondrules occurred more than once, and that
individual chondrules may have experienced multiple heating events (Connolly et al.\ 2006;
Ruzicka et al.\ 2008; Kita et al.\ 2008; Connolly et al.\ 2009).
From Al-Mg systematics, most extant chondrules are known to have melted approximately 2 Myr after
CAIs formed (Russell et al.\ 1997; Galy et al.\ 2000; Tachibana et al.\ 2003; Bizzarro et al.\ 2004;
Russell et al.\ 2006).
These same data suggest timescales for chondrule formation of several Myr (Huss et al. 2001;
Tachibana et al. 2003; Wadhwa et al.\ 2007; Rudraswami et al.\ 2008; Hutcheon et al.\ 2009),
with 90\% formed between 1.5 and 2.8 Myr after CAIs (Villeneuve et al.\ 2009).
U-Pb systematics confirm these timescales (Amelin et al. 2002; Kita et al.\ 2005; Russell et al.\
2006; Connelly et al.\ 2008) and, not incidentally, indicate that the Al-Mg system is a valid
chronometer and that ${}^{26}{\rm Al}$ was homogeneously distributed in the solar nebula.
Finally, other constraints restrict the environment in which chondrules formed.
Chondrules almost certainly formed in the presence of dust that is to first order the
matrix grains in which the chondrules are sited.
Matrix in primitive carbonaceous chondrites contains forsterite grains that clearly
condensed from the gas and cooled at $\sim 10^{3} \, {\rm K} \, {\rm hr}^{-1}$ below 1300 K
(Scott \& Krot 2005).
The similarity in cooling rate suggests that these matrix grains formed in the chondrule-forming
events.
The cogenetic nature of matrix and chondrules is also strongly supported by the chondrule-matrix
chemical complementarity.
Relative to a solar composition and to CI chondrites, all chondrules and matrix are depleted
in volatiles, even moderate volatiles, and metal-silicate fractionation leads to variable amounts
of siderophile elements in chondrites; even the abundances of relatively refractory lithophiles
(e.g., Ti, Ca, Al, Si, Mg and Fe) can vary within chondrules and matrix.
However, the bulk abundances of refractory lithophiles in many chondrites are closer to solar
abundances than the abundances of chondrules or matrix alone, strongly implying that the
chondrules and matrix grains {\it within a given chondrite} formed in the same vicinity within
the solar nebula (Palme et al.\ 1993; Klerner \& Palme 2000; Scott \& Krott 2005; Ebel et al.\ 2008;
Hezel \& Palme 2008).
Hezel \& Palme (2008) analyzed the Ca/Al ratios in the matrix and chondrules of Allende and Y-86751, two
chondrites almost identical in bulk composition.
They found the Ca/Al ratio in the matrix of Allende to be sub-chondritic and the ratio in the
chondrules to be super-chondritic, with the exact opposite true in Y-86751.
Ca and Al would be difficult to redistribute on the parent body, strongly implying that the chondrules
and matrix grains within these two chondrites formed from the same batch of material with near-solar
composition; the two batches underwent slightly different degrees of fractionation of Ca and Al to form
one set of chondrules and matrix in Allende, and another set of chondrules and matrix in Y-86751.
The cogenetic nature of chondrules and matrix within a given chondrite means that the chondrite did not
form from very different reservoirs of material separated by time and place in the nebula, but in a
particular time and place in the solar nebula, from solar-composition material, ostensibly near where
chondrites originate today.
The density of chondrules in the chondrule forming region can be estimated as well.
Cuzzi \& Alexander (2006) have investigated the lack of volatile loss from chondrules, which
strongly implies high vapor pressures of volatiles in the chondrule forming region.
So that evaporated volatiles remained in the vicinity of chondrules, the volume of
gas per chondrule must not exceed $\sim 0.1 \, {\rm m}^{3}$ or, equivalently, the chondrule
density was $> 10 \, {\rm m}^{-3}$.
So that volatiles not diffuse away from the chondrule-forming region, the chondrule-forming
region must have been $> 10^2 - 10^3 \, {\rm km}$ in extent.
In addition, about 2.4\% of chondrules in ordinary chondrites are compound, stuck to another
chondrule while semi-molten (Wasson et al.\ 1995).
If chondrules had relative velocities $< 0.1 \, {\rm km} \, {\rm s}^{-1}$ (to avoid
shattering upon impact) and were sufficiently plastic to stick for $\sim 10^{4} - 10^{5} \, {\rm s}$,
then the number density of chondrules ($\approx 300 \, \mu{\rm m}$ in diameter) must have
been $\sim 0.1 - 1 \, {\rm m}^{-3}$ (Gooding \& Keil 1981), or
$\sim 10 \, {\rm m}^{-3}$ if the relative velocities were $\sim 10 \, {\rm cm} \, {\rm s}^{-1}$,
as implied by solar nebula turbulence models (Cuzzi \& Hogan 2003).
For chondrules with masses $\approx 3 \, \times \, 10^{-4} \, {\rm g}$, these number densities
imply a mass density of chondrules $\approx 3 \times 10^{-9} \, {\rm g} \, {\rm cm}^{-3}$,
larger than the nominal gas density, $\sim 10^{-9} \, {\rm g} \, {\rm cm}^{-3}$ (at 2-3 AU in a
disk with 10 times the mass of the minimum mass solar nebula of Weidenschilling 1977a), implying
that the solids-to-gas ratio was locally $\sim 300$ times greater than the canonical 1\%.
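The arithmetic is simple; a back-of-the-envelope sketch (our illustration, with the values quoted above):
\begin{verbatim}
m_chon  = 3e-4                   # g, mass of a ~300 micron chondrule
n_chon  = 10.0                   # m^-3, from compound-chondrule statistics
rho_ch  = n_chon * m_chon / 1e6  # g/cm^3 (1 m^3 = 1e6 cm^3)
rho_gas = 1e-9                   # g/cm^3, nominal gas density at 2-3 AU
print("chondrule mass density: %.0e g/cm^3" % rho_ch)            # 3e-09
print("solids/gas enhancement: %.0f" % (rho_ch / rho_gas / 0.01))  # 300
\end{verbatim}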
The enhancement of the solids-to-gas ratio is supported by the inference that the chondrule formation
region was also relatively oxidizing.
FeO-rich chondrules clearly formed in a gas much more oxidizing than one of a solar composition
(Jones et al.\ 2000; Connolly \& Desch 2004; Fedkin et al.\ 2006).
Possibly the elevated oxidation is due to chondrule vapor and/or evaporation of fine dust or water ice
also concentrated in the chondrule-forming region (Fedkin et al.\ 2008; Connolly \& Huss 2010).
On the other hand, the solids-to-gas ratio may have been highly variable: FeO-poor chondrules apparently
formed in a more reducing environment, perhaps one as reducing as a solar-composition gas
(Zanda et al.\ 1994; Jones et al.\ 2000; Connolly \& Desch 2004), although this interpretation is
complicated by the possibility of reducing phases in the precursor assemblage such as C, so that
chondrules may not so faithfully record the oxygen fugacity of the chondrule formation region
(Connolly et al.\ 1994; Hewins 1997).
One last, important constraint is the observed correlation between chondrule textures and compound
chondrule frequency.
In ordinary chondrite chondrules overall, among the population of porphyritic, barred olivine and
radial pyroxene textures, 87\% are porphyritic, 4\% are barred, and 9\% are radial (Gooding \& Keil 1981).
Among compound chondrules in ordinary chondrites, which account for 2.4\% of all chondrules,
the proportions are 19\% porphyritic, 32\% barred, and 49\% radial (Wasson et al.\ 1995).
Barred olivines and radial pyroxenes are about an order of magnitude more common among compound
chondrules than chondrules overall.
Despite the rarity of compound chondrules, 24\% of barred olivines and 15\% of radial pyroxenes are
found in the compound chondrule population.
Porphyritic textures are consistent with cooling rates $5 - 1000 \, {\rm K} \, {\rm hr}^{-1}$, although
chemical zoning profiles favor lower cooling rates (Jones \& Lofgren 1993; Desch \& Connolly 2002),
while barred textures are reproduced only with cooling rates $250 - 3000 \, {\rm K} \, {\rm hr}^{-1}$.
The barred olivine textures that so strongly correlate with compound chondrules appear to require
faster cooling rates, by about an order of magnitude (the cooling rates of radial pyroxenes are not
well determined, but appear to have been similarly fast).
These data strongly imply that chondrule cooling rates were faster where compound chondrules were
more likely to form.
If the solids-to-gas ratio varied in space, compound chondrules would have formed in regions of
higher chondrule density.
A positive correlation between chondrule cooling rate and chondrule density is then strongly implied.
\subsection{CAI Formation}
CAIs have long been recognized to be the assemblages of very refractory minerals such as hibonite,
anorthite, spinel, perovskite and fassaite and high-temperature reaction products such as gehlenite
and melilite that are the first to form from a cooling solar-composition gas (Larimer 1967; Grossman 1972;
Ebel \& Grossman 2000).
These minerals are likely to have condensed out of a solar-composition gas as it cooled below 1800 K
(MacPherson 2003; Ebel 2006).
The site of this condensation is unknown: it may have occurred near the Sun, or in a transiently
heated region farther away.
That the gas in the condensation region was of solar composition is supported not just by the
mineralogy of CAIs but by constraints on the oxygen fugacity of the CAI formation environment.
The valence state of Ti (i.e., the Ti$^{4+}$/Ti$^{3+}$ ratio) in minerals such as fassaite
and rh\"{o}nite in CAIs, which is sensitive to the $f{\rm O}_{2}$ during formation,
routinely shows that CAIs formed in an environment with oxygen fugacity very near that of
a solar-composition gas, with $f{\rm O}_{2} \approx {\rm IW} - 6$, or 6 orders of magnitude less
oxidizing than the Iron-Wustite buffer (Beckett et al.\ 1986; Krot et al.\ 2000; Simon et al.\ 2010;
Paque et al.\ 2010).
Recently, the mineral osbornite [(Ti,V)N] has been detected in two CAIs: a CAI within the CB chondrite
Isheyevo (Meibom et al.\ 2007), and the object known as {\it Inti} collected in the
{\it STARDUST} sample return (Zolensky et al.\ 2006).
Significantly, osbornite can only condense in a gas that is very close in composition and oxidation
state to a solar-composition gas, with C/O ratios in the range 0.91 - 0.94 (Ebel 2006; Petaev et al.\ 2001).
It is not possible to condense osbornite in an environment as oxidizing as that associated with
chondrule formation, for example.
Most CAIs were melted some time after their minerals condensed and the CAIs formed, but some
(the ``Fluffy Type A" CAIs) were not.
For one class of melted CAIs (type B), peak temperatures $\approx$ 1700 K are inferred from the
crystallization of melilite (Stolper 1982; Stolper \& Paque 1986; Beckett et al. 2006).
Based on the inhomogeneous concentrations of V, Ti, and Cr within spinel grains, they are
constrained to be at these peak temperatures for less than a few tens of hours (Connolly \& Burnett 2003).
The cooling rates of Type B CAIs have been constrained to $0.5 - 50 \, {\rm K} \, {\rm hr}^{-1}$
(Paque \& Stolper 1983; MacPherson et al.\ 1984; Simon et al.\ 1996).
Like chondrules, Type B CAIs show petrographic and geochemical evidence for multiple heating events,
including variations in minor element concentrations in spinels and Na content in melilites (Davis \&
MacPherson, 1996; Beckett et al., 2000, 2006; Connolly \& Burnett, 2000; Connolly et al.\ 2003).
According to Beckett et al.\ (2000), after melting, some CAIs experienced alteration in the nebula before
being re-melted.
The time of such alteration is still unconstrained, but is clearly less than 1 Myr (Kita et al.\ 2005,
2010; MacPherson et al.\ 2010).
The overall timescale of CAI production has been constrained from the inferred initial abundance of
${}^{26}{\rm Al}$ to be $\sim 10^{5}$ years (Young et al., 2005; Shahar \& Young, 2007; Kita et al., 2010;
MacPherson et al., 2010), suggesting that the processing of refractory materials into igneous rocks was
relatively rapid and stopped before chondrules were formed (Connolly et al., 2006).
Thus, the processing of CAIs within the disk was cyclic over a relatively short time period of at most
a few $\times 10^{5}$ years, but most likely $< 10^{5}$ years (Kita et al., 2010).
Like chondrules, CAIs (at least, those of type B) experienced similar peak temperatures and cooling rates,
and multiple melting events.
Unlike chondrules, CAIs equilibrated with a reducing gas with near-solar composition.
Their formation also occurred earlier in the nebula's evolution.
A reasonable interpretation is that CAIs formed earlier and were melted by a mechanism similar to
the one that melted chondrules, but that CAIs were melted under different environmental conditions.
\subsection{Short-Lived Radionuclides}
At this time, there are 9 SLRs with half-lives of $\sim 10^7$ yr or less that are inferred
from meteorites to have existed in the early solar system.
The list of these SLRs, taken from the review by Wadhwa et al.\ (2007), is given in Table 1.
\begin{deluxetable}{lcll}
\tablecolumns{4}
\small
\tablewidth{0pt}
\tablecaption{Short-lived radionuclides in the early solar system}
\tablehead{\colhead{Parent Isotope} & \colhead{$\;\;\;$T$_{1/2}$\tablenotemark{a}$\;\;\;$}
& \colhead{Daughter Isotope} & \colhead{Solar System Initial Abundance}}
\startdata
$^{41}$Ca& 0.1 &$^{41}$K &$^{41}$Ca/$^{40}$Ca $\approx$ 1.5 $\times$ 10$^{-8}$ \\
$^{36}$Cl& 0.3 &$^{36}$Ar(98.1\%) &$^{36}$Cl/$^{35}$Cl $\approx$ 1.6 $\times$ 10$^{-4}$ ? \\
& &$^{36}$S(1.9\%) & \\
$^{26}$Al& 0.72 &$^{26}$Mg &$^{26}$Al/$^{27}$Al $\approx$ 5.7 $\times$ 10$^{-5}$ \\
$^{60}$Fe& 1.5 &$^{60}$Ni &$^{60}$Fe/$^{56}$Fe $\approx$ 3-10 $\times$ 10$^{-7}$ \\
$^{10}$Be& 1.5 &$^{10}$B &$^{10}$Be/$^{9}$Be $\approx$ 10$^{-3}$ \\
$^{53}$Mn& 3.7 &$^{53}$Cr &$^{53}$Mn/$^{55}$Mn $\approx$ 10$^{-5}$ \\
$^{107}$Pd& 6.5 &$^{107}$Ag &$^{107}$Pd/$^{108}$Pd $\approx$ 5-40 $\times$ 10$^{-5}$ \\
$^{182}$Hf& 8.9 &$^{182}$W &$^{182}$Hf/$^{180}$Hf $\approx$ 10$^{-4}$ \\
$^{129}$I& 15.7 &$^{129}$Xe &$^{129}$I/$^{127}$I $\approx$ 10$^{-4}$ \\
\enddata
\tablenotetext{a}{Half-life in millions of years.}
\end{deluxetable}
The longest lived of these isotopes may have been continuously created over Galactic history
and inherited from the Sun's molecular cloud.
Radionuclides are created by a variety of stellar nucleosynthetic processes, including
core-collapse supernovae, type Ia supernovae, novae, and outflows from Wolf-Rayet stars
and asymptotic-giant-branch (AGB) stars (Wadhwa et al.\ 2007).
These are injected into the interstellar medium at a given rate and subsequently decay.
To the extent that the newly created isotopes are injected into the hot phase of the interstellar
medium, incorporation of the SLRs into a forming solar system will only occur after the gas
cools and condenses into molecular clouds.
This process, during which the gas remains isotopically ``isolated," takes considerable time,
probably $\sim 10^8$ yr.
Recently Jacobsen (2005) and Huss \& Meyer (2009) have included such an isolation time in simple
Galactic chemical evolution models, and have used them to predict the abundances of SLRs inherited
from the interstellar medium.
Whether or not such intermediate-lived SLRs as ${}^{53}{\rm Mn}$, ${}^{107}{\rm Pd}$ and
${}^{182}{\rm Hf}$ were inherited is debatable and dependent on input parameters.
${}^{129}{\rm I}$, in contrast, appears to be mostly inherited from the
interstellar medium.
In fact, the solar nebula would have far too much of this SLR unless the isolation time
exceeds 100 Myr (Huss \& Meyer 2009).
Inheritance of ${}^{53}{\rm Mn}$ at meteoritic abundances, however, is not possible with an
isolation time longer than $\sim 50$ Myr, so this isotope was probably not inherited.
One robust finding of these studies is that even with a very short isolation time, inheritance
from the interstellar medium cannot yield the meteoritic abundances of ${}^{41}{\rm Ca}$,
${}^{36}{\rm Cl}$, ${}^{26}{\rm Al}$ and ${}^{60}{\rm Fe}$.
These four SLRs, and probably ${}^{53}{\rm Mn}$ (and ${}^{10}{\rm Be}$ for that matter),
are diagnostic of a late addition to the solar nebula.
Since the X-wind models were published, strong evidence has arisen for the presence of
live ${}^{36}{\rm Cl}$ ($t_{1/2} = 0.3 \, {\rm Myr}$) in the solar nebula, from Cl-S systematics
of sodalite in carbonaceous chondrites, at levels ${}^{36}{\rm Cl} / {}^{35}{\rm Cl} \sim 4 \times 10^{-6}$
(Lin et al.\ 2005; Hsu et al.\ 2006), corroborating earlier hints from Cl-Ar systematics (Murty et al.\ 1997).
As sodalite is thought to be a late-stage product of aqueous alteration, the initial
${}^{36}{\rm Cl} / {}^{35}{\rm Cl}$ value would have been higher if it were injected by a supernova early
in the nebula's evolution along with other SLRs.
An initial value ${}^{36}{\rm Cl} / {}^{35}{\rm Cl} \sim 10^{-4}$ is usually inferred
(Hsu et al.\ 2006; Wadhwa et al.\ 2007).
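The decay correction involved can be illustrated as follows (a sketch only; the assumed 1.4 Myr interval between injection and aqueous alteration is our illustrative choice, not a measured value):
\begin{verbatim}
t_half = 0.3              # Myr, 36Cl half-life
measured = 4e-6           # 36Cl/35Cl recorded by sodalite
dt = 1.4                  # Myr from injection to closure (assumed)
initial = measured * 2.0 ** (dt / t_half)
print("inferred initial 36Cl/35Cl = %.1e" % initial)   # ~1.0e-04
\end{verbatim}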
More recent analyses of Cl-S systematics in wadalite in the Allende carbonaceous chondrite
indicate an even higher ratio,
${}^{36}{\rm Cl} / {}^{35}{\rm Cl} \approx 1.72 \pm 0.25 \times 10^{-5}$, implying even
higher initial abundances of ${}^{36}{\rm Cl}$ (Jacobsen et al.\ 2009).
These levels are higher than those thought possible for supernova injection,
${}^{36}{\rm Cl} / {}^{35}{\rm Cl} \sim 10^{-6}$ (see discussion in Hsu et al.\ 2006),
and have been interpreted as evidence for a late stage of irradiation within
the solar nebula, producing ${}^{36}{\rm Cl}$ by direct bombardment of target nuclei by
energetic ions (Lin et al.\ 2005; Hsu et al.\ 2006; Jacobsen et al.\ 2009).
At this point it seems likely that this interpretation is correct, although the time and
place in the solar nebula where this irradiation took place are unknown.
An irradiation origin of ${}^{36}{\rm Cl}$ does not necessarily imply an irradiation origin
within the X-wind environment.
It is worth noting that Chaussidon et al.\ (2006) claimed evidence for the one-time
presence of ${}^{7}{\rm Be}$, which decays to ${}^{7}{\rm Li}$ with a half-life of only 57 days,
in a CAI from the carbonaceous chondrite Allende.
Li is notoriously mobile and subject to large isotope fractionations by chemical processes.
It is very difficult to distinguish radiogenic excesses of ${}^{7}{\rm Li}$ for these reasons.
Desch \& Ouellette (2006) identified several weaknesses of the analysis of Chaussidon et al.\
(2006).
They conclude that while Li indeed appears anomalous in this Allende CAI, perhaps representing
an admixture with spallogenic Li, the data by no means conclusively establish that any of the Li is
the decay product of ${}^{7}{\rm Be}$.
\section{Description of the X-wind Model}
The X-wind model originally was developed by Shu and collaborators (Shu et al.\ 1994a,b, 1995;
Najita \& Shu 1994; Ostriker \& Shu 1995), to explain the collimated outflows from protostars.
The X-wind model is first and foremost a model of gas dynamics in protostellar systems, and was
extended only later to investigate the formation of chondrules and CAIs near the protostar, by
Shu et al.\ (1996), Shu et al.\ (1997), and Shu et al.\ (2001), and to investigate nuclear processing
of solids, by Lee et al.\ (1998) and Gounelle et al.\ (2001).
(See also reviews by Shu et al.\ 2000, Shang et al.\ 2000.)
We note that the model evolved somewhat through the late 1990s; we consider the models of
Shu et al.\ (1996, 2001) for the dynamics and thermal processing of solids, and
Gounelle et al.\ (2001) for the irradiation products, to represent the most recent and most
detailed incarnations of the model.
\subsection{Dynamics}
We begin by summarizing the dynamics of gas and solids in the X-wind model.
At its heart, the X-wind is a magnetocentrifugal outflow, as in the classic work
of Blandford \& Payne (1982).
Magnetic field lines are anchored by flux freezing in the protoplanetary disk and
forced to co-rotate with it.
As they are whipped around by the disk, the inertia of matter tied to the field
lines causes the field lines far above and below the disk to bow outwards.
Ionized gas tied to the field lines acts like a bead on a wire: as the field line
(wire) is whipped around, the gas (bead) is flung outward.
This outflow carries significant angular momentum with it, and gas and entrained
solids accrete through the disk.
Gas and solids accrete until they reach the ``X point" at a distance $R_{\rm x}$ from
the protostar.
At the X point, the pressure of the stellar magnetic field prevents the inward flow
of disk gas and truncates the disk.
The value of $R_{\rm x}$, given by Equation 1 of Shu et al.\ (2001; see also Ghosh
\& Lamb 1979; Shu et al.\ 1994a) is easily reproduced under the assumption that the
magnetic pressure of the stellar magnetosphere balances the ram pressure of accreting
gas in the disk.
Outside $R_{\rm x}$, in the disk, gas is tied to open magnetic field lines that cross
the disk, and gas is driven outward by a magnetocentrifugal outflow.
Inside $R_{\rm x}$, magnetic field lines are tied to the protostar, and gas corotates
with the protostar.
Formally, the field lines and associated gas do not mix.
Shu et al.\ (1996, 2001) presume that ionization is low enough near the X point to allow
matter to diffuse across field lines and cross into the region interior to $R_{\rm x}$, but
this stage is not explicitly modeled.
As material crosses the X point, it is heated and expands along field lines.
Just farther than the X point, in the disk, where $T \approx 1500 \, {\rm K}$, the scale
height of the gas is $H \sim 2 \times 10^{10} \, {\rm cm}$ $\approx 0.03 \, R_{\rm x}$.
If the gas inside the X point is heated so that the scale height increases by a
factor of 30, the gas can flow directly onto the protostar, guided by the magnetic
field lines in a ``funnel flow".
Heating of the gas to $\sim 10^6 \, {\rm K}$ is sufficient, and can occur due
to heating by X-rays generated by reconnection events interior to the funnel flow.
The region interior to the funnel flow, denoted the ``reconnection ring", from
$r \approx 0.75 \, R_{\rm x}$ to $R_{\rm x}$, is modeled as having reversed poloidal components
across the midplane (Ostriker \& Shu 1995), leading to frequent magnetic reconnection
events akin to solar flares.
Shu et al.\ (2001) identify this region as a possible source of a component of protostellar X-rays
such as those observed by Skinner \& Walter (1998).
From such observations they infer an electron density
$n_{\rm e} \approx 3 \times 10^{8} \, {\rm cm}^{-3}$ and temperatures
$T \approx 8 \times 10^{6} \, {\rm K}$ in the reconnection ring, yielding sound speeds
$v_{\rm T} \sim 400 \, {\rm km} \, {\rm s}^{-1}$, gas densities
$\approx 5 \times 10^{-16} \, {\rm g} \, {\rm cm}^{-3}$, and pressures
$P \sim 10^{-7} \, {\rm atm}$.
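These numbers follow from the quoted $n_{\rm e}$ and $T$ for a fully ionized hydrogen plasma; a rough consistency check (our sketch, in cgs units):
\begin{verbatim}
k_B, m_H, atm = 1.38e-16, 1.67e-24, 1.013e6  # cgs constants
n_e, T = 3e8, 8e6                            # cm^-3, K (inferred values)
rho = n_e * m_H                              # ionized H, so n_p ~ n_e
v_T = (2 * k_B * T / m_H) ** 0.5             # proton thermal speed
P   = 2 * n_e * k_B * T                      # electron + proton pressure
print("rho = %.0e g/cm^3" % rho)             # ~5e-16
print("v_T = %.0f km/s"   % (v_T / 1e5))     # ~360, i.e. ~400 km/s
print("P   = %.0e atm"    % (P / atm))       # of order 1e-7 atm
\end{verbatim}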
Indeed, for the X wind model to work, this region needs to be the site of frequent
magnetic flares, so that solids in this region are irradiated by energetic ions and
undergo nuclear processing.
As gas accretes inward past the X point and joins the funnel flow, Shu et al.\ (2001) hypothesize
that a fraction $F \sim 0.01$ of the solid material leaves the flow and enters the
reconnection ring.
This can occur, they say, if solids spiral inward within the disk into the reconnection
ring, or if they fail to be lofted by the funnel flow.
Once in the reconnection ring, the solid particles orbit at Keplerian speeds through
a gas that is corotating with the protostar, and so experience a constant headwind.
This causes particles to lose angular momentum and spiral in towards the protostar
in a matter of years.
They are lost unless the magnetosphere of the protostar fluctuates, periodically waning
so that the disk can encroach on the reconnection ring, sweep up the particles and
launch them in a magnetocentrifugal outflow.
If they can be launched by the outflow, there is the possibility that the particles
can land in the disk, depending on their aerodynamic properties (Shu et al.\ 1996).
\subsection{Thermal Processing}
Shu et al.\ (2001) do not explicitly calculate the thermal histories of particles in the X wind.
They do not, for example, calculate temperature-dependent cooling rates $dT/dt$ vs. $T$.
They do, however, cite two possible mechanisms for thermally processing particles.
While in the reconnection ring or the disk, magnetic flares are presumed to heat chondrules
and especially CAIs; but CAIs and chondrules are {\it last} melted by sudden exposure to
sunlight as they are lofted away from the disk.
While in the reconnection ring, proto-CAIs are repeatedly exposed to magnetic flares
that heat particles, mostly by impacts by energetic ions and absorption of X-rays.
Depending on the flare energy luminosity and the area over which it is deposited,
CAIs can be mildly heated, destroyed completely in ``catastrophic flares", or
heated to the point where just their less refractory minerals evaporate.
While in the reconnection ring, it is assumed CAI material repeatedly evaporates
and recondenses.
An important component of the X-wind model as put forth by Shu et al.\ (2001) is that heating of
proto-CAI material will usually allow evaporation of ferromagnesian silicate
material, but leave unevaporated more refractory Ca,Al-rich silicate material.
Without this core/mantle segregation, irradiation by energetic ions (discussed below)
overproduces ${}^{41}{\rm Ca}$ with respect to ${}^{26}{\rm Al}$.
Flares are also presumed to heat chondrules in the transition region between
the reconnection ring and the disk.
Here the calculation of temperatures is very much intertwined with the structure
of the disk and the relative heating rates due to flares and sunlight.
The other mode of heating, and the one causing chondrules and CAIs to melt for
the last time before isotopic closure, arises when these particles are lofted by the
magnetocentrifugal outflows, above the disk in which they reside.
The presumed densities of proto-CAIs in the reconnection ring are such that they will
form an optically thick, if geometrically thin, disk.
Because this optically thick disk absorbs starlight obliquely, its effective temperature
due to heating by starlight, $T_{\rm disk}$, is lower than the particle blackbody
temperature $T_{\rm BB} = (L_{\star} / 16\pi r^2 \sigma)^{1/4}$ at that radius (where
$L_{\star}$ is the stellar luminosity and $\sigma$ the Stefan-Boltzmann constant).
Particles start within the disk at temperatures $\approx T_{\rm disk}$, but as they
are lofted their temperatures rise to $T_{\rm BB}$ as they are exposed to starlight.
Actually, they reach slightly higher $T$ because they are exposed to the radiation emitted
by the disk, as well; Shu et al.\ (1996, 2001) approximate this particle temperature, the highest
temperatures particles will reach, as $T_{\rm peak} \approx (T_{\rm disk}^{4} / 2 + T_{\rm BB}^{4})^{1/4}$.
For the parameters adopted by Shu et al.\ (1996, 2001) for the ``embedded" phase
(in which $\dot{M} \approx 2 \times 10^{-6} \, M_{\odot} \, {\rm yr}^{-1}$),
we find $T_{\rm BB} \approx 1700 \, {\rm K}$, $T_{\rm disk} \approx 1160 \, {\rm K}$, and
$T_{\rm peak} \approx 1750 \, {\rm K}$ (approximately what Shu et al.\ 1996 find).
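These temperatures can be checked directly (our sketch, with the embedded-phase parameters $L_{\star} = 4.4 \, L_{\odot}$ and $r = R_{\rm x} = 4 R_{\star} = 12 R_{\odot}$, and using the passive-disk expression written out in \S 4, Equation \ref{eq:tempstar}):
\begin{verbatim}
import math
sigma, Lsun, Rsun = 5.67e-5, 3.83e33, 6.96e10   # cgs
L, Rstar = 4.4 * Lsun, 3.0 * Rsun               # embedded phase
r = 4.0 * Rstar                                 # r = R_x
x = Rstar / r
T_disk = (L / (4 * math.pi**2 * Rstar**2) *
          (math.asin(x) - x * math.sqrt(1 - x**2)) / sigma) ** 0.25
T_BB   = (L / (16 * math.pi * sigma * r**2)) ** 0.25
T_peak = (T_disk**4 / 2 + T_BB**4) ** 0.25
print(T_disk, T_BB, T_peak)   # ~1160 K, ~1700 K, ~1750 K
\end{verbatim}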
Thus, Shu et al.\ (1996) state that launching either a CAI or chondrule in an outflow can raise
its temperature from $< 1200 \, {\rm K}$ to $1800 \, {\rm K}$ or more, within a span of ``a few hours."
This timescale is set by the dynamics of the particle, which must travel roughly a scale
height in the vertical direction.
As the heated CAIs or chondrules are flung to great distances, the absorption of
starlight lessens, and they cool.
It is straightforward to demonstrate that the cooling rates in this scenario are necessarily
\begin{equation}
\frac{d T}{dt} \approx -\frac{1}{2} \frac{ v_{r} }{ r } \, T_{\rm BB}(r).
\end{equation}
For the trajectories depicted in Figure 2 of Shu et al.\ (1996), $v_{r} \approx 50 \, {\rm km} \, {\rm s}^{-1}$
at $r \approx 0.1 \, {\rm AU}$ where particles will cool through their crystallization temperatures.
This means that all particles---CAIs and chondrules---necessarily cool from their peak temperatures
at the same rate, about $10 \, {\rm K} \, {\rm hr}^{-1}$.
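Evaluating this expression with the quoted trajectory values (our illustrative estimate; $T_{\rm BB}$ computed for the embedded-phase luminosity):
\begin{verbatim}
import math
sigma, Lsun, AU = 5.67e-5, 3.83e33, 1.496e13   # cgs
L = 4.4 * Lsun                                 # embedded phase
r, v_r = 0.1 * AU, 50e5                        # cm, cm/s
T_BB = (L / (16 * math.pi * sigma * r**2)) ** 0.25   # ~1280 K at 0.1 AU
dTdt = -0.5 * (v_r / r) * T_BB                 # K/s
print("%.0f K/hr" % (dTdt * 3600))             # ~ -8, i.e. ~10 K/hr cooling
\end{verbatim}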
\subsection{Radionuclide Production}
A final, major component of the X-wind model is the production of SLRs in CAIs.
The reconnection ring is the site of frequent magnetic reconnection events.
If these act like solar flares, they could accelerate hydrogen and helium ions to energies in
excess of 1 MeV/nucleon.
Gounelle et al.\ (2001) hypothesize that flares akin to solar ``gradual" flares and ``impulsive"
flares will take place
in the ring, and that ions are accelerated with the same efficiency, relative to the X-ray luminosity,
as in the solar atmosphere.
The present-day flux of energetic ($E > 10 \, {\rm MeV} \, {\rm nucleon}^{-1}$) ions at 1 AU is
roughly $100 \, {\rm cm}^{-2} \, {\rm s}^{-1}$, yielding an energetic particle luminosity
$L_{\rm p} \sim 0.09 \, L_{\rm x}$ (Lee et al.\ 1998).
Because T Tauri stars have X-ray luminosities, presumably from flares, roughly five orders of
magnitude greater
(Feigelson \& Montmerle 1999; Feigelson et al.\ 2007; Getman et al.\ 2008 and references therein),
the fluence of such particles over, say, 20 years, if concentrated into the reconnection ring with
area $\sim 10^{24} \, {\rm cm}^{2}$, would reach $\sim 2 \times 10^{19} \, {\rm cm}^{-2}$.
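This fluence estimate is reproduced by the following sketch (our illustration; the $10^{5}$ enhancement and the ring area are the values quoted above):
\begin{verbatim}
import math
AU, yr = 1.496e13, 3.156e7           # cm, s
flux_now = 100.0                     # cm^-2 s^-1 at 1 AU today
enhance  = 1e5                       # T Tauri / present-day solar L_x
rate = flux_now * enhance * 4 * math.pi * AU**2   # particles per s
N = rate * 20 * yr                   # emitted over 20 years
print("%.0e cm^-2" % (N / 1e24))     # into the ring area: ~2e+19
\end{verbatim}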
Flares more akin to gradual flares would accelerate mostly protons and alpha particles and
lead to an energetic particle spectrum $\propto E^{-2}$, while
flares akin to impulsive flares would accelerate a comparable number of ${}^{3}{\rm He}$ ions,
and lead to an energetic particle spectrum $\propto E^{-4}$.
Proto-CAI material in the reconnection ring is constantly bombarded by these energetic ions,
which can initiate nuclear reactions in the rocky material, creating new isotopes.
Gounelle et al.\ (2001) simultaneously model the production of several SLRs within the context
of the X-wind model, attempting to match their initial abundances as inferred from meteorites.
They model the production of 4 isotopes in particular:
${}^{10}{\rm Be}$ ($t_{1/2} = 1.5 \, {\rm Myr}$),
${}^{26}{\rm Al}$ ($t_{1/2} = 0.7 \, {\rm Myr}$),
${}^{41}{\rm Ca}$ ($t_{1/2} = 0.1 \, {\rm Myr}$),
and ${}^{53}{\rm Mn}$ ($t_{1/2} = 3.7 \, {\rm Myr}$).
They also model production of the very long-lived isotopes ${}^{138}{\rm La}$ ($t_{1/2} > 10^{12} \, {\rm yr}$)
and ${}^{50}{\rm V}$ ($t_{1/2} \sim 10^{11} \, {\rm yr}$), on the grounds that these are not produced
in abundance by stellar nucleosynthesis.
Of course, these isotopes are so long-lived that they are not diagnostic of irradiation in the solar
nebula; they could have been produced by spallation in molecular clouds over Galactic history, for
example.
We therefore focus on the discussion in Gounelle et al.\ (2001) of
${}^{10}{\rm Be}$, ${}^{26}{\rm Al}$, ${}^{41}{\rm Ca}$
and ${}^{53}{\rm Mn}$.
These are produced overwhelmingly (but not exclusively) by nuclear reactions of H and He ions with
O, Mg and Al, Ca, and Fe nuclei, respectively.
Among the first findings of Gounelle et al.\ (2001) is that uniform irradiation of the average composition of proto-CAIs
will result in orders of magnitude more ${}^{41}{\rm Ca}$, relative to ${}^{26}{\rm Al}$, than is
observed in CAIs.
They found no way to reconcile the production rates of these two isotopes by irradiation, unless
two conditions were met: Ca (the primary target for ${}^{41}{\rm Ca}$) were sequestered in a core; and
the thickness of a Ca-free mantle surrounding the core were sufficiently thick to stop energetic ions.
Gounelle et al.\ (2001) assume that repeated evaporations of proto-CAIs preferentially leave behind a
residue of Ca,Al-rich refractory cores, onto which ferromagnesian silicates can condense.
Under these assumptions, Gounelle et al.\ (2001) found core sizes for which the meteoritic abundances of the 4 radionuclides
above were reproduced, to within factors of a few.
\section{Internal Inconsistencies of the X-wind Model}
\subsection{Are Jets Launched by X Winds?}
Protostellar jets are virtually ubiquitous among protostars.
Moreover, jets are associated with strong magnetic fields and are apparently collimated by
magnetic hoop stresses (Ray et al.\ 2007).
These observations strongly support models of protostellar outflows as
magnetocentrifugally launched.
They are also taken at times as support for the X-wind model in particular (Shu et al.\ 2000),
but it must be emphasized that jets could be taken as evidence for the X-wind
only if they can be shown to be launched from inside about 0.1 AU.
An ongoing debate in the astronomical community is whether protostellar jets
are launched from locations $\sim 0.1 \, {\rm AU}$ from the protostar,
as in the X-wind, or from $\sim 1 \, {\rm AU}$, as advocated by proponents of
``disk wind" models (Wardle \& K\"{o}nigl 1993; K\"{o}nigl \& Pudritz 2000; Pudritz et al.\ 2007).
To be blunt: just because one observes a protostellar jet and magnetocentrifugal
outflow from a disk does {\it not} mean that jets are launched from 0.1 AU,
let alone that solids in that disk are transported from a few AU, to 0.1 AU,
back out to a few AU.
In fact, the astronomical evidence at this time does not support the X-wind model,
and instead favors disk wind models.
Observations of radial velocities across jets reveal their angular momenta and
the launch point of the protostellar jets (Bacciotti et al.\ 2002; Anderson et al.\ 2003;
Coffey et al.\ 2004, 2007).
These observations are technically challenging and were only possible when the
{\it Hubble Space Telescope} / Space Telescope Imaging Spectrograph ({\it HST}-STIS)
was operational.
Not all observations were successful; in some cases jet rotation was not observed.
In other cases rotation was observed, but in the opposite sense of the disk's presumed
rotation, complicating the interpretation (Cabrit et al.\ 2006; Pety et al.\ 2006;
Coffey et al.\ 2007; Lee et al.\ 2006, 2007).
Prograde jet rotation was observed in some protostellar systems, though; in those
systems jets appear to be launched from much farther in the disk than the X point.
Coffey et al.\ (2004) observed jet rotation in RW Aur and LkH$\alpha$321, and
more detailed observations were carried out by Coffey et al.\ (2007).
In DG Tau, a high-velocity component appears launched from about 0.2 - 0.5 AU and a
low-velocity component from as far as 1.9 AU; in TH 28, the jet seems launched from
about 1.0 - 3.9 AU; and in CW Tau, from 0.5-0.6 AU (Coffey et al.\ 2007).
These authors admit they have not resolved the innermost jet and cannot exclude
a contribution from an X-wind; but Woitas et al.\ (2005) estimate that the jets
carry at least 60-70\% of the angular momentum to be extracted from the disk.
Clearly the disk winds dominate in these systems.
As yet, there is no direct evidence from observations of jet rotation that outflows
are launched by an X-wind rather than disk winds.
\subsection{Solids at the X point?}
Besides the question of whether outflows are launched from inside 0.1 AU at all,
a second major obstacle for the X-wind model is that neither the model itself nor
astronomical observations support the existence of solids at the X point.
The theoretical grounds for a lack of solids at the X point are simple.
In calculating the temperature of disk material, Shu et al.\ (1996, 2001) neglected
the heating of the disk due to its own accretion, focusing only on the passive heating
of the disk by starlight.
Specifically, they set
\begin{equation}
\sigma T^{4}_{\rm disk} = \frac{L_{\star}}{4\pi^2 R_{\star}^{2}} \,
\left[ \arcsin \left( \frac{ R_{\star} }{ r } \right)
-\left( \frac{ R_{\star} }{ r } \right) \,
\left( 1 - \frac{ R_{\star}^{2} }{ r^{2} } \right)^{1/2} \right],
\label{eq:tempstar}
\end{equation}
which for parameters they consider typical of the embedded phase
($L_{\star} = 4.4 \, L_{\odot}$, $r = R_{\rm x} = 4 R_{\star} = 12 R_{\odot}$)
yields $T_{\rm disk} \approx 1160 \, {\rm K}$.
For parameters they consider typical of the revealed phase
($L_{\star} = 2.5 \, L_{\odot}$, $r = R_{\rm x} = 5.3 R_{\star} = 16 R_{\odot}$),
$T_{\rm disk} \approx 820 \, {\rm K}$.
But an additional term must be added to the right side of Equation~\ref{eq:tempstar}
to account for energy released by disk accretion.
Setting
\begin{equation}
\sigma T_{\rm acc}^4 = \frac{3}{8\pi} \dot{M} \Omega^2,
\end{equation}
we can better estimate the effective temperature of the disk
(approximately the temperature at optical depths $\approx 1$ into the disk's surface) as
\begin{equation}
\sigma T_{\rm eff}^{4} = \sigma T_{\rm disk}^{4} + \sigma T_{\rm acc}^{4},
\end{equation}
(Hubeny 1990).
Using $\Omega \approx 8 \times 10^{-6} \, {\rm s}^{-1}$ at the X point
and assuming a mass accretion rate of $2 \times 10^{-6} \, M_{\odot} \, {\rm yr}^{-1}$
for the embedded phase, one derives $T_{\rm acc} = 2030 \, {\rm K}$ and
a temperature $T_{\rm eff} \approx 2090 \, {\rm K}$,
sufficient to evaporate all solids.
Even if one uses the lower mass accretion rate
$\dot{M} \approx 1 \times 10^{-7} \, M_{\odot} \, {\rm yr}^{-1}$,
appropriate for the revealed stage, $T_{\rm acc} = 960 \, {\rm K}$ and
$T_{\rm eff} \approx 1070 \, {\rm K}$.
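A sketch of this temperature budget (our check, in cgs, with $\Omega \approx 8 \times 10^{-6} \, {\rm s}^{-1}$ and the $T_{\rm disk}$ values quoted above):
\begin{verbatim}
import math
sigma, Msun, yr = 5.67e-5, 1.99e33, 3.156e7   # cgs
Omega = 8e-6                                  # s^-1 at the X point
for Mdot, T_disk in [(2e-6, 1160.0),          # embedded phase
                     (1e-7, 820.0)]:          # revealed phase
    Mdot_cgs = Mdot * Msun / yr               # g/s
    T_acc = (3 * Mdot_cgs * Omega**2 / (8 * math.pi * sigma)) ** 0.25
    T_eff = (T_disk**4 + T_acc**4) ** 0.25
    print("T_acc = %.0f K, T_eff = %.0f K" % (T_acc, T_eff))
# embedded: ~2030 K, ~2090 K; revealed: ~960 K, ~1070 K
\end{verbatim}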
These high temperatures are exacerbated by the fact that $T_{\rm eff}$ is a lower
limit to the temperatures experienced by particles.
Because accretional heating must be transported out of the disk by a radiative flux,
temperatures inside the disk, at optical depths $\gg 1$ (using the Rosseland mean opacity)
will exceed $T_{\rm eff}$, by a factor $\approx (3\tau/8)^{1/4}$ (Hubeny 1990).
For even moderate optical depths (e.g., $\tau = 10$), temperatures will rise above 1500 K,
even for the lower mass accretion rates of the revealed stage.
At optical depths $\ll 1$, temperatures will also exceed $T_{\rm eff}$, because
the particles will be exposed to starlight directly.
An isolated particle at a distance $r$ from the protostar will achieve a blackbody
temperature
\begin{equation}
T_{\rm BB} = \left( \frac{ L_{\star} }{16\pi \sigma r^2} \right)^{1/4}.
\end{equation}
For particles at the X point, $T_{\rm BB} \approx 1700 \, {\rm K}$ during
the embedded stage, and $\approx 1280 \, {\rm K}$ during the revealed stage.
In addition to the direct starlight, particles in the uppermost layers of the disk
will also absorb radiation from the disk itself, achieving temperatures well
approximated by
\begin{equation}
T^{4} \approx \frac{1}{2} T_{\rm eff}^{4} + T_{\rm BB}^{4}
\end{equation}
(Shu et al.\ 1996).
Even for the revealed stage, this temperature is 1360 K.
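These grain temperatures follow from the same few constants; a minimal sketch (ours,
assuming $L_{\odot} = 3.83 \times 10^{33}$ erg s$^{-1}$ and
$R_{\odot} = 6.96 \times 10^{10}$ cm):
\begin{verbatim}
import math

sigma, Lsun, Rsun = 5.67e-5, 3.83e33, 6.96e10   # cgs

def T_BB(L, r):
    # blackbody temperature of an isolated grain at distance r
    return (L / (16.0 * math.pi * sigma * r**2)) ** 0.25

print(T_BB(4.4 * Lsun, 12 * Rsun))   # embedded stage: ~1700 K
print(T_BB(2.5 * Lsun, 16 * Rsun))   # revealed stage: ~1280 K

# uppermost disk layers, revealed stage: T^4 ~ (1/2) T_eff^4 + T_BB^4
T_eff, Tbb = 1070.0, 1280.0
print((0.5 * T_eff**4 + Tbb**4) ** 0.25)   # ~1360 K
\end{verbatim}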
Finally, if the dust particles in the uppermost layers are submicron in size, they
will absorb optical radiation but will be unable to radiate in the infrared effectively,
and they will achieve even higher temperatures still.
Chiang \& Goldreich (1997) have explained the excess near-infrared emission in
spectral energy distributions (SEDs) of protostellar disks by accounting for this
``superheated" dust layer.
Even during the revealed stage, then, particles in the uppermost layers of the disk
at the X point will achieve temperatures in excess of 1360 K.
The significance of the dust temperatures $> 1360 \, {\rm K}$ is that silicates are
not stable against evaporation at such high temperatures (at least in the disk
environment discussed here, mixed in a solar ratio with ${\rm H}_{2}$ gas).
Above 1400 K, for example, dust grains will evaporate in only hours (Morris \& Desch 2010).
Thus, temperatures are simply too high to have a dusty disk approach all the way to
the X point, even during the ``revealed" stage, when mass accretion rates are
$\leq 10^{-7} \, M_{\odot} \, {\rm yr}^{-1}$.
A calculation of the innermost radius where dust can stably reside is complicated by
the ``wall-like" structure of the disk there, and the poorly known thermodynamic
properties of dust materials, but has been considered by Kama et al.\ (2009), who show
that the inner edge where solids can exist typically lies several
$\times \, 0.1 \, {\rm AU}$ from a protostar.
Astronomical observations confirm the absence of solids at the X point.
Eisner et al.\ (2005) have determined the inner edges of dust emission in the
protoplanetary disks surrounding 4 Sun-like protostars, through a combination
of NIR interferometry and SED fitting.
Through measurements of other stellar properties, they also determined the
locations of the corotation radius and the predicted locations of the X point.
They find that typically the corotation radius and magnetospheric truncation
radius are both $< 0.1 \, {\rm AU}$ and agree within the uncertainties, but
that the inner edge of the dust disk also typically lies beyond either of these
radii, at about $0.1 - 0.3 \, {\rm AU}$.
This is true even for V2508 Oph, the protostar with the least discrepancy (among
the 4 sampled) between the X point and the inner edge of the dust disk.
It is also a protostar with parameters that closely match those adopted by
Shu et al.\ (1996, 2001) for a protostellar system in the revealed stage:
$M_{\star} = 0.9 \, M_{\odot}$, $\dot{M} = 2.3 \times 10^{-7} \, M_{\odot} \, {\rm yr}^{-1}$,
and an age $\approx 0.6 \, {\rm Myr}$.
Eisner et al.\ (2005) attribute the existence of an inner edge to the dust disk
to sublimation of dust at that radius, consistent with their observation that
the maximum temperature associated with dust emission is in the range 1000 - 2000 K
($\approx$ 1500 K for V2508 Oph, albeit with considerable uncertainty).
Eisner et al.\ (2005) also note that in systems with higher mass accretion rates,
the X point (by construction) is pushed inward, and they observed the inner edge
of the dust disk to move outward.
This finding is also consistent with dust sublimation being the cause of the inner
edge of the disk.
Based on the theoretical arguments above, and the observations of Eisner et al.\
(2005), solid particles are not expected to exist at the X point in disks with
mass accretion rates $> 10^{-7} \, M_{\odot} \, {\rm yr}^{-1}$.
Altogether, by neglecting accretional heating, Shu et al.\ (1996, 2001) appear to have
underestimated the temperatures of solids, and predicted them to exist where
they should not be and, indeed, are not observed to be.
\subsection{Decoupling from the Funnel Flow?}
In order for the X-wind model to be a valid description of CAI or chondrule formation, these
objects must adhere to a specific dynamical history.
Specifically, Shu et al.\ (2001) assumed that a fraction $F \sim 0.01$ of all solid material
decouples from the funnel flow and enters the reconnection ring.
It is presumed to do so because it is bound in solid particles that experience a gravitational
force greater than the drag force exerted on them by the funnel flow.
We argue above that all material should evaporate at the X point, but assuming solids to exist,
their dynamical histories will depend critically on their sizes.
Clearly protoplanetary disks contain sub-micron and micron-sized grains,
as evidenced by silicate emission features at $10 \, \mu{\rm m}$ (e.g., Sargent et al.\ 2009).
Shu et al.\ (1996, 2001) specifically identify these micron-sized solid particles
with matrix grains in chondrites.
Importantly, within the context of the X-wind model, there are no other particles in chondrites
that can be identified as pre-existing in the protoplanetary disk, because chondrules and CAIs
form in the X-wind environment, and not in the disk.
Chondrites also contain large aggregations of smaller particles that are unmelted, only lightly
sintered and lithified, termed agglomeratic chondrules; but these are rare, making up only 2\%
of the volume of ordinary chondrites (Dodd \& van Schmus 1971; Weisberg \& Prinz 1996).
We discuss these below, but for now assert that if chondrules do not form in the disk, then
for practical purposes the only solid material entering the funnel flow would be micron-sized
grains.
Because solid particles in the disk are so small, they are almost certain to couple
strongly to the gas as it enters the funnel flow.
According to Weidenschilling (1977b), small particles with aerodynamic stopping times
much less than the dynamical time will basically move with the gas, but with a small
relative velocity $(\Delta g) t_{\rm stop}$, where
\begin{equation}
t_{\rm stop} = \frac{ \rho_{\rm s} a }{ \rho_{\rm g} v_{\rm T} }
\end{equation}
is the aerodynamic stopping time [in the Epstein drag limit where particles
are smaller than the mean free path of gas molecules, appropriate for micron-sized
particles in gas with density $< 10^{-4} \, {\rm g} \, {\rm cm}^{-3}$,
or chondrules in gas with density $< 10^{-7} \, {\rm g} \, {\rm cm}^{-3}$],
where $\rho_{\rm s}$ and $a$ are the particle density and radius,
$\rho_{\rm g}$ and $v_{\rm T}$ are the gas density and thermal velocity,
and $\Delta g$ is the difference between the accelerations felt by the gas and solids.
In the context of the disk proper, $\Delta g$ is the extra acceleration the gas
feels because of pressure support,
\begin{equation}
\Delta g = \frac{1}{\rho_{\rm g}} \, \frac{\partial P_{\rm g}}{\partial r},
\end{equation}
where $P_{\rm g}$ is the gas pressure.
Assuming $T \approx 1500 \, {\rm K}$ just outside the X point,
$\Delta g \sim v_{\rm T}^{2} / r \sim 0.1 \, {\rm cm} \, {\rm s}^{-2}$
(neglecting terms of order unity).
The disk scale height is $H \sim 2 \times 10^{10} \, {\rm cm}$, and assuming
a minimum-mass solar nebula (Weidenschilling 1977a), we estimate a disk
density $\Sigma \sim 10^5 \, {\rm g} \, {\rm cm}^{-2}$ at the X point, yielding
a gas density $\rho_{\rm g} \sim 10^{-6} \, {\rm g} \, {\rm cm}^{-3}$.
For a particle with radius $a = 1 \, \mu{\rm m}$ and internal density
$\rho_{\rm s} = 3 \, {\rm g} \, {\rm cm}^{-3}$, the aerodynamic stopping time
is $t_{\rm stop} \sim 10^{-3} \, {\rm s}$.
The relative velocity between gas and dust, within the disk, is therefore
$\sim 10^{-4} \, {\rm cm} \, {\rm s}^{-1}$.
This relative velocity is negligible, and gas and dust can be considered
perfectly coupled.
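A sketch of this estimate (ours; the density, thermal speed, and $\Delta g$ values are
those adopted above):
\begin{verbatim}
rho_s, a = 3.0, 1e-4       # grain density [g cm^-3], radius [cm] (1 micron)
rho_g, v_T = 1e-6, 2.3e5   # gas density [g cm^-3], thermal speed [cm s^-1]
dg = 0.1                   # differential acceleration [cm s^-2]

t_stop = rho_s * a / (rho_g * v_T)   # Epstein stopping time [s]
print(t_stop, dg * t_stop)           # ~1e-3 s, drift ~1e-4 cm/s
\end{verbatim}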
In the context of the transition between the disk and the funnel flow,
$\Delta g$ is given by the acceleration the gas experiences.
Shu et al.\ (1996, 2001) do not explicitly model this stage, but we can estimate the
acceleration as follows.
The gas starts essentially from rest at the X point, but by the time it
participates in the funnel flow it could be moving as fast as the thermal
velocity in the reconnection ring, $V \sim 400 \, {\rm km} \, {\rm s}^{-1}$.
The distance over which this occurs is perhaps
$d \sim 0.1 \, R_{\rm x} \sim 10^{11} \, {\rm cm}$.
Thus $\Delta g \sim V^{2} / d \sim 10^{4} \, {\rm cm} \, {\rm s}^{-2}$
(about 10 g's).
As for the stopping time, we derive a lower limit to the gas density in
the funnel flow by assuming that it carries a total mass flux $\dot{M}_{\star}$ onto
the star.
The funnel flow arises from an area $A$, and is composed of gas moving at a velocity
$V$, with density $\rho_{\rm g} = \dot{M}_{\star} / (A V)$.
The lower limit to the density is found by setting $A$ and $V$ as large as they
can be, and using the smallest value of $\dot{M}_{\star}$.
The absolute largest $A$ can be is $4\pi R_{\rm x}^{2} \sim 8 \times 10^{24} \, {\rm cm}^{2}$,
but even the area of the reconnection ring, $\sim 1 \times 10^{24} \, {\rm cm}^{2}$, is
probably an overestimate of the true value of $A$.
We take the thermal velocity of the gas (after heating to $10^{7} \, {\rm K}$),
$v_{\rm T} \sim 400 \, {\rm km} \, {\rm s}^{-1}$, to represent the maximum velocity of
the gas.
Thus $\rho_{\rm g} > 10^{-13} \, {\rm g} \, {\rm cm}^{-3}$ in the funnel flow, and
$t_{\rm stop} < 10^{2} \, {\rm s}$.
Micron-sized particles (or their aerodynamic equivalents) therefore reach relative
velocities with respect to the gas no more than
$\sim (\Delta g) t_{\rm stop} < 10 \, {\rm km} \, {\rm s}^{-1}$.
This velocity sounds significant [indeed, it would probably lead to evaporation of the dust
grains by frictional drag, cf.\ Harker \& Desch (2002)] until it is remembered that it is only
about 2\% of the total velocity: both gas and solid particles will move on nearly identical
funnel-flow trajectories.
Over the roughly 1 hour ($= d / V$) the gas takes to accelerate from the disk to the
funnel flow, particles will be displaced only about $2 \times 10^9 \, {\rm cm}$
$= 0.002 \, R_{\rm x}$, a negligible amount.
Put another way, if gas is funneled onto one spot on the protostar, taking $\sim 10 \, {\rm hr}$
to reach it, dust grains will arrive 10 minutes later, at a spot about 1\% of the protostar's
radius away.
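The chain of estimates in this paragraph can be reproduced as follows (a sketch, ours,
with $A$, $V$, and $d$ as adopted above):
\begin{verbatim}
Msun_yr = 1.99e33 / 3.156e7        # 1 Msun/yr in [g s^-1]
Mdot = 1e-7 * Msun_yr              # funnel-flow mass flux [g s^-1]
A, V, d = 1e24, 4e7, 1e11          # area [cm^2], speed [cm/s], length [cm]

rho_g = Mdot / (A * V)             # lower-limit gas density [g cm^-3]
dg = V**2 / d                      # acceleration [cm s^-2], ~10 g's
t_stop = 3.0 * 1e-4 / (rho_g * V)  # Epstein time, 1-micron grain [s]
dv = dg * t_stop                   # grain-gas slip [cm s^-1]
print(rho_g, dg, t_stop, dv, dv * d / V)
# -> ~2e-13 g/cm^3, ~2e4 cm/s^2, ~50 s, ~8e5 cm/s (8 km/s), ~2e9 cm offset
\end{verbatim}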
Shu et al.\ (2001) argue that solid particles can ``fall out" of the funnel flow if the gravitational force
on them exceeds the drag force lifting them.
This requires
\begin{equation}
\frac{4\pi}{3}\rho_{\rm s} a^{3} \, \Omega^2 \, z >
\pi a^{2} \, \rho_{\rm g} (C_{\rm D} / 2) \, V_{\rm g}^{2},
\end{equation}
where $z$ is the height above the midplane.
Taking $z \sim H \sim 2 \times 10^{10} \, {\rm cm}$ (the scale height of the disk),
and $C_{\rm D} = (2/3) (\pi k T_{\rm p} / \bar{m} )^{1/2} / V_{\rm g}$ (Gombosi et al.\ 1986),
the condition to fall out of the flow becomes a lower limit to the particle size:
\begin{equation}
a > a_{\rm crit} = \frac{1}{4 \rho_{\rm s} \Omega^{2} z} \,
\left( \frac{ \pi k T_{\rm p} }{ \bar{m} } \right)^{1/2} \,
\frac{ \dot{M}_{\star} }{ A },
\end{equation}
where the same relationship between mass accretion rate and gas density in the funnel flow
as above was used.
Taking $T_{\rm p} = 1500 \, {\rm K}$, $\bar{m} = 0.6 \, m_{\rm H}$, a mass accretion
rate $\sim 10^{-7} \, M_{\odot} \, {\rm yr}^{-1}$ and an area $\sim 10^{24} \, {\rm cm}^{2}$,
the critical particle diameter to fall out of the funnel flow is $\sim 4 \, {\rm mm}$, and is much
larger for higher mass accretion rates.
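For concreteness, the critical size works out as follows (a sketch, ours, taking
$\Omega \approx 1 \times 10^{-5} \, {\rm s}^{-1}$ near the X point):
\begin{verbatim}
import math

k, mH = 1.38e-16, 1.67e-24          # Boltzmann const, H mass [cgs]
rho_s, Omega, z = 3.0, 1e-5, 2e10   # particle density, frequency, height
T_p, mbar = 1500.0, 0.6 * mH        # particle temp [K], mean mol. mass [g]
Mdot = 1e-7 * 1.99e33 / 3.156e7     # mass accretion rate [g s^-1]
A = 1e24                            # funnel-flow footprint [cm^2]

a_crit = (math.pi * k * T_p / mbar) ** 0.5 * (Mdot / A) \
         / (4.0 * rho_s * Omega**2 * z)
print(2 * a_crit)                   # critical diameter: ~0.4 cm
\end{verbatim}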
The conclusion to be reached from all this is that solid material accreting inward, from
the disk, through the X point, will remain coupled to the gas as it participates in a funnel
flow, unless the solid material in the funnel flow is aerodynamically equivalent to
{\it compact} spheres, several millimeters in diameter.
Such particles cannot be chondrules and CAIs, since these are presumed not to form in the disk
in the X-wind model, and matrix grains are clearly too small to dynamically decouple from the
gas.
Agglomeratic chondrules are larger than matrix grains, with diameters $0.3 - 1 \, {\rm mm}$ typically,
but that is still too small to decouple from the funnel flow.
This is true even if they were compact objects in the nebula gas, but models of coagulation predict
that such aggregates would be fractal in shape (Dominik \& Tielens 1997).
It is quite possible these objects compacted only during accretion onto the parent body; if so,
they would have behaved aerodynamically like the smallest particles of which they are composed,
i.e., like micron-sized grains (Dominik \& Tielens 1997), making it even less likely that they could
have decoupled from the funnel flow.
Finally, the fact that agglomeratic chondrules make up only 2\% of the volume of ordinary
chondrites (Weisberg \& Prinz 1996), while chondrules make up 85\% (Gooding \& Keil 1981) is
difficult to reconcile with the idea that chondrules and CAIs formed, with low efficiency, from
such agglomerations.
Thus, there is no significant (i.e., at the $\sim 1\%$ level) component of solid material in the
disk that can be expected to decouple from the funnel flow.
The assumption that a fraction $F \sim 0.01$ of all solid material would leave the funnel flow
and enter the reconnection ring, an assumption Shu et al.\ (2001) themselves term ``{\it ad hoc},"
appears invalid.
Even if solid material existed at the X point, the fraction that would fall out of the funnel
flow would be $\ll 0.01$.
\subsection{Survival and Growth in the Reconnection Ring?}
The arguments above suggest that solids would not decouple from the funnel flow.
Assuming anyway that solid material can enter the reconnection ring, we examine the dynamics
of particles there, and also their growth and survival.
Growth of solid material in the reconnection ring is much dependent on the dynamics of particles,
because the relative velocities $w$ between particles will determine the sticking coefficient $S$, the
probability that the two particles will stick rather than bounce off or even destroy each other.
Shu et al.\ (2001) note (after their Equation 31) that $w$ is implicitly assumed to be small enough that
``molten rocks stick rather than splatter on colliding".
The upper limit on $w$ obviously will depend on particle composition and whether it is
molten or solid, but a typical upper limit adopted in the literature on compound chondrules,
which are molten as they collide, is $\sim 0.1 \, {\rm km} \, {\rm s}^{-1}$
(e.g., Gooding \& Keil 1981; Ciesla \& Hood 2004).
Dominik \& Tielens (1997) calculate that solid particles will on average shatter if they
collide at velocities $> 0.01 \, {\rm km} \, {\rm s}^{-1}$.
In any plausible scenario, however, falling out of the funnel flow would impart vertical velocities
to particles comparable to the Keplerian velocities, $\sim 10^2 \, {\rm km} \, {\rm s}^{-1}$,
essentially putting particles on orbits with different inclinations.
Necessarily, the relative velocities between particles will also be comparable to
these Keplerian velocities.
The gas drag forces acting on the particles in the reconnection ring are completely inadequate
to slow the incoming particles before they collide with and destroy particles already in the
reconnection ring (the surface density of gas, $\sim 10^{-5} \, {\rm g} \, {\rm cm}^{-2}$, will
not stop even micron-sized particles in less than dozens of disk crossings, while the optical
depth of particles in the reconnection ring is large enough to ensure an impact with every crossing).
Thus the actual relative velocities of colliding particles in the reconnection ring would exceed the
shattering limit, by orders of magnitude.
Put another way, so that particles in the reconnection ring do not collide and shatter, they must
exist in a very thin disk with low dispersion of relative velocities, $w_{z}$.
If we define, as Shu et al.\ (2001) do, $w_{z} \sim \alpha w$, where $\alpha \sim 0.3$, then
the scale height of the disk of proto-CAIs would have to be
$H_{r} \sim w_{z} / \Omega \sim 3 \times 10^{8} \, {\rm cm}$,
$\sim 10^{-2}$ times the scale height of the disk proper, in order for most particles
not to shatter each other on impact.
As particles would overwhelmingly exit the funnel flow at much greater heights above the disk, it is
inevitable that they would not collect in the reconnection ring, but rather shatter upon impact there.
We calculate the effect of all of these particles falling out of the funnel flow as follows.
Assuming the mass flux in the funnel flow is $\dot{M} \sim 10^{-7} \, M_{\odot} \, {\rm yr}^{-1}$,
and a fraction $\sim 10^{-2}$ of that is in the form of solids, of which a portion $\sim 10^{-2}$
decouples from the funnel flow, then the flux of particles into the reconnection ring is
$\sim 10^{-11} \, M_{\odot} \, {\rm yr}^{-1}$, or
$\sim 2 \times 10^{22} \, {\rm g} \, {\rm yr}^{-1}$.
Spreading out this flux of particles over the area of the reconnection ring
$\sim 10^{24} \, {\rm cm}^{2}$, we estimate a solid particle flux
$\sim 2 \times 10^{-2} \, {\rm g} \, {\rm cm}^{-2} \, {\rm yr}^{-1}$.
A growing CAI has a radius $> 100 \, \mu{\rm m}$ and a cross section $\sim 3 \times 10^{-4} \, {\rm cm}^{2}$,
and so intercepts a mass $> 6 \times 10^{-6} \, {\rm g} \, {\rm yr}^{-1}$ from solid particles falling
out of the funnel flow, or $\approx 2 \times 10^{-4} \, {\rm g}$ over 30 years.
This mass exceeds the mass of the growing CAI itself by a large factor, so it is easy to see
that a growing CAI will collide with many times its own mass over its residence time in the
reconnection ring, at speeds of tens of ${\rm km} \, {\rm s}^{-1}$ or more.
This alone will prevent particles from growing in this environment.
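A sketch of this mass budget (ours, with the fluxes and areas adopted above):
\begin{verbatim}
import math

Mdot = 1e-7 * 1.99e33              # funnel-flow mass flux [g yr^-1]
flux = Mdot * 1e-2 * 1e-2 / 1e24   # solids frac x decoupled frac / area
                                   # -> [g cm^-2 yr^-1]
a, rho_s = 1e-2, 3.0               # CAI radius (100 microns), density
swept = flux * math.pi * a**2 * 30.0        # mass intercepted in 30 yr [g]
m_CAI = (4.0/3.0) * math.pi * a**3 * rho_s  # CAI mass [g]
print(flux, swept, swept / m_CAI)
# -> ~2e-2 g/cm^2/yr; ~2e-4 g swept up, ~15x the CAI's own mass
\end{verbatim}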
Supposing anyway that the relative velocities are slow enough so that particles don't shatter, it still
is not clear that the sticking coefficient will be sufficient to allow growth.
Shu et al.\ (2001) suggest that $S$ might be low unless particles are molten, immediately following heating
by a flare.
Since flares have a limited extent and duty cycle, Shu et al.\ (2001) adopt an effective sticking coefficient
$S \sim 8 \times 10^{-4} \, (2\pi)^{1/2} \, \alpha$, or $S < 10^{-3}$, as typical.
To assume a higher value for $S$, particles would have to somehow stick even while completely solid.
Shu et al.\ (2001) calculate the mass flux onto a particle as
\begin{equation}
4\pi a^{2} \rho_{\rm s} \frac{d a}{dt}
\approx +\frac{3}{4 (2\pi)^{1/2}} \, \left( \Sigma_{\rm r} \Omega \right) \, \frac{S}{\alpha},
\end{equation}
where $\Sigma_{\rm r}$ is the assumed surface density of rock in the reconnection ring.
(NB: This appears to overestimate the growth rate by a factor $3(\pi/8)^{1/2} \sim 2$.)
The important points about this formula are that the time rate of change of particle radius
is independent of radius, and that the growth rate is proportional to the surface density of
rocky material, which only reaches a maximum value
$\sim \Sigma_{\rm r} \sim 1.6 \, {\rm g} \, {\rm cm}^{-2}$ about 30 years after the last
``flushing" of the reconnection ring.
It is smaller at earlier times (see Figure 4 of Shu et al.\ 2001).
For their preferred value of $S$, the growth rate seen in Figure 4 of Shu et al.\ (2001)
reaches its maximum, $d a / dt \sim +0.03 \, {\rm cm} \, {\rm yr}^{-1}$, only at late times
($\sim$10-30 years after a flushing).
At 1-2 years, the growth rates are much smaller, $< 3 \times 10^{-4} \, {\rm cm} \, {\rm yr}^{-1}$.
These growth rates are to be compared to the rate at which hydrogen ions in the plasma thermally
sputter the proto-CAIs, an effect neglected by Shu et al.\ (2001).
The density of hydrogen ions arises straightforwardly from the density of hydrogen gas Shu et al.\ (2001)
assume is trapped on field lines crossing the reconnection ring,
$\sim 5 \times 10^{-16} \, {\rm g} \, {\rm cm}^{-3}$.
Jones (2004) gives a simple formula for the sputtering rate in a hot plasma:
$d a / dt \sim -(n_{\rm H} / 10^{10} \, {\rm cm}^{-3} ) \, {\rm cm} \, {\rm yr}^{-1}$.
For $n_{\rm H} \sim 3 \times 10^{8} \, {\rm cm}^{-3}$, this means even a large 1 cm CAI will be completely
sputtered in only 30 yr.
A more detailed discussion can be found in Draine \& Salpeter (1979), who calculate that
in a $T \sim 10^7 \, {\rm K}$ plasma, each impacting H ion yields roughly 0.02 atoms liberated
from an impacted silicate (and 0.2 atoms per impact of He ions).
Given the flux of H atoms $n_{\rm H} v_{\rm T} / 4 \sim 3 \times 10^{15} \, {\rm cm}^{-2} \, {\rm s}^{-1}$
in the ring, it is straightforward to show that particles, again, shrink at a rate
$d a / dt \sim - 0.03 \, {\rm cm} \, {\rm yr}^{-1}$.
This is competitive with the fastest growth rates of the largest particles at about 30 years, implying
that for the sticking coefficient assumed by Shu et al.\ (2001), particles do not grow faster than
they are sputtered.
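The Draine \& Salpeter (1979) estimate just quoted can be checked with a few lines
(a sketch, ours; the mean mass per liberated atom, taken as 20 amu, is our assumption):
\begin{verbatim}
n_H, v_T = 3e8, 4e7       # ion density [cm^-3], thermal speed [cm s^-1]
Y = 0.02                  # atoms liberated per impacting H ion at ~1e7 K
m_atom = 20 * 1.67e-24    # mean mass per liberated atom [g] (assumed)
rho_s, yr = 3.0, 3.156e7  # silicate density [g cm^-3], seconds per year

flux = n_H * v_T / 4.0                        # ion flux [cm^-2 s^-1]
print(flux, -(flux * Y * m_atom / rho_s) * yr)
# -> ~3e15 cm^-2 s^-1; da/dt ~ -0.02 cm/yr (He ions add to this)
\end{verbatim}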
The sputtering rate is independent of particle size, and acts even when particles are small.
Thus, about 1 year after material has been flushed out of the reconnection ring, when the largest
particles are about 70 microns in radius (according to Shu et al.\ 2001), thermal sputtering
acts about 100 times faster than growth by vapor deposition.
Particles at this stage could only grow if the effective (time-averaged) sticking coefficient were $> 0.1$,
which is implausibly high.
Neglect of thermal sputtering by Shu et al.\ (2001) is a serious oversight; inclusion of this
effect shows that particles will not survive, nor grow, in the reconnection ring.
\subsection{Retrieval in a Magnetocentrifugal Outflow?}
Above we have argued that large particles cannot grow in the reconnection ring, because
they are likely to be sputtered before they grow, or are likely to collide fast enough to
shatter each other.
Assuming that particles do grow, and do have low relative velocities, then in principle
they could be launched in magnetocentrifugal outflows when the protostellar magnetic cycle
ebbs and the disk encroaches on the reconnection ring; but in practice it is not clear
that particles can be launched.
Gas orbiting the protostar is launched in a magnetocentrifugal outflow when it is tied to
magnetic field lines inclined from the vertical by a critical amount (more than $30^{\circ}$,
i.e., within $60^{\circ}$ of the disk surface).
Ionized gas is tied to magnetic field lines (because of flux freezing) like beads on a
wire; when these wires are inclined to the vertical and spun around an axis, the beads
tied to the wire are flung outward.
Because of symmetry, magnetic field lines are exactly vertical when they penetrate the midplane
of a protoplanetary disk; gas at the midplane will not be flung outward.
Wardle \& K\"{o}nigl (1993) have examined the vertical structure of accretion disks from
which gas is being magnetocentrifugally launched.
They find that such outflows are launched only from heights $z$ above the midplane in
excess of 2 gas pressure heights $H$.
In order to be launched in a magnetocentrifugal outflow, large particles must be located
at least $2 H$ above the midplane; if they are not, they will be tied to gas that is
not moving upward and is not being flung out along field lines.
For parameters typical of the inner edge of the disk ($T = 1500 \, {\rm K}$,
sound speed $C = 2.3 \, {\rm km} \, {\rm s}^{-1}$, $\Omega = 1 \times 10^{-5} \, {\rm s}^{-1}$),
the pressure scale height is $H \sim C / \Omega = 2 \times 10^{10} \, {\rm cm}$, and particles
must reach heights $z > 4 \times 10^{10} \, {\rm cm}$ above the midplane to be launched.
The actual vertical distribution of particles as the disk encroaches on them is much smaller,
though, on the order of $w_{z} / \Omega$.
As $w_{z} < 0.03 \, {\rm km} \, {\rm s}^{-1}$ by necessity (or else proto-CAIs would shatter
on impact and never grow), their vertical distribution is limited to
$z < 3 \times 10^{8} \, {\rm cm}$, at least initially.
Without some intervening mechanism to vertically spread them, these CAIs will never be launched.
Shu et al.\ (2001) do not identify such a mechanism.
The most plausible mechanism for lofting large particles above the midplane is turbulence,
perhaps driven by a magnetorotational instability (MRI) acting at the X point, as Shu
et al.\ (2001) suggest acts to transfer gas across the X point.
Several opposing constraints must be satisfied for this to occur.
According to the X wind model,
the magnetic diffusivity of the gas must be sufficiently high that mass can diffuse off of
field lines threading the disk, and onto field lines tied to the star; but the diffusivity
can not be so high that it suppresses the MRI generating the turbulence.
It is not clear these conditions can both be met: a magnetic diffusivity
$> 0.3 \, H^2 \, \Omega \sim 2 \times 10^{15} \, {\rm cm}^{2} \, {\rm s}^{-1}$ at the
X point will suppress the MRI (e.g., Desch 2004); on the other hand, for matter to radially diffuse a
distance $\sim 0.1 R_{x}$ in 1 yr requires a comparable diffusivity
$> 2 \times 10^{14} \, {\rm cm}^2 \, {\rm s}^{-1}$.
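The two diffusivity bounds can be compared directly (a sketch, ours, with $H$, $\Omega$,
and $R_{\rm x}$ as above):
\begin{verbatim}
H, Omega, R_x = 2e10, 1e-5, 1e12   # scale height, frequency, X-point radius
yr = 3.156e7                       # [s]

eta_mri = 0.3 * H**2 * Omega       # diffusivity that suppresses the MRI
eta_acc = (0.1 * R_x)**2 / yr      # diffusivity to cross 0.1 R_x in 1 yr
print(eta_mri, eta_acc)            # ~1e15 vs ~3e14 cm^2/s: a narrow window
\end{verbatim}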
At any rate, detailed modeling of the X point is required before the MRI can be
invoked as a source of turbulence, let alone yield the exact turbulence needed to loft
CAI-sized particles.
In the absence of such a mechanism, the proto-CAIs in the reconnection ring will retain
whatever vertical distribution they exhibited there, and they will not be launched.
A lack of detailed modeling also hinders judgment of the last element of launching
in the magnetocentrifugal outflow, the final trajectories taken by launched CAIs.
Examples of calculated trajectories are presented in Shu et al.\ (1996) but the
calculations on which they are based have not appeared in the refereed literature.
One conclusion about these trajectories that is probably robust is that the trajectories
taken by specific particles are {\it highly} sensitive to their aerodynamic properties.
Shu et al.\ (1996) define a parameter $\alpha$, inversely proportional to the product of a
particle's density and radius.
Particles with identical $\alpha$ will follow identical trajectories, but particles with
slightly differing $\alpha$ will follow greatly varying trajectories.
A factor of 2 variation in particle size is the difference between falling back onto
the disk at 0.2 AU and leaving the solar system altogether.
Given this sensitivity, it is not clear that many particles would be of the right size to
be launched on trajectories that deposit them in the 2-3 AU region.
\section{X-Wind Model Predictions and Meteoritic Constraints}
The X-wind model, as reviewed above, has many internal inconsistencies.
It also makes predictions about the formation of chondrules and CAIs that are
inconsistent with their petrology and other meteoritic constraints.
Formation of SLRs in their meteoritic abundances also faces difficulties
in the context of the X-wind model.
These inconsistencies are discussed in this section.
\subsection{Chondrule Formation}
The X-wind model is inconsistent with the thermal histories of chondrule formation, constraints
on which were discussed in \S 2.
The disk temperatures just outside the X point, where chondrules form in the X-wind model,
are typically $> 1160 \, {\rm K}$, far higher than the temperatures ($\approx 650 \, {\rm K}$)
required to condense primary sulfur.
The cooling rates of chondrules in the X-wind model are $\sim 10 \, {\rm K} \, {\rm hr}^{-1}$
for all particles.
These cooling rates match those required to produce porphyritic chondrule textures as they
pass through their crystallization temperatures; but they are not consistent with the cooling
rates of barred olivine chondrules, $250 - 3000 \, {\rm K} \, {\rm hr}^{-1}$.
They also are not consistent with the much more rapid cooling rates above the liquidus, needed to
retain volatiles such as S and Na.
Finally, the correlation between chondrule cooling rate and the compound chondrule frequency, which
is a robust prediction of the nebular shock model (Desch \& Connolly 2002; Ciesla \& Hood 2002), is
unexplained by the X-wind model.
Some aspects of the chondrule formation environment in the X-wind model are consistent with constraints,
others not.
The chondrule formation environment is not explicitly modeled within the X-wind model, but we
can estimate the gas density.
Adopting a minimum-mass solar nebula profile (Weidenschilling 1977a), we infer a gas density
$\approx 2 \times 10^{-6} \, {\rm g} \, {\rm cm}^{-3}$ at 0.05 AU, or higher if the disk mass
exceeds the minimum-mass solar nebula value.
Assuming a typical solids/gas density ratio $5 \times 10^{-3}$ and a typical chondrule mass
$\approx 3 \times 10^{-4} \, {\rm g}$, we infer a number density of chondrules
$\approx 30 \, {\rm m}^{-3}$.
This is slightly higher but not inconsistent with the density of chondrules based on compound
chondrule frequency and volatile retention.
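For reference, this estimate amounts to (a sketch, ours):
\begin{verbatim}
rho_g, solids, m_ch = 2e-6, 5e-3, 3e-4   # gas density, solids/gas, mass (cgs)
print(rho_g * solids / m_ch * 1e6)       # chondrules per m^3: ~30
\end{verbatim}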
One prediction by the X-wind model about the chondrule formation environment is robust, though:
chondrules were heated near 0.1 AU and launched to the 2-3 AU region, where they joined cold
dust that had never been heated.
This is inconsistent with the presence of matrix dust that was indeed heated to high temperatures,
even condensed, in the chondrite-forming region (Scott \& Krot 2005).
Micron-sized matrix grains launched by the X-wind are predicted to not fall back on the disk, so it
is difficult to explain the presence of such grains.
Moreover, matrix grains and chondrules within a given chondrite are chemically complementary (at
least in their refractory lithophiles), meaning that chondrules and matrix grains are derived from
the same batch of solar-composition material.
Finally, the X-wind model predicts that chondrules and CAIs are formed contemporaneously, and offers
no explanation for the observed time difference $\sim 2$ Myr between CAI and chondrule formation.
\subsection{CAI Formation}
One of the successes of the X-wind model was its prediction that comets would contain
CAIs (Shu et al.\ 1996), like the inclusion {\it Inti} retrieved by the {\it STARDUST} mission
from comet Wild 2 (Zolensky et al.\ 2006), although other physical models also predict outward
transport of CAIs in the disk (Desch 2007; Ciesla 2007).
The X-wind model is inconsistent with many other aspects of CAI formation.
It is a robust prediction of the X-wind model that CAIs should evaporate and recondense in a very
oxidizing environment.
According to Shu et al.\ (2001), the density of hydrogen gas in the reconnection ring
is ${\cal C}^{-1} \, \times (2 \times 10^{-16}) \,{\rm g} \, {\rm cm}^{-3}$, where
${\cal C}$ is a dimensionless quantity near unity (see discussion before their equation 12).
Alternatively, they estimate the electron density in this region to be
$n_{\rm e} \approx 3 \times 10^{8} \, {\rm cm}^{-3}$.
For an ionized hydrogen gas, this yields a density
$5 \times 10^{-16} \, {\rm g} \, {\rm cm}^{-3}$, which is the value we adopt.
In the X-wind model, proto-CAIs grow by condensation following large flares that evaporate
much of the solid material.
Following an event that evaporates all of the ferromagnesian mantle material from proto-CAIs,
Shu et al.\ (2001) estimate (their \S 5.1) a surface density $\sim 1.6 \, {\rm g} \, {\rm cm}^{-2}$
of rocky material (presumably FeO, MgO and ${\rm SiO}_{2}$) in the gas phase.
Initially this material is confined to the volume occupied by the thin disk of proto-CAIs, but it
will thermally expand.
If it is allowed to expand along field lines more than $\sim 10^{12} \, {\rm cm}$ above the
reconnection ring, the gas will be lost to the protostar; Shu et al.\ (2001) assert that
the gas will cool before that time.
At any rate, the very lowest density the rock vapor can have corresponds to the maximum
vertical distribution of about $10^{12} \, {\rm cm}$, which yields a density of rock vapor
$\sim (1.6 \, {\rm g} \, {\rm cm}^{-2}) / (2 \times 10^{12} \, {\rm cm})$
$\sim 1 \times 10^{-12} \, {\rm g} \, {\rm cm}^{-3}$.
That is, the mass density of heavy elements is 2000 times the density of hydrogen.
This is to be compared to the ratio in a solar-composition gas, $\sim 0.015$.
Expressed as an oxygen fugacity, it is seen that CAI materials condense out of a gas that
is over 5 orders of magnitude more oxidizing than a solar-composition gas, i.e., with
$f{\rm O}_{2} \approx {\rm IW} - 1$.
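A sketch of this fugacity estimate (ours; the solar metal-to-hydrogen mass ratio of 0.015
is the value quoted above):
\begin{verbatim}
import math

rho_rock = 1.6 / (2.0 * 1e12)   # rock column / vertical extent [g cm^-3]
rho_H = 5e-16                   # hydrogen density in the ring [g cm^-3]
solar = 0.015                   # solar metal/H mass ratio

ratio = rho_rock / rho_H
print(ratio, math.log10(ratio / solar))   # ~2000, ~5 dex more oxidizing
\end{verbatim}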
The high oxygen fugacity of the gas in the reconnection ring during the times when
gas is condensing onto proto-CAIs is completely inconsistent with the barometers of
oxygen fugacity such as Ti valence states in fassaite and rh\"{o}nite, which imply
a near solar-composition gas (Krot et al.\ 2000).
It is also inconsistent with the condensation of osbornite in some CAIs, especially in
the object known as {\it Inti} in the {\it STARDUST} sample return (Meibom et al.\ 2007);
the osbornite also must have condensed in a solar composition gas (Ebel \& Grossman 2000).
Indeed, the presence of N in the reconnection ring in the first place may itself be
problematic, as it should be quickly swept up in the funnel flow.
\subsection{Radionuclide Production}
The X-wind model was developed to explain the abundances of the SLRs ${}^{41}{\rm Ca}$,
${}^{26}{\rm Al}$, ${}^{53}{\rm Mn}$ and ${}^{10}{\rm Be}$ together, but in fact the model
has difficulty matching the meteoritic abundances of these SLRs.
In the X-wind model, production of $^{26}$Al without overproducing $^{41}$Ca requires that
ferromagnesian silicate mantles surround CAI-like refractory cores, and that the two components form
immiscible melts during heating.
This absolute need arises because in their model ${}^{41}{\rm Ca}$ is produced by spallation of
${}^{40}{\rm Ca}$, whereas ${}^{26}{\rm Al}$ is produced from spallation of Mg.
Without sequestration of Ca in a core, beneath a mantle $> 1 \, {\rm mm}$ thick that shields
it from solar energetic particles, ${}^{41}{\rm Ca}$ is consistently overproduced in the
X-wind model, relative to ${}^{26}{\rm Al}$.
Shu et al.\ (2001) argue that Ca and Al should be sequestered in a core using theoretical
arguments, but experiments consistently show that Ca,Al-rich silicates have a lower melting
point than ferromagnesian silicates and do not form immiscible melts as Shu et al.\ (2001)
describe, instead being well mixed (Simon et al.\ 2002).
Significantly, both radionuclides are underproduced relative to ${}^{10}{\rm Be}$ in the X-wind
model.
This is because the dominant target nucleus, ${}^{16}{\rm O}$, is distributed throughout the
CAI, and because the reaction proceeds most rapidly due to higher energy
($\sim 50 \, {\rm MeV} \, {\rm nucleon}^{-1}$) solar energetic particles that can penetrate the CAI.
Gounelle et al.\ (2001) were able to marginally co-produce ${}^{26}{\rm Al}$ and ${}^{10}{\rm Be}$
using a theoretically derived rate for the reaction ${}^{24}{\rm Mg}({}^{3}{\rm He},p){}^{26}{\rm Al}$.
In fact, this reaction rate has been experimentally measured and found to be 3 times smaller than
Gounelle et al.\ (2001) had assumed (Fitoussi et al.\ 2004), meaning that ${}^{10}{\rm Be}$ is
overproduced by at least a factor of 3 relative to ${}^{26}{\rm Al}$ in CAIs in the X-wind model.
Recent modeling of radionuclide production in the X-wind environment confirms the overabundance
of ${}^{10}{\rm Be}$ relative to ${}^{26}{\rm Al}$ (Sahijpal \& Gupta 2009).
The discrepancy is worsened if, in fact, the majority of ${}^{10}{\rm Be}$ comes from trapped
GCRs, as advocated by Desch et al.\ (2004).
The X-wind model is not capable of explaining the presence of ${}^{60}{\rm Fe}$ in the early solar system.
The neutron-rich isotope ${}^{60}{\rm Fe}$ is underproduced relative to other radionuclides
(e.g., ${}^{26}{\rm Al}$) by orders of magnitude (Leya et al.\ 2003; Gounelle 2006).
In order to explain the abundance of ${}^{60}{\rm Fe}$ in the solar nebula, a separate, nucleosynthetic
source is required, probably a single nearby supernova (or a small number of nearby supernovae),
which could have injected many other SLRs at the same time.
The presence of ${}^{36}{\rm Cl}$ also does not appear to be explained by the X-wind model.
Its presence in the solar nebula has been interpreted as evidence for a late stage of irradiation
within the solar nebula, producing ${}^{36}{\rm Cl}$ by direct bombardment of target nuclei by
energetic ions (Lin et al.\ 2005; Hsu et al.\ 2006; Jacobsen et al.\ 2009).
The X-wind model provides a natural environment for irradiation to take place, but production
of ${}^{36}{\rm Cl}$ requires irradiation of the target nuclei S, Cl, Ar, and K.
The 50\% condensation temperatures of all of these elements exceed 1000 K (Lodders 2003), so at
the X point none of these elements will condense.
If any of these elements are carried into the reconnection ring, they will quickly evaporate
and join the funnel flow and be accreted onto the star.
Significantly, if ${}^{36}{\rm Cl}$ were created in the reconnection ring, it would fail to
recondense following the evaporation of CAI material.
The presence of live ${}^{36}{\rm Cl}$ in meteoritic inclusions perhaps implies irradiation, but
only in a relatively cold environment ($< 1000 \, {\rm K}$), far cooler than the X-wind model
predicts.
The fact that the ${}^{36}{\rm Cl}$ occurs in late-stage alteration products like sodalite
also argues against production at the same time CAIs were forming.
Within the context of the X-wind model, the SLRs ${}^{10}{\rm Be}$, ${}^{41}{\rm Ca}$, ${}^{26}{\rm Al}$
and ${}^{53}{\rm Mn}$ are co-produced in their observed proportions only after making assumptions
about the behavior of CAI melts and the cross section of the ${}^{16}{\rm O}(p,x){}^{10}{\rm Be}$
reaction that are not justified.
In particular, ${}^{10}{\rm Be}$ is likely to be overproduced significantly relative to other SLRs
in the X-wind environment.
The X-wind model also provides no explanation for ${}^{60}{\rm Fe}$ and ${}^{36}{\rm Cl}$ in the
early solar system, and these SLRs must have a separate origin, perhaps a nearby supernova or
irradiation in colder regions of the disk.
It is likely that these other sources would contribute to the inventories of other SLRs as well.
This is not to rule out contributions from the X-wind, but to point out that the X-wind model must
be seen as one model among many alternatives.
We now consider the ability of alternative models to explain chondrule and CAI formation, and the
origins of the SLRs.
\section{Alternatives to the X Wind}
The X-wind model attempted to connect three distinct problems in meteoritics
to a single astrophysical model, to advance the field toward ``an astrophysical
theory of chondrites".
The problems of chondrule formation, CAI formation, and the origins of the SLRs
are not wholly unconnected.
On the other hand, extensive petrological and cosmochemical measurements had
already led, and have continued to lead, the meteoritics community to develop
detailed theories for each of these problems.
We summarize these here, to provide the astrophysics community with a current
review of these fields, and to provide a comparison for the X wind model, so
that its successes and failures can be put into a proper perspective.
\subsection{Chondrule Formation}
At this time, the leading model for chondrule formation is passage through
nebular shock waves, in the protoplanetary disk.
The model was first proposed by Wood (1963) and subsequently developed by
Hood \& Horanyi (1991, 1993), Connolly \& Love (1998), Hood (1998), Iida et al.\ (2001),
Desch \& Connolly (2002), Ciesla \& Hood (2002), Miura \& Nakamoto (2006),
and Morris \& Desch (2010).
Reviews of chondrule formation and the shock model can be found in
Jones et al.\ (2000), Connolly \& Desch (2004), Desch et al.\ (2005),
Hewins et al.\ (2005), and Connolly et al.\ (2006).
Two leading candidates for the source of the shocks are gravitational instabilities
that drive spiral shocks through the disk, or bow shocks around planetesimals on
eccentric orbits.
Gravitational instabilities would naturally produce large shocks at high speeds
compatible with the shock models, if the disk can be shown to be unstable
(Boss \& Durisen 2005; Boley \& Durisen 2008).
Because instability requires a cold, massive disk, it may be delayed until
mass piles up in the disk and the disk cools; a delay of 2 Myr is not
unreasonable.
Planetesimal bow shocks should be ubiquitous if planetesimals form early
(by some process that does not rely on chondrule formation) and Jupiter can
pump up the eccentricities of these bodies (Hood et al.\ 2009).
Formation of a massive Jupiter might take 2 Myr, so a delay between CAI and
chondrule formation is again not unreasonable.
The two shock models and their relative merits are discussed further by
Desch et al.\ (2005).
In either model of chondrule formation by shocks, chondrule precursors are
melted in the disk, at about 2-3 AU, in the presence of dust, thereby complying
with the constraints of chondrule-matrix complementarity and the presence of
condensate grains discussed above.
Turbulence in the disk is capable of generating regions of varying chondrule
density (Cuzzi et al.\ 2001, 2008; Teitler et al.\ 2009),
with concentrations exceeding $10^2$ times the mean on lengthscales $\sim 10^4 \, {\rm km}$
(Hogan \& Cuzzi 2007; Cuzzi et al.\ 2008).
The shock wave is presumed to advance through the disk, and individual chondrules
would be melted in microenvironments varying in chondrule density and oxidation
state.
The models of Desch \& Connolly (2002) and Ciesla \& Hood (2002), as well
as Morris \& Desch (2010), are in general agreement and calculate similar
thermal histories for chondrules.
A typical case is depicted in Figure 1, for a pre-shock gas density
$10^{-9} \, {\rm g} \, {\rm cm}^{-3}$, chondrule-to-gas mass ratio of $3.75\%$
and shock speed $8 \, {\rm km} \, {\rm s}^{-1}$.
The disk gas is presumed to be cold enough to condense S, because at the time
of chondrule formation, 2 Myr after CAI formation, the disk is in the passively
heated protoplanetary disk stage (Chiang \& Goldreich 1997).
As the shock advances, radiation from already heated chondrules escapes to the
pre-shock region, pre-heating chondrules (perhaps forming melt that draws
fluffy aggregates into compact spheres before the shock hits).
Peak temperatures are reached immediately after the shock hits and are
$\approx 2000 \, {\rm K}$ for these parameters.
Peak temperatures are attributable to the combination of absorption of other
chondrules' radiation, thermal exchange with the compressed, heated gas, and the
drag heating as the chondrules equilibrated to the gas velocity.
This drag heating disappears in one aerodynamic stopping time, about 1 minute,
implying initial cooling rates $\sim 10^4 \, {\rm K} \, {\rm hr}^{-1}$.
Chondrules then cool from about 1700 K at rates set by how quickly they recede many optical
depths from the shock front, $\sim 10 - 10^2 \, {\rm K} \, {\rm hr}^{-1}$ depending
on the density of chondrules, which provide the opacity (dust is predicted to
evaporate in the shock: Morris \& Desch 2010).
The shock model predicts the cooling rate through the crystallization temperatures
is proportional to the chondrule density.
The two stages of cooling and the cooling rate proportional to chondrule density are
robust predictions unique to the shock model.
These are to be compared to the thermal histories of chondrules in the X wind model,
superimposed on Figure 1.
Parameters for the ``revealed stage", in which $\dot{M} = 1 \times 10^{-7} \, M_{\odot} \, {\rm yr}^{-1}$,
were adopted.
Temperatures in the X wind model are too high initially to condense S (at least in a
near-solar composition gas), do not heat by more than a few hundred K, do not reach
temperatures several hundred K above the liquidus, and do not exhibit two stages of
cooling with fast initial cooling rate and slower cooling rate at lower temperatures.
The chondrules' cooling rates also are not proportional to the chondrule density.
To summarize, the shock model conforms to many constraints that the X wind model does not.
It predicts thermal histories with cold initial temperature, rapid rise to the correct
peak temperatures, rapid cooling at first, then slow cooling through the crystallization
range.
The X wind model predicts high initial temperatures, a limited temperature increase to the
peak temperature, and a single cooling rate from the peak temperature.
The shock model predicts that chondrule cooling rates, which determine textures, are
proportional to the chondrule density, explaining why barred olivine textures, which demand
fast cooling rates and therefore high chondrule densities, are more prevalent in compound chondrules.
The X wind model predicts no correlation of cooling rate with chondrule density, and no
correlation of chondrule texture with compound chondrule frequency.
The shock model is consistent with formation in the disk and therefore both the presence
of condensate grains and chondrule-matrix chemical complementarity.
The X wind model would predict no correlation between chondrules and the matrix in which
they are sited, and explicitly predicts that matrix grains have never been heated.
Either of the proposed mechanisms for shocks, gravitational instability and planetesimal
bow shocks, is compatible with a 2 Myr delay between CAI and chondrule formation.
The X wind model predicts contemporaneous production of CAIs and chondrules.
The shock model makes detailed predictions about the chondrule formation environment
and the thermal histories of chondrules; the X wind model is less detailed, but where it
makes predictions these often fail to conform to constraints.
The data overwhelmingly support an origin for chondrules in the disk, melted by
nebular shocks, rather than formation in the X-wind environment.
\subsection{CAI Formation}
The formation of CAIs is a major unsolved problem in meteoritics.
Fluffy Type A CAIs and the precursors of other, melted, CAIs contain refractory
minerals that condense at high temperatures (Grossman 2002; MacPherson 2003).
Barometers of oxygen fugacity constrain this gas to be as reducing as one of
solar composition.
These factors point to condensation in the solar nebula, at a stage when it was
very hot, implying formation at an early time and/or location closer to the Sun.
At temperatures $\approx \, 1500 - 1650 \, {\rm K}$, for example, hibonites
and other Ca,Al-rich minerals in CAIs would condense, but ferromagnesian silicates
would not (Lodders 2003).
Models of the structure of protoplanetary disks that include realistic opacity,
convection and viscous heating predict temperatures $> 1400 \, {\rm K}$ only
inside about 0.5 AU, even if the mass accretion rate through the disk is as high
as $\dot{M} = 10^{-7} \, M_{\odot} \, {\rm yr}^{-1}$, a stage that can only last
for $\sim 0.5 \, {\rm Myr}$ or less.
Formation of CAIs during this restricted time of the disk's evolution is consistent
with the inferred spread in CAI ages $\approx 0.4 \, {\rm Myr}$, derived from Al-Mg
systematics (MacPherson et al.\ 1995; Kita et al.\ 2005, 2010; Shahar \& Young 2007).
The main objection to this straightforward interpretation is the so-called ``CAI storage
problem", the perceived inability of solids to remain in the protoplanetary disk for
the $\sim 2 \, {\rm Myr}$ needed so that CAIs can join chondrules in chondrites.
Aerodynamic drag, in particular, is expected to cause CAIs to spiral in toward the
Sun on timescales $\sim 10^5 \, {\rm yr}$ (Weidenschilling 1977b).
Cuzzi et al.\ (2003) have shown, however, that while the majority of CAIs may migrate
inward on $10^5 - 10^6 \, {\rm yr}$ timescales, turbulence causes CAIs to diffuse outward
as well on the same timescales.
This model predicts that smaller CAIs should diffuse outward more effectively than larger
particles, explaining the greater prevalence of Type A CAIs relative to the larger Type B
CAIs.
Within the context of the same model, Cuzzi et al.\ (2005a, 2005b) have also shown
that CAIs experience high temperatures for the long ($\sim 10^4 - 10^5 \, {\rm yr}$)
timescales needed for elements to diffuse across the so-called Wark-Lovering rims
observed around many CAIs.
The igneous textures of most CAIs are potentially explained by passage through nebular
shocks, in much the same manner as chondrules are presumably melted.
The peak temperatures and cooling rates are consistent with this scenario.
It is not clear whether shocks that melted CAIs would have been identical to the
ones that melted chondrules, or perhaps were just due to the same mechanism but
acting in a different environment.
It is also not clear that such shocks could have acted at the times needed to melt CAIs.
In principle, however, shocks acting in the disk over many Myr could explain the
igneous textures of most CAIs.
Thus, storage of CAIs in the disk is not only allowed by disk models, but may be
necessary to explain their mineralogy and textures.
The scenario outlined above is consistent with the mineralogy of CAIs, especially
formation of CAIs in a reducing gas.
In contrast, the X-wind model predicts that CAIs should condense in their own rock vapor,
devoid of almost all ${\rm H}_{2}$ gas, and is not consistent at all with the low
oxygen fugacity recorded by CAIs during their formation.
The scenario outlined above also is consistent with an early formation of CAIs over a
short interval, whereas the X wind model predicts that CAIs should form continuously
over many Myr.
\subsection{Short-lived Radionuclides}
\subsubsection{Iron 60 and Others}
Essentially the only explanation for the presence of ${}^{60}{\rm Fe}$ in the early solar
system is that it was injected into the solar nebula by one nearby supernova, or a small
number of nearby supernovae (Goswami et al.\ 2005; Meyer \& Zinner 2006; Wadhwa et al.\ 2007).
Irradiation within the solar nebula or in the X-wind environment fails to produce the
observed initial abundance of this neutron-rich isotope, by many orders of magnitude
(Leya et al.\ 2003; Gounelle 2006).
An external stellar nucleosynthetic source is demanded.
An AGB star has been suggested as the source (Wasserburg et al.\ 1994,
1995, 1996, 1998), but isotopic evidence argues against an AGB star origin (Wadhwa et al.\ 2007),
as does the exceedingly low probability of a nearby AGB star at the time and place of the
solar system's formation (Kastner \& Myers 1994; Ouellette et al.\ 2009).
The only plausible stellar source is a core-collapse supernova, because massive stars
($> 20 \, M_{\odot}$) can evolve off the main sequence and explode as supernovae in
$< 10 \, {\rm Myr}$, before they disperse from their birth clusters.
It is currently debated whether the solar nebula's ${}^{60}{\rm Fe}$ originated in a single
supernova, less than 1 pc away, or in many supernovae several parsecs distant.
Constraining which scenario applies is important for determining what radionuclides are
injected along with ${}^{60}{\rm Fe}$.
For a single supernova to suffice, it must be nearby, less than several parsecs away
(Looney et al.\ 2006).
Injection by a single supernova, into the Sun's protoplanetary disk (Chevalier 2000), has been
advocated by Ouellette et al.\ (2005, 2007, 2010), who show that sufficient ${}^{60}{\rm Fe}$
could be injected into an extant protoplanetary disk if it were a few $\times 0.1 \, {\rm pc}$
from an isotropically exploding supernova, or up to a few parsecs away from a supernova as clumpy
as the ejecta in the Cassiopeia A supernova remnant (see also Looney et al.\ 2006).
Gounelle \& Meibom (2008) and Gaidos et al.\ (2009) have argued that young ($< 1 \, {\rm Myr}$
old) disks $< 1 \, {\rm pc}$ from a supernova are rare, occurring with $< 1\%$ probability;
Ouellette et al.\ (2010) likewise calculate a low probability $\sim 1\%$ for a disk at 2 pc
to be struck by ejecta.
Additionally, injection into the disk requires much of the ejecta to condense into dust grains
before encountering the disk: simulations show $< 1\%$ of the intercepted gas ejecta is injected
into a disk (Ouellette et al.\ 2007).
Injection of gas into a molecular cloud, instead of a disk, in principle can occur as far as a
few parsecs (Looney et al.\ 2006; Gaidos et al.\ 2009), but here again the injection efficiency
of gas ejecta is $\sim 1\%$ (Boss et al.\ 2010).
Recent models of supernova shock-triggered collapse by Boss \& Keiser (2010) do exhibit
shock fronts that are thinner and denser than those previously considered, and may allow
for greater injection efficiencies.
At this point, injection from a single supernova into either a protoplanetary disk or
molecular cloud are viable models, although they might entail improbable circumstances.
Gounelle et al.\ (2009) have proposed that the gas from which the Sun formed was
contaminated by several dozen core-collapse supernovae, then swept up into a molecular
cloud several Myr before the solar system formed.
Their ``Supernova Propagation and Cloud Enrichment" (SPACE) model invokes an astrophysical
setting like the Scorpius-Centaurus star-forming region, in which massive stars have triggered
collapse of nearby molecular clouds (either by winds or supernova shocks), triggering a
new round of massive star formation and supernovae (cf.\ Preibisch \& Zinnecker 1999).
In their model, Gounelle et al.\ (2009) computed an average value
${}^{60}{\rm Fe} / {}^{56}{\rm Fe} \approx 3 \times 10^{-6}$ in a molecular cloud
over a 10-20 Myr span, assuming a half-life of 1.5 Myr; updating the half-life to
2.3 Myr (Rugel et al.\ 2009) potentially could raise the ${}^{60}{\rm Fe}$ abundance
by an order of magnitude, assuming the molecular cloud takes 10 Myr to form the Sun.
A weakness of the model is that the supernova ejecta is assumed to mix into the
swept-up material with 100\% efficiency.
Simulations of supernova ejecta interacting with protoplanetary disks (Ouellette et al.\
2007) and molecular clouds (Boss et al.\ 2010) typically find mixing efficiencies
$\sim 1\%$.
Gounelle et al.\ (2009) argue for high mixing efficiencies on the basis of simulations
of the thermal instability in interstellar shocks that do suggest high mixing ratios
(Koyama \& Inutsuka 2002; Audit \& Hennebelle 2010).
These latter simulations, it should be noted, involve shock speeds of only a few
$\times 10 \, {\rm km} \, {\rm s}^{-1}$, for which the post-shock temperature is
$< 10^{4} \, {\rm K}$ and is consistent with a thermally unstable gas.
The shock speeds associated with supernova ejecta less than a few parsecs from
the explosion center are necessarily $\sim 10^{3} \, {\rm km} \, {\rm s}^{-1}$,
and in these shocks the post-shock gas is too hot to cool effectively.
We expect the mixing efficiency of supernova ejecta with swept-up gas to be
closer to 1\% than 100\%, and consider the mixing efficiency to be an unresolved issue
with the SPACE model.
Assuming the validity of either model, we can estimate the abundances of other
radionuclides injected along with ${}^{60}{\rm Fe}$, especially the shortest lived
of the SLRs, ${}^{41}{\rm Ca}$, ${}^{36}{\rm Cl}$, ${}^{26}{\rm Al}$, ${}^{10}{\rm Be}$
and ${}^{53}{\rm Mn}$.
Neither model is capable of explaining ${}^{10}{\rm Be}$, which is not created
by stellar nucleosynthesis; the case of ${}^{10}{\rm Be}$ is considered separately
below.
As for the others, it has been demonstrated that a single supernova can inject
the other radionuclides in the observed meteoritic proportions, provided the
progenitor is $> 20 \, M_{\odot}$ so that when it undergoes core collapse it may
result in the ``faint supernova" type in which the innermost layers fall back onto
the core (Umeda \& Nomoto 2002, 2005; Nomoto et al.\ 2006; Tominaga et al.\ 2007).
Because essentially all of the ${}^{53}{\rm Mn}$ in a supernova is produced in the
innermost $3 \, M_{\odot}$ (Nomoto et al.\ 2006), fallback of ejecta reduces the
${}^{53}{\rm Mn} / {}^{26}{\rm Al}$ ratio in the ejecta by orders of magnitude,
resulting in the observed meteoritic proportions of ${}^{41}{\rm Ca}$, ${}^{26}{\rm Al}$,
${}^{60}{\rm Fe}$ and ${}^{53}{\rm Mn}$ in the ejecta, assuming a reasonable 1 Myr
delay before isotopic closure (Takigawa et al.\ 2008).
The abundance of ${}^{36}{\rm Cl}$ in the early solar system appears to be too high
to be explained by injection from a single supernova (see discussion in Hsu et al.\
2006).
Injection of material from a single nearby supernova, either into the disk or into
the Sun's molecular cloud core, can simultaneously explain the abundances of the
other shortest-lived radionuclides ${}^{41}{\rm Ca}$, ${}^{26}{\rm Al}$,
${}^{60}{\rm Fe}$ and ${}^{53}{\rm Mn}$, if the progenitor was a massive star
experiencing fallback.
Within the context of the SPACE model, injection of ${}^{60}{\rm Fe}$ from
multiple supernovae may yield the meteoritic ${}^{26}{\rm Al} / {}^{60}{\rm Fe}$
ratio in the solar nebula, but cannot explain the abundances of ${}^{41}{\rm Ca}$
and ${}^{53}{\rm Mn}$.
The SPACE model does not lead to significant quantities of SLRs with half-lives
$< 1 \, {\rm Myr}$, because of the long timescales (10-20 Myr) associated with
the formation of the molecular cloud, so ${}^{41}{\rm Ca}$ and ${}^{36}{\rm Cl}$ would
be significantly underproduced.
This underproduction is inconsistent with studies that indicate a correlation between
${}^{26}{\rm Al}$ and ${}^{41}{\rm Ca}$ (Sahijpal \& Goswami 1998), unless ${}^{26}{\rm Al}$
is not derived primarily from these multiple supernovae.
Likewise, the SPACE model unavoidably and significantly {\it over}produces ${}^{53}{\rm Mn}$
(Gounelle et al.\ 2009).
Models of supernova ejecta generally show a ${}^{53}{\rm Mn} / {}^{60}{\rm Fe}$ ratio
10 - 100 times larger than the solar nebula ratio inferred from meteorites (Goswami \&
Vanhala 2000; Wadhwa et al.\ 2007; Sahijpal \& Soni 2006).
This general trend does not apply to ejecta from a single supernova, if the supernova's
progenitor was $> 20 \, M_{\odot}$ and experienced fallback, but considering the average ejecta
of dozens of supernovae of various masses, this outcome appears inevitable.
To summarize, ${}^{60}{\rm Fe}$ cannot be formed by the X-wind model and requires an
external supernova source.
The multiple supernovae in the SPACE model of Gounelle et al.\ (2009) can explain
the abundance of ${}^{60}{\rm Fe}$ in the early solar system, but only if mixing
efficiencies approach unity.
Production of ${}^{41}{\rm Ca}$, ${}^{36}{\rm Cl}$ and ${}^{26}{\rm Al}$ in the X-wind
environment would not conflict with production of ${}^{60}{\rm Fe}$ in the SPACE model,
but the SPACE model inevitably and significantly overproduces ${}^{53}{\rm Mn}$,
making it incompatible with the X-wind model for SLR production, which also contributes
to ${}^{53}{\rm Mn}$.
Because of this severe overproduction of ${}^{53}{\rm Mn}$ relative to ${}^{60}{\rm Fe}$,
and because we expect mixing ratios of supernova ejecta should be $\sim 1\%$, we disfavor
the SPACE model as the source of the solar system's ${}^{60}{\rm Fe}$ and other SLRs.
This suggests strongly that the source of the solar nebula's ${}^{60}{\rm Fe}$ was instead
a single core-collapse supernova with progenitor mass $> 20 \, M_{\odot}$ that experienced
fallback onto the core.
Such a supernova would have underproduced ${}^{36}{\rm Cl}$, but could simultaneously explain
the observed abundances of ${}^{41}{\rm Ca}$, ${}^{26}{\rm Al}$, ${}^{60}{\rm Fe}$ and
${}^{53}{\rm Mn}$ (Takigawa et al.\ 2008), without contributions from multiple supernovae
of a previous generation of star formation.
Because a single supernova is the favored source for ${}^{60}{\rm Fe}$, and because this
scenario can explain all of the SLRs (except ${}^{10}{\rm Be}$) that the X-wind model
produces, significant contributions of these SLRs from the X-wind most likely can be
excluded.
\subsubsection{The Special Case of Beryllium 10}
Since evidence for ${}^{10}{\rm Be}$ in the solar nebula was discovered (McKeegan et al.\ 2000),
it has been used to support the X-wind model.
Because this SLR is not produced in supernovae, Gounelle et al.\ (2001) called it a potential
``smoking gun" for the X-wind model.
However, the data point to an origin for ${}^{10}{\rm Be}$ that is distinct from ${}^{26}{\rm Al}$
and the other SLRs.
Marhas et al.\ (2002) analyzed a variety of meteoritic components thought to form early
in the solar nebula, including a so-called FUN (fractionation and unknown nuclear effects) CAI,
as well as hibonites.
They found evidence for ${}^{10}{\rm Be}$ in samples with firm upper limits on initial ${}^{26}{\rm Al}$,
and concluded that ${}^{10}{\rm Be}$ was not correlated with ${}^{26}{\rm Al}$, and the two
SLRs were ``decoupled," having separate origins, a conclusion supported by subsequent studies
(Ushikubo et al.\ 2006; Srinivasan et al.\ 2007).
In addition, the initial abundances of ${}^{10}{\rm Be}$ in a variety of samples are
remarkably uniform.
Desch et al.\ (2004) reviewed the dozen or so measurements up to that date and found them
all to cluster in the range ${}^{10}{\rm Be} / {}^{9}{\rm Be}$ $\approx \, 0.45 - 1.8 \times 10^{-3}$.
More measurements have been made since then, all of which again cluster in the same range
(Marhas et al.\ 2002; MacPherson et al.\ 2003; Ushikubo et al.\ 2006; Chaussidon et al.\ 2006;
Srinivasan et al.\ 2007; Liu et al.\ 2007).
These data strongly suggest that the source of ${}^{10}{\rm Be}$ not only was distinct from
the source of ${}^{26}{\rm Al}$ and other SLRs, but pre-dated the solar system.
Desch et al.\ (2004) interpret these data to mean that most of the ${}^{10}{\rm Be}$ was
inherited from the interstellar medium, as ${}^{10}{\rm Be}$ GCRs
that were slowed and trapped in the Sun's molecular cloud core as it collapsed.
They calculated the rate at which such low-energy ($< 10 \, {\rm MeV} \, {\rm nucleon}^{-1}$)
GCRs were trapped in the Sun's cloud core, accounting for magnetic focusing and mirroring,
and computed an initial ratio in the solar system of $^{10}{\rm Be} / {}^{9}{\rm Be} = 1.1 \times 10^{-3}$.
Other SLRs are not predicted to derive from this mechanism (Desch et al.\ 2004).
To the extent that any fraction of the ${}^{10}{\rm Be}$ in the solar nebula comes from
a source other than the X-wind, it exacerbates the problems of overproduction of
${}^{10}{\rm Be}$ in the X-wind, relative to other SLRs (\S 5.3).
If Desch et al.\ (2004) are correct in their interpretation that nearly all the ${}^{10}{\rm Be}$
came from trapped GCRs, it effectively rules out the X-wind model for SLR production.
Because of its important consequences for the X-wind model, the model of Desch et al.\ (2004)
has been questioned; here we address these criticisms.
Desch et al.\ (2004) predicted that the ${}^{10}{\rm Be} / {}^{9}{\rm Be}$ ratio was initially
homogeneous within the solar nebula, as it represents material that was trapped in the molecular
cloud core.
In truth, fewer GCRs would reach and be stopped in the center of the cloud core, so the
${}^{10}{\rm Be} / {}^{9}{\rm Be}$ ratio would not have been completely homogeneous at
this stage; the degree of heterogeneity is difficult to judge, although
it is probably less than a factor of 2.
At any rate, it is presumed that such heterogeneities are erased as the cloud core continues
to collapse into a protostar and disk, and the prediction of homogeneity of ${}^{10}{\rm Be}$
probably is robust.
Gounelle (2006) claimed that the variations in inferred initial
${}^{10}{\rm Be} / {}^{9}{\rm Be}$ ratios point to a non-homogeneous distribution of ${}^{10}{\rm Be}$.
Likewise, Liu et al.\ (2007) analyzed platy hibonites from CM chondrites and found one with
an initial ratio ${}^{10}{\rm Be} / {}^{9}{\rm Be} = 5.5 \pm 1.4 \times 10^{-4}$ which, they claimed,
was statistically significantly lower than the average values.
Because platy hibonites are believed to be older than other components, this lower value is
not attributed to decay of ${}^{10}{\rm Be}$ over time, implying that ${}^{10}{\rm Be}$ was
spatially heterogeneous.
Notably, though, Ushikubo et al.\ (2006) also measured platy hibonites from the CM2 chondrite
Murchison and the CO3 chondrite Kainsaz, and inferred higher initial values in similar samples,
${}^{10}{\rm Be} / {}^{9}{\rm Be} = 1.8 \pm 0.4 \times 10^{-3}$.
We choose to interpret the range of inferred initial ${}^{10}{\rm Be} / {}^{9}{\rm Be}$ ratios
as clustering about a uniform value, within the experimental uncertainties.
Clearly, further analyses will determine whether observed variations reflect true nebular
heterogeneities or differences in experimental techniques.
Gounelle (2006) also criticized many assumptions and other aspects of the Desch et al.\ (2004)
model.
First, they disputed the long cloud core collapse time $\sim 10 \, {\rm Myr}$ used in the
main simulation of Desch et al.\ (2004), arguing that, because observed collapse times of
molecular cloud cores are $\approx 0.3 - 1.6 \, {\rm Myr}$ (Lee \& Myers 1999),
Desch et al.\ (2004) may have overestimated the ${}^{10}{\rm Be}$ abundance by a factor $\approx 10$.
In fact, it is clear from Figure 3 of Desch et al.\ (2004) that the ${}^{10}{\rm Be} / {}^{9}{\rm Be}$
ratio quickly saturates to values $\sim 1 \times 10^{-3}$, so longer collapse times do {\it not}
lead to higher ${}^{10}{\rm Be} / {}^{9}{\rm Be}$ ratios.
Moreover, Desch et al.\ (2004) explored the sensitivity of the ${}^{10}{\rm Be}$ abundance to
magnetic field strength and therefore collapse time (their Figure 4).
They found ${}^{10}{\rm Be} / {}^{9}{\rm Be} \approx 1 \times 10^{-3}$ even for parameters
that lead to collapse times $< 1 \, {\rm Myr}$.
Gounelle (2006) also criticized the assumption of Desch et al.\ (2004) that the GCR flux was
a factor of 2 higher 4.6 Gyr ago than today, calling it ``ad hoc".
In fact, as explained by Desch et al.\ (2004), the GCR flux scales with the supernova rate,
which scales with the star formation rate, which is well known to be decreasing over Galactic
history.
The GCR flux was certainly higher in the past, by a factor of roughly $1.5 - 2.5$ relative
to today (Desch et al.\ 2004).
Gounelle (2006) also criticized the fact that the simulations of Desch \& Mouschovias (2001)
used by Desch et al.\ (2004) formed a $1 \, M_{\odot}$ star from a $45 \, M_{\odot}$ cloud.
We point out that the observed star formation efficiency is similarly low (Ward-Thompson et al.\
2007), and that the exact cloud structure is somewhat irrelevant: ${}^{10}{\rm Be}$ GCRs
will be trapped in collapsing cores, as demonstrated by Desch et al.\ (2004), as they transition
from low densities transparent to low-energy GCRs to high densities opaque to GCRs, passing
through surface densities $\sim 10^{-2} \, {\rm g} \, {\rm cm}^{-2}$.
This is true regardless of the details of the larger structures, because they are largely
transparent to such GCRs.
Other objections raised by Gounelle (2006), e.g., relating to the importance of magnetic mirroring,
are addressed directly by Desch et al.\ (2004).
The objections of Gounelle (2006) are readily refuted, and we consider the model of
Desch et al.\ (2004) to be valid.
Beryllium 10 is known to be decoupled from the other SLRs and to have a separate source.
The uniformity of the inferred initial ${}^{10}{\rm Be} / {}^{9}{\rm Be}$ ratios around
a value $\approx 1 \times 10^{-3}$ strongly suggests an origin before the formation of
the protoplanetary disk.
The model of Desch et al.\ (2004) predicts ${}^{10}{\rm Be} / {}^{9}{\rm Be} \approx 1 \times 10^{-3}$
due to trapping of low-energy ${}^{10}{\rm Be}$ GCRs as the Sun's molecular cloud core
contracts and becomes opaque to such GCRs.
To the extent that ${}^{10}{\rm Be}$ in the early solar system can be attributed to trapped
GCRs, contributions to ${}^{10}{\rm Be}$ from the X-wind must be significantly reduced or even
excluded; and because the X-wind cannot produce the other SLRs at meteoritic levels without
co-producing ${}^{10}{\rm Be}$, this effectively rules out significant contributions to the
SLRs from the X-wind.
\section{Conclusions}
The X-wind model was originally developed to explain the dynamics of bipolar outflows from
protostars (Shu et al.\ 1994a,b, 1995; Najita \& Shu 1994; Ostriker \& Shu 1995).
It remains a viable model for protostellar jets, although not the only one: ``disk wind" models,
in which the magnetocentrifugal outflows are launched from 0.1 - 1 AU, rather than from $< 0.1$ AU,
also exist (Wardle \& K\"{o}nigl 1993; K\"{o}nigl \& Pudritz 2000; Pudritz et al.\ 2007).
Observational evidence from the rotation of protostellar jets tends to favor disk wind models
(Bacciotti et al.\ 2002; Anderson 2003; Coffey et al.\ 2004, 2007; Woitas et al.\ 2005),
and at this time the evidence for X-wind models in particular
is not conclusive.
In a series of papers (Shu et al.\ 1996, 1997, 2001; Gounelle et al.\ 2001), the X-wind model was
applied to three fundamental problems in meteoritics: the formation of chondrules, the formation of CAIs,
and the origin of the SLRs.
Progress toward an {\it astrophysical} theory of chondrites was sought.
In this paper, we have shown that the X-wind model is applicable neither to the formation of
chondrules, nor to the formation of CAIs, nor to the origin of the SLRs.
We have demonstrated that the model itself has internal inconsistencies.
It also makes predictions about chondrule and CAI formation at odds with experimental constraints.
In regard to the SLRs, it does not satisfactorily explain the coproduction of ${}^{10}{\rm Be}$,
${}^{26}{\rm Al}$, ${}^{41}{\rm Ca}$ and ${}^{53}{\rm Mn}$, and it leaves unexplained the
source of ${}^{60}{\rm Fe}$ and ${}^{36}{\rm Cl}$.
The internal inconsistencies can be summarized as follows.
First, material is brought to the reconnection ring only because of accretion, yet the heating
caused by this accretion was neglected in the X-wind model.
When it is included, the model predicts temperatures too high for most silicate material to exist.
This is consistent with astronomical observations, which also show no evidence for solids at the
X point.
Second, the X-wind model assumes rather arbitrarily that a fraction $F \sim 0.01$ of all solid
material falls out of the funnel flow and into the reconnection ring.
This factor is not determined from first principles, and our own calculations presented here
show that essentially all solids brought in from the disk will remain entrained in the funnel
flow and be accreted onto the star.
Third, the X-wind model asserts that particles falling from the funnel flow will join a
geometrically thin ``reconnection ring".
In fact, particles leaving the funnel flow are likely to enter the reconnection ring with
velocities comparable to the Keplerian orbital velocity there, $> 100 \, {\rm km} \, {\rm s}^{-1}$,
with significant orbital inclinations.
It is not clear how these inclinations would be damped so particles could join the reconnection
ring.
Also, particles already in the reconnection ring would experience shattering collisions with
incoming particles, at a rate sufficient to prevent particle growth in the reconnection ring.
Fourth, the X-wind model neglects thermal sputtering by the plasma in the reconnection ring.
We have shown that thermal sputtering will prevent growth of particles in the reconnection ring.
Fifth, the X-wind model necessarily posits that CAIs and chondrules, formed in the reconnection
ring, lie very close to the disk midplane ($< 3 \times 10^{9} \, {\rm cm}$), yet particles
must be far from the midplane ($> 4 \times 10^{10} \, {\rm cm}$) to be launched in a
magnetocentrifugal outflow.
Vertical diffusion of particles is not modeled.
The MRI is invoked, but the magnetic diffusivity needed to allow gas to diffuse across the X point
is close to the limit at which the MRI is suppressed.
Sixth, the trajectories of particles launched in the magnetocentrifugal outflow are not explicitly
modeled.
It does seem clear, though, that the mechanism is extremely sensitive to the size of the particles,
implying that only a small fraction of the material could be launched.
Ignoring these internal inconsistencies, the X-wind model makes a number of predictions about
chondrules that are inconsistent with constraints on their origins.
The thermal histories of chondrules are experimentally constrained by measurements of elemental
and isotopic fractionation, and by chemical zoning and textures.
The X-wind model does not allow chondrules to form from material containing primary S, as the
starting temperatures are too high.
It does not explain the very high peak temperatures of chondrules, nor the rapid cooling from the
peak.
It also predicts that all CAIs and chondrules melted in the X-wind will cool at $10 \, {\rm K} \, {\rm hr}^{-1}$,
which is not consistent with the cooling rates of barred olivine and some other chondrules,
$\sim 10^{3} \, {\rm K} \, {\rm hr}^{-1}$.
The observed correlation between compound chondrule frequency and textural type is also not
predicted by the X-wind model.
Very importantly, the X-wind model predicts that within a chondrite the chondrules are
formed at $< 0.1 \, {\rm AU}$ and the matrix grains at $\approx 2 - 3 \, {\rm AU}$, and that there
should be no correlation between their compositions.
This directly contradicts the observed chondrule-matrix chemical complementarity.
Finally, the X-wind model predicts contemporaneous formation of chondrules and CAIs, which
is contradicted by Pb-Pb dating and Al-Mg systematics, which show a 2 Myr age difference.
The X-wind model also makes a number of predictions about CAI formation.
The assumption of a refractory Ca,Al-rich core surrounded by a ferromagnesian silicate mantle
(necessary to prevent substantial overproduction of ${}^{41}{\rm Ca}$) is not supported by
observed behaviors of CAI melts.
Also, because CAIs are explicitly assumed to grow due to vapor recondensation, the oxygen fugacity
of the X-wind environment will be that of rock vapor itself; hydrogen and other volatile phases would
be accreted by the funnel flow onto the star.
This oxygen fugacity is orders of magnitude too oxidizing to be consistent with oxygen barometers
of CAI formation, which routinely indicate a gas of solar composition.
The discovery of osbornite in the CAI-like {\it Stardust} sample {\it Inti}, and in CAIs of
Isheyevo, likewise strongly indicate a gas of solar composition for the formation environments
of these particular inclusions, and not an X-wind environment.
The X-wind model also makes a number of predictions about the production of SLRs.
In the context of the X-wind model, even for the most favorable parameters (Gounelle et al.\ 2001),
${}^{10}{\rm Be}$ is overproduced, given that the cross section
${}^{24}{\rm Mg}({}^{3}{\rm He},p){}^{26}{\rm Al}$
is measured to be 3 times smaller than Gounelle et al.\ (2001) assumed (Fitoussi et al.\ 2004).
The overproduction of ${}^{10}{\rm Be}$ is more profound to the extent that ${}^{10}{\rm Be}$
has an external origin, such as trapped GCRs (Desch et al.\ 2004).
In the context of the X-wind model, the only way to avoid severe overproduction of ${}^{41}{\rm Ca}$
is if almost all the Ca in the CAI were sequestered in a core, surrounded by a silicate mantle
$\sim 1 \, {\rm cm}$ thick.
As Simon et al.\ (2002) point out, real CAI melts do not form immiscible liquids that would
segregate in this way.
Despite the likelihood that ${}^{36}{\rm Cl}$ in the solar nebula was created by irradiation,
the X-wind environment is too hot for either the target nuclei or ${}^{36}{\rm Cl}$ to condense.
Finally, the X-wind model cannot explain the existence of ${}^{60}{\rm Fe}$ in the solar nebula,
because this neutron-rich isotope is not sufficiently produced by spallation.
The problems of the X-wind model are even starker in the face of the viable alternatives that
exist in the literature.
Chondrule formation is explained in great detail by melting in nebular shocks.
This model is consistent with the detailed thermal histories of chondrules, their observed
correlation with compound chondrule frequency, and chondrule-matrix complementarity.
Formation of CAIs in the disk, during an earlier stage of disk evolution where the mass
accretion rates were higher, is consistent with an earlier formation of CAIs than chondrules,
with the solar oxygen fugacity of their formation environment, and allows some CAIs to remain
unmelted.
Finally, because ${}^{60}{\rm Fe}$ is not produced significantly in the X-wind environment,
its source must be one or more nearby core-collapse supernovae.
The overproduction of ${}^{53}{\rm Mn}$ relative to ${}^{60}{\rm Fe}$ appears to exclude
multiple supernovae.
Injection of material from a single, nearby core-collapse supernova is broadly consistent
with the meteoritic data and can simultaneously explain the abundances of ${}^{41}{\rm Ca}$,
${}^{26}{\rm Al}$, ${}^{60}{\rm Fe}$ and ${}^{53}{\rm Mn}$ (Takigawa et al.\ 2008).
Neither a single supernova, nor multiple supernovae, nor the X-wind model appears capable
of explaining the high inferred initial abundance of ${}^{36}{\rm Cl}$, which may demand a
separate origin in a late stage of irradiation in the early solar system.
Supernova nucleosynthesis does not produce ${}^{10}{\rm Be}$, but this SLR is known to be
decoupled from ${}^{26}{\rm Al}$ and the other SLRs.
A unique origin as trapped GCRs qualitatively and quantitatively explains
its near-uniform abundance ${}^{10}{\rm Be} / {}^{9}{\rm Be} \sim 10^{-3}$ in a variety of
meteoritic inclusions (Desch et al.\ 2004).
Objections by Gounelle (2006) to the model of Desch et al.\ (2004) are readily refuted.
The origins of the SLRs are still unknown and are the focus of ongoing research, but the
working hypothesis of trapped ${}^{10}{\rm Be}$ GCRs plus injection from a single supernova
with fallback appears more viable than the X-wind model plus multiple supernovae for ${}^{60}{\rm Fe}$.
In short, viable and more plausible alternative models exist for all the meteoritic components
the X-wind model purports to explain.
The X-wind model makes assumptions that are internally inconsistent.
The X-wind model makes predictions about the formation of chondrules and CAIs and the
production of SLRs that are contradicted by experimental constraints.
Better alternative models exist to explain the formation of chondrules and CAIs and
the production of SLRs.
We conclude the X-wind model is irrelevant to the problems of chondrule formation, CAI
formation, or the creation of short-lived radionuclides.
\acknowledgements
S.~J.~D.\ gratefully acknowledges the support for this work made available by NASA's Origins
of Solar Systems Program, grant NNG06GI65G, and by the NASA Astrobiology Institute.
\section{Background}
\textbf{Infomin representation learning}. Let $X \in \mathbbm{R}^D$ be the data, $Y \in \mathbbm{R}^{D'}$ be the target we want to predict from $X$. The task we consider here is to learn some representation $Z = f(X)$ that is useful for predicting $Y$ but is uninformative about some target $T \in \mathbbm{R}^d$. Formally, this can be written as
\begin{equation}
\min_f \mathcal{L} (f(X); Y) + \beta \cdot I(f(X); T)
\label{formula:overall_objective}
\end{equation}
where $f$ is an encoder, $\mathcal{L}$ is some loss function quantifying the utility of $Z$ for predicting $Y$ and $I(f(X); T)$ quantifies the amount of information left in $Z$ about $T$. $\beta$ controls the trade-off between utility and uninformativeness. Many tasks in machine learning can be seen as special cases of this objective. For example, by setting $T$ to be (a set of) sensitive attributes e.g. race, gender or age, we arrive at fair representation learning \cite{zemel2013learning, madras2018learning, edwards2015censoring, song2019learning}. When using a stochastic encoder, by setting $Y$ to be $X$ and $T$ to be some generative factors e.g., a class label, we arrive at disentangled representation learning \cite{cheung2014discovering, higgins2016beta, kim2018disentangling, locatello2019challenging}. Similarly, the information bottleneck method \cite{tishby2015deep, alemi2016deep} corresponds to setting $T = X$, which learns representations expressive for predicting $Y$ while being compressive about $X$.
\textbf{Adversarial training for infomin learning}. A key ingredient in objective \eqref{formula:overall_objective} is to quantify $I(f(X); T)$, the informativeness of $f(X)$ about $T$. One solution is to train a predictor for $T$ from $f(X)$ and use the prediction error as a measure of $I(f(X); T)$ \cite{madras2018learning, elazar2018adversarial, grari2019fairness, ganin2016domain}. Another approach is to first train a classifier to distinguish samples from $p(Z, T)$ from samples from $p(Z)p(T)$ \cite{kim2018disentangling, cheng2020estimating}, or to distinguish samples $Z \sim p(Z|T)$ with different $T$ \cite{edwards2015censoring, xie2017controllable}, and then use the classification error to quantify $I(f(X); T)$. All these methods involve the training of a neural network $t$ to provide a lower-bound estimate of $I(f(X); T)$, yielding a minmax optimisation problem
\begin{equation}
\min_f\max_t \mathcal{L}(f(X); Y) + \beta \cdot \hat{I}_t(f(X); T)
\label{formula:minmax_obj}
\end{equation}
where $\hat{I}_t(f(X); T)$ is an estimator constructed using $t$ that lower-bounds $I(f(X); T)$. The time complexity of optimising \eqref{formula:minmax_obj} is $O(L_1L_2)$ where $L_1$ and $L_2$ are the number of gradient steps for the min and the max step respectively. The strength of $t$ is crucial for the quality of the learned representation \cite{moyer2018invariant, feng2019learning, song2019overlearning, song2019learning, balunovic2021fair}. For a strong adversary, a large $L_2$ is possibly needed, but this means a long training time. Conversely, a weak adversary may not produce a truly infomin representation.
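For concreteness, the following is a minimal PyTorch-style sketch of this minmax loop. All architectures, dimensions and the Donsker-Varadhan-style choice of $\hat{I}_t$ are illustrative placeholders (our assumptions), not the models used in the experiments below; the point is only to make the $O(L_1 L_2)$ cost structure of \eqref{formula:minmax_obj} explicit.
\begin{verbatim}
import torch
import torch.nn as nn

D, Dz, Dt, N = 16, 8, 1, 512                 # toy dimensions (placeholders)
X, Y, T = torch.randn(N, D), torch.randn(N, 1), torch.randn(N, Dt)

f = nn.Linear(D, Dz)                          # encoder
g = nn.Linear(Dz, 1)                          # predictor head for Y
t_net = nn.Sequential(nn.Linear(Dz + Dt, 32), nn.ReLU(), nn.Linear(32, 1))
opt_f = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)
opt_t = torch.optim.Adam(t_net.parameters(), lr=1e-3)
beta, L1, L2 = 1.0, 50, 5

def mi_lower_bound(Z, T):
    # Donsker-Varadhan-style bound: E_joint[t] - log E_marginal[exp t],
    # with the marginal approximated by permuting T within the batch
    joint = t_net(torch.cat([Z, T], 1)).mean()
    marg = t_net(torch.cat([Z, T[torch.randperm(N)]], 1)).exp().mean().log()
    return joint - marg

for _ in range(L1):                           # min step over the encoder f
    for _ in range(L2):                       # max step over the adversary t
        loss_t = -mi_lower_bound(f(X).detach(), T)
        opt_t.zero_grad(); loss_t.backward(); opt_t.step()
    Z = f(X)
    loss_f = (g(Z) - Y).pow(2).mean() + beta * mi_lower_bound(Z, T)
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
\end{verbatim}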
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{}
\item Did you describe the limitations of your work?
\answerYes{}
\item Did you discuss any potential negative societal impacts of your work?
\answerYes{See introduction and conclusion}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerYes{}
\item Did you include complete proofs of all theoretical results?
\answerYes{see Appendix A}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerYes{code provided in the supplemental material; all datasets used are public}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{see page 5 'hyperparameter settings'}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerYes{}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{see page 5 'computational resource'}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{}
\item Did you mention the license of the assets?
\answerNo{all data are publicly available, and the only requirement of the license is not to claim ownership of the data}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerYes{the new codes developed for reproducing the experiments}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerNo{all datasets used are publicly available}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNo{no sensitive information is in the dataset}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{}
\end{enumerate}
\end{enumerate}
\section{Conclusion}
This work proposes a new method for infomin learning without adversarial training. A major challenge
is how to estimate mutual information accurately and efficiently, as MI is generally intractable. We sidestep this challenge by testing and minimising dependence only in a sliced space, which can be done analytically, and we show that this is sufficient for our goal. Experiments on algorithmic fairness, disentangled representation learning and domain adaptation verify our method's efficacy.
Through our controlled experiments, we also verify that adversarial approaches may indeed fail to produce infomin representations reliably -- an observation consistent with recent studies. This suggests that existing adversarial approaches may not converge to good solutions, or may need more time for convergence, with more gradient steps needed to train the adversary fully. The result also hints at the potential of diverse randomisation methods as an alternative to adversarial training in some cases.
While we believe our method can be used in many applications for societal benefit (e.g. for promoting fairness), since it is a general technique, one must always be careful to prevent societal harms.
\vspace{-0.035cm}
\section*{Acknowledgement}
AW acknowledges support from a Turing AI Fellowship under grant EP/V025279/1, The Alan Turing Institute, and the Leverhulme Trust via CFI. YC acknowledges funding from Cambridge Trust.
\section{Experiments}
We evaluate our approach on four tasks: independence testing, algorithmic fairness, disentangled representation learning, domain adaptation. Code is available at \hyperlink{https://github.com/cyz-ai/infomin}{\texttt{github.com/cyz-ai/infomin}}.
\textbf{Evaluation metric}. To assess how much information is left in the learned representation $Z \in \mathbbm{R}^D$ about the target $T \in \mathbbm{R}^K$, we calculate the R\'enyi's maximal correlation $\rho^*(Z, T)$ between $Z$ and $T$:
\begin{equation}
\rho^*(Z, T) = \sup_{h, g}\rho(h(Z), g(T))
\label{formula:neural_renyi}
\end{equation}
which has the properties $\rho^*(Z, T) = 0$ if and only if $Z \perp T$ and $\rho^*(Z, T) = 1$ if $h(Z) = g(T)$ for some deterministic functions $h, g$ \cite{renyi1959measures}. One can also understand this metric as the easiness of predicting (the transformed) $T$ from $Z$, or vice versa.\footnote{It can be shown $\rho^*(Z, T)$ is equivalent to the normalised mean square error between $h(Z)$ and $g(T)$.\label{footnote:mse_vs_renyi}} As there is no analytic solution for the supremum in \eqref{formula:neural_renyi}, we approximate $h, g$ by two neural networks trained with SGD. Early stopping and dropout are applied to avoid overfitting. The reliability of this neural approximation has been verified in the literature \cite{grari2019fairness} and is also confirmed by our experiments; see Appendix B.
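As an illustration, a minimal sketch of this evaluation procedure is given below. The network sizes, optimiser and step count are placeholder choices of ours; as stated above, the actual evaluation additionally applies dropout and early stopping, and is run on held-out samples.
\begin{verbatim}
import torch
import torch.nn as nn

def pearson(a, b, eps=1e-8):
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return (a * b).mean()

def renyi_correlation(Z, T, steps=500, lr=1e-3):
    # approximate sup_{h,g} rho(h(Z), g(T)) with two small MLPs
    h = nn.Sequential(nn.Linear(Z.shape[1], 64), nn.ReLU(), nn.Linear(64, 1))
    g = nn.Sequential(nn.Linear(T.shape[1], 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(list(h.parameters()) + list(g.parameters()), lr=lr)
    for _ in range(steps):
        loss = -pearson(h(Z).squeeze(-1), g(T).squeeze(-1))
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return abs(pearson(h(Z).squeeze(-1), g(T).squeeze(-1)).item())

# e.g. rho_star = renyi_correlation(Z_test, T_test)
\end{verbatim}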
This metric is closely related to existing metrics/losses used in fairness and disentangled representation learning, such as demographic parity (DP) \cite{mary2019fairness} and total correlation (TC) \cite{kim2018disentangling}. For example, if $\rho^*(Z, T) \to 0$ then it is guaranteed that $\hat{Y} \perp T$ for any predictor $\hat{Y} = F(Z)$, so $\rho^*(Z, T)$ is an upper bound for DP. Similarly, $\rho^*(Z, T)$ coincides with TC in that both assess whether $Z \perp T$. In addition to this metric, we also use task-specific metrics; see each experiment below.
\textbf{Baselines}. We compare the proposed method (denoted as ``Slice") with the following approaches:
\begin{itemize}[leftmargin=*]
\item \emph{Pearson}, which quantifies $I(Z; T)$ by the Pearson correlation coefficient $\frac{1}{DK}\sum^D_d\sum^K_k\rho(Z_d; T_k)$. It was used in \cite{cheung2014discovering, kumar2017variational} as an easy-to-compute proxy to MI to learn disentangled representations;
\item \emph{dCorr}, i.e. distance correlation, a non-parametric method for quantifying the independence between two vectors \cite{szekely2014partial}. It was applied in \cite{chen2020neural} as a surrogate to MI for representation learning;
\item \emph{Neural R\'enyi}, an adversarial method for fair machine learning \cite{baharlouei2019r} which quantifies $I(Z; T)$ by the R\'enyi correlation $\rho^*(Z, T) = \sup_{h, g}\rho(h(Z), g(T)) $ with $h, g$ approximated by neural networks. It can be seen as training a predictor to predict (the transformed) $T$ from $Z$ and is closely related to many existing methods in algorithmic fairness and domain adaptation \cite{madras2018learning, edwards2015censoring, elazar2018adversarial, grari2019fairness, ganin2016domain};
\item \emph{Neural TC}, an adversarial method for learning disentangled representations \cite{kim2018disentangling, creager2019flexibly} which quantifies $I(Z; T)$ by the total correlation $TC(Z, T) = KL[p(Z, T) \| p(Z)p(T)]$. To compute TC, a classifier is trained to distinguish samples from $p(Z, T)$ from samples from $p(Z)p(T)$. This method can also be seen as a variant of the popular MINE method \cite{belghazi2018mutual} for mutual information estimation.
\item \emph{v-CLUB}, i.e. variational Contrastive Log-ratio Upper Bound, which introduces a (learnable) variational distribution $q(T|Z)$ to form an upper bound of MI \cite{cheng2020club}: $I(Z; T) \leq \mathbb{E}_{p(Z,T)}[\log q(T|Z)] - \mathbb{E}_{p(Z)p(T)}[\log q(T|Z)]$. Like the adversarial methods, $q(T|Z)$ is learned by a few gradient steps (a minimal sketch is given after this list).
\end{itemize}
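As referenced in the last item above, the following is a minimal sketch of the v-CLUB bound with a Gaussian variational distribution $q(T|Z)$. The network widths are our placeholder choices, the marginal term is approximated by permuting the batch, and additive constants of the log density are dropped.
\begin{verbatim}
import torch
import torch.nn as nn

class CLUB(nn.Module):
    # q(T|Z) = N(mu(Z), diag(exp(logvar(Z))))
    def __init__(self, dz, dt, hidden=64):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(dz, hidden), nn.ReLU(),
                                nn.Linear(hidden, dt))
        self.logvar = nn.Sequential(nn.Linear(dz, hidden), nn.ReLU(),
                                    nn.Linear(hidden, dt))

    def log_q(self, Z, T):
        # log N(T; mu(Z), exp(logvar(Z))), up to an additive constant
        mu, logvar = self.mu(Z), self.logvar(Z)
        return (-0.5 * ((T - mu) ** 2 / logvar.exp() + logvar)).sum(-1)

    def mi_upper_bound(self, Z, T):
        # E_p(Z,T)[log q(T|Z)] - E_p(Z)p(T)[log q(T|Z)]
        perm = torch.randperm(len(T))
        return self.log_q(Z, T).mean() - self.log_q(Z, T[perm]).mean()

    def learning_loss(self, Z, T):
        # fit q(T|Z) by maximum likelihood (a few gradient steps per max step)
        return -self.log_q(Z, T).mean()
\end{verbatim}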
For a fair comparison, for the adversarial training-based approaches (i.e. Neural R\'enyi, Neural TC) we ensure that the training time of their neural networks is at least as long as the execution time of our method. We do this by controlling the number of adversarial steps $L_2$ in Algorithm 1. The same setup is used for v-CLUB. See each experiment for the detailed time.
\textbf{Hyperparameter settings}. Throughout our experiments, we use $200$ slices. We find that this setting is robust across different tasks; an ablation study on the number of slices is given in Appendix B. The order $K$ of the polynomial used in \eqref{formula:slice_renyi_corr_joint} is set to $K = 3$ and is fixed across tasks.
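Because the estimator \eqref{formula:slice_renyi_corr_joint} itself is defined in the methodology, we give here only a schematic stand-in (our simplification, not the exact estimator) showing how these two hyperparameters enter: random 1-D projections of $Z$ and $T$, an order-$K$ polynomial lift, and an analytic correlation in the lifted space, with no inner optimisation.
\begin{verbatim}
import torch

def poly_features(a, K=3):
    # order-K polynomial lift of a 1-D projection: [a, a^2, ..., a^K],
    # standardised per feature
    P = torch.stack([a ** k for k in range(1, K + 1)], dim=-1)
    return (P - P.mean(0)) / (P.std(0) + 1e-8)

def sliced_dependence(Z, T, n_slices=200, K=3):
    # average squared cross-correlation between polynomial features of
    # random 1-D slices of Z and T; analytic, hence usable as a penalty
    total = 0.0
    for _ in range(n_slices):
        u = torch.randn(Z.shape[1]); u = u / u.norm()
        v = torch.randn(T.shape[1]); v = v / v.norm()
        C = poly_features(Z @ u, K).t() @ poly_features(T @ v, K) / len(Z)
        total = total + (C ** 2).sum()
    return total / n_slices
\end{verbatim}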
\textbf{Computational resource}. All experiments are done with a single NVIDIA GeForce Tesla T4 GPU.
\begin{figure}[t]
\centering
\begin{subfigure}{.26\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/independence/estimate_power_level_a.pdf}
\caption{\centering $a$}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}{.26\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/independence/estimate_power_level_a2.pdf}
\caption{\centering $a^2$}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}{.26\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/independence/estimate_power_level_sin_a.pdf}
\caption{\centering $\sin(a)$}
\end{subfigure}
\hspace{-0.03\textwidth}
\begin{subfigure}{.26\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figures/independence/estimate_power_level_tanh_a.pdf}
\caption{\centering $\text{tanh}(a)$}
\end{subfigure}
\caption{Comparison of the test power of different independence test methods. The x-axis corresponds to different values for the dependence level $\alpha$ and the y-axis corresponds to the test power. }
\label{fig:independence_test}
\end{figure}
\subsection{Independence testing}
We first verify the efficacy of our method as a light-weight but powerful independence test. For this purpose, we investigate the test power of the proposed method on various synthetic datasets with different association patterns between two random variables $X \in \mathbbm{R}^{10}, Y \in \mathbbm{R}^{10}$, and compare it to that of the baselines. The test power is defined as the ability to distinguish samples of the joint distribution $p(X, Y)$ from samples of the product of marginals $p(X)p(Y)$ and is expressed as a probability $p \in [0, 1]$. The data is generated as $Y = (1-\alpha) \langle t(\mathbf{A} X) \rangle + \alpha\epsilon$, where $X_d \sim \mathcal{U}[-3, 3]$, $\mathbf{A}_{dd}=1$, $\mathbf{A}_{dk}=0.2$, $\epsilon \sim \mathcal{N}(\epsilon; \mathbf{0}, \mathbf{I})$, $\alpha \in (0,1)$ and $\langle \cdot \rangle$ is a scaling operation that scales the operand to the range $[0, 1]$ according to the minimum and maximum values in the population. The function $t(\cdot)$ determines the association pattern between $X$ and $Y$ and is chosen from one of the following: $t(a) = a, a^2, \sin(a), \text{tanh}(a)$. The factor $\alpha$ controls the strength of dependence between $X$ and $Y$.
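For reference, this data-generating process can be sketched as follows; the per-dimension min-max normalisation is our reading of the scaling operation $\langle \cdot \rangle$.
\begin{verbatim}
import torch

def make_synthetic(n, t=torch.sin, alpha=0.5, D=10):
    # Y = (1 - alpha) * <t(A X)> + alpha * eps, with A_dd = 1, A_dk = 0.2
    X = torch.rand(n, D) * 6.0 - 3.0            # X_d ~ U[-3, 3]
    A = torch.full((D, D), 0.2)
    A.fill_diagonal_(1.0)
    S = t(X @ A)
    S = (S - S.min(0).values) / (S.max(0).values - S.min(0).values + 1e-8)
    return X, (1 - alpha) * S + alpha * torch.randn(n, D)

# e.g. X, Y = make_synthetic(10000, t=lambda a: a ** 2, alpha=0.3)
\end{verbatim}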
All tests are done on 100 samples and are repeated 1,000 times. We choose this sample number as it is a typical batch size in mini-batch learning. For methods involving the learning of parameters (i.e. Slice, Neural R\'enyi, Neural TC), we learn their parameters from 10,000 samples. The times for learning the parameters of Slice, Neural R\'enyi and Neural TC are 0.14s, 14.37s and 30.18s respectively. For completeness, we also compare with the `optimal test', which calculates the R\'enyi correlation $\rho^*(X, Y) = \rho(h(X), g(Y))$ with the functions $h, g$ exactly the same as in the data generating process.
Figure \ref{fig:independence_test} shows the power of different methods under various association patterns $t$ and dependence levels $\alpha$. Overall, we see that the proposed method can effectively detect dependence in all cases and has a test power comparable to neural network-based methods. The non-parametric tests, by contrast, fail to detect dependence in the quadratic and periodic cases. Neural TC is the most powerful test among all the methods considered, yet it requires the longest time to train. We also see that the proposed method is relatively less powerful when $\alpha \geq 0.8$, but in such cases the statistical dependence between $X$ and $Y$ is indeed very weak (also see Appendix B). These results suggest that our slice method can provide effective training signals for infomin learning tasks.
\begin{table}[t]
\caption{Learning fair representations on the US Census Demographic dataset. Here the utility of the representation is measured by $\rho^*(Z, Y)$, while $\rho^*(Z, T)$ is used to quantify the fairness of the representation. Training time is also provided as the seconds required per max step. }
\label{table:USCensus}
\vspace{0.2cm}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{cccccccc}
\toprule
\cmidrule(r){1-8}
{}& \textbf{N/A} & \textbf{Pearson} & \textbf{dCorr} & \textbf{Slice} & \textbf{Neural R\'enyi} & \textbf{Neural TC} & \textbf{vCLUB}
\\
\midrule
$\rho^*(Z, Y) \uparrow$ & $0.95 \pm 0.00$ & $0.95 \pm 0.00$ & $0.95 \pm 0.00$ & $0.95 \pm 0.01$ & $0.95 \pm 0.01$ & $0.95 \pm 0.02$ & $0.94 \pm 0.02$ \\
\midrule
$\rho^*(Z, T) \downarrow$ & $0.92 \pm 0.02$ & $0.84 \pm 0.08$ & $0.47 \pm 0.08$ & $0.07 \pm 0.02$ & $0.23 \pm 0.10$ & $0.27 \pm 0.03$ & $0.16 \pm 0.10$ \\
\midrule
time (sec./max step) & $0.000$ & $0.012$ & $0.087$ & $0.102$ & $0.092$ & $0.097$ & $0.134$ \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[t]
\caption{Learning fair representations on the UCI Adult dataset. Here the utility of the representation is measured by $\rho^*(Z, Y)$, while $\rho^*(Z, T)$ is used to quantify the fairness of the representation. }
\vspace{0.2cm}
\label{table:UCIAdult}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{cccccccc}
\toprule
\cmidrule(r){1-8}
{}& \textbf{N/A} & \textbf{Pearson} & \textbf{dCorr} & \textbf{Slice} & \textbf{Neural R\'enyi} & \textbf{Neural TC} & \textbf{vCLUB}
\\
\midrule
$\rho^*(Z, Y) \uparrow$ & $0.99 \pm 0.00$ & $0.99 \pm 0.00$ & $0.97 \pm 0.01$ & $0.98 \pm 0.01$ & $0.97 \pm 0.01$ & $0.98 \pm 0.02$ & $0.97 \pm 0.02$ \\
\midrule
$\rho^*(Z, T) \downarrow$ & $0.94 \pm 0.02$ & $0.91 \pm 0.06$ & $0.71 \pm 0.06$ & $0.08 \pm 0.02$ & $0.17 \pm 0.08$ & $0.36 \pm 0.13$ & $0.26 \pm 0.12$ \\
\midrule
time (sec./max step) & $0.000$ & $0.015$ & $0.071$ & $0.112$ & $0.107$ & $0.131$ & $0.132$ \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Algorithmic fairness}
For this task, we aim to learn fair representations $Z \in \mathbbm{R}^{80}$ that are minimally informative about some sensitive attribute $T$. We quantify how sensitive $Z$ is w.r.t.\ $T$ by the R\'enyi correlation $\rho^*(Z, T)$ calculated using two neural nets; smaller $\rho^*(Z, T)$ is better. The utility of the learned representation, i.e. $\mathcal{L}(Z; Y)$, is quantified by $\rho^*(Z, Y)$. This formulation of utility, as mentioned above, is equivalent to measuring how well we can predict $Y$ from $Z$. In summary, the learning objective is:
\[
\max \rho^*(Z; Y) - \beta \hat{I}(Z; T),
\]
where $\hat{I}(Z; T)$ is estimated by the methods mentioned above. For each dataset considered, we use 20,000 samples for training and 5,000 samples for testing. We carefully tune the hyperparameter $\beta$ for each method so that the utility $\rho^*(Z; Y)$ of that method is close to that of the plain model (i.e. the model trained with $\beta=0$, denoted as N/A below; the experiments below use the same setup). For all methods, we use $5,000$ samples in the max step (so $N'=5,000$ in Algorithms 1 and 2).
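Schematically, one training iteration under this objective can be sketched as follows. The encoder, the utility heads $h, g$ (cf. footnote 1) and a first-order variant of the sliced penalty are all illustrative placeholders of ours, not the exact experimental setup.
\begin{verbatim}
import torch
import torch.nn as nn

def corr(a, b, eps=1e-8):
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return (a * b).mean()

def slice_penalty(Z, T, n_slices=200):
    # first-order sliced dependence: mean squared correlation of 1-D slices
    total = 0.0
    for _ in range(n_slices):
        u = torch.randn(Z.shape[1]); u = u / u.norm()
        v = torch.randn(T.shape[1]); v = v / v.norm()
        total = total + corr(Z @ u, T @ v) ** 2
    return total / n_slices

D, Dz, beta = 37, 80, 1.0                        # placeholder sizes
enc, h, g = nn.Linear(D, Dz), nn.Linear(Dz, 1), nn.Linear(1, 1)
opt = torch.optim.Adam([*enc.parameters(), *h.parameters(),
                        *g.parameters()], lr=1e-3)
X, Y, T = torch.randn(512, D), torch.randn(512, 1), torch.randn(512, 1)

for _ in range(100):
    Z = enc(X)
    loss = -corr(h(Z).squeeze(-1), g(Y).squeeze(-1)) + beta * slice_penalty(Z, T)
    opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}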
\textbf{US Census Demographic data}. This dataset is an extraction of the 2015 American Community Survey, with 37 features about 74,000 census tracts. The target $Y$ to predict is the percentage of children below the poverty line in a tract, and the sensitive attribute $T$ is the ratio of women in that tract. The result is shown in Table \ref{table:USCensus}. From the table we see that the proposed slice method produces a highly fair representation with good utility. The low $\rho^*(Z, T)$ value indicates that it is difficult to predict $T$ from $Z$ in our method. While adversarial methods can also achieve fairness to some extent, they are still not comparable to our method, possibly because the allocated training time is insufficient (in Appendix B we study the effect of the training time). Non-parametric methods cannot produce truly fair representations, although they are fast to execute. v-CLUB, which estimates an upper bound of MI, achieves better fairness than adversarial methods on average, but has a higher variance \cite{song2019understanding}.
\textbf{UCI Adult data}. This dataset contains census data for 48,842 instances, with 14 attributes describing their education background, age, race, marital status, etc. Here, the target $Y$ to predict is whether the income of an instance is higher than 50,000 USD, and the sensitive attribute $T$ is the race group. The result is summarised in Table \ref{table:UCIAdult}. Again, we see that the proposed slice method outperforms the other methods in terms of both fairness and utility. For this dataset, Neural R\'enyi also achieves good fairness, although the gap to our method is still large. Neural TC, by contrast, cannot achieve a comparable level of fairness under the given time budget, a phenomenon also observed on the US Census dataset. This is possibly because the networks in Neural TC require a longer time to train. The v-CLUB method does not work very satisfactorily on this task, possibly because the time allocated to learning the variational distribution $q(T|Z)$ is insufficient, leading to a loose upper bound of $I(Z; T)$.
\begin{figure}[t]
\begin{subfigure}{1.0\textwidth}
\includegraphics[width=1.0\linewidth]{figures/disentanglement/dSprite_Minmax.pdf}
\caption{Adversarial training}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\includegraphics[width=1.0\linewidth]{figures/disentanglement/dSprite_Slice.pdf}
\caption{Slice}
\end{subfigure}
\caption{Label swapping experiments on the Dsprite dataset. Left: the original image $X$. Middle: reconstructing $X \approx G(Z, T)$ using $Z = E(X)$ and the true label $T$. Right: reconstructing $X' = G(Z, T')$ using $Z = E(X)$ and a swapped label $T' \neq T$. Changing $T$ should affect only the style. }
\label{fig:disentangle_dsprite}
\end{figure}
\begin{figure}[t]
\begin{subfigure}{1.0\textwidth}
\includegraphics[width=1.0\linewidth]{figures/disentanglement/CMUpie_Minmax.pdf}
\caption{Adversarial training}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\includegraphics[width=1.0\linewidth]{figures/disentanglement/CMUpie_Slice.pdf}
\caption{Slice}
\end{subfigure}
\caption{Label swapping experiments on the CMU-PIE dataset. Left: the original image $X$. Middle: reconstructing $X \approx G(Z, T)$ using $Z = E(X)$ and the true label $T$. Right: reconstructing $X' = G(Z, T')$ using $Z = E(X)$ and a swapped label $T' \neq T$. Changing $T$ should affect only the expression. }
\label{fig:disentangle_CMUpie}
\end{figure}
\subsection{Disentangled representation learning}
We next apply our method to the task of disentangled representation learning, where we wish to discover some latent generative factors irrelevant to the class label $T$. Here, we train a conditional autoencoder $X \approx G(Z, T)$ to learn a representation $Z = E(X)$ which encodes the label-irrelevant information of $X$. The target to recover is $Y = X$. The utility of $Z$ is therefore quantified by the reconstruction error $\mathcal{L}(Z; Y) = \mathbb{E}[\| G(Z, T) - X \|^2_2]$, resulting in the following learning objective:
\[
\min \mathbb{E}[\|G(Z, T) - X \|^2_2] + \beta \hat{I}(Z; T).
\]
The conditional autoencoder uses an architecture similar to that of a convolutional GAN \cite{radford2015unsupervised}, with the difference that we insert an adaptation layer $Z' = \text{MLP}(Z, T)$ before feeding the features to the decoder. See Appendix B for the details of its architecture. All images are resized to 32 $\times$ 32. For all methods, we use $10,000$ samples in the max step (so $N'=10,000$ in Algorithms 1 and 2).
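A minimal sketch of this adaptation layer is shown below. The hidden size is a placeholder, and one-hot encoding of the label before fusion is our assumption about the MLP's input; the full architecture is in Appendix B.
\begin{verbatim}
import torch
import torch.nn as nn

class AdaptLayer(nn.Module):
    # Z' = MLP(Z, T): fuse the code and the label before the decoder
    def __init__(self, dz, n_classes, hidden=256):
        super().__init__()
        self.n_classes = n_classes
        self.net = nn.Sequential(nn.Linear(dz + n_classes, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dz))

    def forward(self, Z, T):                    # T: integer class labels
        T1 = nn.functional.one_hot(T, self.n_classes).float()
        return self.net(torch.cat([Z, T1], dim=1))

# e.g. for Dsprite (Z in R^20, 3 shapes):
#   adapt = AdaptLayer(dz=20, n_classes=3); X_hat = decoder(adapt(Z, T))
\end{verbatim}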
\begin{table}[t]
\caption{Learning label-irrelevant representations on the Dsprite dataset. Here the utility of the representation is measured by MSE, while $\rho^*(Z, T)$ is used to quantify the level of disentanglement of the representation. Training time is also provided as the seconds needed per max step. $\text{Acc}(\hat{T})$ is the accuracy when predicting $T$ from $Z$. As there are 3 classes, the ideal value for $\text{Acc}(\hat{T})$ is 0.33. }
\vspace{0.2cm}
\label{table:dsprite}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{cccccccc}
\toprule
\cmidrule(r){1-8}
{}& \textbf{N/A} & \textbf{Pearson} & \textbf{dCorr} & \textbf{Slice} & \textbf{Neural R\'enyi} & \textbf{Neural TC} & \textbf{vCLUB}
\\
\midrule
$\text{MSE} \downarrow$ & $0.37 \pm 0.01$ & $0.44 \pm 0.02$ & $0.55 \pm 0.03$ & $0.50 \pm 0.01$ & $0.61 \pm 0.04$ & $0.49 \pm 0.03$ & $0.65 \pm 0.04$ \\
\midrule
$\rho^*(Z, T)\downarrow$ & $0.91 \pm 0.03$ & $0.81 \pm 0.07$ & $0.62 \pm 0.07$ & $0.08 \pm 0.02$ & $0.48 \pm 0.05$ & $0.34 \pm 0.06$ & $0.22 \pm 0.08$ \\
\midrule
Acc($\hat{T}$) & $0.98 \pm 0.01$ & $0.89 \pm 0.03$ & $0.76 \pm 0.05$ & $0.32 \pm 0.02$ & $0.55 \pm 0.04$ & $0.54 \pm 0.04$ & $0.48 \pm 0.03$ \\
\midrule
time (sec./max step) & $0.000$ & $0.201$ & $0.412$ & $0.602$ & $0.791$ & $0.812$ & $0.689$ \\
\bottomrule
\end{tabular}
}
\end{table}
\textbf{Dsprite}. A 2D shape dataset \cite{dsprites17} where each image is generated by four latent factors: shape, rotation, scale and location. Here the class label $T$ is the shape, which takes values in (square, ellipse, heart). For this dataset, we train the autoencoder for 100 iterations with a batch size of 512. The dimensionality of the representation for this task is 20, i.e. $Z \in \mathbbm{R}^{20}$. As in the previous experiments, we provide quantitative comparisons of the utility and disentanglement of different methods in Table \ref{table:dsprite}. In addition, we provide a qualitative comparison in Figure \ref{fig:disentangle_dsprite}, which visualises the original image, the reconstructed image and the reconstructed image with a swapped label. From Table \ref{table:dsprite}, we see that the proposed method achieves very low $\rho^*(Z, T)$ while maintaining a good MSE, suggesting that we may have discovered the true label-irrelevant generative factors for this dataset. This is confirmed visually by Figure \ref{fig:disentangle_dsprite}(b), where changing $T$ in the reconstruction changes only the style. By contrast, the separation between $T$ and $Z$ is less evident in the adversarial approach, as can be seen from Table \ref{table:dsprite} as well as from Figure \ref{fig:disentangle_dsprite}(a) (see e.g. the reconstructed ellipses in the third column of the figure, which look more like an interpolation between ellipses and squares).
\begin{table}[t]
\caption{Learning label-irrelevant representations on the CMU-PIE dataset. Here the utility of the representation is measured by MSE, while $\rho^*(Z, T)$ is used to quantify the level of disentanglement of the representation. Training time is also provided as the seconds needed per max step. $\text{Acc}(\hat{T})$ is the accuracy when predicting $T$ from $Z$. As there are 2 classes, the ideal value for $\text{Acc}(\hat{T})$ is 0.50$^*$.}
\vspace{0.2cm}
\label{table:cmu_pie}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{cccccccc}
\toprule
\cmidrule(r){1-8}
{}& \textbf{N/A} & \textbf{Pearson} & \textbf{dCorr} & \textbf{Slice} & \textbf{Neural R\'enyi} & \textbf{Neural TC} & \textbf{vCLUB}
\\
\midrule
$\text{MSE} \downarrow$ & $1.81 \pm 0.04$ & $1.85 \pm 0.05$ & $2.08 \pm 0.08$ & $2.15 \pm 0.07$ & $2.46 \pm 0.06$ & $1.99 \pm 0.12$ & $2.02 \pm 0.10$ \\
\midrule
$\rho^*(Z, T) \downarrow$ & $0.76 \pm 0.04$ & $0.55 \pm 0.03$ & $0.27 \pm 0.07$ & $0.07 \pm 0.01$ & $0.36 \pm 0.04$ & $0.39 \pm 0.06$ & $0.16 \pm 0.06$ \\
\midrule
Acc($\hat{T}$) & $0.91 \pm 0.00$ & $0.76 \pm 0.03$ & $0.71 \pm 0.06$ & $0.51 \pm 0.03$ & $0.73 \pm 0.03$ & $0.76 \pm 0.04$ & $0.68 \pm 0.05$ \\
\midrule
time (sec./max step) & $0.000$ & $0.184$ & $0.332$ & $0.581$ & $0.750$ & $0.841$ & $0.621$ \\
\bottomrule
\end{tabular}
}
\footnotesize{*For the plain model, $\text{Acc}(\hat{T})$ is not necessarily around $1.0$, as $Z$ does not encode all content of the image}.
\end{table}
\textbf{CMU-PIE}. A colored face image dataset \cite{gross2010multi} where each face image has a different pose, illumination and expression. We use its cropped version \cite{tian2018cr}. Here the class label $T$ is the expression, which takes values in (neutral, smile). We train the autoencoder for 200 iterations with a batch size of 128. The dimensionality of the representation for this task is 128, i.e. $Z \in \mathbbm{R}^{128}$. Figure \ref{fig:disentangle_CMUpie} and Table \ref{table:cmu_pie} show the qualitative and quantitative results respectively. From Figure \ref{fig:disentangle_CMUpie}, we see that our method can well disentangle expression and non-expression representations: one can easily modify the expression of a reconstructed image by changing only $T$. Other visual factors of the image, including pose, illumination and identity, remain the same after changing $T$. The adversarial approach can achieve disentanglement between $Z$ and $T$ to some extent; however, the disentanglement is imperfect: not all instances can change expression by modifying only $T$. This is also confirmed quantitatively by Table \ref{table:cmu_pie}, where one can see the relatively high $\rho^*(Z, T)$ values of the adversarial methods. For this task, v-CLUB also achieves a low $\rho^*(Z, T)$ value, though it is still outperformed by our method.
\subsection[Domain adaptation]{Domain adaptation}
We finally consider the task of domain adaptation, where we want to learn some representation $Z$ that can generalise across different datasets. For this task, a common assumption is that we have access to two datasets $\mathcal{D}_s = \{X^{(i)}, Y^{(i)}\}^n_{i=1}$ and $\mathcal{D}_t = \{X^{(j)}\}^m_{j=1}$ which share the same classes but are collected differently. Only the data in $\mathcal{D}_s$ has known labels. Following \cite{cheng2020club}, we learn $Z$ as follows:
\[
Z_c = f_c(X), \qquad Z_d = f_d(X)
\]
\[
\mathcal{L}_c = -\mathbb{E}_{X, Y \in \mathcal{D}_s}[Y^{\top}\log C(Z_c)], \qquad \mathcal{L}_d = -\mathbb{E}_{X \in \mathcal{D}_s}[\log D(Z_d)] - \mathbb{E}_{X \in \mathcal{D}_t}[\log (1- D(Z_d))],
\]
\[
\min \mathcal{L}_c + \mathcal{L}_d + \beta \hat{I}(Z_c, Z_d),
\]
where $Z_c$, $Z_d$ are disjoint parts of $Z$ that encode the content information and the domain information of $X$ separately. $C$ is the content classifier that maps $Z_c$ to the $(K-1)$-simplex ($K$ is the number of classes) and $D$ is the domain classifier that distinguishes the domain from which $X$ comes. Since the classifier $C$ only sees labels in $\mathcal{D}_s$, we call $\mathcal{D}_s$ the source domain and $\mathcal{D}_t$ the target domain. For the two encoders $f_c$ and $f_d$, we use Resnets \cite{he2016deep} with 7 blocks, trained for 100 iterations with a batch size of 128. Here $Z_c, Z_d \in \mathbbm{R}^{256}$. We use $N' = 5,000$ samples in the max step for all methods.
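The two classification losses can be sketched as below, with the sign convention above (i.e. as negative log-likelihoods); here $C$ outputs logits over $K$ classes, $D$ outputs a domain logit, and all modules are placeholders of ours.
\begin{verbatim}
import torch
import torch.nn.functional as F

def da_losses(f_c, f_d, C, D, Xs, Ys, Xt):
    # content loss on labelled source data, domain loss on both domains
    Zc_s, Zd_s, Zd_t = f_c(Xs), f_d(Xs), f_d(Xt)
    L_c = F.cross_entropy(C(Zc_s), Ys)
    d_s, d_t = D(Zd_s).squeeze(-1), D(Zd_t).squeeze(-1)
    L_d = F.binary_cross_entropy_with_logits(d_s, torch.ones_like(d_s)) \
        + F.binary_cross_entropy_with_logits(d_t, torch.zeros_like(d_t))
    return L_c, L_d     # total objective: L_c + L_d + beta * I_hat(Zc, Zd)
\end{verbatim}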
\textbf{MNIST $\to$ MNIST-M}. Two digit datasets with the same classes but different background colors. Both datasets have 50,000 training samples. Table \ref{table:DA} shows the result,
indicating that our method can more effectively remove the information about the domain. This is further confirmed by the T-SNE \cite{van2008visualizing} plot in Figure \ref{fig:DA}, where one can hardly distinguish the samples of $Z_c$ from the two domains. This naturally leads to a higher target domain accuracy $\text{Acc}(\hat{Y_t})$ than other methods.
\textbf{CIFAR10 $\to$ STL10}. Two datasets of natural images sharing 9 classes. There are 50,000 and 5,000 training samples in the two datasets respectively. Following existing works \cite{french2018self, shu2018dirt, cheng2020club}, we remove the non-overlapping classes from both datasets. Table \ref{table:DA} and Figure \ref{fig:DA} show the result. Again, we see that our method can more effectively remove domain information from the learned representation.
\begin{figure}[t]
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[width=0.98\linewidth]{figures/da/tsne_m_mm0.pdf}
\caption{M $\to$ MM, adversarial}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[width=0.98\linewidth]{figures/da/tsne_m_mm.pdf}
\caption{M $\to$ MM, slice}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[width=0.98\linewidth]{figures/da/tsne_c_s0.pdf}
\caption{C $\to$ S, adversarial}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\centering
\includegraphics[width=0.98\linewidth]{figures/da/tsne_c_s.pdf}
\caption{C $\to$ S, slice}
\end{subfigure}
\caption{T-SNE plots of the learned content representations $Z_c$ in the domain adaptation tasks. (a) and (c) show the cases where the adversary is not trained thoroughly (i.e. $L_2$ in Algorithm 1 is set too small). }
\label{fig:DA}
\end{figure}
\begin{table}[t!]
\caption{Learning domain-invariant representations. Here $\text{Acc}(\hat{Y}_s)$ and $\text{Acc}(\hat{Y}_t)$ are the classification accuracy in the source and the target domains respectively. Time used per max step is given. }
\label{table:DA}
\begin{minipage}{.5\linewidth}
\vspace{0.2cm}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{ccccc}
\multicolumn{5}{c}{MNIST $\to$ MNIST-M} \\
\toprule
\cmidrule(r){1-5}
{}& \textbf{N/A} & \textbf{Slice} & \textbf{Neural TC} & \textbf{vCLUB}
\\
\midrule
$\text{Acc}(\hat{Y_s}) \uparrow$ & $99.3 \pm 0.04$ & $99.2 \pm 0.02$ & $99.2 \pm 0.04$ & $99.0 \pm 0.03$ \\
\midrule
$\text{Acc}(\hat{Y_t}) \uparrow$ & $46.3 \pm 0.03$ & $98.5 \pm 0.45$ & $80.1 \pm 0.17$ & $93.8 \pm 0.10$ \\
\midrule
$\rho^*(Z_c, Z_d) \downarrow$ & $0.86 \pm 0.05$ & $0.06 \pm 0.01$ & $0.64 \pm 0.04$ & $0.49 \pm 0.12$ \\
\midrule
time (sec./step) & $0.000$ & $2.578$ & $3.282$ & $3.123$ \\
\bottomrule
\end{tabular}
}
\end{minipage}
\begin{minipage}{.5\linewidth}
\vspace{0.2cm}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{ccccc}
\multicolumn{5}{c}{CIFAR10 $\to$ STL10} \\
\toprule
\cmidrule(r){1-5}
{}& \textbf{N/A} & \textbf{Slice} & \textbf{Neural TC} & \textbf{vCLUB}
\\
\midrule
$\text{Acc}(\hat{Y_s}) \uparrow$ & $93.0 \pm 0.03$ & $92.5 \pm 0.03$ & $92.4 \pm 0.03$ & $92.1 \pm 0.04$ \\
\midrule
$\text{Acc}(\hat{Y_t}) \uparrow$ & $75.9 \pm 0.09$ & $82.3 \pm 0.03$ & $80.8 \pm 0.08$ & $78.5 \pm 0.11$ \\
\midrule
$\rho^*(Z_c, Z_d) \downarrow$ & $0.43 \pm 0.05$ & $0.08 \pm 0.01$ & $0.39 \pm 0.07$ & $0.42 \pm 0.09$ \\
\midrule
time (sec./step) & $0.000$ & $3.146$ & $3.222$ & $3.080$ \\
\bottomrule
\end{tabular}
}
\end{minipage}
\end{table}
\section{Introduction}
Learning representations that are uninformative about some target but still useful for downstream applications is an important task in machine learning with many applications in areas including algorithmic fairness \cite{madras2018learning, edwards2015censoring, elazar2018adversarial, grari2019fairness}, disentangled representation learning \cite{cheung2014discovering, higgins2016beta, kim2018disentangling, locatello2019challenging}, information bottleneck \cite{tishby2015deep, alemi2016deep}, and invariant representation learning \cite{xie2017controllable, roy2019mitigating, jaiswal2020invariant, xie2022disentangled}.
A popular method for the above task is adversarial training \cite{madras2018learning, elazar2018adversarial, grari2019fairness, kim2018disentangling, cheng2020estimating, edwards2015censoring, xie2017controllable}, where
two neural networks, namely the encoder and the adversary, are trained jointly to compete with each other. The encoder's goal is to learn a representation that has high utility but contains no information about the target. The adversary, on the contrary, tries to recover the information about the target from the learned representation as much as possible. This leads to a minimax game similar to that in generative adversarial networks \cite{goodfellow2014generative}. Adversarial training is effective with a strong adversary; however, it is often challenging to train the adversary thoroughly in practice, due to time constraints and/or optimisation difficulties \cite{salimans2016improved, song2019overlearning}. In fact, recent studies have revealed that adversarial approaches may not faithfully produce an infomin representation in some cases \cite{moyer2018invariant, feng2019learning, song2019overlearning, song2019learning, balunovic2021fair}. This motivates us to seek a good, adversarial training-free alternative for scalable infomin learning.
In this work, we propose a new method for infomin learning which is almost as powerful as using a strong adversary but is highly scalable. Our method is inspired by recent advances in information theory which proposes to estimate mutual information in the sliced space \cite{goldfeld2021sliced}. We highlight the following contributions:
\begin{itemize}[leftmargin=*]
\item We show that for infomin learning, an accurate estimate of mutual information (or its bound) is unnecessary: testing and optimising statistical independence in some sliced spaces is sufficient;
\item We develop an analytical approximation to such a sliced independence test, along with a scalable algorithm for infomin learning based on this approximation. No adversarial training is required.
\end{itemize}
Importantly, the proposed method can be applied to a wide range of infomin learning tasks without any constraint on the form of variables or any assumption about the distributions. This contrasts our method with other adversarial training-free methods which are either tailored for discrete or univariate variables \cite{mary2019fairness, baharlouei2019r, jiang2020wasserstein, balunovic2021fair} or rely on variational approximation to distributions
\cite{alemi2016deep, chen2018isolating, moyer2018invariant, roy2019mitigating}.
\section{Limitation}
\section{Problem formulation}
\section{Methodology}
We propose an alternative to adversarial training for optimising \eqref{formula:overall_objective}.
Our idea is to learn representation by the following objective, which replaces $I(f(X); T)$ in objective \eqref{formula:overall_objective} with its `sliced' version:
\begin{equation}
\min_f \mathcal{L}(f(X); Y) + \beta \cdot SI(f(X); T) ,
\label{formula:slice_infomin_learning}
\end{equation}
where $SI$ denotes the sliced mutual information, which was also considered in \cite{goldfeld2021sliced}. Informally, $SI$ is a `facet' of mutual information that is much easier to estimate (ideally has closed form) but can still to some extent reflect the dependence between $Z$ and $T$. Optimising \eqref{formula:slice_infomin_learning} is then equivalent to testing and minimising the dependence between $Z$ and $T$ from one facet. Importantly, while testing dependence through only a single facet may be insufficient, by testing and minimising dependence through various facets across a large number of mini-batches we eventually see
$I(Z; T) \to 0$.
We show one instance of realising $SI$ whose empirical approximation $\hat{SI}$ has an analytic expression. The core of our method is Theorem 1, which is inspired by \cite{grari2019fairness, goldfeld2021sliced}.
\begin{theorem}
Let $Z \in \mathbbm{R}^D$ and $T \in \mathbbm{R}^d$ be two random variables that have moments. $Z$ and $T$ are statistically independent if and only if $SI(Z; T) = 0$, where $SI(Z; T)$ is defined as follows
\begin{equation}
SI(Z; T) = \sup_{h, g, \theta, \phi} \rho(h(\theta^{\top}Z), g(\phi^{\top}T)) ,
\label{formula:slice_renyi_corr}
\end{equation}
where $\rho$ is the Pearson correlation, $h, g: \mathbbm{R} \to \mathbbm{R}$ are Borel-measurable non-constant functions, and $\theta \in \mathbb{S}^{D-1}$, $\phi \in \mathbb{S}^{d-1}$ are vectors on the surfaces of the $D$-dimensional and $d$-dimensional hyperspheres.
\end{theorem}
\emph{Proof}. See the Appendix. \qed
We call $\theta$ and $\phi$ the slices for $Z$ and $T$ respectively, and $\theta^{\top}Z$, $\phi^{\top}T$ the sliced $Z$ and $T$ respectively.
We sketch here how this result relates to \cite{grari2019fairness,goldfeld2021sliced}.
\cite{goldfeld2021sliced} considers $\overline{SI}(Z;T)$, defined as the expected mutual information $\mathbb{E}[I(\theta^{\top}Z,\phi^{\top}T)]$ of the sliced $Z$ and $T$, where the expectation is taken over the respective Haar measures for $\theta \in \mathbb{S}^{D-1}$ and $\phi \in \mathbb{S}^{d-1}$. Instead of considering the mutual information $I(\theta^{\top}Z, \phi^{\top}T)$ in the average case, we take the supremum of the Pearson correlation over the functions $h, g$ defined above, which links to R\'{e}nyi's maximal correlation \cite{grari2019fairness,renyi1959measures,gebelein1941statistische,hirschfeld1935connection} and has some interesting properties suitable for infomin representation learning.
Intuitively, Theorem 1 says that in order to achieve $I(Z; T) \to 0$, we need not estimate $I(Z; T)$ in the original space; rather, we can test (and optimise) independence in the sliced space as realised by \eqref{formula:slice_renyi_corr}. Other realisations of the sliced mutual information $SI$ may also be used. The major merit of the realisation \eqref{formula:slice_renyi_corr} is that it admits an analytic expression for its empirical approximation, as shown below.
\textbf{Analytic approximation to \emph{SI}}. An empirical approximation to \eqref{formula:slice_renyi_corr} is
\[
SI(Z; T) \approx \sup_{i,j} \sup_{h_i, g_j} \rho(h_i(\theta^{\top}_i Z ), g_j(\phi^{\top}_j T)),
\]
\begin{equation}
\text{where } \quad \theta_i \sim \mathcal{U}(\mathbb{S}^{D-1}), \enskip i = 1, ..., S, \qquad \phi_j \sim \mathcal{U}(\mathbb{S}^{d-1}), \enskip j = 1, ..., S.
\label{formula:empirical_slice_renyi_corr}
\end{equation}
i.e., we approximate \eqref{formula:slice_renyi_corr} by randomly sampling a number of slices $\theta, \phi$ uniformly from the surface of two hyperspheres $\mathbb{S}^{D-1}$ and $\mathbb{S}^{d-1}$ and pick those slices where the sliced $Z$ and the sliced $T$ are maximally associated. With a large number of slices, it is expected that \eqref{formula:empirical_slice_renyi_corr} will approximate \eqref{formula:slice_renyi_corr} well. We refer to \cite{goldfeld2021sliced} for a theoretical analysis on the number of required slices in estimator-agnostic settings. In Appendix B we also investigate empirically how this number will affect performance.
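As a concrete illustration of the sampling in \eqref{formula:empirical_slice_renyi_corr}, directions uniform on a hypersphere can be obtained by normalising i.i.d. standard Gaussian vectors. The NumPy sketch below is ours, not part of the original algorithm statement; the dimensions (256 for $Z$, 8 for $T$) are purely illustrative.
\begin{verbatim}
import numpy as np

def sample_slices(dim, n_slices, rng):
    """Draw n_slices directions uniformly from the unit sphere S^{dim-1}:
    i.i.d. standard Gaussian vectors, normalised to unit length."""
    raw = rng.standard_normal((dim, n_slices))
    return raw / np.linalg.norm(raw, axis=0, keepdims=True)

rng = np.random.default_rng(0)
Theta = sample_slices(256, 10, rng)  # slices theta_i for Z
Phi = sample_slices(8, 10, rng)      # slices phi_j for T
\end{verbatim}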
For each slicing direction, we further assume that the supremum-achieving functions $h_i, g_j : \mathbbm{R} \to \mathbbm{R}$ for that direction can be well approximated by $K$-order polynomials given sufficiently large $K$, i.e.
\[
h_i(a) \approx \hat{h}_i(a) = \sum^K_{k=0} w_{ik}\sigma(a) ^k, \qquad g_j(a) \approx \hat{g}_j(a) = \sum^K_{k=0} v_{jk} \sigma(a)^k,
\]
where $\sigma(\cdot)$ is a monotonic function which maps the input to the range of $[-1, 1]$.
Its role is to ensure that $\sigma(a)$ always has finite moments, so that the polynomial approximation is well-behaved. Note that no information is lost by applying $\sigma(\cdot)$, since it is monotonic and hence invertible. Here we take $\sigma(\cdot)$ to be the tanh function. In Appendix A we also investigate theoretically the approximation error of this polynomial approximation scheme. Other approximation schemes such as random feature models \cite{lopez2013randomized} can also be used to approximate $h_i$ and $g_j$, which will be explored in the future. In this work, we simply set $K=3$.
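In code, the $K$-order expansion of a sliced variable is just a stack of powers of the squashed projection; a minimal sketch (our own, reusing the slices sampled above):
\begin{verbatim}
def poly_features(X, slices, K=3):
    """Map samples X (n, dim) to [1, s, s^2, ..., s^K] per slice,
    where s = tanh(X @ slice); output has shape (n, (K+1)*S)."""
    s = np.tanh(X @ slices)                          # (n, S)
    return np.concatenate([s**k for k in range(K + 1)], axis=1)
\end{verbatim}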
With this polynomial approximation, solving for each pair of functions $h_i, g_j$ in \eqref{formula:empirical_slice_renyi_corr} reduces to finding their weights $w_i, v_j$:
\[
\sup_{h_i, g_j} \rho(h_i(\theta^{\top}_i Z ), g_j(\phi^{\top}_j T)) \approx \sup_{w_i, v_j} \rho(w_i^{\top}Z'_i, v_j^{\top}T'_j),
\]
\[
Z'_i = [1, \sigma(\theta^{\top}_i Z), ..., \sigma(\theta^{\top}_i Z)^K ], \qquad T'_j = [1, \sigma(\phi^{\top}_j T), ..., \sigma(\phi^{\top}_j T)^K]
\]
This is known as canonical correlation analysis (CCA) \cite{hotelling1936relations} and can be solved analytically by eigendecomposition. Hence we can find the weights for all pairs of $h_i, g_j$ by $S^2$ eigendecompositions.
In fact, the functions $h_i, g_j$ for all $i, j$ can be solved simultaneously by performing a larger eigendecomposition only once. We do this by finding $w, v$ that maximise the following quantity:
\begin{equation}
\hat{SI}_{\Theta, \Phi}(Z;T) = \sup_{w, v} \rho( w^{\top}Z', v^{\top}T'),
\label{formula:slice_renyi_corr_joint}
\end{equation}
where
\[Z' = [Z'_1, ..., Z'_S], \qquad T' = [T'_1, ..., T'_S]
\]
\[
Z'_i = [1,\sigma(\theta^{\top}_i Z), ..., \sigma(\theta^{\top}_i Z)^K ], \qquad T'_j = [1, \sigma(\phi^{\top}_j T), ..., \sigma(\phi^{\top}_j T)^K]
\]
\[
\theta_i \sim \mathcal{U}(\mathbb{S}^{D-1}), \enskip i = 1, ..., S, \qquad \phi_j \sim \mathcal{U}(\mathbb{S}^{d-1}), \enskip j = 1, ..., S.
\]
That is, we first concatenate all $Z'_1,...,Z'_S$ and $T'_1,...,T'_S$ into two `long' vectors $Z' \in \mathbbm{R}^{(K+1)S}$ and $T' \in \mathbbm{R}^{(K+1)S}$ respectively, then solve a CCA problem corresponding to $Z'$ and $T'$. We then use \eqref{formula:slice_renyi_corr_joint} to replace \eqref{formula:empirical_slice_renyi_corr}. The theoretical basis for doing so is Theorem 2, which shows that the solution of \eqref{formula:slice_renyi_corr_joint} yields an upper bound on \eqref{formula:empirical_slice_renyi_corr}, provided that the polynomial approximation is accurate.
\begin{theorem}
Provided that each $h_i, g_j$ in \eqref{formula:empirical_slice_renyi_corr} is a $K$-order polynomial, given the sampled $\Theta = \{\theta_i\}^S_{i=1}, \Phi = \{\phi_j\}^S_{j=1}$, we have $ \hat{SI}_{\Theta, \Phi}(Z; T) \leq \epsilon \Rightarrow$ $\sup_{i, j} \sup_{h_i, g_j} \rho(h_i(\theta_i^{\top}Z), g_j(\phi_j^{\top}T)) \leq \epsilon$.
\end{theorem}
\emph{Proof}. See Appendix A. \qed
The intuition behind the proof of Theorem 2 is that if all $Z'_i$ and $T'_j$ as a whole cannot achieve a high correlation, then none of them alone can either.
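For completeness, here is a sketch of how \eqref{formula:slice_renyi_corr_joint} can be solved in closed form: the top canonical correlation is the largest singular value of the whitened cross-covariance. The small ridge term is our addition for numerical stability (after centring, the constant columns would otherwise make the covariances singular) and is not part of the derivation; \texttt{poly\_features} is the helper defined above.
\begin{verbatim}
def solve_cca(Z, T, Theta, Phi, K=3, ridge=1e-6):
    """Return (SI_hat, w, v): the top canonical correlation between the
    expanded Z' and T' of the joint objective, and the weights achieving it."""
    Zp = poly_features(Z, Theta, K); Zp = Zp - Zp.mean(0)
    Tp = poly_features(T, Phi, K);   Tp = Tp - Tp.mean(0)
    n = Zp.shape[0]
    Czz = Zp.T @ Zp / n + ridge * np.eye(Zp.shape[1])
    Ctt = Tp.T @ Tp / n + ridge * np.eye(Tp.shape[1])
    Czt = Zp.T @ Tp / n
    Lz, Lt = np.linalg.cholesky(Czz), np.linalg.cholesky(Ctt)
    M = np.linalg.solve(Lz, Czt)        # Lz^{-1} Czt
    M = np.linalg.solve(Lt, M.T).T      # Lz^{-1} Czt Lt^{-T}
    U, sig, Vt = np.linalg.svd(M)
    w = np.linalg.solve(Lz.T, U[:, 0])  # undo the whitening
    v = np.linalg.solve(Lt.T, Vt[0])
    return sig[0], w, v
\end{verbatim}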
The benefits of solving $h_i, g_j$ for all slices jointly are two-fold. (a) The first benefit is better computational efficiency, as it avoids invoking a for loop and only uses matrix operations. This has better affinity to modern deep learning infrastructure and libraries (e.g. Tensorflow \cite{abadi2016tensorflow} and PyTorch \cite{paszke2019pytorch}) which are optimised for matrix-based operations. In addition to computational efficiency, another benefit of solving $h_i, g_j$ jointly is (b) the stronger power in independence testing. More specifically, while some sliced directions may individually be weak for detecting dependence, together as a whole they can compensate for each other, yielding a more powerful test. This also echoes \cite{goldfeld2022k}.
\begin{minipage}[!t]{0.495\linewidth}
\centering
\begin{algorithm}[H]
\caption{Adversarial Infomin Learning}
\label{alg:adv_infomin_learning}
\begin{algorithmic}
\STATE {\bfseries Input:} data $\mathcal{D} = \{X^{(n)}, Y^{(n)}, T^{(n)} \}^N_{n=1}$
\STATE {\bfseries Output:} $Z = f(X)$ that optimises \eqref{formula:overall_objective}
\STATE {\bfseries Hyperparams:} $\beta$, $N'$, $L_1$, $L_2$
\STATE {\bfseries Parameters:} encoder $f$, MI estimator $t$
\STATE \quad
\FOR{$l_1$ in $1$ to $L_1$ }
\STATE sample mini-batch $\mathcal{B}$ from $\mathcal{D}$
\STATE sample $\mathcal{D'}$ from $\mathcal{D}$ whose size $N' < N$
\STATE $\rhd$ \emph{Max-step}
\FOR{$l_2$ in $1$ to $L_2$ }
\STATE $t \leftarrow t + \eta \nabla_t \hat{I}_t(f(X); T)$ with data in $\mathcal{D}'$
\ENDFOR
\STATE $\rhd$ \emph{Min-step}
\STATE $f \leftarrow f - \eta \nabla_{f}[\mathcal{L}(f(X); Y) + \beta \hat{I}_t(f(X); T)]$ with data in $\mathcal{B}$
\vspace{0.1cm}
\ENDFOR
\STATE \textbf{return} $Z = f(X)$
\end{algorithmic}
\end{algorithm}
\vspace{0.1cm}
\end{minipage}%
\hspace{+0.002\linewidth}
\begin{minipage}[!t]{0.500\linewidth}
\centering
\begin{algorithm}[H]
\caption{Slice Infomin Learning}
\label{alg:slice_infomin_learning}
\begin{algorithmic}
\STATE {\bfseries Input:} data $\mathcal{D} = \{X^{(n)}, Y^{(n)}, T^{(n)} \}^N_{n=1}$
\STATE {\bfseries Output:} $Z = f(X)$ that optimises \eqref{formula:overall_objective}
\STATE {\bfseries Hyperparams:} $\beta$, $N'$, $L$, $S$
\STATE {\bfseries Parameters:} encoder $f$, weights $w, v$ in $\hat{SI}$
\STATE \quad
\FOR{$l$ in $1$ to $L$ }
\STATE sample mini-batch $\mathcal{B}$ from $\mathcal{D}$
\STATE sample $\mathcal{D'}$ from $\mathcal{D}$ whose size $N' < N$
\STATE $\rhd$ \emph{Max-step}
\STATE sample $S$ slices $\Theta = \{\theta_i\}^S_{i=1}, \Phi = \{\phi_j\}^S_{j=1}$
\STATE solve the weights $w, v$ in $\hat{SI}$ \eqref{formula:slice_renyi_corr_joint} analytically with $\Theta, \Phi, \mathcal{D}'$ by eigendecomposition
\STATE $\rhd$ \emph{Min-step}
\STATE $f \leftarrow f - \eta \nabla_{f}[\mathcal{L}(f(X); Y) + \beta \hat{SI}(f(X); T)]$ with data in $\mathcal{B}$
\vspace{-0.04cm}
\ENDFOR
\STATE \textbf{return} $Z = f(X)$
\end{algorithmic}
\end{algorithm}
\vspace{0.1cm}
\end{minipage}
\textbf{Mini-batch learning algorithm}. Given the above approximation \eqref{formula:slice_renyi_corr_joint} to $SI$, we can now elaborate the details of our mini-batch learning algorithm. In each iteration, we execute the following steps:
\begin{itemize}[leftmargin=*]
\item \emph{Max-step}. Sample $S$ slices $\Theta = \{\theta_i\}^S_{i=1}, \Phi = \{\phi_j\}^S_{j=1}$ and a subset of the data $\mathcal{D}' \subset \mathcal{D}$. Learn the weights $w, v$ of $\hat{SI}(Z; T)$ \eqref{formula:slice_renyi_corr_joint} with the sampled $\Theta, \Phi$ and the data in $ \mathcal{D}'$ by eigendecomposition;
\item \emph{Min-step}. Update $f$ by SGD: $f \leftarrow f - \eta \nabla_f [\mathcal{L}(f(X), Y) + \beta \hat{SI}(f(X), T)]$ with $Z = f(X)$, where the parameters $w, v$ of $\hat{SI}(f(X), T)$ are learned in the max-step: $\hat{SI}(Z, T) = \rho( w^{\top}Z', v^{\top}T')$.
\end{itemize}
The whole learning procedure is shown in Algorithm 2. We note that the data in the subset $\mathcal{D}'$ can be different from the mini-batch $\mathcal{B}$ and its size
can be much larger than the typical size of a mini-batch.
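To make the two steps concrete, below is a hedged PyTorch-style sketch of one outer iteration. Here \texttt{encoder}, \texttt{task\_loss}, \texttt{optimiser}, \texttt{beta} and the data tensors are placeholders supplied by the application, \texttt{solve\_cca} is the NumPy routine above, and \texttt{Theta\_np}, \texttt{Phi\_np} denote NumPy copies of the sampled slices; none of these names come from the original text.
\begin{verbatim}
import torch

def si_hat(Z, T, Theta, Phi, w, v, K=3, eps=1e-8):
    """Differentiable rho(w^T Z', v^T T') with w, v frozen from the
    max-step; gradients flow only into Z (hence into the encoder f)."""
    Zp = torch.cat([torch.tanh(Z @ Theta)**k for k in range(K + 1)], dim=1)
    Tp = torch.cat([torch.tanh(T @ Phi)**k for k in range(K + 1)], dim=1)
    a, b = Zp @ w, Tp @ v
    a, b = a - a.mean(), b - b.mean()
    return (a * b).mean() / (a.std() * b.std() + eps)

# max-step: analytic CCA on the subset D', no gradients needed
Z_sub = encoder(X_sub).detach()
_, w_np, v_np = solve_cca(Z_sub.numpy(), T_sub.numpy(), Theta_np, Phi_np)
w = torch.as_tensor(w_np, dtype=torch.float32)
v = torch.as_tensor(v_np, dtype=torch.float32)

# min-step: one SGD step on the encoder with mini-batch B
Z = encoder(X_batch)
loss = task_loss(Z, Y_batch) + beta * si_hat(Z, T_batch, Theta, Phi, w, v)
optimiser.zero_grad(); loss.backward(); optimiser.step()
\end{verbatim}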
Compared to that of adversarial methods \cite{edwards2015censoring, xie2017controllable, madras2018learning, elazar2018adversarial, song2019learning, grari2019fairness} as shown in Algorithm 1, we replace neural network training in the max-step with an analytic eigendecomposition step, which is much cheaper to execute. As discussed in Sec 2, if the network $t$ is not trained thoroughly (due to e.g. insufficient gradient steps $L_2$ in the max-step), it may not provide a sensible estimate of $I_t(f(X); T)$ and is hence a weak adversary. Our method does not suffer from this issue as $\hat{SI}$ is solved analytically.
Finally, as an optional strategy, we can improve our method by actively seeking more informative slices for independence testing by optimising the sampled slices with a few gradient steps (e.g. 1-3):
\begin{equation}
\Theta \leftarrow \Theta- \xi \nabla_{\Theta} \hat{SI}_{\Theta, \Phi}(Z, T), \qquad \qquad \Phi \leftarrow \Phi - \xi \nabla_{\Phi} \hat{SI}_{\Theta, \Phi}(Z, T)
\end{equation}
which is still cheap to execute. Such a strategy can be useful when most of the sampled slices are ineffective in detecting dependence, which typically happens in later iterations where $I(Z; T) \approx 0$.
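A sketch of this refinement (our own): since the goal is more informative slices, we take steps in the direction that increases $\hat{SI}$ (i.e. gradient ascent on $\hat{SI}$, equivalently descent on $-\hat{SI}$) and then re-project the slices onto the unit sphere so they remain valid directions. The re-projection step is our addition, and \texttt{xi} is a small placeholder step size.
\begin{verbatim}
Theta = Theta.clone().requires_grad_(True)
Phi = Phi.clone().requires_grad_(True)
for _ in range(3):                       # 1-3 refinement steps
    s = si_hat(Z.detach(), T_batch, Theta, Phi, w, v)
    g_theta, g_phi = torch.autograd.grad(s, [Theta, Phi])
    with torch.no_grad():
        Theta += xi * g_theta            # move toward higher SI-hat
        Phi += xi * g_phi
        Theta /= Theta.norm(dim=0, keepdim=True)  # back onto the sphere
        Phi /= Phi.norm(dim=0, keepdim=True)
\end{verbatim}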
\clearpage
\section{Related works}
\textbf{Neural mutual information estimators}. A set of neural network-based methods \cite{belghazi2018mutual, hjelm2018learning, van2018representation, chen2020neural} have been proposed to estimate the mutual information (MI) between two random variables, most of which work by maximising a lower bound of MI \cite{poole2019variational}. These neural MI estimators are in general more powerful than non-parametric methods \cite{gretton2007kernel, bach2002kernel, poczos2012copula, szekely2014partial} when trained thoroughly, yet the time spent on training may become the computational bottleneck when applied to infomin learning.
\textbf{Upper bound for mutual information}. Another line of method for realising the goal of infomin learning without adversarial training is to find an upper bound for mutual information \cite{alemi2016deep, moyer2018invariant, roy2019mitigating, cheng2020club, wang2020infobert}. However, unlike lower-bound estimates, upper bounds often require knowledge of either the conditional densities or the marginal densities \cite{poole2019variational}, which are generally not available in practice. As such, most of these methods introduce a variational approximation to these densities whose choice/estimate may be difficult. Our method, by contrast, does not need to approximate any densities.
\textbf{Slicing techniques}. A series of successes have been witnessed for the use of slicing methods in machine learning and statistics \cite{nadjahi2020statistical}, with applications in generative modelling \cite{deshpande2018generative, kolouri2018sliced, rowland2019orthogonal, song2020sliced}, statistical testing \cite{gong2020sliced} and mutual information estimation \cite{goldfeld2021sliced}. Among them, \cite{goldfeld2021sliced}, which proposes the concept of sliced mutual information, is closely related to this work and directly inspires our method. Our contribution is a novel realisation of sliced mutual information suitable for infomin learning.
\textbf{Fair machine learning}. One application of our method is to encourage the fairness of a predictor. Much effort has been devoted to the same purpose; however, most of the existing methods either only work at the classifier level \cite{baharlouei2019r, mary2019fairness, grari2019fairness, xu2021controlling}, or only focus on the case where the sensitive attribute is discrete or univariate \cite{balunovic2021fair, chzhen2020fair, jiang2020wasserstein, xu2021controlling, zemel2013learning}, or require adversarial training \cite{edwards2015censoring, xie2017controllable, madras2018learning, elazar2018adversarial, song2019learning, grari2019fairness}. Our method, on the contrary, places no restriction on the form of the sensitive attribute, can be used at both the representation level and the classifier level, and requires no adversarial training of neural networks.
\textbf{Disentangled representation learning}. Most of the methods in this field work by penalising the discrepancy between the joint distribution $P = q(Z)$ and the product of marginals $Q = \prod^D_d q(Z_d)$ \cite{higgins2016beta, kumar2017variational, chen2018isolating, burgess2018understanding, kim2018disentangling}.\footnote{Note there exist methods based on group theory \cite{higgins2018towards, quessard2020learning, zhu2021commutative} which do not assess distribution discrepancy.} However, such discrepancy is often non-trivial to estimate, so one has to resort to Monte Carlo estimates ($\beta$-TCVAE \cite{chen2018isolating}), to train a neural network estimator (FactorVAE \cite{kim2018disentangling}) or to assess the discrepancy between $P$ and $Q$ by only their moments (DIP-VAE \cite{kumar2017variational}). Our method avoids assessing distribution discrepancy directly and instead performs independence tests in the sliced space.
\IEEEPARstart{A}{daptive} beamforming aims to extract the signal from a certain direction while suppressing interference and noise. The technique has been widely used in many fields such as wireless communications, sonar and radar \cite{van2004detection}. However, standard adaptive beamformers are well known to be sensitive to steering vector (SV) mismatches, array imperfections or environmental uncertainties due to non-ideal conditions and many different factors (e.g., wavefront errors, local scattering and finite sample sizes) \cite{li2003robust}. Hence, various adaptive beamformers have been developed in order to mitigate the effects of these problems. Existing methods include diagonal loading \cite{mestre2006finite, kukrer2014generalised,l1stap}, the worst-case optimization in \cite{vorobyov2003robust,wc_ccm} and the projection techniques and subspace-based beamformer investigated in \cite{mohammadzadeh2018adaptive,hassanien2008robust, huang2012modified, shen2015robust,jio,beam_jio,jio_stap,wl_jio,jidf,sjidf,lrcc}. However, these approaches are limited when the SV mismatch is severe.
Recent works show that the main cause of beamformer performance degradation is the leakage of a component of the signal of interest (SOI) in the sample covariance matrix (SCM) \cite{mallipeddi2011robust}. Recently, a new approach to adaptive beamforming was presented that removes the effect of the SOI component from the covariance matrix by reconstructing the noise-plus-interference covariance (NPIC) matrix. The NPIC matrix in \cite{gu2012robust} is reconstructed by integrating the assumed SV and the corresponding Capon spectrum over a range of angles in which the SOI is absent. The NPIC-based beamformer in \cite{gu2014robust} relies on a sparse reconstruction method. In \cite{ruan2014robust,ruan2016}, computationally efficient algorithms via low complexity reconstruction of the NPIC matrix are presented. In \cite{yuan2017robust} a subspace-based NPIC matrix reconstruction algorithm was proposed. However, it requires that the angular sector of each interference should be known and separated from each other. Later, in \cite{zhang2016interference} an approach is developed using spatial power spectrum sampling (SPSS), but its performance degrades as the number of sensors is decreased. In \cite{chen2018adaptive} the SOI component is eliminated from the DoA of the related bases in order to construct an NPIC matrix directly from the signal-interference subspace.\\
\indent It is worth noting that adaptive antenna arrays and their applications involve a trade-off between computational complexity and performance, which is directly related to the adaptation algorithm.
However, in practice and for large systems, these techniques require the computation of the inverse of the input data SCM (or NPIC matrix), rendering the method very complex.\\
\indent In this letter, we introduce a conjugate gradient (CG) type adaptation algorithm \cite{cg_beam,smcg,l1cg} for the maximum entropy power spectrum noise-plus-interference covariance technique, denoted MEPS-NPIC-CG. The proposed MEPS-NPIC-CG algorithm updates the beamforming weights with a reduced cost as it does not explicitly form the covariance matrices, relying instead on low-cost iterative techniques. The estimated weight vector is obtained from a coarse estimate of the angular sector where the desired signal lies, using CG iterations that avoid the explicit construction of the covariance matrix. We similarly implicitly reconstruct the desired signal covariance matrix and obtain a better desired signal SV estimate using low-cost iterations. An analysis of computational complexity shows that MEPS-NPIC-CG has low complexity and outperforms other existing techniques.
\section{Problem Background}
Let us assume a linear antenna array with $M$ sensors, spaced by distance $d$, that receive narrowband signals which impinge on the array from several far-field sources. The array observation vector at the $t$-th snapshot can be modeled as
\begin{align}
\textbf{x}(t)=\textbf{a}(\theta_0) s_0(t)+\sum _{l=1}^L \textbf{a}(\theta_l) s_l(t)+\textbf{n}(t),
\end{align}
where $s_0(t)$ and $s_l(t)$ denote the waveforms of the SOI and $\textit{l}$th interfering signal, respectively. The additive white Gaussian noise vector $\textbf{n}(t)$ is assumed spatially uncorrelated from the desired signal and the interfering signals. The angles $\theta_0$ and $\theta_l$ ($l=1, \cdots, L$) denote the DoAs of the desired signal and interference, respectively. For a sensor array with $\textit{M}$ sensors, $\textbf{a}(\cdot)$ designates the corresponding SV, which has the general form
$\textbf{a}(\theta)= \big[1, e^{-j\pi \bar{d} \sin \theta}, \cdots, e^{-j\pi(M-1)\bar{d} \sin \theta} \big]^\mathrm{T}$,
where $\bar{d}=2d/\lambda$, $\lambda$ is the wavelength, and $(\cdot)^\mathrm{T}$ denotes the transpose. Assuming that the SV $\textbf{a}_0=\textbf{a}(\theta_0)$ is known, for a given beamformer $\textbf{w}$, the performance is evaluated by the output signal-to-interference-plus-noise ratio (SINR) as
\begin{align}\label{SINR}
\mathrm{SINR}=\dfrac{\sigma^{2}_0 |\textbf{w}^\mathrm{H} \textbf{a}_0|^2 }{\textbf{w}^\mathrm{H} \textbf{R}_\mathrm{i+n}\textbf{w}},
\end{align}
where $\textbf{R}_\mathrm{i+n}$ is the NPIC matrix, $\sigma^2_0$ is power of the desired signal and $ (\cdot)^\mathrm{H} $ stands for Hermitian transpose. The beamformer that maximizes (\ref{SINR}) is equivalent to finding the solution that maintains a distortionless response toward the SV $\textbf{a}_0$:
\begin{align}\label{MVDR}
\underset{{\textbf{w}}}{\operatorname{min}}\ \textbf{w}^\mathrm{H} \textbf{R}_\mathrm{i+n} \ \textbf{w}\ \hspace{.4cm} \mathrm{s.t.} \hspace{.4cm} \textbf{w}^\mathrm{H} \textbf{a}_0=1.
\end{align}
The solution to (\ref{MVDR}) yields the optimal beamformer given by
\begin{align}\label{optimal wegight vector}
\textbf{w}_{\mathrm{opt}}=\dfrac{\textbf{R}_\mathrm{i+n}^{-1} \textbf{a}_0}{\textbf{a}_0^\mathrm{H} \textbf{R}_\mathrm{i+n}^{-1}\textbf{a}_0},
\end{align}
which is the adaptive weight vector based on the minimum variance distortionless response (MVDR) criterion \cite{van2004detection}.
Moreover, the array covariance matrix $\textbf{R}=\mathrm{E}\{\textbf{x}(t)\textbf{x}^\mathrm{H}(t)\}$ is
\begin{equation} \label{Theoretical R}
\textbf{R}=\textbf{R}_\mathrm{i+n}+\textbf{R}_\mathrm{s}= \int_{\Phi} P(\theta)\textbf{a}(\theta)\textbf{a}^\mathrm{H}(\theta) d\theta,
\end{equation}
where $P(\theta)$ is the power spectrum of the signals and $\Phi=[\bar{\Theta} \cup \Theta]$ covers the union of the angular sectors of the noise-plus-interference signal, $\bar{\Theta}$, and of the desired signal region, $\Theta$ (obtained through some low-resolution direction finding methods \cite{van2004detection}), while $\textbf{R}_\mathrm{s}=\sigma_0^2\textbf{a}_0\textbf{a}_0^\mathrm{H}$ is the theoretical desired signal covariance matrix. Since $\textbf{R}_\mathrm{i+n}$ is unknown in practice, it is substituted by the data SCM \cite{spa} as $\hat{\textbf{R}}=(1/K)\sum_{t=1}^{K} \textbf{x}(t)\textbf{x}^\mathrm{H}(t)$, where $K$ is the number of received snapshot vectors.
\section{Proposed MEPS-NPIC-CG Algorithm}
Our approach is based on MEPS to reconstruct the NPIC matrix and estimate the desired signal SV separately. The NPIC matrix, however, is never explicitly computed; instead, we use CG recursions to solve a linear system to obtain the desired weight vector. The desired signal SV is also obtained via a low-complexity approach.
\subsection{Maximum Entropy Power Spectrum }
In the proposed beamforming method, an approach different from prior works is adopted to reconstruct the NPIC and the desired signal covariance matrices. The essence of the idea is based on the use of the spatial spectrum distribution over all possible directions and coarse estimates of the angular regions where the desired signal and the interferers lie. In this work, we exploit maximum entropy power spectrum estimation \cite{lacoss1971data}:
\vspace{-1mm}
\begin{align}\label{Power of MEM}
\hat{{{P}}}_\mathrm{meps}=\dfrac{1}{\epsilon_p \vert \textbf{a}^\mathrm{H}(\theta)\hat{\textbf{R}}^{-1} \textbf{u}_1 \vert ^2}
\end{align}
where $\textbf{u}_1=[\begin{smallmatrix}1 & 0 & \cdots & 0\end{smallmatrix}]^\mathrm{T}$, $\epsilon_p=1/\textbf{u}_1^\mathrm{T}\hat{\textbf{{R}}}^{-1}\textbf{u}_1$.
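For reference, $\hat{{{P}}}_\mathrm{meps}$ is cheap to evaluate once $\textbf{v} = \hat{\textbf{R}}^{-1}\textbf{u}_1$ is available (computed iteratively in Section~\ref{sec:MEPS.CG} below). The NumPy sketch below uses illustrative names; note that $\epsilon_p = 1/\textbf{u}_1^\mathrm{T}\textbf{v}$ is just the reciprocal of the first entry of $\textbf{v}$:
\begin{verbatim}
import numpy as np

def steering(M, theta, d_bar=1.0):
    """ULA steering vector a(theta); theta in radians, d_bar = 2d/lambda."""
    return np.exp(-1j * np.pi * d_bar * np.arange(M) * np.sin(theta))

def meps_power(theta_grid, v):
    """Evaluate P_meps on a grid of angles, given v = R_hat^{-1} u1;
    eps_p = 1 / (u1^T v) is the reciprocal of the first entry of v."""
    M = v.size
    eps_p = 1.0 / np.real(v[0])
    A = np.stack([steering(M, th) for th in theta_grid])   # (Q, M)
    return 1.0 / (eps_p * np.abs(A.conj() @ v)**2)
\end{verbatim}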
\subsection{Desired Signal SV Estimation}
In practice, we have inaccurate SV estimates, resulting in performance degradation. Therefore, we utilize the knowledge of the angular sector of the SOI to construct a criterion which can be used to estimate the actual SV. This algorithm is based on the multiplication of an estimate of the desired signal covariance matrix and the nominal SV of the SOI, which results in a vector much closer to the actual SV of the SOI. First, the desired signal covariance matrix can be reconstructed based on MEPS by numerically evaluating \eqref{Theoretical R} over $\Theta$
\begin{align}\label{proposed Rs}
\hat{\textbf{R}}_{\mathrm{s}} \approx \sum_{i=1}^{S} \hat{{{P}}}_\mathrm{meps} \ \textbf{a}(\theta_{s_i})\textbf{a}^\mathrm{H}(\theta_{s_i})\Delta\theta_s,
\end{align}
where $\Theta$ is sampled uniformly with $S$ sampling points spaced by $\Delta\theta_s$, so that $ \{\textbf{a}(\theta_{s_i}) \arrowvert \theta_{s_i} \in \Theta \} $ lies within the range space of $ \hat{\textbf{R}}_{\mathrm{s}} $. Let $\bar{\mathbf{a}}$ be the nominal desired signal SV. If the set $\Theta$ is such that the noise and interference power are dominated by the signal power in the covariance estimate \eqref{proposed Rs}, then
\begin{align}\label{etimted SV}
\hat{\textbf{a}}_0=\hat{\textbf{R}}_{\mathrm{s}} \bar{\textbf{a}}\simeq (\sigma_\mathrm{0}^2\textbf{a}_0\textbf{a}_0^\mathrm{H})\bar{\textbf{a}} = \sigma_\mathrm{0}^2(\textbf{a}_0^\mathrm{H}\bar{\textbf{a}})\textbf{a}_0,
\end{align}
is proportional to the desired signal’s SV (note that the nominal SV is usually a good enough approximation so that $\mathbf{a}_0^H\bar{\mathbf{a}}$ is far from zero).
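A minimal NumPy transcription of \eqref{proposed Rs}--\eqref{etimted SV} that never forms $\hat{\textbf{R}}_{\mathrm{s}}$ (our own sketch, reusing the \texttt{steering} helper above; the per-direction powers are assumed precomputed from \eqref{Power of MEM}):
\begin{verbatim}
def estimate_sv(a_bar, thetas, P_meps, d_theta_s):
    """a0_hat = R_s_hat a_bar, accumulated direction by direction:
    sum_i P_i a_i (a_i^H a_bar) dtheta_s, i.e. O(MS), no M x M matrix."""
    M = a_bar.size
    a0 = np.zeros(M, dtype=complex)
    for th, p in zip(thetas, P_meps):
        a = steering(M, th)
        a0 += p * a * (a.conj() @ a_bar) * d_theta_s
    return a0
\end{verbatim}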
\subsection{NPIC Matrix Reconstruction Using CG}\label{AA}
The classical least mean square (LMS) algorithm in \cite{frost1972algorithm} is based on adjusting the array of sensors in real time toward a signal coming from the desired direction while the interferences are attenuated. The LMS algorithm is a gradient-descent algorithm which searches for the minimum of a quadratic cost function. We apply this approach to solve the MVDR optimization problem in (\ref{MVDR}) by using the Lagrange multiplier $\alpha$ to include the constraint into the objective function as
\begin{align} \label{cost function}
J(\textbf{w})=\textbf{w}^\mathrm{H} \hat{\textbf{R}}_\mathrm{i+n} \ \textbf{w}+\alpha(\textbf{w}^\mathrm{H} \hat{\textbf{a}}_0-1).
\end{align}
The cost function $J(\textbf{w})$ can be minimized by applying the steepest descent algorithm as follows
\begin{align} \label{Iteration}
\textbf{w}(t+1)=\textbf{w}(t)-\dfrac{1}{2} \mu \nabla J(\textbf{w}),
\end{align}
where $\nabla J(\textbf{w})$ is the gradient of the cost function with respect to $\textbf{w}(t)$. The gradient vector can be obtained from (\ref{cost function}) as
\begin{align} \label{Gradient}
\nabla J(\textbf{w})=2 \hat{\textbf{R}}_\mathrm{i+n} \textbf{w}(t)+\alpha \hat{\textbf{a}}_0.
\end{align}
Exploiting \eqref{Theoretical R} over angular $\bar{\Theta}$ and the MEPS estimate (\ref{Power of MEM}), the NPIC matrix can be reconstructed by numerically evaluating
\begin{align}\label{proposed Ri+n}
\hat{\textbf{R}}_{\mathrm{i+n}}=\int_{\bar{\Theta}}\hat{{{P}}}_\mathrm{meps}\textbf{a}(\theta)\textbf{a}^\mathrm{H}(\theta)d\theta.
\end{align}
Sampling $\bar{\Theta}$ uniformly with $Q$ sampling points
spaced by $\Delta\theta$, (\ref{proposed Ri+n}) can be approximated by
\begin{align} \label{Summation}
\hat{\textbf{R}}_{\mathrm{i+n}} \approx \sum_{i=1}^{Q} \dfrac{\textbf{a}(\theta_i)\textbf{a}^\mathrm{H}(\theta_i)}
{\epsilon_p \vert \textbf{a}^\mathrm{H}(\theta_i)\hat{\textbf{R}}^{-1} \textbf{u}_1 \vert ^2} \Delta\theta.
\end{align}
Here and in the next section we show how to apply the update \eqref{Iteration} while avoiding to compute \eqref{Summation} explicitly.
Rewriting (\ref{Gradient}) by substituting the expression for $\hat{\textbf{R}}_{\mathrm{i+n}}$, we get
\begin{align} \label{Grad with w}
\nabla J(\textbf{w}) =
\sum_{i=1}^{Q} 2\hat{{{P}}}_\mathrm{meps} \Big(\textbf{a}^\mathrm{H}(\theta_i) \textbf{w}(t)\Big)\textbf{a}(\theta_i)\Delta\theta +\alpha \hat{\textbf{a}}_0,
\end{align}
By substituting (\ref{Grad with w}) into (\ref{Iteration}) and rearranging, we obtain a recursion for the beamformer given by
\begin{align} \label{final W}
\textbf{w}(t+1)=\textbf{w}(t)-\mu \big[ \hat{\textbf{a}}_0- \textbf{r}(t) \big],
\end{align}
where $\textbf{r}(t)=\sum_{i=1}^{Q}\hat{{{P}}}_\mathrm{meps} \Big(\textbf{a}^\mathrm{H}(\theta_i) \textbf{w}(t)\Big)\textbf{a}(\theta_i)\Delta\theta$ and $\mu$ is the steepest descent step size. In order to find the beamformer, the conjugate gradient algorithm is used to solve the unconstrained
quadratic programming problem in \eqref{cost function}, as shown in Algorithm 1.
\begin{algorithm}
\caption{Conjugate Gradient \cite{luenberger1984linear}}
\begin{algorithmic}[1]
\State Choose an initial iterate $\textbf{w}_0$;
\State Set $\textbf{g}_0= \nabla J(\textbf{w}_0)$ and
$\textbf{e}_0=-\textbf{g}_0$;
\State Set $t \gets 0$;
\While{$\Vert \nabla J(\textbf{w}_t) \Vert > \mathrm{tol.}$}
\State Define $\textbf{e}_t=\hat{\textbf{a}}_0- \textbf{r}_t$
\State Determine the step-size $\mu_t=-\dfrac{\textbf{e}_t^\mathrm{T} \textbf{g}_t }{\textbf{e}_t^\mathrm{T}\hat{\textbf{R}}_{\mathrm{i+n}} \textbf{e}_t }$
\State $\textbf{w}_{t+1}=\textbf{w}_t+\mu_t \textbf{e}_t$;
\State $\textbf{g}_{t+1}=\nabla J(\textbf{w}_{t+1})$
\State Determine $\beta_t$ as $\beta_t=\dfrac{\textbf{g}_{t+1}^\mathrm{T} (\textbf{g}_{t+1}-\textbf{g}_{t})}{\textbf{g}_{t}^\mathrm{T} \textbf{g}_{t}}$;
\State Set $\textbf{e}_{t+1}=-\textbf{g}_{t+1}+\beta_t\textbf{e}_t$;
\State Set $t \gets t+1$;
\EndWhile
\end{algorithmic}
\end{algorithm}
Hence, the weight vector is updated at each iteration by the recursion in (\ref{final W}) for reducing the complexity.\\
Up to now, the main difference from prior works lies in the fact that the integral (\ref{proposed Ri+n}) is approximated by the summation (\ref{Summation}), which would require a complexity of $ \mathcal{O}(M^2Q)$ to synthesize the narrowband signals' power accurately. However, in the computation of \eqref{Grad with w} and in the final proposed algorithm in (\ref{final W}), \eqref{V recursion} and \eqref{xi.approx} (see below), we avoid computing expensive $\mathcal{O}(M^2)$ outer products, so our algorithm requires $\mathcal{O}(MQ)$ for steps (2, 4, 5, 7, 8 and 10), while steps (6 and 9) need $\mathcal{O}(M)$ operations. The algorithm iterates $t$ times to find the best step size $\mu_t$; hence, the final computational complexity of the proposed method is only $\mathcal{O}(tMQ)$, and the beamformer is computed without the need for the inverse of the NPIC matrix.
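The saving is visible in a direct NumPy transcription of \eqref{final W}: $\textbf{r}(t)$ is accumulated direction by direction, so no $M \times M$ matrix is ever formed. This sketch (with illustrative names, reusing \texttt{steering} from above) uses a fixed step size in place of the line search of Algorithm 1, purely for brevity:
\begin{verbatim}
def r_vec(w, thetas, P_meps, d_theta):
    """r(t) = sum_i P_i a_i (a_i^H w) dtheta; O(MQ) per evaluation."""
    r = np.zeros(w.size, dtype=complex)
    for th, p in zip(thetas, P_meps):
        a = steering(w.size, th)
        r += p * a * (a.conj() @ w) * d_theta
    return r

def update_w(w, a0_hat, thetas, P_meps, d_theta, mu):
    """One descent step of the recursion: w <- w - mu [a0_hat - r(t)]."""
    return w - mu * (a0_hat - r_vec(w, thetas, P_meps, d_theta))
\end{verbatim}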
\subsection{MEPS Estimation Using CG}\label{sec:MEPS.CG}
In order to compute \eqref{Summation} efficiently, we can use an iterative solution to the linear system, and take advantage of the structure of the SCM, $\hat{\textbf{R}}$. We write the term $\textbf{v}=\hat{\textbf{R}}^{-1} \textbf{u}_1$ in \eqref{Summation} and consider the optimization problem
\begin{align}\label{V}
\underset{{\textbf{v}}}{\operatorname{min}}\ \textbf{v}^\mathrm{H} \hat{\textbf{R}} \ \textbf{v}\ \hspace{.4cm} \mathrm{s.t.} \hspace{.4cm} \textbf{v}^\mathrm{H} \textbf{u}_1=1.
\end{align}
The corresponding CG algorithm is described by
\begin{align} \label{V rec}
\textbf{v}(t+1)=\textbf{v}(t)+ \xi (\textbf{u}_1-\hat{\textbf{R}}\textbf{v}(t)),
\end{align}
where $\xi$ is a step size.
Now, substituting the expression for $\hat{\textbf{R}}$ and multiplying by $\textbf{v}(t)$ yields
\begin{align} \label{R hat with v}
\hat{\textbf{R}}\textbf{v}(t)=\dfrac{1}{K}\sum_{k=1}^K \textbf{x}(k)\Big(\textbf{x}^\mathrm{H}(k)\textbf{v}(t) \Big),
\end{align}
By substituting (\ref{R hat with v}) into (\ref{V rec}), we obtain
\begin{align} \label{V recursion}
\textbf{v}(t+1)=\textbf{v}(t)+ \xi \Big(\textbf{u}_1- \sum_{k=1}^K \textbf{x}(k) \big( \dfrac{\textbf{x}^\mathrm{H}(k)\textbf{v}(t)}{K}\big) \Big).
\end{align}
In (\ref{V rec}), the step size, $\xi$, should satisfy $0 < \xi < 2/\lambda_\mathrm{max}$ ($\lambda_{\mathrm{max}}$ is the largest eigenvalue of $\hat{\mathbf{R}}$), with fastest convergence occurring for $\xi \approx 1/\lambda_\mathrm{max}$. Since computing $\lambda_\mathrm{max}$ requires $ \mathcal{O}(M^3)$ operations, it is more efficient to use an approximation.
Assume that $\lambda$ is an eigenvalue of $\hat{\textbf{R}}$ with corresponding eigenvector $\textbf{z}$, so we can write
\begin{align}
\lambda \textbf{z}=\hat{\textbf{R}}\textbf{z}=\dfrac{1}{K}\sum_{t=1}^K \textbf{x}(t)\Big(\textbf{x}^\mathrm{H}(t)\textbf{z} \Big).
\end{align}
Taking norms on both sides,
\begin{multline}
\mid \lambda \mid \parallel \textbf{z} \parallel = \dfrac{1}{K} \parallel \sum_{t=1}^K \textbf{x}(t)\textbf{x}^\mathrm{H}(t)\textbf{z} \parallel \\ \leqslant \dfrac{1}{K} \sum_{t=1}^K \parallel \textbf{x}(t) \parallel \mid \textbf{x}^\mathrm{H}(t)\textbf{z}\mid
\leqslant \dfrac{1}{K} \sum_{t=1}^K \parallel \textbf{x}(t) \parallel^2 \parallel \textbf{z} \parallel,
\end{multline}
Hence $\mid \lambda \mid \leqslant (1/K) \sum _{t=1}^K \parallel \textbf{x}(t) \parallel ^2 $.
An approximation to the step size, $\xi$, is given by
\begin{align}\label{xi.approx}
\xi \approx \dfrac{K}{\sum _{t=1}^K \parallel \textbf{x}(t) \parallel ^2}.
\end{align}
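Putting \eqref{V recursion} and \eqref{xi.approx} together, the whole MEPS ingredient $\textbf{v} \approx \hat{\textbf{R}}^{-1}\textbf{u}_1$ costs only matrix--vector products with the snapshot matrix. A self-contained NumPy sketch (the iteration count is an illustrative choice):
\begin{verbatim}
def solve_v(X, n_iter=50):
    """Iterate v <- v + xi (u1 - R_hat v), with R_hat v computed as
    X (X^H v) / K, i.e. O(MK) per iteration and no M x M matrix.
    xi = K / sum ||x(t)||^2 <= 1/lambda_max, so the iteration converges."""
    M, K = X.shape                      # X holds the K snapshots x(t)
    xi = K / np.sum(np.abs(X)**2)       # step size from the bound above
    u1 = np.zeros(M, dtype=complex); u1[0] = 1.0
    v = u1.copy()
    for _ in range(n_iter):
        Rv = X @ (X.conj().T @ v) / K
        v = v + xi * (u1 - Rv)
    return v
\end{verbatim}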
The computational complexity of MEPS-NPIC-CG is $ \mathcal{O}(tMQ)$. The solution of the quadratically constrained quadratic programming (QCQP) problem in \cite{gu2012robust} has complexity of at least $ \mathcal{O}(M^{3.5}+M^2Q) $, while the beamformer in \cite{huang2012modified} has a complexity of $ \mathcal{O}(KM)+\mathcal{O}(M^{3}) $ and the reconstructed NPIC matrices in \cite{ruan2014robust} and \cite{zhang2016interference} have a complexity of $ \mathcal{O}(M^3) $.
Also, the cost of the beamformer in \cite{zheng2018covariance} is $ \mathcal{O}(\mathrm{max}(M^2Q,M^{3.5})) $.
\section{Simulations}
In this section, a uniform linear array of $M=10$ omnidirectional sensors and half-wavelength interelement spacing is considered. Two interferers and a desired signal impinge on the sensor array with incident angles $50^o$, $20^o$ and $5^o$, respectively. The interference-to-noise ratio (INR) of each interferer is assumed to be 30 dB at each sensor. The additive noise is modeled as spatially white Gaussian with zero mean and unit variance, and 100 Monte Carlo runs are performed for each simulation. When we examine the performance of the output SINR versus input SNR, the number of snapshots is set to $K = 30$, whereas for the performance comparison of the adaptive beamformers versus the number of snapshots the SNR is set to 20 dB.
The proposed MEPS-NPIC-CG method is compared with LOCSME \cite{ruan2014robust}, the modified projection beamformer (Shrinkage) \cite{huang2012modified}, the reconstruction-estimation based beamformer (Rec-Est) \cite{gu2012robust}, the SPSS beamformer in \cite{zhang2016interference},
the algorithm based on noise-plus-interference covariance matrix reconstruction and SV estimation (INC-SV) \cite{zheng2018covariance}, and the beamformer (SV-Est) in \cite{khabbazibasmenj2012robust}. The angular sector of the desired signal is set to $ \Theta=[-1^\circ,11^\circ] $ while the interference angular sector is $ \bar{\Theta}=[-90^\circ,-1^\circ)\cup(11^\circ,90^\circ] $. For the proposed MEPS-NPIC-CG beamformer, $\mathrm{tol}=0.001$, $t=7$, $S=10$ and $Q=90$ are used, and the bound for the beamformer in \cite{zheng2018covariance} is set as $ \epsilon=\sqrt{0.1} $. The Matlab CVX toolbox \cite{grant2017cvx} is used to solve the optimization problems that yield the optimum solutions used for comparison.
In the first scenario, we investigate the impact of arbitrary array imperfection mismatches in an inhomogeneous medium, where wave propagation effects distort the signal spatial signature. Specifically, the phase increments accumulated by the nominal SV components are chosen independently from a Gaussian random generator with zero mean and standard deviation 0.07, and remain fixed in each simulation run.
Figs.~\ref{SINR_SNRWave} and \ref{SINR_SNWave} show the output SINR of the beamformers versus the input SNR and versus the number of snapshots. The proposed beamformer significantly outperforms the other beamformers owing to its ability to reconstruct the NPIC matrix and estimate the desired signal SV with higher accuracy than the other methods.
In the second scenario, we analyze the effect of the SV mismatch by the incoherent local scattering of the desired signal. We assume that the actual SV has a time-varying signature and the SV is expressed as $ \textbf{a}(t)=s_0(t)\textbf{a}(\theta_0)+\sum_{p=1}^4 s_p(t)\textbf{a}(\theta_p)$
where $s_0(t)$ and $s_p(t)$ $(p=1,2,3,4)$ are independently and identically distributed complex Gaussian random variables. In each simulation run, the DoAs $\theta_p$ $(p=0,1,2,3,4)$ are drawn independently from a uniform generator with mean $5^\circ$ and standard deviation $2^\circ$. Note that $\theta_p$ changes from run to run while remaining fixed from snapshot to snapshot. Meanwhile, the random variables $s_0(t)$ and $s_p(t)$ change both from run to run and from snapshot to snapshot. Fig.~\ref{SINR_SNRInco} and Fig.~\ref{SINR_SNInco} depict the output SINR of the tested beamformers versus the SNR and versus the number of snapshots under the incoherent local scattering case. The proposed MEPS-NPIC-CG beamformer attains a high output SINR for all snapshot numbers and outperforms the other beamformers.
\section{Conclusion}
A low-complexity approach to robust adaptive beamforming, named MEPS-NPIC-CG, in which the weight vector is estimated through CG recursions, has been proposed. The NPIC matrix is accurately reconstructed and the weight vector computed without requiring matrix inversions. Simulations demonstrate that MEPS-NPIC-CG offers performance superior to that of recently reported robust adaptive beamforming methods.
\begin{figure}[!] \label{SNR}
\subfigure{\includegraphics[width=0.22\textwidth]{Incoherent_SNRs_NEW.eps}\label{SINR_SNRInco}}\hfill
\subfigure{\includegraphics[width=0.22\textwidth]{Wavefront_SNRs_NEW.eps}\label{SINR_SNRWave}}
\vspace{-0.85em}
\caption{SINR vs SNR \ a) Incoherent Local Scattering \ b) Wavefront }
\end{figure}
\begin{figure}[!] \label{Snapshots}
\subfigure{\includegraphics[width=0.22\textwidth]{Incoherent_Snapshots_NEW.eps}\label{SINR_SNInco}}\hfill
\subfigure{\includegraphics[width=0.22\textwidth]{Wavefront_Snapshots_NEW.eps}\label{SINR_SNWave}}
\vspace{-0.85em}
\caption{SINR vs Snapshots \ a) Incoherent Local Scattering \ b) Wavefront}\label{fig:snapshots}
\end{figure}
\ifCLASSOPTIONcaptionsoff
\fi
\bibliographystyle{IEEEtran}
Fragmentation -- the internal break-up of a core into two or more fragments -- is the leading contender for how the majority of multiple stars form \citep[e.g., review by][]{Goodwin2007}. Fission and capture, two other hypotheses considered for the formation of multiple star systems, have been ruled out or are disfavored in large part based on theoretical considerations. Two different mechanisms have been proposed to drive fragmentation: (i) bulk (large-scale ordered) rotation; and (ii) local (small-scale) turbulence \citep[for a succinct description of how these mechanisms lead to fragmentation, see][and also $\S\ref{sec:fragmentation}$]{Lim2016}. Depending on the circumstances involved, these two mechanisms can predict very different geometries and dynamics for the resulting binary system: i.e., alignment between the circumstellar disks and/or spin axes of the binary components, as well as alignment between their circumstellar disks and orbital plane or between their spin and orbital axes. Comparisons between binary properties and model predictions for their formation, however, are complicated by possible internal or external interactions during or after the protostellar phase. Depending on the nature of the interaction, the binary system can be driven either towards or away from alignment, altering its original geometry and dynamics thus masking its formation process.
Recently, we showed that the geometrical and dynamical relationship between the binary (protostellar) system and its surrounding bulk envelope (remnant parental core) provide the crucial distinction between the two possible modes of fragmentation \citep{Lim2016}. In the Class\,I system L1551\,IRS\,5, we found that the circumstellar disks of the binary protostars are not just closely parallel with each other, but also closely parallel with their surrounding flattened envelope. Furthermore, the protostars are orbiting each other in the same direction as the rotation of their surrounding envelope. The close relationship between all these different elements indicates that their angular momenta share a common axis, and points to large-scale ordered rotation for driving the fragmentation of the L1551\,IRS\,5 parental core. Orbital solutions to measurements of the relative proper motion between the binary protostars, omitting solutions for which their circumstellar disks are predicted to be tidally truncated to sizes smaller than are observed, favour a circular or low-eccentricity orbit tilted by up to $\sim$25\degr\ from the circumstellar disks. If the fragments that gave rise to the binary protostars in L1551\,IRS\,5 were produced at different heights or on opposite sides of the midplane in the flattened central region of a rotating core, the resulting protostars would then exhibit circumstellar disks parallel with each other and their surrounding flattened envelope but tilted from the orbital plane, as is observed. Early during their formation, tidal interactions between the individual protostars and their surrounding, much more massive, flattened envelope would have naturally given rise to an essentially circular orbit, which has presumably been (largely) preserved during the subsequent evolution (growth) of the binary protostars.
Here, we present observations that spatially resolve for the first time the circumstellar disks of the binary protostars in the Class\,I system L1551\,NE. Lying in the close vicinity of L1551\,IRS\,5, L1551\,NE is surrounded by a circumbinary disk \citep{Takakuwa2012,Takakuwa2014}, which itself is embedded in a flattened infalling envelope \citep{Takakuwa2013}. The circumbinary disk exhibits clear deviations from Keplerian motion that we successfully modelled as the action of gravitational torques from the central binary system \citep{Takakuwa2014}. These torques force material in opposing segments of the circumbinary disk to orbit faster and collide with material upstream that is orbiting more slowly, resulting in a two-armed spiral pattern (comprising material compressed to higher density) imprinted onto the circumbinary disk. At opposing segments between the two spiral arms, torques from the binary protostellar system force material to orbit more slowly, resulting in inflows through the circumbinary disk. Successfully reproducing the observed spatial-kinematic structure of the circumbinary disk, the model assumes a coplanar binary system having an orbital motion in the same sense as the rotation of the circumbinary disk. In addition, based on the projected separation of the two protostars from the inferred kinematic center of the circumbinary disk, the model adopts a binary mass ratio of 0.19. The results presented here confirm that L1551\,NE is indeed a coplanar binary system, indicate an orbital motion for the binary protostars in the same sense as the rotational motion of their circumbinary disk, and provide entirely independent evidence in support of the inferred mass ratio of the binary system.
This manuscript is organized as follows. Our observations and data reduction are described in $\S\ref{sec:obs}$. To study the relative proper motion of the binary protostars, we reduced previously published archival data on the ionized jets in L1551\,NE dating back nearly 20\,yrs before our observations, as described also in $\S\ref{sec:obs}$. The results from all these data are presented in $\S\ref{sec:results}$. In $\S\ref{sec:CDs}$, we describe how we determined the physical parameters of the individual circumstellar disks. In $\S\ref{sec:orbits}$, we present the relative proper motion of the binary protostars. In $\S\ref{sec:discussion}$, we assemble all the available evidence (including that in the published literature) to infer the manner in which L1551\,NE formed. In $\S\ref{sec:summary}$, we provide a thorough summary of our results, analyses, and interpretation. Throughout this manuscript, we assume a distance to L1551\,NE of 140\,pc \citep{Kenyon1994,Bertout1999}.
\section{Observations}\label{sec:obs}
During our observations of L1551\,IRS\,5 with the Jansky Very Large Array (VLA) on 2012 November 16, 28, and 29 as reported in \citet{Lim2016}, we also observed L1551\,NE. The observations of L1551\,NE were interleaved with those of L1551\,IRS\,5; i.e., employing the scan sequence J0431$+$1731(the secondary calibrator) $\rightarrow$ L1551\,IRS\,5 $\rightarrow$ J0431$+$1731 $\rightarrow$ L1551\,NE $\rightarrow$ J0431$+$1731 $\rightarrow$ L1551\,IRS\,5 $\rightarrow$ J0431$+$1731 $\rightarrow$ L1551\,NE, etc. The observations spanned a total duration of $\sim$2.5\,hr on each day. To mitigate against rapid changes in absorption and refraction by the Earth's atmosphere, causing rapid fluctuations in the measured visibility amplitude and phase of the target source, we switched between L1551\,NE and the nearby quasar J0431$+$1731 every 20\,s. As a check of the quality of the amplitude and phase corrections, we performed similar observations of a quasar lying close to L1551\,NE, J0431$+$2037, every $\sim$30\,mins. This quasar also was used to check the pointing accuracy of the antennas, a task performed every $\sim$1\,hr. The bright quasar J0510+1800 served as the bandpass calibrator, and the quasar 3C48 as the flux calibrator.
We edited, calibrated, and made maps from the data using the Common Astronomy Software Applications (CASA) package. Considerable effort went into weeding out radio-frequency interference (RFI), which can be very weak and difficult to find, to ensure that the actual data used for making the maps is as free of contamination as possible. The calibration was performed in the standard manner (e.g., examples in https://casaguides.nrao.edu/index.php/Karl\_G.\_Jansky\_VLA\_Tutorials) as recommended by the observatory. Maps were made using three different weighting schemes, natural (i.e., equal weights on all visibilities), $\rm Robust = 0.5$, and $\rm Robust = -0.25$ (robust weighting utilizes unequal weights designed to provide a more uniform sampling in $uv$-space), to accentuate different features of interest. The synthesized beams and root-mean-square (rms) noise fluctuations ($\sigma$) of the maps thus made are summarised in Table\,\ref{tab:map parameters}. Notice that the synthesized beams obtained using the different weighting schemes are close to circular, making it easier to visually interpret as well as to analyze the maps. All subsequent analyses of the images obtained were made using the Astronomical Image Processing System (AIPS) and GALFIT \citep{Peng2002,Peng2010} software packages.
For studying the relative proper motion of the binary protostars, we reduced data taken by \citet{Rodriquez1995} on 1994 Apr 10 and 22, and \citet{Reipurth2002} on 2000 Nov 26--29, using also the VLA but at a wavelength of 3.5\,cm.
We edited, calibrated, and made maps from the 1994 and 2000 observations (combining the data taken in each year) using AIPS. The synthesized beams and root-mean-square (rms) noise fluctuations ($\sigma$) of the maps, both made with natural weighting, are summarised in Table\,\ref{tab:map parameters}. In the observation of \citet{Rodriquez1995} in 1994, the telescope was pointed at L1551\,IRS5. L1551\,NE is located at an angular distance of 2\farcm5 from L1551\,IRS5, almost at the half-power point of the telescope primary beam (full-width half-maximum, FWHM, of 5\farcm3 at 3.5\,cm) in that observation. The map made was therefore corrected for the primary beam response of the antennas. In the observation of \citet{Reipurth2002} in 2000, the telescope was pointed at L1551\,NE.
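To illustrate the size of this correction: approximating the VLA primary beam at 3.5\,cm as a Gaussian with the quoted FWHM of 5\farcm3, the response at the 2\farcm5 offset of L1551\,NE is close to one half, so measured flux densities are scaled up by a factor of $\sim$1.9. The sketch below uses this Gaussian approximation only; the actual correction applied in the reduction is based on the measured beam shape.
\begin{verbatim}
import numpy as np

def pb_response(offset_arcmin, fwhm_arcmin=5.3):
    """Gaussian approximation to the primary beam power response."""
    return np.exp(-4.0 * np.log(2.0) * (offset_arcmin / fwhm_arcmin)**2)

resp = pb_response(2.5)        # ~0.54, near the half-power point
scale = 1.0 / resp             # ~1.9: factor applied to the 1994 map
\end{verbatim}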
In all subsequent analyses, the quoted uncertainties in flux densities correspond to statistical uncertainties only, and thus do not include any systematic uncertainties (which are difficult to quantify) that arise in transferring the flux density of the primary calibrator to the secondary calibrator, and from the secondary calibrator to the target source.
\section{Results}\label{sec:results}
\subsection{Ionized Jets}\label{sec:results jets}
Figure\,\ref{fig:jets} shows images of L1551\,NE at 3.5\,cm made from data taken by \citet{Rodriquez1995} in 1994 (Fig.\,\ref{fig:jets}$a$) and \citet{Reipurth2002} in 2000 (Fig.\,\ref{fig:jets}$b$). Two sources are detected in both maps: the stronger source, located to the south-east, was referred to by \citet{Reipurth2002} as source A, and the weaker source to the north-west as source B. We henceforth refer to these two sources in the same manner.
In Table\,\ref{tab:jets}, we list the parameters of the two sources based on a 2-dimensional Gaussian fit to each source. In the 2002 map, which is far superior in sensitivity (a factor of nearly four lower noise) to the 1994 map, the results of the fits show that both sources are clearly resolved (at a significance level of $14\sigma$ for source A and $10\sigma$ for source B) along their major but not their minor axes. In this map, the major axes of both sources are aligned to within measurement uncertainties (difference of $8\degr \pm 6\degr$) along an average position angle of $65\degr \pm 3\degr$ (whereas the synthesized beam has a position angle for its major axis of 52\degr.1). We note that similar model fits to the two sources in a robust-weighted map, which provides a higher angular resolution and hence in which the two sources are better separated, do not improve the precision of the fitting parameters. The position angle of the ionized jets as measured at 3.5\,cm is identical, to within the measurement uncertainties, to the position angle of an [FeII] jet detected by \citet{Reipurth2000} and \citet{Hayashi2009} originating from the vicinity of L1551\,NE, oriented along a position angle of 63\degr\ \citep{Reipurth2000} or 64\degr\ \citep{Hayashi2009}. Herbig-Haro objects and a bipolar molecular outflow detected in CO, all driven by L1551\,NE, lie along approximately the same position angle as the [FeII] jet \citep{Moriarty-Schieven2000}. \citet{Reipurth2000} found that the axis of the [FeII] jet is offset from (lies to the south of) the apex of a cone-shaped nebula that is located just south-west of L1551\,NE. This nebula comprises scattered light from the nearer side of an outflow cavity evacuated by L1551\,NE. Based on the observed offset, \citet{Reipurth2000} attributed the [FeII] jet to source A, and associated the apex of the cone-shaped nebula with source B.
In the 1994 map, source A is formally resolved (at the 3.7$\sigma$ confidence level) but not source B (only at the 2.6$\sigma$ level). In this map, the major axis of source A is different by $45\degr \pm 15\degr$ from that of the same source in the 2002 map. Instead, in the 1994 map, the major axis of source A is aligned within measurement uncertainties to the major axis of the synthesized beam. Given that L1551\,NE was located close to the half-power point of the telescope primary beam in the 1994 observation and therefore subject to both bandwidth smearing and, perhaps even more detrimentally, telescope pointing errors, we place little weight on the measured source dimensions in this map.
\subsection{Circumstellar Disks}\label{sec: results disks}
Figure\,\ref{fig:CDs} shows our images of L1551\,NE at 7\,mm made with three different weightings, natural weighting that provides the {lowest noise level but also the poorest angular resolution of $\rm 55.4\,mas \times 52.5\,mas$ or $\rm 7.8\,AU \times 7.4\,AU$ (Fig.\,\ref{fig:CDs}$a$), {$\rm Robust=0.5$} weighting that only slightly increases the noise level but significantly improves the angular resolution to $\rm 44.9\,mas \times 41.8\,mas$ or $\rm 6.3\,AU \times 5.9\,AU$ (Fig.\,\ref{fig:CDs}$b$), and {$\rm Robust=-0.25$} weighting that provides close to the highest angular resolution possible with our data of $\rm 36.3\,mas \times 33.8\,mas$ or $\rm 5.1\,AU \times 4.7\,AU$} at the expense of a significantly higher noise level (Fig.\,\ref{fig:CDs}$c$). A simple visual inspection reveals that both sources A and B are clearly resolved along their major and minor axes. Source A is much larger and also has a higher peak as well as integrated flux density than source B. Both sources are elongated in a direction perpendicular to their ionized jets (indicated by arrows in Fig.\,\ref{fig:CDs}) as traced at 3.5\,cm (Fig.\,\ref{fig:jets}$b$), and so their emission must originate primarily from dust in their circumstellar disks. {Emission from dust at larger spatial scales, namely that in the circumbinary disk as imaged at 0.85\,mm with the Submillimeter Array (SMA) \citep{Takakuwa2012} and in follow-up observations also at 0.85\,mm with the Atacama Large Millimeter and Submillimeter Array (ALMA) \citep{Takakuwa2014}, as well as dust in the envelope around the circumbinary disk as imaged at 0.85\,mm with the SMA \citep{Takakuwa2013}, is entirely resolved out in our observation (which has a much higher angular resolution, and lacks relatively short baselines, compared with the ALMA and SMA observations).}
In all the maps shown in Figure\,\ref{fig:CDs}, source B exhibits an appreciable elongation along the north-east to south-west direction that extends beyond, and is perpendicular to the major axis of, its main body. This elongation is aligned with its ionized jet, so that, at 7\,mm, the emission of source B along its minor axis must include a weak contribution from free-free emission associated with its ionized jet. This situation is similar to that found for both components of L1551\,IRS5 at 7\,mm, where the emission from each source is contributed by both ionized gas and dust \citep{Lim2006,Lim2016}. In the highest angular-resolution map at 7\,mm shown in Figure\,\ref{fig:CDs}$c$, the central peak in source A can be seen to be elongated in a direction perpendicular to its main body and aligned instead with its ionized jet. Thus, at 7\,mm, the emission from the central region of source A must also include a contribution from free-free emission associated with its ionized jet.
\section{Physical Parameters of the Circumstellar Disks}\label{sec:CDs}
Because, in the images at 7\,mm, the emission from both sources includes a weak contribution from their ionized jets, we first tried to remove the jets before fitting models to the disks. Using the task IMFIT in AIPS, we started by attempting to fit a two-component, 2-dimensional Gaussian function (one component to represent the jet and the other the disk) to sources A and B individually in the naturally-weighted map (which provides the highest S/N ratio, and hence traces the circumstellar disks furthest out). All such attempts either failed to converge or provided non-physical results (e.g., negative intensities for one of the components) for both sources. This failure is in sharp contrast to our success using the same strategy for L1551\,IRS\,5, where a two-component, 2-dimensional Gaussian function provided a satisfactory fit to each of the two sources in this system at 7\,mm \citep{Lim2016}.
Below, we explain why such a model fails to fit the image of either source in L1551\,NE.
GALFIT, unlike IMFIT (in AIPS), does not try to fit for the (spatially unresolved) central region of a source within an area spanned by the FWHM of the synthesized beam. This feature is convenient for our purpose, as it mitigates the contribution from the ionized jet emanating from the center of each source. (The model fitted by GALFIT therefore makes no statement about the radial intensity profile within a central area spanned by the FWHM of the synthesized beam.) We therefore started by fitting a 2-D Gaussian function (i.e., one component only, corresponding to the circumstellar disk) to source A in the naturally-weighted map. Figure\,\ref{fig:Gaussian fits}$c$ shows the resulting best-fit model {(reduced-$\chi^2 = 6.50$)}. This model can be directly compared with the image of source A shown in Figure\,\ref{fig:Gaussian fits}$a$, where the contour levels are plotted at the same levels in flux density (from 10\% to 90\%, in steps of 10\%, of the peak intensity of source A) and the colors span the same range in flux density (from the minimum to the maximum of the image shown in Fig.\,\ref{fig:Gaussian fits}$a$). Figure\,\ref{fig:Gaussian fits}$d$ shows the residuals (image$-$model) from the fit. The most prominent feature in the residual map is a strong central positive peak,
encircled by a conspicuous negative ring and a fainter outer positive ring, indicating that a Gaussian function provides a poor fit to the circumstellar disk. As a check, we blanked different sized areas at the center of source A, and fitted a 2-D Gaussian function to the remaining emission. Figure\,\ref{fig:Gaussian fits}$b$ shows an example where the central region of source A having a size of approximately the FWHM of the synthesized beam has been blanked out. The model fit of a 2-D Gaussian function {(reduced-$\chi^2 = 21.39$)} is shown in Figure\,\ref{fig:Gaussian fits}$e$, and the residuals in Figure\,\ref{fig:Gaussian fits}$f$. The residual map shows a negative ring around the central blanked area and a surrounding positive ring; the same pattern is seen regardless of the size of the central blanked area, up to about twice the FWHM of the synthesized beam (the largest that we tried). Thus, the reason a two-component, 2-dimensional Gaussian function fails to provide a satisfactory fit to source A is that its circumstellar disk simply does not have a Gaussian radial intensity profile.
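For readers wishing to reproduce this blanking experiment, the following minimal sketch (in Python with NumPy and SciPy, rather than the AIPS and GALFIT tools actually used here; the array names, initial guesses, and blanking radius are placeholders) illustrates fitting an elliptical 2-dimensional Gaussian to an image while excluding a central region:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sx, sy, theta):
    # Elliptical 2-D Gaussian evaluated on flattened pixel coordinates.
    x, y = xy
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return amp * np.exp(-0.5 * ((xr / sx)**2 + (yr / sy)**2))

def fit_blanked(img, x, y, p0, r_blank):
    # Exclude ("blank") pixels within r_blank of the initial centre
    # (p0[1], p0[2]), then fit the Gaussian to the remaining emission.
    keep = (x - p0[1])**2 + (y - p0[2])**2 > r_blank**2
    popt, pcov = curve_fit(gauss2d, (x[keep], y[keep]), img[keep], p0=p0)
    return popt
\end{verbatim}
Here \texttt{img}, \texttt{x}, and \texttt{y} are flattened arrays of pixel values and coordinates; ring-like structure in the image minus the fitted model, as in Figure\,\ref{fig:Gaussian fits}$f$, signals a non-Gaussian radial intensity profile.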
Source B is, visually, much smaller and spanned by fewer resolution elements than source A. Unlike source A, source B can be satisfactorily fit (reduced-$\chi^2 = 1.15$) by a 2-dimensional Gaussian function (corresponding to its circumstellar disk) as shown in Figure\,\ref{fig:Gaussian fits}$h$. The fitted Gaussian model can be directly compared with the image of source B shown in Figure\,\ref{fig:Gaussian fits}$g$, where the contour levels are plotted at the same levels in flux density (from 10\% to 90\%, in steps of 10\%, of the peak intensity of source B) and the colors span the same range in flux density (from the minimum to the maximum of the image shown in Fig.\,\ref{fig:Gaussian fits}$g$). The residuals are shown in Figure\,\ref{fig:Gaussian fits}$i$, all of which are below $3\sigma$ within the detectable body of source B. Thus, the failure to fit a two-component, 2-dimensional, Gaussian function to source B is because its jet is simply too weak to provide meaningful constraints.
The 2-dimensional Gaussian function fitted to source B has an FWHM along its major axis of 0\farcs099, roughly comparable to the values found by fitting 2-dimensional Gaussian functions to the two circumstellar disks in L1551\,IRS5 of 0\farcs122 and 0\farcs092 \citep[see Table\,2 of][]{Lim2016}. The maps used for these fits have a similar FWHM for their synthesized beams of about 0\farcs055. On the other hand, the circumstellar disk of source A, which as we show below is over twice as large as that of source B, cannot be fit by a 2-dimensional Gaussian function. Our ability to satisfactorily fit a 2-dimensional Gaussian function to the circumstellar disk of source B, as well as to each of the two circumstellar disks in L1551\,IRS5, is likely because their radial intensity profiles are dominated by their synthesized beams (which are Gaussian functions) and not because these disks actually have Gaussian radial intensity profiles.
Physically-motivated models \citep[see brief review in][]{Lim2016}, such as power-law intensity profiles designed to mimic power-law surface-density and temperature profiles, with an inner as well as an outer truncation radius or taper, are usually fitted to images of circumstellar disks. Unlike optically-revealed objects, for which the spectral energy distributions in the near- to mid-infrared provide constraints on a central cavity in their circumstellar disks, no such constraints are possible for protostars. Conveniently, GALFIT does not attempt to fit for the centrally-unresolved region where a cavity might be present. We started by fitting a 2-dimensional power-law, with no outer truncation, to sources A and B. Figure\,\ref{fig:PL fits}$c$ shows the best fit of such a model {(reduced-$\chi^2 = 25.16$)} to the unblanked image of source A shown in Figure\,\ref{fig:PL fits}$a$, and Figure\,\ref{fig:PL fits}$d$ the residuals. Figure\,\ref{fig:PL fits}$e$ shows the corresponding model fit {(reduced-$\chi^2 = 21.39$)} and Figure\,\ref{fig:PL fits}$f$ the residuals for the centrally-blanked image of source A shown in Figure\,\ref{fig:PL fits}$b$. In both cases, the residual map shows a negative central circular region or negative ring around the central blanked area and a surrounding positive ring, indicating that an untruncated 2-dimensional power-law provides a poor fit to the circumstellar disk of source A. The same is true for source B, where Figure\,\ref{fig:PL fits}$j$ shows the fitted model {(reduced-$\chi^2 = 10.97$)} and Figure\,\ref{fig:PL fits}$k$ the residuals. The residual map also shows a negative central circular region and a surrounding positive ring, just like the residual map of source A shown in Figure\,\ref{fig:PL fits}$d$.
Given that circumstellar disks in binary systems are predicted to be truncated by tidal interactions with their neighboring companions, we then tried fitting a 2-dimensional power-law that is truncated at an outer radius.
Figure\,\ref{fig:PL fits}$g$ shows the model fit and Figure\,\ref{fig:PL fits}$h$ residuals for the unblanked image of source A (Fig.\,\ref{fig:PL fits}$a$). The fit is much improved (reduced-$\chi^2 = 3.58$, {versus a reduced-$\chi^2 = 6.50$ for a 2-dimensional Gaussian and a reduced-$\chi^2 = 25.16$ for a 2-dimensional power law with no outer truncation}) as reflected by the relatively weak residuals, although a faint negative ring is visible indicating a systematic deviation between the fitted model and the image. Figure\,\ref{fig:PL fits}$l$ shows the corresponding model fit and Figure\,\ref{fig:PL fits}$m$ the residuals for the unblanked image of source B (Fig.\,\ref{fig:PL fits}$i$). Once again, the fit is much improved (reduced-$\chi^2 = 1.16$) over an untruncated 2-dimensional power-law {(reduced-$\chi^2 = 10.97$)}, although in the case of source B providing no better a fit than a 2-dimensional Gaussian function {(reduced-$\chi^2 = 1.15$)}.
In L1551\,IRS5, a NUKER function, comprising a relatively shallow inner power-law and a very steep outer power-law (i.e., a tapered rather than a truncated profile), was fitted to the two circumstellar disks \citep{Lim2016}. This function provides a smooth transition between the inner and outer power-laws, a feature that was deemed to be more physical than a discontinuous transition. The NUKER function is parameterised as:
\begin{equation}
I(r) = I_b \ 2^{{\beta-\gamma}\over\alpha} ({r \over r_b})^{-\gamma} [1 + ({r \over r_b})^\alpha]^{\gamma-\beta \over \alpha} \ \ \ ,
\end{equation}
where $I(r)$ is the intensity, $I$, as a function of radius, $r$, $\gamma$ is the inner power-law slope, $\beta$ the outer power-law slope, $\alpha$ controls the sharpness of the transition between the two power laws (larger $\alpha$ indicating a sharper transition), $r_b$ the break radius at which the slope is the average of $\beta$ and $\gamma$ or, equivalently, the radius of maximum curvature in logarithmic units, and $I_b$ the intensity at $r_b$. Just like for the two circumstellar disks in L1551\,IRS5, we found the central position, inclination (as determined from the ratio in dimensions of the minor to major axes), and position angle of the major axis of sources A and B to be essentially constant, independent of $\alpha$. As $\alpha$ increases (i.e., the transition between the inner and outer power-law becomes sharper), the break radius $r_b$ decreases somewhat and very rapidly converges. Likewise, both $\gamma$ (the inner power-law index) and $\beta$ (the outer power-law index) also rapidly converge with increasing $\alpha$, such that $\beta \gg \gamma$ irrespective of $\alpha$.
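For concreteness, a minimal numerical sketch of this profile (in Python; apart from the break radius, which is set to the $\sim$133\,mas found below for source A, the parameter values are illustrative placeholders rather than our fitted values) is:
\begin{verbatim}
import numpy as np

def nuker(r, Ib, rb, alpha, beta, gamma):
    # NUKER profile of the equation above: inner slope gamma, outer
    # slope beta, break radius rb, transition sharpness alpha.
    x = r / rb
    return (Ib * 2.0**((beta - gamma) / alpha) * x**(-gamma)
            * (1.0 + x**alpha)**((gamma - beta) / alpha))

r = np.logspace(-2, 0, 200)   # radii in arcsec
I = nuker(r, Ib=1.0, rb=0.133, alpha=10.0, beta=6.0, gamma=0.3)
\end{verbatim}
At $r \ll r_b$ the profile behaves as $r^{-\gamma}$, and at $r \gg r_b$ as $r^{-\beta}$.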
Fixing therefore the central location, inclination, and position angle of each source, we list in Table\,\ref{tab:NUKER} the other parameters of the best-fit NUKER function at the largest value of $\alpha$ for which a solution is obtainable. In this way, we obtained a break radius of $r_b$$\sim$133\,mas ($\sim$18.6\,AU) for source A. Figure\,\ref{fig:NK fits}$c$ shows the model fit to the unblanked image of source A (Fig.\,\ref{fig:NK fits}$a$) and Figure\,\ref{fig:NK fits}$d$ the residuals. This fit (reduced-$\chi^2 = 2.25$) is, by far, the best among all those considered ({versus a reduced-$\chi^2 = 6.50$ for a 2-dimensional Gaussian, a reduced-$\chi^2 = 25.16$ for a 2-dimensional power law with no outer truncation, and a reduced-$\chi^2 = 3.58$ for a 2-dimensional power law truncated at an outer radius)}. Importantly, there are no clearly apparent systematic residuals indicating a systematic deviation between the fitted model and the image; nonetheless,
there are low-level residuals in the outer regions that limit the goodness of the fit. Fitting a NUKER function to the image of source A where its central region is blanked out (Fig.\,\ref{fig:NK fits}$b$), we obtained an essentially identical model fit {(reduced-$\chi^2 = 2.24$)} as shown in Figure\,\ref{fig:NK fits}$e$ and residual map as shown in Figure\,\ref{fig:NK fits}$f$. The position angle of the major axis thus derived for the circumstellar disk of source A is $\sim$$150\degr.9$, accurately orthogonal in projection to the position angle inferred for the axis of its ionized jet of $61\degr^{+4\degr}_{-3\degr}$. The corresponding model fit for source B (reduced-$\chi^2 = 1.07$) is shown in Figure\,\ref{fig:NK fits}$h$ and the residual map in Figure\,\ref{fig:NK fits}$i$. Like for source A, this model provides a superior fit (lower reduced-$\chi^2$) to source B than a 2-dimensional Gaussian {(reduced-$\chi^2 = 1.15$)}, {a power-law with no outer truncation (reduced-$\chi^2 = 10.97$)}, or a power-law truncated at an outer radius {(reduced-$\chi^2 = 1.16$)}. The position angle of the major axis thus derived for the circumstellar disk of source B is $\sim$$152\degr.1$, closely orthogonal in projection to the position angle inferred for the axis of its ionized jet of $69\degr^{+4\degr}_{-5\degr}$. Assuming both circumstellar disks to be circular and geometrically thin, the inclination derived for the circumstellar disk of source A is $\sim$57\degr.7 and that of source B is $\sim$58\degr.0. Their similar inclinations and position angles for their major axes imply that the two circumstellar disks are (closely) parallel.
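The inclinations quoted here follow from the standard relation for an intrinsically circular, geometrically thin disk, writing $b/a$ for the fitted ratio of the minor to the major axis:
\begin{equation*}
i = \arccos\left(\frac{b}{a}\right) ,
\end{equation*}
so that, for example, the inclination of $\sim$57\degr.7 for source A corresponds to a fitted axis ratio of $b/a = \cos 57\degr.7 \approx 0.53$.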
GALFIT does not provide uncertainties for the model fitting parameters. As a measure of the uncertainties in the inclination and position angle for the circumstellar disk of source B, we also used IMFIT (which provides uncertainties in the model parameters) to fit a 2-dimensional Gaussian function to this source. In this manner, we derived an inclination of $56\degr.3 \pm 3\degr.8$ (GALFIT reports 58\degr.2 for a 2-dimensional Gaussian fit and 58\degr.0 for a NUKER fit) and a position angle of $154\degr.5 \pm 4\degr.6$ (GALFIT reports 152\degr.2 for a 2-dimensional Gaussian fit and 152\degr.1 for a NUKER fit) for source B. With a position angle of $69\degr^{+4\degr}_{-5\degr}$ for its ionized jet (Table\,\ref{tab:jets}), the circumstellar disk of source B is, within the uncertainties, orthogonal ($85\degr.5 \pm 6\degr.8$) in projection to its ionized jet. As mentioned earlier, source A cannot be fit by a 2-dimensional Gaussian function, and so we cannot provide corresponding uncertainties for the inclination and position angle for its circumstellar disk using this method. Nevertheless, based on the exercise conducted for source B, the uncertainties in the inclination and position angle of the major axis derived from fitting a NUKER function to source A are probably no larger than a few degrees.
\section{Orbital Motion}\label{sec:orbits}
The 1994 observation of \citet{Rodriquez1995} was the first to show that L1551\,NE comprises two sources, as was subsequently confirmed in the 2002 observation of \citet{Reipurth2002}. To date, only these and our observation in 2012 provide useful measurements of the relative proper motion of the binary protostars. Table\,\ref{tab:rPM} lists the positions (repeated, for convenience, from Table\,\ref{tab:jets} for the 1994 and 2002 observations), relative separations, and relative orientations of the binary protostars on the aforementioned dates. The position for source B is derived from a 2-dimensional Gaussian fit (using IMFIT) to the natural-weighted map of this source (as mentioned earlier, IMFIT reports uncertainties in the fitting parameters, unlike GALFIT) in Figure\,\ref{fig:CDs}$a$. The position of source A is derived from a 2-dimensional Gaussian fit (also using IMFIT) to the ionized jet in the central region of this source in the $\rm Robust=-0.25$ image of Figure\,\ref{fig:CDs}$c$; the fitted Gaussian model shares a similar position angle ($67\degr^{+5\degr}_{-4\degr}$) with that derived for the ionized jet at 3.5\,cm ($61\degr^{+4\degr}_{-3\degr}$), but is also resolved along its minor axis, suggesting a contribution from the circumstellar disk to the fit. Note that different secondary calibrators were used in the three observations of L1551\,NE, and so the positions listed in Table\,\ref{tab:rPM} are referenced with respect to a different position in the sky in each observation. The information listed in Table\,\ref{tab:rPM} should therefore be used with caution (i.e., the uncertainty in the position of the secondary calibrator needs to be included) for deriving the absolute proper motion of L1551\,NE (motion of the entire system across the sky).
Figure\,\ref{fig:pm} shows the angular separation and orientation of source B with respect to source A over an interval spanning $\sim$18.6\,yrs. As can be seen, there is no significant motion ({i.e., difference in positions of $\ge 3\sigma$}) of these two sources along the east-west direction, with their positions differing by $\rm 22.9\,mas \pm 14.6\,mas$ in right ascension between 2002 and 2012. On the other hand, between 2002 and 2012, source B has moved northwards with respect to source A by $\rm 33.6\,mas \pm 11.0\,mas$ (a significance level of $3.1\sigma$). Furthermore, source B is likely moving away (at a significance level of $2.5\sigma$) from source A. The uncertainties in the measured source positions in 1994 are too large to detect any corresponding motion in source B between this and the later observations.
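The quoted significance levels are simply the ratios of the measured offsets to their uncertainties; a trivial sketch of the arithmetic (in Python, with the values transcribed from above) is:
\begin{verbatim}
# Offsets of source B relative to source A between 2002 and 2012.
d_ra,  e_ra  = 22.9, 14.6    # mas, right ascension
d_dec, e_dec = 33.6, 11.0    # mas, declination

print(round(d_ra / e_ra, 1))     # 1.6 sigma: not significant
print(round(d_dec / e_dec, 1))   # 3.1 sigma: northward motion
\end{verbatim}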
\section{Discussion}\label{sec:discussion}
The circumstellar disks of the binary protostars in L1551\,NE are parallel to each other within measurement uncertainties of a few degrees. The close alignment between the circumstellar disks of binary protostars, however, does not by itself discriminate between different models for the formation of these systems. Even in those systems where the binary protostars are born with misaligned circumstellar disks, tidal interactions between the protostars can align their circumstellar disks with the orbital plane \citep{Lubow2000,Bate2000}. In such situations, tidal interactions induce disk precession; viscosity in the disks acts on the shearing motion to dissipate energy, gradually aligning the disks with the orbital plane. \citet{Bate2000} find that such dissipative processes can align protostellar disks and their orbital plane on timescales of order 20 orbital periods, which for binary systems with a total mass of $\sim$1\,M$_\sun$ and an orbital separation of $\sim$100\,AU correspond to an interval of just $\sim$$10^4$\,yrs. Instead, as pointed out by \citet{Lim2016} and demonstrated for the binary protostellar system L1551\,IRS5, the geometrical and dynamical relationship between the binary system and its surrounding bulk envelope provide the crucial distinction between different fragmentation models.
\subsection{Relationship with Circumbinary Disk}\label{sec:relationship circumbinary disk}
\citet{Takakuwa2012} inferred the bulk properties of the circumbinary disk in L1551\,NE by fitting a circular and geometrically-thin disk exhibiting Keplerian motion to channel maps in C$^{18}$O(3-2) as measured with the SMA. This simple model reproduces the global velocity behavior of the circumbinary disk, and provides best-fit parameters of $62\degr^{+25\degr}_{-17\degr}$ for its inclination and $167\degr^{+23\degr}_{-27\degr}$ for the position angle of its major axis.
In observations at a higher angular resolution and sensitivity with the ALMA, \citet{Takakuwa2014} found clear deviations from Keplerian rotation in the circumbinary disk as measured also in C$^{18}$O(3-2). They were able to reproduce these deviations by including gravitational torques from the binary protostars, assumed to have a circular coplanar orbit, but otherwise retained the geometry inferred by \citet{Takakuwa2012} for the circumbinary disk. Based on the angular separation of the two sources from the inferred dynamic center of the circumbinary disk, they found a binary mass ratio of $m_B/m_A = 0.19$, where $m_A$ is the mass of the protostar corresponding to source A and $m_B$ that corresponding to source B. From the measured orientation of sources A and B and the assumed circular coplanar orbit, \citet{Takakuwa2014} inferred an orbital separation of $\sim$145\,AU for the binary system.
The inclination and position angle for the major axis of the circumbinary disk in the model proposed by \citet{Takakuwa2012} agree, to within their measurement uncertainties, with the corresponding values we derived for the circumstellar disks (Table\,\ref{tab:NUKER}). Although the uncertainties in these parameters for the circumbinary disk are much larger than the uncertainties in the corresponding parameters for the circumstellar disks, we note that their formal values agree to within $\sim$5\degr\ in inclination and $\sim$15\degr\ in position angle. Thus, the circumstellar disks are not only parallel with each other, but also closely (if not accurately) parallel with their surrounding circumbinary disk.
Assuming that the equatorial plane of the circumbinary disk is orthogonal to the outflow cavity so that its eastern side is the near side, \citet{Takakuwa2012, Takakuwa2014} find that the circumbinary disk is rotating in an anticlockwise direction. For a coplanar binary system with a circular anticlockwise orbit, at their present orbital locations source B should be moving primarily northwards and somewhat eastwards with respect to source A, increasing in angular separation \cite[see Fig.\,10 of][]{Takakuwa2014}. The northward motion and likely increasing separation that we measure for source B with respect to source A, but smaller (no detectable) motion along the east-west direction ($\S\,\ref{sec:orbits}$), are therefore consistent with an orbital motion for the binary protostars in the same manner as the rotational motion of their surrounding circumbinary disk.
\subsection{Binary Mass Ratio}\label{sec:mass ratio}
If the binary protostars in L1551\,NE have a mass ratio of $\sim$0.19 and are in a circular orbit separated by $\sim$145\,AU as in the model described by \citet{Takakuwa2014}, the circumstellar disk of source A is predicted to be tidally truncated at a radius of $\sim$$58.4$\,AU and that of source B at a radius of $\sim$$23.5$\,AU \citep[derived from the calculations provided in][]{Pichardo2005}. By comparison, the inferred break radius ($r_b$) is $\sim$18.6\,AU for the circumstellar disk of source A and $\sim$8.9\,AU for the circumstellar disk of source B (Table\,\ref{tab:NUKER}), both a factor of $\sim$3 smaller than their predicted tidally-truncated radii. Equating their break radii with their tidally-truncated radii, then for a circular orbit, the predicted binary mass ratio is $\sim$0.23 and the orbital separation $\sim$47\,AU. The binary mass ratio (which, for a given orbital eccentricity, solely determines the tidally-truncated sizes of their constituent circumstellar disks) thus inferred is closely comparable to that inferred by \citet{Takakuwa2014} of $\sim$0.19. In this case, however, the predicted orbital separation is much smaller than the observed projected separation between the binary components in L1551\,NE of $71.5 \pm 0.4$\,AU. Thus, the binary components of this system cannot simultaneously have a circular orbit and break radii for their circumstellar disks corresponding to their tidally-truncated radii.
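The separation quoted above can be recovered approximately by a simple rescaling: at fixed mass ratio and orbital eccentricity, tidally-truncated radii scale essentially linearly with the orbital separation. A rough sketch of this consistency check (in Python; the exact values follow from the full calculations of \citet{Pichardo2005}, so the assumed linear scaling is only an approximation):
\begin{verbatim}
a0 = 145.0                 # assumed orbital separation (AU)
rt_A, rt_B = 58.4, 23.5    # predicted truncation radii at a0 (AU)
rb_A, rb_B = 18.6, 8.9     # measured break radii (AU)

# Equate the break radii with the truncation radii and rescale:
print(a0 * rb_A / rt_A)    # ~46 AU from source A
print(a0 * rb_B / rt_B)    # ~55 AU from source B
# Both lie well below the observed projected separation of 71.5 AU.
\end{verbatim}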
In L1551\,IRS5, the relative proper motion of the binary protostars has been measured with sufficient precision to make an exploration of orbital solutions meaningful \citep{Lim2016}. For circular orbits with orbital separations of up to $\sim$100\,AU, the (roughly comparable) break radii of the two circumstellar disks in this system can be closely comparable to (somewhat smaller than), or at worst within a factor of $\sim$2 of, their predicted tidally-truncated radii. Thus, either the circumstellar disks of the binary protostars in both L1551\,IRS5 and L1551\,NE do not extend to their tidally-truncated radii, or observations at 7\,mm do not trace the overall extents of these disks. {Observations at shorter wavelengths, where the dust emissivity is larger and hence the dust emission stronger, may better define the overall extents of these circumstellar dust disks. Furthermore, such observations can reveal any dependence in disk sizes with wavelength, as has been found for the Class\,0 source Per-emb-14 (also known as NGC\,1333\,IRAS\,4C) and the pre-main-sequence (Classical T\,Tauri) star AS209. The circumstellar disk of Per-emb-14 is much smaller (by a factor of about three) at 8\,mm \citep{Segura-Cox2016} than at 1.3\,mm \citep{Tobin2015}. Similarly, for AS209, the measured size of its circumstellar disk decreases towards longer wavelengths, a behavior attributed to the radial drift of dust grains \citep{Perez2012}.} Alternatively, the orbits of both L1551\,IRS5 and L1551\,NE may be highly eccentric, although \citet{Lim2006} found that even a moderate orbital eccentricity is highly unlikely in the case of L1551\,IRS5.
\subsection{Collimated Outflows}
As mentioned in $\S\ref{sec:results jets}$, \citet{Reipurth2000} argue that the apex of a cone-shaped reflection nebula --- comprising an outflow cavity --- associated with L1551\,NE is coincident with source B, and that the [FeII] jet detected from L1551\,NE originates from source A. To be detectable in [FeII], the jet from source A must have a higher density than that of source B at the same distance from their respective protostars (we rule out a much lower excitation for the jet from source B given that it, like the jet from source A, contains an ionized component detected in free-free emission at 3.5\,cm). Thus, source A must have either a more powerful or a more highly collimated jet, or both (possibly related, perhaps indirectly, to the higher mass of this protostar and its larger circumstellar disk), than source B. A more highly collimated jet provides a natural explanation for why the reflection nebula has its axis passing through source B rather than source A; i.e., the walls of the outflow cavity are carved out by the poorly-collimated jet from source B. A more highly-collimated jet also produces stronger free-free emission than a more poorly-collimated jet for the same mass-loss rate \citep{Reynolds1986}, adding to the reasons why the ionized jet from source A is brighter than that from source B.
Other scenarios cannot be ruled out, but they are either unsupported by the available evidence or contrived. For example, perhaps source B grew more quickly in mass (i.e., it experienced a higher accretion rate) and drove a stronger outflow than source A, thus dominating the creation of the observed outflow cavity. In that case, source B ought to be more massive than source A, in contradiction with the evidence presented above. Alternatively, the jet from source A may have turned on only recently and we are fortunate to be observing L1551\,NE soon after this event, an unlikely situation.
\subsection{Rotationally-Driven Fragmentation of L1551\,NE Parental Core}\label{sec:fragmentation}
Current models invoke either local (small-scale) turbulence in, or the bulk (globally-ordered) rotation of, cores to drive fragmentation. In cores that have little or no bulk rotation, turbulence introduces velocity and density inhomogeneities that can seed and drive the growth of multiple density perturbations to become self-gravitating \citep[e.g.,][]{Bate2002, Bate2003, Bate2005, DelgadoDonate2004a, DelgadoDonate2004b, Goodwin2004a, Goodwin2004b, Goodwin2006, Matsumoto2015}. Binary systems assembled from fragments produced in different turbulent cells are predicted to exhibit random relative orientations between the circumstellar disks of their components, and no particular relationship between the circumstellar disks and surrounding circumbinary material. If multiple fragments are produced in a common region where turbulence conspires to create local angular momentum, however, the binary system thus assembled can exhibit quite well-aligned circumstellar disks. Nevertheless, once again, the circumstellar disks should not bear any particular relationship with their surrounding circumbinary material.
Alternatively, the large-scale ordered rotation of the core can drive dynamical instabilities to induce fragmentation during collapse. In such models, conservation of angular momentum forces cores to become increasingly flattened as they collapse. As a result, a disequilibrium disk-like (i.e., flattened and rotating) structure forms at the center of the core. The central region of the core can become especially flattened if magnetic fields are invoked to direct infalling matter onto the mid-plane of the disk-like structure; the resulting structures closely resemble, at least morphologically, rotationally-supported disks, and are therefore referred to as pseudodisks \citep{Galli1993a, Galli1993b}. By introducing an initial density or velocity perturbation, the large-scale ordered rotation of the core can drive dynamical instabilities in the form of a spiral, bar, or ring in its central flattened region \citep{Matsumoto2003, Cha2003, Machida2008}.
Fragments form in localised regions of the resulting dynamical instabilities that are gravitationally unstable (according to the Toomre criterion) and have masses exceeding the local Jeans mass. Binary protostellar systems that form through rotational fragmentation of disk-like structures should naturally exhibit a close alignment between the circumstellar disks of the binary components and the surrounding circumbinary material, and share the same sense in orbital motion. Such a close alignment and similar sense in orbital motion are what we find for L1551\,NE, as found also by \citet{Lim2016} for L1551\,IRS5, arguing for the formation of L1551\,NE through rotational fragmentation just like in the case of L1551\,IRS5.
In L1551\,IRS5, the circumstellar disks of the binary components have comparable sizes \citep[break radii of 12.2\,AU and 10.4\,AU respectively;][]{Lim2016}, suggesting that the binary protostars have comparable masses. In L1551\,NE, one protostar is about five times more massive than the other. Evidently, rotationally-driven fragmentation can lead to binary protostellar systems having either very similar or very different component masses.
\subsection{Fragmentation of L1551 Cloud}
Intriguingly, the spin axes of both the L1551\,NE and L1551\,IRS5 systems are closely oriented in space. Specifically, the circumstellar disks of the binary protostars in L1551\,IRS5 have inclinations of $\sim$46\degr\ and position angles for their major axes of $\sim$148\degr\ \citep{Lim2016}, compared with the circumstellar disks of the binary protostars in L1551\,NE that have inclinations of $\sim$58\degr\ and position angles for their major axes of $\sim$151\degr ($\S\ref{sec:CDs}$). Despite the close spatial orientation of their spin axes, however, the two systems exhibit opposite {senses of} spin. Specifically, as observed from the Earth, the two components of L1551\,NE are orbiting in an anticlockwise direction ($\S\ref{sec:relationship circumbinary disk}$), whereas the two components in L1551\,IRS5 are orbiting in a clockwise direction \citep{Lim2006,Lim2016}.
In theoretical simulations, an initial density or velocity perturbation is imprinted onto rotating cores to facilitate fragmentation driven by dynamical instabilities. In molecular clouds, a ubiquitous source of perturbation is turbulence. Different turbulent cells may have been responsible for producing the parental cores of L1551\,NE and L1551\,IRS5, and imparted on them opposite spins. If so, then the close alignment between the spin axes of these two binary protostellar systems is purely coincidental.
L1551\,NE and L1551\,IRS5 make up one group of active star formation in the L1551 cloud. The other is the HL\,Tau group, which comprises HL\,Tau, XZ\,Tau, LkH$\alpha$\,358, and HH\,30$^*$ (driving source of the Herbig-Haro object HH\,30). HL\,Tau is classified as either a Class\,I object (protostar) or a Class\,II object (classical T\,Tauri star), and the others in the HL\,Tau group as Class\,II objects. The circumstellar disk of HL\,Tau has been very well resolved with ALMA. Based on a 2-dimensional Gaussian fit to its image, \citet{ALMA Partnership2015} derive an inclination of $46\degr.2 \pm 0\degr.2$ and a position angle for its major axis of $138\degr.2 \pm 0\degr.2$. The ionized jet from HL\,Tau has its major axis at a position angle of $\sim$51\degr\, \citep{Mundt1990, Lopez1995, Moriarty-Schieven2006}, closely orthogonal to the circumstellar disk of this object as projected onto the sky. Thus, surprisingly, the circumstellar disk of HL\,Tau also is closely aligned with the circumstellar disks of the binary protostars in L1551\,NE and L1551\,IRS5.
The circumstellar disk of LkH$\alpha$\,358 has been resolved with ALMA. Based on a 2-dimensional Gaussian fit to its image, \citet{ALMA Partnership2015} derive an inclination of $56\degr \pm 2\degr$ and a position angle for its major axis of $170\degr \pm 3\degr$. LkH$\alpha$\,358 does not exhibit any known jet. HH\,30$^*$ is a suspected binary based on wiggles in its optical jet \citep{Anglada2007}. The circumbinary disk of this system has an inclination of $81\degr \pm 2\degr$ and a position angle for its major axis of $125\degr \pm 1\degr$ \citep{Guilloteau2008}. The ionized jet from HH\,30$^*$ has its major axis at a position angle of $\sim$31\degr\, \citep{Mundt1990, Lopez1995, Moriarty-Schieven2006}, closely orthogonal to the circumbinary disk of this object as projected onto the sky. Although distributed over a wider range of angles, the circumstellar disk of LkH$\alpha$\,358 and the circumbinary disk of HH\,30$^*$ do not appear to be randomly oriented with respect to, but instead are aligned to within a few tens of degrees of, the circumstellar disks of L1551\,NE, L1551\,IRS5, and HL\,Tau. As a consequence, the outflows driven by all these objects, including the outflow from XZ\,Tau \citep{Krist1999, Mundt1990, Moriarty-Schieven2006} (a binary system whose circumstellar disks have not been spatially resolved), are all oriented in the north-east to south-west direction as projected onto the sky.
If not for the counter-rotating spins of L1551\,NE and L1551\,IRS5, it would have been natural to attribute the relative close alignment between the spin axes of all the young stellar objects in the L1551 cloud to a large-scale ordered rotation of this cloud. Instead, we note that the spin axes of all these objects are approximately orthogonal, in projection, to the major axis of the filament that comprises the L1551 cloud \citep[][whose measurements provide no evidence for any ordered rotation of this cloud]{Lin2016}; the L1551 cloud filament is itself aligned with the overall elongation of filamentary structures that make up the Taurus molecular cloud complex \citep{Mizuno1995,Goldsmith2008}. The close alignment in the spin axes of all the young stellar objects in the L1551 cloud may therefore reflect (faster) infall and the subsequent formation of cores that are flattened along the major axis of the cloud filament. Local turbulence may have imparted angular momentum to individual cores, thus giving rise to opposite spins between some cores.
\section{Summary and Conclusions}\label{sec:summary}
Using the VLA, we have fully resolved (i.e., along both their major and minor axes) the two circumstellar disks in the Class\,I binary protostellar system L1551\,NE. We also reanalysed archival observations at 3.5\,cm that resolve, along their major axes, the two ionized jets in this system. These observations span nearly two decades, allowing us to study the relative proper motion of the binary protostars. We found that:
\begin{itemize}
\item the stronger ionized jet of source A has a position angle for its major axis of $61\degr^{+4\degr}_{-3\degr}$, and the weaker ionized jet of source B a position angle for its major axis of $69\degr^{+4\degr}_{-5\degr}$. Both jets are therefore aligned, as projected onto the sky, to within the measurement uncertainties (difference in position angles of $8\degr \pm 6\degr$).
\item the circumstellar disk of source A is much larger than that of source B. The images of both circumstellar disks are better fit by a double power-law that exhibits a smooth transition between the inner and outer power-laws, than a single power-law that is abruptly truncated. A single, untruncated, power-law is explicitly rejected for the circumstellar disks of both sources, as is a Gaussian for the circumstellar disk of source A.
\item although we find no unique solution for a double power-law fit to either circumstellar disk, the ratio of their major to minor axes as well as the position angle of their major axes do not depend on the other parameters. Assuming implicitly that the circumstellar disks are intrinsically circular and geometrically thin, we find that the circumstellar disk of source A has an inclination of 57\degr.7 and a position angle for its major axis of 150\degr.9, and the circumstellar disk of source B an inclination of 58\degr.0 and a position angle for its major axis of 152\degr.1. With estimated uncertainties in these parameters of a few degrees, the two circumstellar disks are closely aligned if not parallel. Furthermore, the two circumstellar disks are accurately orthogonal in projection to their respective ionized jets.
\item for the sharpest transition between the inner and outer power-laws, as might be expected of tidally-truncated disks, the radius of maximum curvature in this transition is $\sim$18.6\,AU for the circumstellar disk of source A and $\sim$8.9\,AU for the circumstellar disk of source B. Equating these transition radii with their theoretically predicted tidally-truncated radii, then, for a circular orbit, the ratio in disk sizes implies a binary mass ratio of $\sim$0.23. This binary mass ratio is closely comparable with that inferred by \citet{Takakuwa2014} of $\sim$0.19 based on the projected angular separation between each protostar and the inferred kinematic center of the circumbinary disk. Given the projected angular separation between the two protostars, however, the transition radii of both circumstellar disks are at least a factor of $\sim$1.5 times smaller than their predicted tidally-truncated radii if the system has a binary mass ratio of $\sim$0.2 and a circular orbit.
\item over an interval of 10\,yr, source B has moved northwards (at a significance level of $3.1\sigma$) with respect to source A. By contrast, there is no detectable motion of these two sources along the east-west direction (significance level of only $1.6\sigma$). Furthermore, source B is likely moving away (at a significance level of $2.5\sigma$) from source A. All these measurements agree with the model proposed by \citet{Takakuwa2014} for the relative orientation of the two protostars at their inferred orbital locations for an anticlockwise and circular orbital motion.
\end{itemize}
The two circumstellar disks are closely aligned if not parallel not just with each other but also with their surrounding circumbinary disk, which has an inclination of $62\degr^{+25\degr}_{-17\degr}$ and a position angle for its major axis of $167\degr^{+23\degr}_{-27\degr}$. Furthermore, the two protostars appear to be orbiting each other in the same direction as the rotation of their circumbinary disk. Both the circumstellar and circumbinary disks, as well as the orbit, of this binary system therefore share the same axes for their angular momenta, indicating that L1551\,NE formed through the rotationally-driven fragmentation of its parental core, as is the case for L1551\,IRS5 \citep{Lim2016}. By contrast with L1551\,NE, where the two circumstellar disks have different sizes and their binary protostars different masses, the two circumstellar disks in L1551\,IRS5 have roughly comparable sizes, suggesting that their binary protostars have comparable masses (for a given orbital eccentricity, the truncation radii of circumstellar disks in binary systems depend only on the binary mass ratio). Rotationally-driven fragmentation can therefore lead to binary systems having comparable or very different component masses.
Finally, we pointed out that the circumstellar disks of the binary protostars in both L1551\,NE and L1551\,IRS5, along with their circumbinary disks or flattened circumbinary envelopes, are closely oriented in space (i.e., similar inclinations, as well as position angles for their major axes). Indeed, all the young stellar objects in the L1551 cloud, including HL\,Tau, LkH$\alpha$\,358, HH\,30$^*$, and probably also XZ\,Tau, have spin axes that are approximately orthogonal in projection to the major axis of the filament that makes up the L1551 cloud, which itself is aligned with the major axes of the filamentary structures that make up the Taurus molecular cloud complex. This alignment may reflect (faster) infall along, and the subsequent formation of cores that are flattened across, the minor axes of these filaments. Local turbulence may have imparted angular momentum to individual cores, thus giving rise to opposite spins between some cores.
\acknowledgments
We thank the anonymous referee for constructive suggestions to improve the readability of the manuscript. J. Lim acknowledges support from the Research Grants Council of Hong Kong through grants HKU\,703512P and 17305115. Both T. Hanawa and T. Matsumoto acknowledge the Japan Society for the Promotion of Science (JSPS) Grants-in-Aid for Scientific Research (KAKENHI) through Grant Numbers JP15K05017 and 26400233, respectively.
\vspace{5mm}
\facilities{VLA}
\software{CASA, AIPS, GALFIT}
\section{Introduction}\label{sec1}
Let $P_n$ be the orthogonal projection of $L^{2}([0, 2\pi))$ onto the linear subspace spanned by the
functions $\{e^{i m \theta}: 0\leq m\leq n; 0\leq \theta < 2\pi\}$. For any function $f$ defined on
$[0, 2\pi)$, let $[f]$ be the linear operator corresponding to multiplication by $f$.
In $1952$, G.~Szeg\"o proved that for a positive function $f \in \mathcal{C}^{1+\alpha}$ where
$\alpha > 0$, the following holds
\begin{equation}\label{oriszeg}
\lim_{n \to \infty}\tfrac{1}{n+1}\log \det P_{n}[f]P_{n}=\tfrac{1}{2\pi}\int_{0}^{2\pi}\log f
(\theta)\, d\theta.
\end{equation} Equivalently,~\eqref{oriszeg} can be expressed as
\begin{equation}\label{oriszegbis}
\lim_{n \to \infty}\tfrac{1}{n+1}\trace \log P_{n}[f]P_{n}=\tfrac{1}{2\pi}\int_{0}^{2\pi}\log f
(\theta)\, d\theta.
\end{equation}
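The equivalence of~\eqref{oriszeg} and~\eqref{oriszegbis} rests on the elementary identity, valid for any positive definite operator $A$ on a finite-dimensional space,
\begin{equation*}
\log \det A = \trace \log A ,
\end{equation*}
applied to $A=P_{n}[f]P_{n}$ regarded as an operator on the range of $P_{n}$.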
The above result is known as the strong Szeg\"o limit theorem and we refer to \cite{greszeg,
szeg52} for more details and related results.
Today, the strong Szeg\"o limit theorem can be proved under much weaker conditions on $f$, see
e.g., \cite{goibra, hirs65, kac53}. For high dimensional extensions of the strong Szeg\"o limit
theorem, we refer to \cite{kok96} and the references therein.
In fact, the strong Szeg\"o limit theorem is a special case of a more general result proved by
Szeg\"o using the fact that $P_{n}[f]P_{n}$ is a Toeplitz form \cite{greszeg}. More specifically,
let $f$ be a real-valued integrable function such that $m\leq f(x) \leq M$. Then the eigenvalues
$\{\lambda_{k}^{(n)}\}_{k=1}^{n+1}$ of $P_{n}[f]P_{n}$ are contained in $[m, M]$. For any
continuous function $F$ defined on this interval, it was proved in \cite[Section 5.3]{greszeg} that
\begin{equation}\label{geneszego}
\lim_{n\to
\infty}\tfrac{1}{n+1}\sum_{k=1}^{n+1}F(\lambda_{k}^{(n)})=\tfrac{1}{2\pi}\int_{0}^{2\pi}F(f(x))\,
dx.
\end{equation}
Notice that~\eqref{oriszeg} is a specific case of~\eqref{geneszego}, when $F(x)=\log x$. It follows
from~\eqref{geneszego} that the bounded sequences $\{\lambda_{k}^{(n)}\}_{k=1}^{n+1}$ and
$\{f(\tfrac{2k\pi}{n+1})\}_{k=1}^{n+1}$ are equally distributed in the interval $[m, M]$; see
\cite[Chapter 5]{greszeg}.
By noticing that $e^{im\theta}$ is an eigenfunction of $\Delta= \tfrac{d^{2}}{dx^{2}}$, one can
view the above results as special cases of the Szeg\"o limit theorem for the Laplace-Beltrami operator
(or more generally for pseudodifferential operators) on manifolds \cite{guok96, lasa96, kok96,
wid79}.

In the present paper, we prove the analogue of the strong Szeg\"o limit
theorem on the Sierpi\'nski gasket ($\text{SG}$). This set is an example of a fractal on which a
well-established theory of the Laplacian exists \cite{Ba98, Ki93, Str99, Str06}. In this fractal setting,
the non-periodicity of the eigenfunctions of the Laplacian implies that the analogue of the matrix
$P_j[f]P_j$ is no longer related to a Toeplitz form. Thus our results no longer follow from any
known proof of~\eqref{oriszeg}, but rather rely on the existence of localized eigenfunctions for
the Laplacian on $\text{SG}$ \cite{BaKi97}.
The paper is organized as follows: In Section~\ref{anasg} we briefly introduce some key notions
from analysis on fractals and give a precise description of the Dirichlet spectrum of the Laplacian
on $\text{SG}$. In Section~\ref{singleeigen} we prove a special case of the strong Szeg\"o theorem that
we use in Section~\ref{geneszeg} to prove an analogue of~\eqref{oriszeg}. Finally,
Section~\ref{lastsec} contains a further extension of the strong Szeg\"o limit theorem on $\text{SG}$.
\section{Analysis on the Sierpi\'nski gasket}\label{anasg}
\subsection{Basic features}
In this section we collect some key facts from analysis on $\text{SG}$ that we need to state and prove
our results. These come from Kigami's theory of analysis on fractals, and may be found
in~\cite{Ki93}. An elementary exposition may be found in~\cite{Str99, Str06}.
Let $F_1, F_2$ and $F_3$ be the contractions defined on $\mathbb{R}^2$ respectively by $F_1(x) =
\frac{1}{2}x$, $F_2(x) = \frac{1}{2}x + (\frac{1}{2}, 0)$ and $F_3(x) = \frac{1}{2}x +
(\frac{1}{4}, \frac{\sqrt{3}}{4})$. The Sierpi\'nski gasket denoted by $\text{SG}$ is the unique nonempty
compact subset of $\mathbb{R}^2$ such that $ \text{SG}= \cup_{i=1}^{3}F_{i}(\text{SG})$. Alternatively, $\text{SG}$ can be
defined as a limit of graphs. For a word $\omega=(\omega_1, \omega_2, \hdots, \omega_m)$ of length
$m$, the set $F_{\omega}(\text{SG}) =F_{\omega_{1}}\circ \cdots \circ F_{\omega_{m-1}}\circ
F_{\omega_m}(\text{SG})$
with $\omega_i \in \{1, 2, 3\}$, is called an $m$-cell.
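As a purely illustrative aside, the attractor property can be used to draw $\text{SG}$ numerically by the familiar ``chaos game''; a minimal sketch (in Python; the number of iterations is arbitrary) is:
\begin{verbatim}
import random

# Vertices of SG, i.e., the fixed points of F_1, F_2, F_3.
V = [(0.0, 0.0), (1.0, 0.0), (0.5, 3**0.5 / 2)]

x, y = 0.0, 0.0
points = []
for _ in range(100000):
    vx, vy = random.choice(V)
    x, y = (x + vx) / 2, (y + vy) / 2   # apply a randomly chosen F_i
    points.append((x, y))               # accumulates on SG
\end{verbatim}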
Let $V_{0}=\{(0, 0), (1, 0),
(\frac{1}{2}, \frac{\sqrt{3}}{2})\}$ be the boundary of $\text{SG}$ and $V_{n}= \cup_{i=1}^{3}
F_{i}V_{n-1}$, $n\geq 1$. Define a sequence of graphs $\Gamma_{m}$ with vertices in $V_{m}$ and
edge relation $x\sim_{m} y$ given inductively by: $\Gamma_{0}$ is the complete graph with vertices
in $V_{0}$, and $x\sim_{m}y$ if and only if $x$ and $y$ belong to the same $m$-cell
$F_{\omega}(\text{SG})$.
In all that follows, we assume that $\text{SG}$ is equipped with the probability measure $\mu$ that
assigns the measure $3^{-m}$ to each $m-$cell. We will also need the energy or Dirichlet form that
is naturally defined on $\text{SG}$ and denoted $\mathcal{E}$. The precise definition of $\mathcal{E}$
will not be given here but can be found in \cite{Ki93, Str06}. All we will need in the sequel is
that $\mathcal{E}$ gives rise to a natural distance on $\text{SG}$ called {\it the effective resistance
metric} on $\text{SG}$, and defined for $x,y \in \text{SG}$ by
\begin{equation}\label{resitmet}
d(x, y) = \biggparen{\min\set{\en{u}: u(x) = 0 \, \mbox{and}\, u(y) =1}}^{-1}.
\end{equation}
It is known that $d(x,y)$ is bounded above and below by constant multiples of
$|x-y|^{\log(5/3)/\log2}$, where $|x-y|$ is the Euclidean distance.
For any integer $N>1$ we will consider a partition of the Sierpi\'nski gasket ($\text{SG}$) into
\begin{equation}\label{partition}
\text{SG}=\cup_{|\omega|=N}F_{\omega}\text{SG} ,\end{equation} where for each word $\omega$ of length $N$,
$F_{\omega}(\text{SG})$ is an $N-$cell. Note that there are exactly $3^N$ such cells, each of which has a
measure $3^{-N}$. If $u$ is a function having support entirely contained in a single $N$-cell then
we will say that $u$ is {\em localized at scale $N$}.
\subsection{The Laplacian and its spectrum}
A Laplacian can be defined on $\text{SG}$ either through a weak formulation using the energy and measure
or as a renormalized limit of graph Laplacians in the following manner~\cite{Ki93, Str06}. Define
the graph Laplacian $\Delta_{m}$ on $\Gamma_{m}$ by
\begin{equation}\label{graphlap}
\Delta_{m}f(x) = \sum_{y\sim_{m}x}f(y) - 4f(x)
\end{equation} for $x \in V_{m}\setminus V_0$. The Laplacian
on $\text{SG}$ can now be defined by
\begin{equation}\label{lapla}
\Delta = \tfrac{3}{2} \lim_{m \to \infty} 5^{m} \Delta_{m}.
\end{equation}
A complete description of the spectrum of $\Delta$ on $\text{SG}$ was given in~\cite{FuSh92} using the
method of spectral decimation introduced in~\cite{RaTo83}, and a description of the eigenfunctions
was obtained by this method in~\cite{DSV}, see also~\cite{Tep98}. In a nutshell, the spectral
decimation method completely determines the eigenvalues and the eigenfunctions of $\Delta$ on $\text{SG}$
from the eigenvalues and eigenfunctions of the graph Laplacians $\Delta_m$. More specifically, for
every Dirichlet eigenvalue $\lambda$ of $\Delta$ on $\text{SG}$, there exists an integer $j\geq 1$,
called the {\em generation of birth}, such that if $u$ is a $\lambda$-eigenfunction and $k\geq j$
then $u\evald{V_{k}}$ is an eigenfunction of $\Delta_{k}$ with eigenvalue $\gamma_{k}$. The only
possible initial values $\gamma_{j}$ are $2$, $5$ and $6$, and subsequent values can be obtained
from
\begin{equation}\label{spectdecrelnforevals}
\gamma_{k} = \frac{1}{2}\bigl( 5+ \epsilon_{k} \sqrt{25-4\gamma_{k-1}} \bigr)\text{ for }k>j
\end{equation}
where $\epsilon_{k}$ can take the values $\pm1$. The sequence $\gamma_{k}$ is related to $\lambda$
by
\begin{equation}\label{spectdecrelnofgammaktolambda}
\lambda = \tfrac{3}{2}\lim_{k\to \infty} 5^{k}\gamma_{k}.
\end{equation}
In particular the convergence of~\eqref{spectdecrelnofgammaktolambda} implies that $\epsilon_{k}=1$
for at most a finite number of $k$ values. We let $l$ be the smallest value such that
$\epsilon_{k}=-1$ for all $k>l$, and call it the {\em generation of fixation}.
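A numerical sketch of this recursion (in Python; the truncation level $K$ is an arbitrary choice, taken large enough that $5^{k}\gamma_{k}$ has converged in practice) is:
\begin{verbatim}
from math import sqrt

def eigenvalue(gamma_birth, j, eps, K=40):
    # Extend gamma_j in {2, 5, 6} by the recursion
    # gamma_k = (5 + eps_k*sqrt(25 - 4*gamma_{k-1}))/2 for k > j,
    # then approximate lambda = (3/2) lim 5^k gamma_k at level K.
    gamma = gamma_birth
    for k in range(j + 1, K + 1):
        e = eps.get(k, -1)   # eps_k = -1 for all but finitely many k
        gamma = 0.5 * (5 + e * sqrt(25 - 4 * gamma))
    return 1.5 * 5**K * gamma

# Example: gamma_1 = 2 with eps_k = -1 for every k > 1.
print(eigenvalue(2, 1, {}))
\end{verbatim}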
An interesting and useful feature is that there are a great many eigenfunctions which satisfy both
Dirichlet and Neumann boundary conditions; this is a general property of the Laplacian on fractals
with sufficient symmetry~\cite{BaKi97}, and in the case of $\text{SG}$ it implies both that most
eigenfunctions are localized on small sets, and that eigenspaces have high multiplicity.
Using the spectral decimation algorithm and elementary properties of the map
in~\eqref{spectdecrelnforevals} one can see that the size of an eigenvalue depends (up to constant
factors) on its generation of fixation, and its multiplicity depends on its generation of birth. We
summarize the relevant features of eigenvalues at the bottom of the spectrum and their eigenspaces
in Proposition~\ref{evalsefnsetc} below, and refer the reader to the original treatments
~\cite{BaKi97, DSV, FuSh92}, or the exposition in~\cite{Str06} for proofs.
\begin{prop}\label{evalsefnsetc}
There is a constant $\kappa$ such that the $\tfrac{1}{2}(3^{m+1}-3)$ smallest eigenvalues of
$-\Delta$ are precisely those with size at most $\kappa5^{m}$, and all have generation of fixation
$l\leq m$. The eigenvalues, their multiplicities, and bases for their eigenspaces may be described
as follows.
\begin{itemize}
\item The $2$-series eigenvalues are those obtained from~\eqref{spectdecrelnforevals}
and~\eqref{spectdecrelnofgammaktolambda} with generation of birth $j=1$ and $\gamma_{j}=2$. Each
such eigenvalue has multiplicity $1$.
\item The $5$-series eigenvalues are those obtained from~\eqref{spectdecrelnforevals}
and~\eqref{spectdecrelnofgammaktolambda} with any generation of birth $j\geq1$ and $\gamma_{j}=5$.
There are $2^{m-j}$ such eigenvalues for each $1\leq j\leq m$, every one having multiplicity
$\tfrac{1}{2}(3^{j-1}+3)$. For each such eigenvalue, there is a basis for the corresponding
eigenspace in which all but two of the basis functions have support in a collection of
$(j-1)$-cells arranged in a loop around a ``hole'' of scale at most $(j-1)$ in $\text{SG}$. There are
$\tfrac{1}{2}(3^{j-1}-1)$ such holes and one eigenfunction for each hole.
\item The $6$-series eigenvalues are those obtained from~\eqref{spectdecrelnforevals}
and~\eqref{spectdecrelnofgammaktolambda} with any generation of birth $j\geq2$, $\gamma_{j}=6$ and
$\epsilon_{j+1}=+1$. There are $2^{m-j-1}$ such eigenvalues for each $2\leq j<m$ and $1$ for
$j=m$, every one having multiplicity $\tfrac{1}{2}(3^{j}-3)$. For each such eigenvalue there is a
basis for the corresponding eigenspace that is indexed by points of $V_{j-1}\setminus V_{0}$, and
in which each basis element is supported on the union of the two $j$-cells that intersect at the
corresponding point in $V_{j-1}\setminus V_{0}$.
\end{itemize}
\end{prop}
Recall that a function is said to be localized at scale $N$ if its support is contained entirely in
a single $N$-cell. It is apparent from the above that there are eigenfunctions that are localized
at scale $N$ provided $j>N$. For later use, we compute the number of these that occur in each of
the $5$ and $6$-series.
Let $j>N$ and consider a $6$-series eigenvalue with generation of birth $j$. In the associated
eigenspace there are $\tfrac{1}{2}(3^{N+1}-3)$ basis functions corresponding to the vertices in
$V_{N}\setminus V_{0}$, and which are not localized at scale $N$. The remaining
$\tfrac{1}{2}(3^{j}-3^{N+1})$ basis functions correspond to vertices in $V_{j-1}\setminus V_{N}$
and are localized at scale $N$.
For $j>N$ and a fixed $5$-series eigenvalue, the basis elements for the eigenspace are supported on
loops or chains. A loop is contained in a cell of scale $N$ if and only if the hole it encircles
has scale at least $N+1$. The number of holes of scale at most $N$ is $\tfrac{1}{2}(3^{N}-1)$, so
the number of basis elements not localized at scale $N$ is this plus the two that are not loops,
giving $\tfrac{1}{2}(3^{N}+3)$ in total. The remaining $\tfrac{1}{2}(3^{j-1}-3^{N})$ basis
eigenfunctions are localized at scale $N$.
\section{Szeg\"o limit theorem on $\text{SG}$ for a single eigenspace}\label{singleeigen}
In this section we prove a Szeg\"o limit theorem for a single $5$-series or $6$-series eigenspace
of the Laplacian on $\text{SG}$.
Let $\lambda_{j}$ be a $6$-series eigenvalue with generation of birth $j$ and eigenspace
$E_{j}$. Denote the span of those eigenfunctions corresponding to $\lambda_j$ that are localized at
scale $N<j$ by $E_{j}^{N}$. Let $d_{j}^{N}=\dim{E_{j}^{N}} = \tfrac{1}{2}(3^{j} -3^{N+1})$ and
$\alpha_{j}^{N}=\tfrac{1}{2}(3^{N+1}-3)$ be the dimension of the complementary space in $E_{j}$.
Since there are $3^{N}$ cells of scale $N$, we see that the number of eigenfunctions supported on a
single cell is $m_{j}^{N}=\tfrac{1}{2}(3^{j-N}-3)$.
An analogous construction may be done for a $5$-series eigenvalue, with the only change being
that in this case $\alpha_{j}^{N}=\tfrac{1}{2}(3^{N}+3)$, $d_{j}^{N}=\tfrac{1}{2}(3^{j-1}-3^{N})$ and
$m_{j}^{N}=\tfrac{1}{2}(3^{j-N-1}-1)$.
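As a quick sanity check on these formulas (our own arithmetic), take a $6$-series eigenvalue with
$j=3$ and $N=1$: the multiplicity is $\tfrac{1}{2}(3^{3}-3)=12$, of which
$d_{j}^{N}=\tfrac{1}{2}(3^{3}-3^{2})=9$ basis eigenfunctions are localized, namely
$m_{j}^{N}=\tfrac{1}{2}(3^{2}-3)=3$ on each of the $3$ cells of scale $1$, while
$\alpha_{j}^{N}=\tfrac{1}{2}(3^{2}-3)=3$ are not, and indeed $9+3=12$.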
For each $N$-cell, use the Gram-Schmidt process to orthonormalize the collection of eigenfunctions
supported on that cell. Since functions on separate cells are already orthogonal, taking the union
over all $N$-cells gives an orthonormal basis $\{\tilde{u}_{k}\}_{k=1}^{d_{j}^{N}}$ for
$E_{j}^{N}$. Adjoining the remaining basis elements of $E_{j}$ and again using the Gram-Schmidt
process extends this to an orthonormal basis
\begin{equation*}
\{u_{k}\}_{k=1}^{d_{j}}=\{\tilde{u}_{k}\}_{k=1}^{d_{j}^{N}}\cup \{v_{k}\}_{k=1}^{\alpha_{j}^{N}},
\end{equation*}
for $E_{j}$, where only the $v_{k}$ are not localized at scale $N$.
Let $P_{j}$ be the projection of $L^{2}(\text{SG})$ onto $E_{j}$. For $g \in L^{2}(\text{SG})$, $P_j$ is
defined by
\begin{equation}\label{defproj}
P_{j}g(x):=\sum_{k=1}^{d_{j}}g_{k}u_{k}(x)=\sum_{k=1}^{d_{j}}\ip{g}{u_{k}}u_{k}(x).
\end{equation}
For a real-valued function $f$ on $\text{SG}$, we recall that $[f]$ denotes the operator
corresponding to pointwise multiplication by $f$.
\subsection{The case of simple functions}\label{subsec1}
\begin{lemma}\label{simpleszeg}
Let $f=\sum_{k=1}^{3^{N}}a_{k}\chi_{C_{k}}$ where we assume that $a_{k}>0$ for all $k$ and
$\chi_{C_{k}}$ denotes the characteristic function of the $N$-cell $C_{k}$. Then for $P_{j}$ as
above,
$$\lim_{j\to \infty}\tfrac{1}{d_{j}}\log \det P_{j}[f]P_{j}=\int_{\text{SG}}\log f(x)\, d\mu(x).$$
Furthermore, for $j$ large enough,
$$\tfrac{1}{d_{j}}\log \det P_{j}[f]P_{j} - \int_{\text{SG}} \log f(x)\, d\mu(x) = O(d_{j}^{-1}).$$
\end{lemma}
\begin{proof}
For $j>N$ set $M_{j}=P_{j}[f]P_{j}$. Then $M_{j}$ is a $d_{j}\times d_{j}$ matrix which has block
structure
\begin{equation}\label{matrixmj}
M_{j}=\begin{bmatrix} R_{j}& \star \\
0 & N_{j}\end{bmatrix}
\end{equation}
with respect to the basis $\{u_{k}\}_{k=1}^{d_{j}}$. Here $R_j$ is an invertible $d_{j}^{N}\times d_{j}^{N}$ matrix
corresponding to the ``localized'' part, while $N_{j}$ is an invertible $\alpha_{j}^{N} \times
\alpha_{j}^{N}$ matrix corresponding to the ``non-localized'' part. Furthermore, $R_j$ is a block
diagonal matrix, where each block is an $m_{j}^{N}\times m_{j}^{N}$ matrix that corresponds to a
single $N$-cell $C_{k}$, and is therefore simply $a_{k}I_{m_{j}^{N}}$ where $I_{d}$ is notation for
the $d\times d$ identity matrix. It follows immediately that
\begin{align*}
\log\det M_{j}
&= \log\det R_{j} +\log\det N_{j}\\
&= \log \biggl(\prod_{k=1}^{3^{N}} a_{k}^{m_{j}^{N}}\biggr) +\log\det N_{j}\\
&= m_{j}^{N} \biggl( \sum_{k=1}^{3^{N}} \log a_{k} \biggr) +\log\det N_{j}.
\end{align*}
Using $d_{j}^{N}=3^{N}m_{j}^{N}$ we conclude
\begin{align*}
\tfrac{1}{d_{j}}\log \det M_{j}
&=\tfrac{m_{j}^{N}3^{N}}{d_{j}} \sum_{k=1}^{3^{N}}3^{-N}\log a_{k} + \tfrac{1}{d_{j}}\log \det N_{j} \notag \\
&=\tfrac{d^{N}_{j}}{d_{j}} \int_{\text{SG}}\log f (x)\, d\mu(x) + \tfrac{1}{d_{j}}\log \det N_{j},
\end{align*}
and since $d_{j}-d_{j}^{N}=\alpha_{j}^{N}$, we have
\begin{align}\label{errorterminsimplecase}
\lefteqn{\tfrac{1}{d_{j}}\log \det M_{j} - \int_{\text{SG}}\log f (x)\, d\mu(x)}\quad&\notag\\
&= \tfrac{-\alpha_{j}^{N}}{d_{j}}\int_{\text{SG}}\log f (x)\, d\mu(x) + \tfrac{1}{d_{j}}\log\det N_{j}.
\end{align}
We can now afford a crude estimate of the term $\log\det N_{j}$. Since $f\in L^{\infty}$, the
multiplier $[f]$ is bounded on $L^{2}$ by $\|f\|_{\infty}$. It follows that $\langle
N_{j}g,g\rangle \leq \|f\|_{\infty}$ for any $g\in E_{j}$ with $\|g\|_{2}=1$, and therefore that
$\det N_{j}\leq \|f\|_{\infty}^{\alpha_{j}^{N}}$. Combining this
with~\eqref{errorterminsimplecase} we see
\begin{equation*}
\biggl| \tfrac{1}{d_{j}}\log \det M_{j} - \int_{\text{SG}}\log f (x)\, d\mu(x) \biggr|
\leq \tfrac{\alpha_{j}^{N}}{d_{j}} \Bigl( \|\log f\|_{1} + \|f\|_{\infty} \Bigr)
\end{equation*}
which completes the proof because $\alpha_{j}^{N}$ is bounded by a constant multiple of $3^{N}$ and
$d_{j}$ is comparable to $3^{j}$.
\end{proof}
\subsection{The case of continuous positive functions}\label{contszeg}
\begin{thm}\label{main1}
Let $f$ be a positive and continuous function on $\text{SG}$. Then
\begin{equation}\label{mainest1}
\lim_{j\to \infty}\tfrac{1}{d_{j}}\log \det P_{j}[f]P_{j}=\int_{\text{SG}}\log f(x)\, d\mu(x).
\end{equation}
If in addition, we assume that $f$ is H\"older continuous of order $\alpha$ in the resistance
metric $R$ on $\text{SG}$, then
\begin{equation}\label{improvedest1}
\tfrac{1}{d_{j}}\log \det P_{j}[f]P_{j} - \int_{\text{SG}}\log f(x)\, d\mu(x) = O(d_{j}^{-\beta})
\end{equation} where
$$\beta =\tfrac{\alpha \log(5/3)}{\log 3 + \alpha \log(5/3)}=1 - \tfrac{\log 3}{\alpha \log(5/3) + \log 3}.$$
\end{thm}
\begin{proof}
Since $\text{SG}$ is compact, $\min_{x\in\text{SG}} f(x)=m>0$. Given $\epsilon>0$, uniform continuity provides
$N$ and a simple function $f_{N}=\sum_{k=1}^{3^{N}}a_{k}\chi_{C_{k}}$ such that
\begin{equation*}
\|f-f_{N}\|_{\infty}<\min\bigl(\tfrac{1}{2},\tfrac{m}{2},\tfrac{\epsilon m}{2}\bigr),
\end{equation*}
from which the following are immediate,
\begin{gather}
\tfrac{\bigl| f(x)-f_{N}(x)\bigr|}{\bigl|f_{N}(x)\bigr|}
\leq \epsilon, \notag\\
1-\epsilon \leq \tfrac{f(x)}{f_{N}(x)} \leq 1+\epsilon,\label{ratioboundsinmain1thm}\\
-2\epsilon\leq \log(1-\epsilon) \leq \log \Bigl( \tfrac{f(x)}{f_{N}(x)}\Bigr) \leq
\log(1+\epsilon)\leq \epsilon. \label{logboundsinmain1thm}
\end{gather}
Note that~\eqref{logboundsinmain1thm} implies $\bigl| \int_{SG}\log f-\int_{SG}\log f_{N}\bigr|\leq
2\epsilon$.
Now let us estimate $\log\det P_{j}[f]P_{j}$ in the same manner as was done in
Lemma~\ref{simpleszeg}. It has a block structure like~\eqref{matrixmj}, but the diagonal blocks in
$R_{j}$ are no longer multiples of the identity matrix $I_{m_{j}^{N}}$. However it follows
from~\eqref{ratioboundsinmain1thm} that the values on the diagonal corresponding to $C_{k}$ are
bounded below by $a_{k}(1-\epsilon)$ and above by $a_{k}(1+\epsilon)$, and thus
\begin{equation*}
\tfrac{d_{j}^{N}}{d_{j}} \log(1-\epsilon)
\leq \tfrac{1}{d_{j}} \log\det R_{j} - \tfrac{d_{j}^{N}}{d_{j}} \int_{\text{SG}} \log f_{N} \, d\mu
\leq \tfrac{d_{j}^{N}}{d_{j}} \log(1+\epsilon)
\end{equation*}
and in particular $\bigl| d_{j}^{-1}\log\det R_{j} - d_{j}^{N}d_{j}^{-1} \int_{SG} \log f_{N} \bigr|\leq
2\epsilon$.
Combining these estimates with the same $\log\det N_{j} \leq \alpha_{j}^{N}\|f\|_{\infty}$ bound
used in Lemma~\ref{simpleszeg} we have
\begin{align}
\lefteqn{\biggl| \tfrac{1}{d_{j}} \log\det P_{j}[f]P_{j} - \int_{SG} \log f\, d\mu\biggr| }\quad& \notag\\
&\leq \biggl| \tfrac{1}{d_{j}} \log\det R_{j} - \tfrac{d_{j}^{N}}{d_{j}} \int_{SG} \log f_{N} \, d\mu \biggr|
+ \tfrac{\alpha_{j}^{N}}{d_{j}} \biggl| \int_{SG} \log f_{N} \, d\mu \biggr| \notag\\
&\quad + \biggl| \int_{SG} \log f_{N}- \log f \, d\mu \biggr| + \tfrac{1}{d_{j}}\log\det N_{j} \notag\\
&\leq 4\epsilon + c\tfrac{\alpha_{j}^{N}}{d_{j}}\bigl( \|f\|_{\infty}+\|\log f\|_{1} \bigr) \notag\\
&\leq 4\epsilon + c3^{N-j}\bigl( \|f\|_{\infty}+\|\log f\|_{1} \bigr). \label{mainthm1boundforall}
\end{align}
This gives the first statement of the theorem.
In the case that $f$ is H\"older continuous of order $\alpha$ in the resistance metric we see that
$$\epsilon = \nm{f-f_{N}}_{L^{\infty}} = O((\tfrac{3}{5})^{N\alpha}).$$ For a fixed large
$j$ we may then choose $N$ such that the bound in~\eqref{mainthm1boundforall} is minimized, which
occurs when $\epsilon\approx3^{N-j}.$ Setting $(3/5)^{N\alpha}=3^{N-j}$ we compute
\begin{equation*}
3^{N-j} =3^{\tfrac{-j\alpha \log(5/3)}{\log 3 + \alpha \log(5/3)}}
\end{equation*}
and substitute into~\eqref{mainthm1boundforall} to obtain~\eqref{improvedest1}, using
$d_{j}\approx3^{j}$.
\end{proof}
\begin{rem}
The following special cases of~\eqref{improvedest1} are worth pointing out. \newline If $\alpha =
1/2$ (which is the case when $f\in \operatorname{dom}\mathcal{E}$), then $\beta = 1 -\tfrac{\log 9}{\log 15}.$
\newline If $\alpha =1$ (which is the case if $f \in \operatorname{dom}\Delta$), then $\beta = 1-\tfrac{\log
3}{\log 5}.$
\end{rem}
\section{General Szeg\"o Theorem on $\text{SG}$}\label{geneszeg}
In this section we prove analogues of the results proved in Section~\ref{singleeigen} for the
situation where we look at all eigenvalues up to a certain value $\Lambda$. We therefore let
$E_{\Lambda}$ be the span of all eigenfunctions corresponding to eigenvalues $\lambda$ of $-\Delta$
for which $\lambda\leq \Lambda$, let $P_{\Lambda}$ be projection onto $E_{\Lambda}$, and set
$d_{\Lambda}=\dim(E_{\Lambda})$. We also suppose that a scale $N$ is fixed.
Since $-\Delta$ is self-adjoint the eigenspaces of distinct eigenvalues are orthogonal. For each
$\lambda<\Lambda$ from either the $5$-series or the $6$-series, and having generation of birth
$j>N$ we take an orthonormal basis for the corresponding eigenspace of the type described in
Section~\ref{singleeigen}. For all other eigenspaces in $E_{\Lambda}$ we simply take orthonormal
bases. The union of the basis vectors is then a basis for $E_{\Lambda}$ and in this basis the
operator $P_{\Lambda}[f]P_{\Lambda}$ is a block diagonal matrix $M_{\Lambda}$ with one block
$M_{\lambda}$ for each eigenvalue $\lambda\leq\Lambda$.
\begin{thm}\label{main2} Let $f> 0$ be a continuous function on $\text{SG}$. Then,
\begin{equation}\label{mainest2}
\lim_{\Lambda\to \infty}\tfrac{1}{d_{\Lambda}}\log \det M_{\Lambda} =\int_{\text{SG}}\log f(x)\, d\mu(x).
\end{equation}
If in addition we assume that $f$ is H\"older continuous of order $\alpha$ in the resistance metric
$R$ on $\text{SG}$ then
\begin{equation}\label{improvedest2}
\tfrac{1}{d_{\Lambda}}\log \det M_{\Lambda}-\int_{\text{SG}}\log f(x)\, d\mu(x)
=O(d_{\Lambda}^{-\tilde{\beta}})
\end{equation}
where $$\tilde{\beta} =\beta (1-\tfrac{\log 2}{\log 3})=(\tfrac{\alpha \log(5/3)}{\log 3 + \alpha
\log(5/3)})(1-\tfrac{\log 2}{\log 3}).$$
\end{thm}
\begin{proof}
Fix $\epsilon>0$. It is clear that $\log\det M_{\Lambda} = \sum_{\lambda\leq\Lambda}\log\det
M_{\lambda}$. If $\lambda$ is one of the $5$ or $6$-series eigenvalues with $j>N$ then replacing
$f$ by $f_{N}$ as in the proof of Theorem~\ref{main1}, we have from~\eqref{mainthm1boundforall}
\begin{equation}\label{estimateforgoodeigenspaceblocks}
\biggl| \log\det M_{\lambda} - d_{\lambda}\int_{SG} \log f\, d\mu \biggr|
\leq 4\epsilon d_{\lambda} + c3^{N}\bigl( \|f\|_{\infty}+\|\log f\|_{1} \bigr).
\end{equation}
Now we let $\Gamma_{N}$ be the set of $\lambda<\Lambda$ with generation of birth $j>N$ and
sum~\eqref{estimateforgoodeigenspaceblocks} over $\lambda\in\Gamma_{N}$, noting that
$d_{\Lambda}^{-1}\sum_{\lambda\in\Gamma_{N}}d_{\lambda}\leq 1$. Using the trivial bound $\log\det
M_{\lambda}\leq d_{\lambda} \|f\|_{\infty}$ for the remaining terms in $\log\det M_{\Lambda}$, and
$d_{\lambda}\|\log f\|_{1}$ for those making up the integral, we obtain
\begin{align}
\lefteqn{\biggl| \tfrac{1}{d_{\Lambda}} \log\det M_{\Lambda} - \int_{SG} \log f\, d\mu
\biggr|}\quad&\notag\\
&\leq 4\epsilon + \bigl( \|f\|_{\infty}+\|\log f\|_{1} \bigr)
\biggl( \Bigl( \tfrac{c3^{N}}{d_{\Lambda}} \sum_{\lambda\in\Gamma_{N}} 1 \Bigr)+
\Bigl(\sum_{\lambda\not\in\Gamma_{N}} \tfrac{d_{\lambda}}{d_{\Lambda}} \Bigr) \biggr).
\label{main2intermedest}
\end{align}
The remaining work in the proof is to estimate the number of eigenvalues in $\Gamma_{N}$ and the
sum of the dimensions $d_{\lambda}$ for $\lambda\not\in\Gamma_{N}$, which we do using
Proposition~\ref{evalsefnsetc}. For this purpose, take $m\in\mathbb{N}$ so
$\kappa5^{m-1}\leq\Lambda<\kappa5^{m}$, where $\kappa$ is as in Proposition~\ref{evalsefnsetc}.
Since $\Gamma_{N}$ is empty and the estimate is trivial if $N\geq m$, we assume without loss of
generality that $N<m$. The eigenvalues less than $\kappa5^{m}$ and having generation of birth
$j\leq N$ number $2^{m-1}$ with multiplicity $1$ from the $2$-series, $2^{m-j}$ with multiplicity
$\tfrac{1}{2}(3^{j-1}+3)$ from the $5$-series and $2^{m-j-1}$ with multiplicity
$\tfrac{1}{2}(3^{j}-3)$ if $2\leq j\leq N$. Summing these gives
\begin{equation*}
2^{m-1} + \sum_{j=1}^{N} 2^{m-j-1}(3^{j-1}+3) + \sum_{j=2}^{N} 2^{m-j-2}(3^{j}-3)
=O(2^{m-N}3^{N}),
\end{equation*}
so
\begin{equation}\label{main2intermedest2}
\sum_{\{\lambda\not\in\Gamma_{N}: \lambda\leq \Lambda\}} d_{\lambda}
\leq \sum_{\{\lambda\not\in\Gamma_{N}: \lambda\leq \kappa 5^{m+1}\}} d_{\lambda}
=O(2^{m-N}3^{N}).
\end{equation}
Moreover the number of $\lambda\in\Gamma_{N}$ such that $\lambda\leq\kappa 5^{m}$ is
$\sum_{j=N+1}^{m}2^{m-j}$ from the $5$-series and $\sum_{j=N+1}^{m} 2^{m-j-1}$ from the $6$-series,
giving a total that is $O(2^{m-N})$. This implies
\begin{equation*}
\tfrac{c3^{N}}{d_{\Lambda}} \sum_{\lambda\in\Gamma_{N}} 1
=O( 2^{m-N} 3^{N}d_{\Lambda}^{-1}),
\end{equation*}
and substituting this and~\eqref{main2intermedest2} into~\eqref{main2intermedest}, along with
$d_{\Lambda}\geq \tfrac{1}{2}(3^{m}-3)$ because $\Lambda\geq\kappa5^{m-1}$, we have
\begin{equation*}
\biggl| \tfrac{1}{d_{\Lambda}} \log\det M_{\Lambda} - \int_{SG} \log f\, d\mu \biggr|
\leq 4\epsilon + c\Bigl(\tfrac{3}{2} \Bigr)^{N-m},
\end{equation*}
which proves the first statement of the theorem.
For H\"{o}lder continuous $f$ and fixed $\Lambda$ we may now optimize the choice of $N$ as in the
proof of Theorem~\ref{main1} to obtain~\eqref{improvedest2}.
\end{proof}
\begin{rem}
Observe that in comparison with~\eqref{improvedest1}, the error in~\eqref{improvedest2} is decaying
at a slower rate. This is a consequence of the fact that, at the optimal $N$, the eigenfunctions
that are not localized at scale $N$ make up a larger proportion (in terms of dimension) of the
space $E_{\Lambda}$ than they do in the spaces $E_{\lambda}$ with $\lambda\approx\Lambda$.
\end{rem}
\section{``Almost'' equally distributed sequences}\label{lastsec}
As mentioned in the Introduction,~\eqref{geneszego} can be translated into results on equally
distributed sequences. In this section we shall prove an analogue of~\eqref{geneszego} on $\text{SG}$.
This will be used to define the notion of ``almost'' equally distributed sequences on $\text{SG}$. We
recall the definition of equally distributed sequences due to H.~Weyl, for which we refer to
\cite[Chapter 5]{greszeg}.
\begin{deft}\label{eqseq}
Fix $K>0$. For each $n$ let $a_{1}^{(n)},\dotsc, a_{n+1}^{(n)}$ and $b_{1}^{(n)},\dotsc,
b_{n+1}^{(n)}$ be sets of $n+1$ numbers from the interval $[-K,K]$. We say that the sets
$\{a_{j}^{(n)}\}$ and $\{b_{k}^{(n)}\}$, $n\to \infty$, are equally distributed in the interval
$[-K, K]$, if given any continuous function $F$ on $[-K, K]$ we have
\begin{equation*}
\lim_{n \to \infty} \tfrac{1}{n+1} \sum_{j=1}^{n+1}[F(a_{j}^{(n)}) - F(b_{j}^{(n)})]=0.
\end{equation*}
\end{deft}
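A simple illustration: taking $a_{j}^{(n)}=\tfrac{j}{n+1}$ and $b_{j}^{(n)}=\tfrac{j-1}{n+1}$ in
$[0,1]$, the sum in the definition telescopes,
\begin{equation*}
\tfrac{1}{n+1}\sum_{j=1}^{n+1}\Bigl[F\bigl(\tfrac{j}{n+1}\bigr)-F\bigl(\tfrac{j-1}{n+1}\bigr)\Bigr]
=\tfrac{F(1)-F(0)}{n+1}\to 0,
\end{equation*}
so the two sets are equally distributed in $[0,1]$.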
We first consider an extension of the results proved in Section~\ref{singleeigen}. Recall that
$M_j = P_{j}[f]P_{j}$, where $P_j$ is the orthogonal projection onto the eigenspace $E_{j}$
corresponding to a $5$ or $6$-series eigenvalue $\lambda_{j}$ of $-\Delta$ with generation of birth
$j$. Let $\bigl\{\sigma_{k}^{(j)}\bigr\}_{k=1}^{d_{j}}$ be the eigenvalues of $M_{j}$.
\begin{lemma}\label{eqseq1}
Let $f=\sum_{k=1}^{3^{N}}a_{k}\chi_{C_{k}}$ with all $a_{k}>0$ and let $m=\min_{k}a_{k}$,
$M=\max_{k}a_{k}$. Let $F$ be continuous on $[m,M]$. Then
\begin{equation*}
\lim_{j\to \infty}\tfrac{1}{d_{j}}\sum_{k=1}^{d_{j}}F(\sigma_{k}^{(j)})=\int_{\text{SG}}F(f(x))\,
d\mu(x).
\end{equation*}
Moreover there is a set of points $\{s^{(j)}_{k}\}_{k=1}^{d_{j}}$ in $\text{SG}$ such that
$\{\sigma_{k}^{(j)}\}$ and $\{f(s_{k}^{(j)})\}$ are ``almost'' equally distributed in $[m, M]$ when
$j \to \infty$.
\end{lemma}
\begin{rem}
We use the term ``almost'' equally distributed because the above limit is computed along the
subsequence $d_{j}$ of the positive integers.
\end{rem}
\begin{proof}
Note from the proof of Lemma~\ref{simpleszeg} that all eigenvalues of $M_{j}$ satisfy $m \leq
\sigma_{k}^{(j)} \leq M$, and that each $a_k$ is an eigenvalue of $M_j$ with multiplicity
$m_{j}^{N}$. The remaining eigenvalues form a set $\Upsilon_{j}$ with $\#\Upsilon_{j}\leq
\alpha_{j}^{N}$. Using the fact that $m_{j}^{N}=3^{-N}d_{j}^{N}$ we compute
\begin{align*}
\lefteqn{\biggl|\tfrac{1}{d_{j}} \sum_{k=1}^{d_{j}}F(\sigma_{k}^{(j)}) - \int_{SG} F(f(x))\, d\mu
\biggr|}\quad&\\
&= \biggl| \tfrac{1}{d_{j}} \sum_{\Upsilon_{j}} F(\sigma_{k}^{(j)})
+ \tfrac{m_{j}^{N}}{d_{j}} \sum_{k=1}^{3^{N}}F(a_{k}) - \int_{SG} F(f(x))\, d\mu \biggr|\\
&\leq \tfrac{\alpha_{j}^{N}}{d_{j}}\|F\|_{L^{\infty}([m,M])} + \Bigl(1-\tfrac{d_{j}^{N}}{d_{j}}\Bigr) \Bigl| \int_{SG} F(f(x))\, d\mu \Bigr| \\
&\leq \tfrac{2\alpha_{j}^{N}}{d_{j}}\|F\|_{L^{\infty}([m,M])}.
\end{align*}
The last part of the result follows from approximating $\int_{\text{SG}}F(f(x))\, d\mu(x)$ with Riemann
sums.
\end{proof}
With this result we can prove the following extension of Theorem~\ref{main1}.
\begin{thm}\label{maineqseq1}
Let $f>0$ be continuous on $\text{SG}$ and $M=\max_{\text{SG}}f(x)$. If $F$ is continuous on $[0, \infty)$,
then
$$\lim_{j\to \infty}\tfrac{1}{d_{j}}\sum_{k=1}^{d_{j}}F(\sigma_{k}^{(j)})=\int_{\text{SG}}F(f(x))\,
d\mu(x).$$ Moreover there is a set of points $\{s^{(j)}_{k}\}_{k=1}^{d_{j}}$ in $\text{SG}$ such that
$\{\sigma_{k}^{(j)}\}$ and $\{f(s_{k}^{(j)})\}$ are ``almost'' equally distributed in $[0,
\nm{f}_{L^{\infty}}]$ as $j\to\infty$.
\end{thm}
\begin{proof}
Let $\delta>0$ be given and let $0<\epsilon<\delta$ be such that $|a-b|\leq\epsilon$ and
$a,b\in[0,M]$ implies $|F(a)-F(b)|<\delta$. Take $f_{N}$ a simple function as in the proof of
Theorem~\ref{main1}. We saw in that proof that the eigenvalues of $P_{j}[f]P_{j}$ that correspond
to the eigenfunctions localized at scale $N$ are bounded below by $(1-\epsilon)a_{k}$ and above by
$(1+\epsilon)a_{k}$. Writing $\rho_{k}^{(j)}$ for the eigenvalues of $P_{j}[f_{N}]P_{j}$ we have
\begin{align*}
\lefteqn{\biggl| \tfrac{1}{d_{j}} \sum_{k=1}^{d_{j}}F(\sigma_{k}^{(j)}) - \int_{SG} F(f(x))\, d\mu
\biggr|}\quad&\notag\\
&\leq \biggl| \tfrac{1}{d_{j}} \sum_{k=1}^{d_{j}}F(\rho_{k}^{(j)}) - \int_{SG} F(f_{N}(x))\, d\mu \biggr|
+ \biggl| \tfrac{1}{d_{j}} \sum_{k=1}^{d_{j}}\Bigl( F(\sigma_{k}^{(j)}) - F(\rho_{k}^{(j)}) \Bigr) \biggr| \notag\\
&\quad + \biggl| \int_{SG} \Bigl( F(f(x))- F(f_{N}(x))\Bigr) \, d\mu(x) \biggr| \notag\\
&\leq \tfrac{2\alpha_{j}^{N}}{d_{j}}\|F\|_{L^{\infty}([m,M])} + \delta + \epsilon.
\end{align*}
This proves the first statement, and the second statement follows using Riemann sums as before.
\end{proof}
More generally we have the following extension of Theorem~\ref{main2}, in which we denote the
eigenvalues of $M_{\Lambda}$ by $\{\sigma_{k}^{(\Lambda)}\}_{k=1}^{d_{\Lambda}}$.
\begin{thm}\label{maineqseq2}
Let $f> 0$ be continuous on $\text{SG}$ and $F$ be continuous on $[0, \infty)$. Then
\begin{equation}\label{maineqseq2eqn}
\lim_{\Lambda\to\infty}\tfrac{1}{d_{\Lambda}} \sum_{k=1}^{d_{\Lambda}} F(\sigma_{k}^{(\Lambda)})
=\int_{\text{SG}}F(f(x))\, d\mu(x).
\end{equation}
Moreover there is a set of points $\{s^{(\Lambda)}_{k}\}_{k=1}^{d_{\Lambda}}$ in $\text{SG}$ such that
$\{\sigma_{k}^{(\Lambda)}\}$ and $\{f(s_{k}^{(\Lambda)})\}$ are ``almost'' equally distributed in $[0,
\nm{f}_{L^{\infty}}]$ as $\Lambda \to \infty$.
\end{thm}
\begin{proof}
For $\delta>0$ and $N$ as in Theorem~\ref{maineqseq1} we decompose the sum into terms corresponding
to $\lambda\in\Gamma_{N}$ and $\lambda\not\in\Gamma_{N}$ as in Theorem~\ref{main2}. For
$\lambda\in\Gamma_{N}$ we estimate as in the proof of Theorem~\ref{maineqseq1}, and follow the
argument of Theorem~\ref{main2} to find that the left and right sides of~\eqref{maineqseq2eqn}
differ by at most $2\delta+O((3/2)^{N-m})$, where $\Lambda\approx5^{m}$.
\end{proof}
\begin{rem}
The results proved here for $\text{SG}$ should extend to other fractals on which localized eigenfunctions
come to predominate in the spectrum as the eigenvalues increase. Sufficient symmetry conditions
for the existence of high multiplicity eigenspaces with localized eigenfunctions were given
in~\cite{BaKi97}.
\end{rem}
\subsection*{Acknowledgment} The authors are grateful to Victor Guillemin for suggesting that we
investigate these questions.
\section{Introduction}
Many applications in the control of quantum systems involve
controlling a large ensemble using the same control signal
\cite{Li_Khaneja, Bernstein}. In many practical cases, the elements
of the ensemble show dispersions or variations in the parameters
which govern the dynamics of each individual system. For example,
in magnetic resonance experiments, the spins in an ensemble may have
large dispersions in their resonance frequencies (Larmor dispersion)
or in the strength of the applied radio frequency fields (rf
inhomogeneity) seen by each member of the ensemble. Another example
is in the field of NMR imaging, where a dispersion is intentionally
introduced in the form of a linear gradient \cite{Bernstein}, and
then exploited to successfully image the material under study.
A canonical problem in the control of quantum ensembles is the
design of rf fields (control laws) which can simultaneously steer a
continuum of systems, characterized by the variation in the internal
parameters governing the systems, from a given initial distribution
to a desired final distribution. Such control laws are called
compensating pulse sequences in the Nuclear Magnetic Resonance (NMR)
literature. From the standpoint of mathematical control theory, the
challenge is to simultaneously steer a continuum of systems between
points of interest using the same control signal. Typical designs
include excitation and inversion pulses in NMR spectroscopy and
slice selective pulses in NMR imaging \cite{Bernstein, Levitt,
Tycko, Tycko2, Shaka, Levitt2, Levitt3, Garwood, Skinner, Kobzar,
Kobzar2}. In many cases, one desires to find a control law that
prepares the final state as some desired function of the parameters.
A premier example is the design of slice selective pulse sequences
in magnetic resonance imaging applications, where spins are excited
or inverted depending upon their physical position in the sample
under study \cite{Bernstein, SLR, Silver, Shinnar, LeRoux, Conolly}.
In fact, the design of such pulses is a fundamental requisite for
almost all magnetic resonance imaging techniques. This paper
introduces the new method of Fourier synthesis pulse sequence design
for systems showing dispersions in the parameters governing their
dynamics.
In this paper we focus on the Bloch equations with a linear
one-dimensional gradient, which arise in the context of NMR spectroscopy
and NMR imaging applications.
\begin{equation}
\frac{d}{dt}\left[\begin{array}{c} M_x \\
M_y \\
M_z \end{array}\right] = \left[\begin{array}{ccc} 0 & -G(t) s & \epsilon u(t) \\
G(t) s & 0 & -\epsilon v(t) \\
-\epsilon u(t) & \epsilon v(t) & 0 \end{array}\right]\left[\begin{array}{c} M_x \\
M_y \\
M_z \end{array}\right] \label{chap32_bloch}
\end{equation}
Here, $M(s, \epsilon) = [M_x \ M_y \ M_z]^T$ is the state vector,
$u(t) \in \Re$, $v(t) \in \Re$, and $G(t) \in \Re$ are controls and
the parameters $s \in [0, 1]$ and $\epsilon \in [1-\delta, 1], \
\delta > 0$ are dispersion parameters which will be explained
subsequently. Without loss of generality, we will always normalize
the initial state of the system (\ref{chap32_bloch}) to have unit
norm, so that the system evolves on the unit sphere in three
dimensions (Bloch sphere). A useful way to think about the Bloch
equations (\ref{chap32_bloch}), is by imagining a two dimensional
mesh of systems, each with a particular value of the pair
$(s,\epsilon)$. We are permitted to apply a single set of controls
$(u(t),v(t),G(t))$ to the entire mesh of systems, and the controls
should prepare the final state of each system as a desired function
of the parameters $(s,\epsilon)$ which govern the system dynamics.
From a physics perspective, the system (\ref{chap32_bloch})
corresponds to an ensemble of noninteracting spin-$\frac{1}{2}$ particles in a
static magnetic field $B_0$ along the $z$ axis and a transverse rf
field $(A(t) \cos(\psi(t)),A(t) \sin(\psi(t)))$ in the $x$-$y$
plane. The state vector $[M_x \ M_y \ M_z]^T$ represents the
coordinate of the unit vector in the direction of the net
magnetization vector for the ensemble \cite{Cavanagh}. The controls
$u(t)$ and $v(t)$ correspond to available rf fields we may apply to
the ensemble of spins. The dispersion in the magnitude of the rf
field applied to the sample is modeled by including a dispersion
parameter $\epsilon$ such that $A(t) = \epsilon A_0(t)$ with
$\epsilon \in [1-\delta, 1], \ \delta
>0$. Thus, the maximum amplitude for the rf field ($\epsilon = 1$) corresponds
to the maximum amplitude seen by any spin in the ensemble.
Similarly, we consider a linear gradient $G(t) s$, where $G(t)$ may
be thought of as a control, and $s$ represents the normalized
spatial position of the spin system in the sample of interest. In
(\ref{chap32_bloch}), we work in units with the gyromagnetic ratio
of the spins $\gamma = 1$. In this paper we give new design methods,
which scale polynomially, for constructing pulse sequences
for (\ref{chap32_bloch}) that prepare the final state of the system
as a function of the parameters $s$ and $\epsilon$.
\section{Design Method for rf Inhomogeneity}
Considering only the Bloch equations with rf inhomogeneity and no
linear gradient ($G(t) = 0$), we can rewrite (\ref{chap32_bloch}) in
terms of the generators of rotation in three dimensions as
\begin{equation}
\frac{d}{dt}\left[\begin{array}{c} M_x \\
M_y \\
M_z \end{array}\right] = \epsilon (u(t) \Omega_y + v(t) \Omega_x) \left[\begin{array}{c} M_x \\
M_y \\
M_z \end{array}\right] \label{bloch_eps}
\end{equation}
where
\begin{eqnarray}
\Omega_x &=& \left[\begin{array}{ccc} 0 & 0 & 0 \\
0 & 0 & -1 \\
0 & 1 & 0 \end{array}\right] \nonumber \\
\Omega_y &=& \left[\begin{array}{ccc} 0 & 0 & 1 \\
0 & 0 & 0 \\
-1 & 0 & 0 \end{array}\right] \nonumber \\
\Omega_z &=& \left[\begin{array}{ccc} 0 & -1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 0 \end{array}\right]
\end{eqnarray}
We will come back to the full version of the Bloch equations
(\ref{chap32_bloch}) with both a linear gradient and rf
inhomogeneity later in the paper. The problem is to design $u(t) \in
\Re$ and $v(t) \in \Re$ to effect some desired evolution. We now
show how to construct controls to give a rotation of angle
$\phi(\epsilon)$ around the $x$ axis or the $y$ axis of the Bloch
sphere. From these constructions, an arbitrary rotation on the Bloch
sphere can be constructed using an Euler angle decomposition.
\begin{figure}
\center
\includegraphics[scale=0.65]{eps_seq.eps}
\caption{A schematic depiction of the pulse sequence element in
(\ref{Uk_prop}). The pulse sequence element consists of six
individual rotations which can be produced using the controls $u(t)$
and $v(t)$ as explained in the text. \label{fig:eps_seq}}
\end{figure}
\subsection{Rotation About $y$ axis}
In a time interval $dt$, we can use the controls to generate
rotations $\exp(\epsilon u_0 dt \Omega_y)$ and $\exp(\epsilon v_0 dt
\Omega_x)$ where $u_0$ and $v_0$ are constants to be specified.
Using this idea, consider generating the rotation
\begin{equation}
U_k = U_{1k}U_{2k} \label{Uk_prop}
\end{equation}
with
\begin{eqnarray}
U_{1k} &=& \exp(-\pi k\epsilon \Omega_x) \exp(\frac{1}{2} \epsilon
\beta_k
\Omega_y) \exp(\pi k \epsilon \Omega_x) \label{u_1k_e} \\
U_{2k} &=& \exp(\pi k \epsilon \Omega_x) \exp(\frac{1}{2} \epsilon
\beta_k \Omega_y) \exp(-\pi k \epsilon \Omega_x) \label{u_2k_e}
\end{eqnarray}
using the controls $u$ and $v$. Using the relation
\begin{eqnarray}
\exp(\alpha \Omega_x)\exp(\beta \Omega_y) \exp(-\alpha \Omega_x) = \nonumber \\
\exp(\beta(\cos (\alpha) \Omega_y + \sin(\alpha) \Omega_z ))
\end{eqnarray}
the matrices $U_{1k}$ and $U_{2k}$ may be rewritten as
\begin{eqnarray}
U_{1k} &=& \exp(\frac{1}{2} \epsilon \beta_k (\Omega_y \cos(\pi k
\epsilon) -
\Omega_z \sin(\pi k \epsilon))) \\
U_{2k} &=& \exp(\frac{1}{2} \epsilon \beta_k (\Omega_y \cos(\pi k
\epsilon) + \Omega_z \sin(\pi k \epsilon)))
\end{eqnarray}
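One way to see the relation used above: since $[\Omega_x,\Omega_y]=\Omega_z$ and
$[\Omega_x,\Omega_z]=-\Omega_y$, differentiating the map
$\alpha\mapsto\exp(\alpha\Omega_x)\,\Omega_y\,\exp(-\alpha\Omega_x)$ in $\alpha$ shows that
\[
\exp(\alpha\Omega_x)\,\Omega_y\,\exp(-\alpha\Omega_x)=\cos(\alpha)\Omega_y+\sin(\alpha)\Omega_z,
\]
and exponentiating the conjugated generator gives the relation for the full
rotations.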
For small $\beta_k$, we can make the approximation
\begin{equation}
U_{1k}U_{2k} \approx \exp(\epsilon \beta_k \cos(\pi k \epsilon)
\Omega_y) \label{approx}
\end{equation}
In (\ref{approx}), we have expanded the exponentials in $U_{1k}$ and
$U_{2k}$ to first order, performed the multiplication called for in
(\ref{Uk_prop}), and then rewritten the product as (\ref{approx})
keeping terms to first order. In the case when $\beta_k$ is too
large for (\ref{approx}) to represent a good approximation, we
should choose a threshold value $\beta_0$ such that (\ref{approx})
represents a good approximation and so that
\begin{equation}
\beta_k = n \beta_0
\end{equation}
with $n$ an integer. Defining
\begin{eqnarray}
U_{10} &=& \exp(\frac{1}{2} \epsilon \beta_0 (\Omega_y \cos(\pi k
\epsilon) -
\Omega_z \sin(\pi k \epsilon))) \\
U_{20} &=& \exp(\frac{1}{2} \epsilon \beta_0 (\Omega_y \cos(\pi k
\epsilon) + \Omega_z \sin(\pi k \epsilon)))
\end{eqnarray}
we can apply the total propagator
\begin{eqnarray}
[ \ U_{10} U_{20} \ ]^n \label{approx_b01} \\
\approx \left[ \exp(\epsilon
\beta_0 \cos(\pi k \epsilon) \Omega_y) \right]^n \label{approx_b0} \\
= \exp(\epsilon \beta_k \cos(\pi k \epsilon) \Omega_y)
\end{eqnarray}
where we used the approximation (\ref{approx}) in (\ref{approx_b0}).
More will be said about this approximation below.
If we then think about making the incremental rotation $U_k$ for
many different values of $k$, we will get a net rotation
\begin{equation}
U = \prod_k \exp(\epsilon \beta_k \cos(\pi k \epsilon) \Omega_y)
\end{equation}
so long as we keep $\beta_k$ sufficiently small to justify the
approximation (\ref{approx}). The total propagator $U$ for the
Bloch equations can then be rewritten as
\begin{equation}
U = \exp(\epsilon \sum_k \beta_k \cos(\pi k \epsilon) \Omega_y)
\label{propagator}
\end{equation}
If we now choose the coefficients $\beta_k$ so that
\begin{equation}
\sum_k \beta_k \cos(\pi k \epsilon) \approx
\frac{\phi(\epsilon)}{\epsilon}
\end{equation}
then we will have constructed a pulse sequence to approximate a
desired $\epsilon$ dependent rotation around the $y$ axis. Since
$\epsilon$ is bounded away from the origin,
$\phi(\epsilon)/\epsilon$ is everywhere finite, and we can
approximate it with a Fourier Series.
\subsubsection{Remark About the Approximation}
Here we consider the error introduced by the approximation
(\ref{approx_b0}). Define the error $E(Z,V)$ when a unitary matrix
$V$ is implemented instead of a desired unitary matrix $Z$ by
\begin{equation}
E(Z,V) = \max_{\|x\|=1} \| (Z-V)x \|
\end{equation}
With these identifications, in (\ref{approx_b01}) we have
\begin{eqnarray}
V & = & U_{10}U_{20} \nonumber \\
& = & I + \epsilon \frac{\beta_k}{n} \cos(\pi k \epsilon) \Omega_y + M_1(n)
\end{eqnarray}
and
\begin{eqnarray}
Z & = & \exp(\epsilon \beta_0 \cos(\pi k \epsilon) \Omega_y) \nonumber \\
& = & I + \epsilon \frac{\beta_k}{n} \cos(\pi k \epsilon) \Omega_y + M_2(n)
\end{eqnarray}
where $M_1(n)$ and $M_2(n)$ are matrices with finite entries and of
maximum order $\frac{1}{n^2}$. Notice this implies that the
difference $(Z-V)$ is of order $\frac{1}{n^2}$. As defined
previously, $\beta_k = n \beta_0$. A well-known result (see for
example \cite{Ike}) says that the maximum error introduced by
implementing (multiplying) the product of $n$ of the $V$ matrices
instead of implementing $n$ of the $Z$ matrices is at most the sum of
the individual errors. Thus, the total error $E_{\mathrm{total}}$
satisfies
\begin{eqnarray}
E_{\mathrm{total}} & \leq & n E(Z,V) \\
& \sim & \frac{1}{n}
\end{eqnarray}
and thus, by making $n$ sufficiently large, we may decrease the
total error introduced in the above method to an arbitrarily small
value. For the simulations done in this paper, we find a value
$\beta_0 \leq 30^{\circ}$ produces good results.
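The $O(1/n)$ scaling can also be checked numerically. The following
self-contained C program is an illustrative sketch; the sample values of
$k$, $\epsilon$ and $\beta_k$ used below are arbitrary choices of ours.
It builds $U_{10}$ and $U_{20}$ from elementary rotations, forms
$[U_{10}U_{20}]^{n}$, and compares the result with the target rotation
$\exp(\epsilon\beta_k\cos(\pi k\epsilon)\Omega_y)$; the reported maximum
entry error decreases roughly like $1/n$:
\begin{verbatim}
/* Sketch: numerical check of the O(1/n) error bound. */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef double mat[3][3];

static void rotx(double t, mat r)   /* exp(t * Omega_x) */
{
    mat s = {{1,0,0},{0,cos(t),-sin(t)},{0,sin(t),cos(t)}};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) r[i][j] = s[i][j];
}
static void roty(double t, mat r)   /* exp(t * Omega_y) */
{
    mat s = {{cos(t),0,sin(t)},{0,1,0},{-sin(t),0,cos(t)}};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) r[i][j] = s[i][j];
}
static void mul(mat a, mat b, mat c)  /* c = a*b (aliasing safe) */
{
    mat t;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            t[i][j] = 0.0;
            for (int k = 0; k < 3; k++) t[i][j] += a[i][k] * b[k][j];
        }
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) c[i][j] = t[i][j];
}

int main(void)
{
    double eps = 0.7, betak = 2.0, kk = 3.0;  /* sample parameters */
    for (int n = 1; n <= 64; n *= 4) {
        double b0 = betak / n;
        mat a, b, u1, u2, v, z;
        mat p = {{1,0,0},{0,1,0},{0,0,1}};
        rotx(-M_PI*kk*eps, a);  roty(0.5*eps*b0, b);
        mul(a, b, u1);  rotx(M_PI*kk*eps, a);   mul(u1, a, u1); /* U_10 */
        mul(a, b, u2);  rotx(-M_PI*kk*eps, a);  mul(u2, a, u2); /* U_20 */
        mul(u1, u2, v);
        for (int i = 0; i < n; i++) mul(p, v, p);  /* [U_10 U_20]^n */
        roty(eps*betak*cos(M_PI*kk*eps), z);       /* exact target  */
        double err = 0.0;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                err = fmax(err, fabs(p[i][j] - z[i][j]));
        printf("n = %2d   max entry error = %g\n", n, err);
    }
    return 0;
}
\end{verbatim}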
\subsection{Rotation About $x$ axis}
An analogous derivation can be made for rotations about the $x$ axis
of the Bloch sphere. Replacing (\ref{u_1k_e}) and (\ref{u_2k_e})
with
\begin{eqnarray}
U_{1k} &=& \exp(-\pi k\epsilon \Omega_y) \exp(\frac{1}{2} \epsilon
\beta_k \Omega_x)
\exp(\pi k \epsilon \Omega_y) \\
U_{2k} &=& \exp(\pi k \epsilon \Omega_y) \exp(\frac{1}{2} \epsilon
\beta_k \Omega_x) \exp(-\pi k \epsilon \Omega_y)
\end{eqnarray}
and following an analogous procedure leads to an approximate net
propagator
\begin{equation}
U = \exp(\epsilon \sum_k \beta_k \cos(\pi k \epsilon) \Omega_x)
\end{equation}
The coefficients $\beta_k$ may be chosen to approximate
$\phi(\epsilon) / \epsilon$, and thus we can approximately produce a
desired $\epsilon$ dependent rotation about the $x$ axis of the
Bloch sphere. Since an arbitrary rotation on the Bloch sphere may
be decomposed in terms of Euler angles, the methods presented can be
used to approximately synthesize any evolution on the Bloch sphere.
\subsection{Choosing the Coefficients $\beta_k$}
Suppose we wish to design a pulse with a uniform net rotation angle
of $\phi$ around either the $x$ or $y$ axis of the Bloch sphere
using the previously discussed algorithm. We focus on the case of a
uniform rotation (independent of $\epsilon$) because this is the
most useful pulse sequence in NMR. It is straightforward to
incorporate an $\epsilon$ dependent rotation $\phi(\epsilon)$ into
everything that follows. We face the problem of choosing $\beta_k$
and $k$ so that
\begin{equation}
\sum_k \beta_k \cos(\pi k \epsilon) \approx f(\epsilon), \ \
1-\delta \leq \epsilon \leq 1
\end{equation}
where
\begin{equation}
f(\epsilon) = \frac{\phi}{\epsilon}, \ \ 1-\delta \leq \epsilon \leq
1
\end{equation}
Since we only have $\cos(\pi k \epsilon)$ terms in the series, we
will first extend $f(\epsilon)$ to have even symmetry about
$\epsilon = 0$. To do this, we define $g(\epsilon)$ to be
\begin{equation}
g(\epsilon) = \left\{ \begin{array} {r@{,\quad}l}
f(\epsilon) & 1-\delta \leq \epsilon \leq 1 \\
f(1-\delta) & -(1-\delta) \leq \epsilon \leq 1-\delta \\
f(-\epsilon) & -1 \leq \epsilon \leq -(1-\delta)
\end{array} \right.
\label{norelaxsol}
\end{equation}
and now consider choosing $\beta_k$ and $k$ so that
\begin{equation}
\sum_k \beta_k \cos(\pi k \epsilon) \approx g(\epsilon), \ \ -1 \leq
\epsilon \leq 1
\end{equation}
A natural choice is to choose $k$ as nonnegative integers, in which
case $\beta_k$ may be computed using the orthogonality relation
\begin{equation}
\int_{-1}^{1} \cos(\pi k \epsilon) \cos(\pi k^\prime \epsilon)
d\epsilon = \delta_{kk^\prime}, \ \ k \neq 0
\end{equation}
where $\delta_{k k^\prime}$ is the Kronecker delta. We find for the
coefficients
\[
\beta_k = \int_{-1}^{1} \cos(\pi k \epsilon) g(\epsilon) \ d
\epsilon , \ \ k \neq 0 \] \[ \beta_0 = \frac{1}{2} \int_{-1}^{1}
g(\epsilon) \ d \epsilon, \ \ k = 0
\]
The number of terms kept in the series is decided by the pulse
designer.
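The integrals above are straightforward to evaluate numerically. As an
illustrative sketch (the midpoint rule, the grid size, and the number of
retained terms below are our own choices, not part of the method itself),
the following self-contained C program prints the coefficients $\beta_k$
for the target function of Figure~\ref{fig:design}, i.e., $\phi=\pi/2$
and rf inhomogeneity range $0.1\leq\epsilon\leq1$ ($\delta=0.9$):
\begin{verbatim}
/* Sketch: compute beta_k = int_{-1}^{1} cos(pi k eps) g(eps) d eps
   (with the factor 1/2 for k = 0) by a midpoint rule.            */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define PHI   (M_PI / 2.0)  /* desired rotation angle           */
#define DELTA 0.9           /* epsilon ranges over [0.1, 1]     */
#define NTERM 5             /* number of series terms retained  */
#define NPTS  200000        /* integration grid points          */

/* even extension g(eps) of f(eps) = PHI/eps, as defined above */
static double g(double eps)
{
    double a = fabs(eps);
    return (a >= 1.0 - DELTA) ? PHI / a : PHI / (1.0 - DELTA);
}

int main(void)
{
    double h = 2.0 / NPTS;
    for (int k = 0; k < NTERM; k++) {
        double sum = 0.0;
        for (int i = 0; i < NPTS; i++) {
            double eps = -1.0 + (i + 0.5) * h;  /* midpoint */
            sum += cos(M_PI * k * eps) * g(eps) * h;
        }
        printf("beta_%d = %f\n", k, k == 0 ? 0.5 * sum : sum);
    }
    return 0;
}
\end{verbatim}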
\begin{figure}
\center
\includegraphics[scale=0.5]{figure1.eps}
\caption{An example design to approximate $g(\epsilon)$ over the
range $-1 \leq \epsilon \leq 1$ for the rotation angle $\phi = \pi /
2$. Five terms in the series expansion were retained. The relevant
range of rf inhomogeneity is $0.1 \leq \epsilon \leq 1$.
\label{fig:design}}
\end{figure}
A sufficient number of terms should be retained so that the error
across the relevant range of $\epsilon$ does not exceed some
acceptable value. Figure \ref{fig:design} depicts an example design
using a series with five terms. The region of interest for
$f(\epsilon)$ is $0.1 \leq \epsilon \leq 1$. In this region, we see
relatively small errors. We now give two examples to demonstrate the
usefulness of the algorithm.
\subsection{Simulations}
\subsubsection{$\frac{\pi}{2}$ pulse around $y$ axis} Suppose we wish
to design a $\frac{\pi}{2}$ pulse around the $y$ axis and we want to
consider rf inhomogeneity in the range $0.1 \leq \epsilon \leq 1$.
Then we should consider
\begin{equation}
f(\epsilon) = \frac{\pi}{2 \epsilon}, \ \ 0.1 \leq \epsilon \leq 1
\end{equation}
Figure \ref{fig:design1} shows the results of the designed pulse
sequence acting on the initial state $M(0) = [0 \ 0 \ 1]^T$ while
keeping five terms in the series expansion.
\begin{figure}
\center
\includegraphics[scale=0.5]{figure2.eps}
\caption{Results of pulse sequence designed to produce uniform
$\pi/2$ rotation about $y$ axis. The sequence was applied to the
initial state $M(0) = [0 \ 0 \ 1]^T$, and the plot shows the final
state as a function of $\epsilon$ after propagating the Bloch
equations. \label{fig:design1}}
\end{figure}
We see that the resulting pulse sequence reliably produces a net
evolution $\exp(\frac{\pi}{2}\Omega_y)$ across the entire range of
$\epsilon$ values. Figure \ref{fig:CDC_nieve_control_mod} shows the
results of applying $u(t) = \frac{\pi}{2}$ for one unit of time to
the system (\ref{bloch_eps}). This approach corresponds to assuming
the dispersion parameter $\epsilon$ is fixed at a nominal value
$\epsilon = 1$, so that every system sees the same control signals
$u(t)$ and $v(t)$. Systems corresponding to values $\epsilon \neq
1$ exhibit deteriorated performance as demonstrated in Figure
\ref{fig:CDC_nieve_control_mod}.
\begin{figure}
\center
\includegraphics[scale=0.5]{figure3.eps}
\caption{Results of applying $u(t) = \frac{\pi}{2}$ for one unit of
time to the initial state $M(0) = [0 \ 0 \ 1]^T$. The system
corresponding to the value $\epsilon = 1$ for the dispersion
parameter experiences a $\frac{\pi}{2}$ rotation about the $y$ axis
of the Bloch sphere, but systems corresponding to other values of
$\epsilon$ show deteriorated performance.
\label{fig:CDC_nieve_control_mod}}
\end{figure}
\subsubsection{$\pi$ pulse} As a second example, suppose we wish
to design a $\pi$ pulse around the $x$ axis and we want to consider
rf inhomogeneity in the range $0.5 \leq \epsilon \leq 1$. Then we
should consider
\begin{equation}
f(\epsilon) = \frac{\pi}{\epsilon}, \ \ 0.5 \leq \epsilon \leq 1
\end{equation}
Figure \ref{fig:design4} shows the results of the designed pulse
sequence acting on the initial state $M(0) = [0 \ 1 \ 0]^T$ while
keeping nine terms in the series expansion.
\begin{figure}
\center
\includegraphics[scale=0.5]{figure4.eps}
\caption{Results of pulse sequence designed to produce uniform $\pi$
rotation about $x$ axis. The sequence was applied to the initial
state $M(0) = [0 \ 1 \ 0]^T$, and the plot shows the final state as
a function of $\epsilon$ after propagating the Bloch equations.
\label{fig:design4}}
\end{figure}
We see that the resulting pulse sequence reliably produces a net
evolution $\exp(\pi \Omega_x)$ across the entire range of $\epsilon$
values.
It should be noted that although we consider design examples where
we wish to produce a uniform rotation that is independent of the
parameter $\epsilon$, the method presented in the paper can also be
used to design control laws which prepare the final state as a
function of the parameter $\epsilon$. We consider examples to
produce a uniform rotation, independent of $\epsilon$, because this
is the most useful application in NMR.
\section{Design Method for Position Dependent Rotations}
Now consider the Bloch equations with no rf inhomogeneity and with a
linear gradient
\begin{equation}
\frac{d}{dt}\left[\begin{array}{c} M_x \\
M_y \\
M_z \end{array}\right] = (G(t) s \Omega_z + u(t) \Omega_y + v(t) \Omega_x) \left[\begin{array}{c} M_x \\
M_y \\
M_z \end{array}\right]
\end{equation}
where $u(t) \in \Re$, $v(t) \in \Re$, and $G(t) \in \Re$ are time
dependent control amplitudes we may specify, and $s \in [0, 1]$ can
be thought of as a dispersion parameter. As previously discussed,
$s$ represents the spatial position of the spin system in the sample
under study. The goal is to engineer a control law that will effect
a net position dependent rotation, so that the final state is
prepared as a function of $s$. A common example in NMR imaging is a
so-called slice selective sequence, whereby the controls should
selectively perform a $\frac{\pi}{2}$ rotation on some range of $s$
values, while performing no net rotation on $s$ values falling
outside of that range.
Using the controls, consider generating the evolution
\begin{equation}
U_k = U_{1k}U_{2k}
\end{equation}
with
\begin{eqnarray}
U_{1k} &=& \exp(\pi ks \Omega_z) \exp(\frac{1}{2}\beta_k \Omega_y)
\exp(-\pi ks \Omega_z) \label{u_1k_s} \\
U_{2k} &=& \exp(-\pi ks \Omega_z) \exp(\frac{1}{2} \beta_k \Omega_y)
\exp(\pi ks \Omega_z) \label{u_2k_s}
\end{eqnarray}
The matrices $U_{1k}$ and $U_{2k}$ may be rewritten as
\begin{eqnarray}
U_{1k} &=& \exp(\frac{1}{2} \beta_k(\cos(\pi ks) \Omega_y - \sin(\pi ks) \Omega_x)) \\
U_{2k} &=& \exp(\frac{1}{2} \beta_k(\cos(\pi ks) \Omega_y + \sin(\pi
ks) \Omega_x))
\end{eqnarray}
Again performing a first order analysis on the exponentials as was
done in the previous section, we can make the approximation
\begin{equation}
U_{1k}U_{2k} \approx \exp(\beta_k \cos(\pi ks) \Omega_y)
\end{equation}
If we then think about making the rotation $U_k$ for different
values of $k$ and $\beta_k$, we will get the net propagator
\begin{equation}
U = \prod_k \exp(\beta_k \cos(\pi k s) \Omega_y) \label{prop_x_y}
\end{equation}
within the approximation previously discussed. The propagator
(\ref{prop_x_y}) may be rewritten as
\begin{equation}
U = \exp(\sum_k \beta_k \cos(\pi ks) \Omega_y)
\end{equation}
Choosing $k$ as the nonnegative integers, and choosing the $\beta_k$
so that
\begin{equation}
\sum_k \beta_k \cos(\pi ks) \approx \phi(s)
\end{equation}
where $\phi(s)$ is the desired position $s$ dependent rotation angle
results in a net rotation around the $y$ axis of the Bloch sphere
with the desired dependence on the parameter $s$.
An analogous procedure may be used to generate an $s$ dependent
rotation around the $x$ axis of the Bloch sphere. Replacing
(\ref{u_1k_s}) and (\ref{u_2k_s}) with
\begin{eqnarray}
U_{1k} &=& \exp(-\pi ks \Omega_z) \exp(\frac{1}{2} \beta_k \Omega_x)
\exp(\pi ks
\Omega_z) \\
U_{2k} &=& \exp(\pi ks \Omega_z) \exp(\frac{1}{2} \beta_k \Omega_x)
\exp(-\pi ks \Omega_z)
\end{eqnarray}
and following a similar procedure, we can approximately produce the
total propagator
\begin{equation}
U = \exp(\sum_k \beta_k \cos(\pi ks) \Omega_x)
\end{equation}
Choosing $k$ as the nonnegative integers, and choosing the $\beta_k$
appropriately results in a net rotation around the $x$ axis of the
Bloch sphere with the desired dependence on the parameter $s$. Since
any rotation can be decomposed in terms of Euler angles, we may use
the methods just discussed to approximately produce any position $s$
dependent rotation on the Bloch sphere.
\subsection{Design Example}
As an example design using the procedure just discussed, consider a
slice selective pulse sequence, where we wish to excite a certain
range of $s$ values while leaving systems with $s$ values falling
outside of that range unaffected at the end of the sequence.
\begin{equation}
\phi(s) = \left\{ \begin{array} {r@{,\quad}l}
\frac{\pi}{2} & 0.5 \leq s \leq 0.75 \\
0 & \mathrm{otherwise}
\end{array} \right.
\label{sliceprofile}
\end{equation}
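For this profile the coefficients can be computed in closed form.
Assuming the cosine-series normalization on $[0,1]$ (our convention
here), namely $\beta_{0}=\int_{0}^{1}\phi(s)\,ds$ and
$\beta_{k}=2\int_{0}^{1}\phi(s)\cos(\pi ks)\,ds$ for $k\geq1$, a short
calculation gives
\[
\beta_{0}=\frac{\pi}{8},\qquad
\beta_{k}=\frac{1}{k}\Bigl(\sin\frac{3\pi k}{4}-\sin\frac{\pi k}{2}\Bigr),
\quad k\geq1.
\]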
Figure \ref{fig:xfigure} shows the results of a pulse sequence
designed using the procedure described in the text while keeping 30
terms in the series.
\begin{figure}
\center
\includegraphics[scale=0.5]{figure5.eps}
\caption{Results of pulse sequence designed to produce uniform
$\frac{\pi}{2}$ rotation about $y$ axis over the range $0.5 \leq s
\leq 0.75$. The sequence was applied to the initial state $M(0) = [0
\ 0 \ 1]^T$, and the plot shows the final state as a function of $s$
after propagating the Bloch equations. \label{fig:xfigure}}
\end{figure}
The ripples appearing in Figure \ref{fig:xfigure} result from the
ripples in the approximation of the sharp slice selective profile
using a Fourier Series. One method used to overcome this in
practice is to allow for a ramp between the $0$ and $\frac{\pi}{2}$
level on the slice.
\section{Control Laws Involving Position and rf Inhomogeneity}
We now come back to the problem of considering the full version of
the Bloch equations (\ref{chap32_bloch}) including the two
dispersion parameters $s$ and $\epsilon$. Rewriting
(\ref{chap32_bloch}) in terms of the generators of rotation we have
\begin{equation}
\frac{d}{dt}\left[\begin{array}{c} M_x \\
M_y \\
M_z \end{array}\right] = (G(t) s \Omega_z + \epsilon u(t) \Omega_y + \epsilon v(t) \Omega_x) \left[\begin{array}{c} M_x \\
M_y \\
M_z \end{array}\right] \label{bloch_xe}
\end{equation}
where the state vector is now a function of both parameters $s$ and
$\epsilon$. The control task is to choose $u(t) \in \Re$, $v(t) \in
\Re$, and $G(t) \in \Re$ to effect a desired rotation
$\phi(s,\epsilon)$. Proceeding along the lines of the previous two
sections, consider generating the propagators
\begin{eqnarray}
U_{1k} &=& \exp(\pi k_1 s \Omega_z) \exp(\frac{1}{4} \epsilon
\beta_k
\Omega_y) \exp(-\pi k_1 s \Omega_z) \nonumber \\
&=& \exp(\frac{1}{4} \epsilon \beta_k (\cos(\pi k_1 s)\Omega_y - \sin(\pi k_1 s)\Omega_x))
\end{eqnarray}
and
\begin{eqnarray}
U_{2k} &=& \exp(-\pi k_1 s \Omega_z) \exp(\frac{1}{4} \epsilon
\beta_k
\Omega_y)\exp(\pi k_1 s \Omega_z) \nonumber \\
&=& \exp(\frac{1}{4} \epsilon \beta_k (\cos(\pi k_1 s)\Omega_y + \sin(\pi k_1 s)\Omega_x))
\end{eqnarray}
Within a first order approximation for the exponentials, we have the
approximate total propagator
\begin{equation}
U_{1k}U_{2k} \approx \exp(\frac{1}{2} \epsilon \beta_k \cos(\pi k_1
s) \Omega_y)
\end{equation}
Building on this, we can produce the propagator
\begin{eqnarray}
U_{3k} &=& \exp(\pi k_2 \epsilon \Omega_x) U_{1k}U_{2k} \exp(-\pi
k_2 \epsilon
\Omega_x) \nonumber \\
& \approx & \exp(\frac{1}{2} \epsilon \beta_k \cos(\pi k_1 s) (\cos(\pi k_2 \epsilon) \Omega_y + \sin(\pi k_2 \epsilon)
\Omega_z)) \nonumber
\end{eqnarray}
Similarly, we can produce
\begin{eqnarray}
U_{4k} &=& \exp(-\pi k_2 \epsilon \Omega_x) U_{1k}U_{2k} \exp(\pi
k_2 \epsilon
\Omega_x) \nonumber \\
& \approx & \exp(\frac{1}{2} \epsilon \beta_k \cos(\pi k_1 s) (\cos(\pi k_2 \epsilon) \Omega_y - \sin(\pi k_2 \epsilon)
\Omega_z)) \nonumber
\end{eqnarray}
so that we can approximately produce the total propagator
\begin{eqnarray}
U_k &=& U_{3k}U_{4k} \nonumber \\
& \approx & \exp(\epsilon \beta_k
\cos(\pi k_1 s) \cos(\pi k_2 \epsilon) \Omega_y)
\end{eqnarray}
within the approximation for the exponentials. We can use the
method previously discussed in the case when $\beta_k$ is too large
for the approximation to be valid. Producing the propagator $U_k$
for different values of $k_1$, $k_2$, and $\beta_k$ results in the
net propagator
\begin{equation}
U = \prod_{\{k_1,k_2\}} \exp(\epsilon \beta_k \cos(\pi k_1 s)
\cos(\pi k_2 \epsilon) \Omega_y)
\end{equation}
A choice of $k_1$, $k_2$, and $\beta_k$ so that
\begin{equation}
\sum_{ \{k_1,k_2 \}} \beta_k \cos(\pi k_1 s) \cos(\pi k_2 \epsilon)
\approx \frac{\phi(s,\epsilon)}{\epsilon}
\end{equation}
where $\phi(s,\epsilon)$ is the desired position $s$ and rf
inhomogeneity parameter $\epsilon$ dependent rotation angle, results
in an approximate desired evolution for the Bloch equations
(\ref{bloch_xe}).
Analogous arguments show we may approximately produce a rotation
around the $x$ axis of the Bloch sphere
\begin{equation}
U_k = \exp(\epsilon \beta_k \cos(\pi k_1 s) \cos(\pi k_2
\epsilon)\Omega_x)
\end{equation}
and may thus approximately generate a net propagator
\begin{equation}
U = \prod_{ \{k_1,k_2 \}} \exp(\epsilon \beta_k \cos(\pi k_1 s)
\cos(\pi k_2 \epsilon) \Omega_x)
\end{equation}
and thus approximately produce a desired position $s$ and rf
inhomogeneity parameter $\epsilon$ rotation around the $x$ axis of
the Bloch sphere. Since any rotation on the unit sphere can be
decomposed in terms of Euler angles, an arbitrary $(s, \epsilon)$
dependent rotation can be approximately produced using these
methods.
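As in the one-dimensional constructions, one natural way to obtain such
coefficients is a two-dimensional cosine expansion: extending
$\phi(s,\epsilon)/\epsilon$ evenly in both variables, one may take, up
to the normalization conventions used earlier,
\[
\beta_{k}\;\propto\;\int_{-1}^{1}\!\int_{-1}^{1}
\frac{\phi(s,\epsilon)}{\epsilon}\,
\cos(\pi k_{1}s)\cos(\pi k_{2}\epsilon)\,ds\,d\epsilon
\]
for the pair $\{k_{1},k_{2}\}$. (This is one convenient choice of ours;
any complete expansion in the products
$\cos(\pi k_{1}s)\cos(\pi k_{2}\epsilon)$ will do.)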
\section{Conclusions}
In this paper we have provided new methods to design control laws
for the Bloch equations when certain dispersion parameters are
present in the system dynamics. These methods are of utmost
practical importance in the fields of NMR spectroscopy and NMR
imaging, and can be implemented immediately in many well-known
experiments. The methods presented in the paper allow the design of
compensating control laws (pulse sequences) that correct for
dispersions in the system dynamics, while providing the control law
designer with a clear tradeoff between the total time required for the
sequence and the amplitude of the available controls.
\section{Appendix}
This appendix provides simple, illustrative examples of the use of \tool{VerifOx} and \tool{hw-cbmc}, as well as further technical details.
\renewcommand{\topfraction}{0.9}
\renewcommand{\bottomfraction}{0.9}
\renewcommand{\textfraction}{0.0}
\subsection*{Worked Example of \tool{VerifOx}}
\lstdefinestyle{base}{
language=C,
emptylines=1,
breaklines=true,
basicstyle=\ttfamily\color{black},
moredelim=**[is][\color{red}]{@}{@},
}
Figure~\ref{fig:example} demonstrates the working of \tool{VerifOx} as a
property verifier in the absence of a reference model. Note that
equivalence checking is a special case of property verification
where the property is replaced by a reference model. Hence, \tool{VerifOx}
can be configured as a property verifier or as an equivalence checker.
Let us consider a software model as shown in column~1
in Figure~\ref{fig:example}. The program implements a high-level
power management strategy to orchestrate various modules, such as,
{\em core}, {\em memory} etc. Depending on the interrupt
status ($env$), power modes ($mode$) and power gated logic ($power\_gated$),
the call to {\em core} or {\em memory} is made. These units are
complex implementations of a processor core or a memory unit.
\begin{figure}[!b]
\scriptsize
\begin{tabular}{l|l|l}
\hline
Software Model (in C) & Sliced program (wrt.
property) & Pruned program (wrt. assumptions) \\
\hline
\begin{lstlisting}[mathescape=true,language=C]
#define threshold 15
if(reset) {
mode=TURN_OFF;
feedback=0;
} else {
// Trigger IP if env is set
if(env) {
// check the voltage level
if(voltage_level < threshold)
power_gated = 1;
else power_gated = 0;
// check the low-power modes
if(mode == STAND_BY ||
mode == TURN_OFF) {
// power gated logic,
// call to core
if(power_gated) {
core(reset,mode,power_gated,
ser_in,&buf_out);
feedback = LOW;
}
else { // normal logic
core(reset,mode,power_gated,
ser_in,&buf_out);
feedback = buf_out; }}
}
else
// call to memory
memory(size);
}
\end{lstlisting}
&
\begin{lstlisting}[mathescape=true,language=C]
#define threshold 15
if(reset) {
mode=TURN_OFF;
feedback=0;
} else {
// Trigger IP if env is set
if(env) {
// check the voltage level
if(voltage_level < threshold)
power_gated = 1;
else power_gated = 0;
// check the low-power modes
if(mode == STAND_BY ||
mode == TURN_OFF) {
// power gated logic,
// call to core
if(power_gated) {
core(reset,mode,power_gated,
ser_in,&buf_out);
feedback = LOW;
}
else { // normal logic
core(reset,mode,power_gated,
ser_in,&buf_out);
feedback = buf_out;
}
}}
}
$\begin{array}{l@{}c@{}l}
Assertion: (!(rst!=0) || (feedback>0))
\end{array}$
\end{lstlisting}
&
\begin{lstlisting}[mathescape=true,language=C]
$\begin{array}{l@{}c@{}l}
Assumption:(reset == 0) \wedge (env == 1) \\
\wedge (mode == STAND\_BY) \\
\wedge (voltage\_level == 10)
\end{array}$
#define threshold 15
// non-reset logic
else {
if(env) {
// check the voltage level
if(voltage_level < threshold)
power_gated = 1;
// check the low-power modes
if(mode == STAND_BY ||
mode == TURN_OFF) {
// power gated logic,
// call to core
if(power_gated) {
core(reset,mode,power_gated,
ser_in,&buf_out);
feedback = LOW;
}
}
}
}
$\begin{array}{l@{}c@{}l}
Assertion: (!(rst!=0) || (feedback>0))
\end{array}$
\end{lstlisting}
\\
\hline
\end{tabular}
\caption{Example demonstrating automated slicing and path pruning in \tool{VerifOx}}
\label{fig:example}
\end{figure}
State-of-the-art verification tools may not be able to verify the whole system
due to resource limitations. Therefore, it is a common practice to write
additional constraints, also known as {\em assumptions}, that exercise only
a fragment of the entire state-space. The verification engine can use these
assumptions to partition the state-space, thus decomposing a hard proof
into simpler sub-proofs.
Column~2 presents the result of property-driven
slicing on the input program. This step is purely syntactic, meaning that
we perform a backward dependency analysis~\cite{icse81} starting from the
property, which preserves only those program fragments that are relevant to
the given property. The sliced program is then passed to the symbolic
execution engine that performs eager infeasibility based path-pruning. The
result of infeasible path pruning based on assumption is shown in column~3.
This step is semantic because \tool{VerifOx} determines the feasibility of paths
in the sliced program in an eager fashion with respect to the user-provided
assumptions using satisfiability queries.
An important point to note here
is that the number of path constraints after slicing and infeasible path
pruning is significantly smaller than for the initial program.
Additionally, these per-path constraints are much easier to solve compared
to a monolithic formula generated from a BMC-style symbolic execution tool.
\medskip\noindent\textbf{\emph{Command to run \tool{VerifOx}.}}
Below are the commands to run \tool{VerifOx} in partial or full incremental mode.
When \tool{VerifOx} is used as an equivalence checker, the input file is usually a
miter in C which must include both the reference model and the implementation model.
However, in the absence of a reference model, one can write assertions inside the
software model to configure \tool{VerifOx} as a property verifier. The command line
switch \texttt{--unwind} is used to specify the unwind depth for the software model.
To use the SMT backend with \tool{VerifOx}, the command line switch is
\texttt{--smt2}, followed by the name of the SMT solver, for example
\texttt{--z3}. Note that the SMT solver must be installed in the
system. The switch \texttt{--help} shows the available command line options
for using \tool{VerifOx}.
\begin{small}
\begin{verbatim}
// partial incremental mode with SAT
verifox-pi filename.c --unwind N
// full incremental mode with SAT
verifox-fi filename.c --unwind N
// partial incremental mode with SMT
verifox-pi filename.c --smt2 --z3
\end{verbatim}
\end{small}
\begin{figure}[t]
\scriptsize
\begin{tabular}{l|l|l}
\hline
C Program & Verilog RTL & Miter \\
\hline
\begin{lstlisting}[mathescape=true,language=C]
struct st_up_counter{
unsigned char out;
};
struct st_up_counter
sup_counter;
void upcounter(unsigned char *out,
_Bool enable, _Bool clk,
_Bool reset)
{
unsigned char out_old;
out_old = sup_counter.out;
if(reset)
{
sup_counter.out = 0;
}
else if(enable)
{
sup_counter.out =
out_old + 1;
}
}
\end{lstlisting}
&
\begin{lstlisting}[mathescape=true,language=C]
module up_counter(out,
enable,clk, reset);
output [7:0] out;
input enable, clk, reset;
reg [7:0] out;
always @(posedge clk)
if(reset)
begin
out<=8'b0;
end
else if(enable)
begin
out<=out+1;
end
endmodule
\end{lstlisting}
&
\begin{lstlisting}[mathescape=true,language=C]
typedef unsigned char _u8;
struct module_up_counter {
_u8 out;
_Bool enable;
_Bool clk;
_Bool reset;
};
extern struct
module_up_counter up_counter;
int main()
{
// Inputs of C program
_Bool enable;
_Bool clk;
_Bool reset;
unsigned char out;
// reset the design
// call to C function
upcounter(&out, 0, clk, 1);
// set Verilog inputs
up_counter.enable = 0;
up_counter.reset = 1;
set_inputs();
next_timeframe();
assert(up_counter.out
== sup_counter.out);
while(1) {
// Start counting, set
// enable = 1 and reset = 0
up_counter.reset = 0;
up_counter.enable = 1;
set_inputs();
next_timeframe();
upcounter(&out, 1, clk, 0);
assert(up_counter.out
== sup_counter.out);
}
}
\end{lstlisting}
\\
\hline
\end{tabular}
\caption{Example of equivalence checking using \tool{hw-cbmc}}
\label{fig:hwcbmc-example}
\end{figure}
\subsection*{Worked Example of \tool{hw-cbmc}}
Figure~\ref{fig:hwcbmc-example} demonstrates the working of \tool{hw-cbmc} as a
C-RTL equivalence checker. Columns~1--3 present a C model of an
up-counter, an RTL model of the same device, and a miter that feeds the same inputs to
the C and RTL models and asserts equivalence of their outputs.
\tool{hw-cbmc} can be configured in {\em bit-level} or {\em word-level} mode. In bit-level
mode, the input models are synthesized to
And Inverter Graphs (AIG)\footnote{http://fmv.jku.at/aiger/}
and then passed to the SAT solver. In word-level mode,
the input models are synthesized into an intermediate word-level format, which is then
dispatched to a word-level SMT solver.
\medskip\noindent\textbf{\emph{Command to run \tool{hw-cbmc}.}}
Shown below are the commands to configure \tool{hw-cbmc} in bit-level or word-level mode.
The first command using \texttt{--gen-interface} is used to generate the interface
for the hardware modules automatically. These interface signals are required to
construct the miter as shown in column~3 of Figure~\ref{fig:hwcbmc-example}.
Note that the \texttt{<VERILOG-FILE-NAME>} can be specified as a $(.v)$ or $(.sv)$
file, where $(.v)$ is the extension for Verilog files and $(.sv)$ is the extension
for SystemVerilog files. We assume that the \texttt{<MITER-FILE-NAME>}
includes the reference model in C and implements the miter.
Note that \tool{hw-cbmc} expects the reference model and the miter
implementation to be C programs. The command line switch \texttt{--aig} instructs the tool
to operate in bit-level mode. Without this option, the default operating mode
in \tool{hw-cbmc} is word-level mode. The switches \texttt{--bound} and \texttt{--unwind}
are used to specify the unwind depths for the hardware and software transition systems,
respectively. The switch \texttt{--module} specifies the name of the top-level
module in the Verilog design file. \tool{hw-cbmc} also provides an option,
\texttt{--vcd}, to dump counterexamples in Value Change Dump ($vcd$) format
in case of assertion failure; these can be analyzed for debugging purposes using a waveform viewer
such as \texttt{gtkwave}.\footnote{http://gtkwave.sourceforge.net} The switch
\texttt{--help} shows the available command line options for using \tool{hw-cbmc}.
\begin{small}
\begin{verbatim}
// generate interface
hw-cbmc <VERILOG-FILE-NAME> --module <TOP-MODULE> --gen-interface
// bit-level mode
hw-cbmc <VERILOG-FILE-NAME> <MITER-FILE-NAME> --module <TOP-MODULE>
--bound N --unwind M --aig --vcd <VCD-FILE-NAME>
// word-level mode
hw-cbmc <VERILOG-FILE-NAME> <MITER-FILE-NAME> --module <TOP-MODULE>
--bound N --unwind M --vcd <VCD-FILE-NAME>
\end{verbatim}
\end{small}
\subsection*{Monolithic and Path-wise Approach to Equivalence Checking}
We investigated the structure of the ARM FPU and dual-path adder examples discussed in the paper
to analyze how the runtimes of the monolithic and path-based equivalence checking approaches,
followed by \tool{hw-cbmc} and \tool{VerifOx} respectively, depend on the design structure.
We observe that the pipelined implementation of the ARM FPU forces \tool{VerifOx} to traverse deep into a particular
path and then backtrack to a much higher level in the symbolic tree due to the
infeasibility of the current path. This causes \tool{VerifOx} to throw away several
path fragments that were considered feasible while going deep into the path, only to be discovered as
infeasible much later, wasting significant computation time.
On the other hand, the dual-path adder contains a state machine that
implements separate cases for the addition of different types of numbers.
This allows \tool{VerifOx} to perform an early infeasibility check and prune most of
the irrelevant logic upfront in the symbolic execution phase using
assumptions. By contrast, the monolithic constraint generated by \tool{hw-cbmc}
for the dual-path FP adder was extremely difficult to solve. In this way,
our experiments give some insight into how the path-based symbolic execution in \tool{VerifOx} and the monolithic
BMC-based approach in \tool{hw-cbmc} are sensitive to the structure of the
original floating-point design.
\lstdefinestyle{base}{
language=C,
emptylines=1,
breaklines=true,
basicstyle=\ttfamily\color{black},
moredelim=**[is][\color{red}]{@}{@},
}
\begin{figure}[t]
\begin{center}
\centering \scriptsize
\begin{tabular}{|l|l|}
\hline
Miter for VERIFOX & Miter for HW-CBMC \\
\hline
\begin{lstlisting}[mathescape=true]
int miter(float f, float g) {
roundmode $rmode;$
$softfloat\_uint64\_t$ $nan\_payload;$
float $sum\_ref, sum\_impl;$
int ROUNDMODE;
switch(ROUNDMODE) {
case 0: { // ROUND TO NEAREST EVEN
$fesetround(FE\_TONEAREST);$
$rmode = 3;$
$break;$
}
case 1: { // ROUND UP
$fesetround(FE\_UPWARD);$
$rmode = 0;$
$break;$
}
case 2: { // ROUND DOWN
$fesetround(FE\_DOWNWARD);$
$rmode = 1;$
$break;$
}
case 3: { // TOWARDZERO
$fesetround(FE\_TOWARDZERO);$
$rmode = 2;$
$break;$
}
}
// Invoke the reference model
$sum\_ref = f + g;$
// Invoke the ARM FPU ADD
$nan\_payload = 0x00080000;$
$sum\_impl = sfadd64(f, g, rmode, nan\_payload);$
// check the output
$assert(compareFloat(sum\_ref, sum\_impl));$
}
\end{lstlisting}
&
\begin{lstlisting}[mathescape=true]
int miter(float f, float g) {
float $C\_result, Verilog\_result;$
roundmode $rmode;$
int $ROUNDMODE;$
$softfloat\_uint64\_t$ $nan\_payload;$
// reset the design
$add64.reset\_n = 0;$
$set\_inputs(); next\_timeframe();$
// pass the inputs to the RTL
$add64.reset\_n = 1;$
$add64.src0 = *(unsigned *)\&f;
add64.src1 = *(unsigned *)\&g;$
$set\_inputs(); next\_timeframe();$
// settings for RTL floating-point addition
$add64.pipe\_ready = 1; add64.valid\_in = 1; add64.lane\_mask = 3;$
switch(ROUNDMODE) {
$case 0:$ { // ROUND TO NEAREST EVEN
$add64.round\_mode = 0;$
$rmode = 3; break;$ }
$case 1:$ { // ROUND UP
$add64.round\_mode = 1;$
$rmode = 0; break;$ }
$case 2:$ { // ROUND DOWN
$add64.round\_mode = 2;$
$rmode = 1; break;$ }
$case 3:$ { // TOWARDZERO
$add64.round\_mode = 3;$
$rmode = 2; break;$ }
}
$set\_inputs(); next\_timeframe();$
$next\_timeframe(); next\_timeframe();$
$next\_timeframe();$
// Invoke the ARM FPU ADD
$nan\_payload = 0x00080000;$
$C\_result$ = $sfadd64(f, g, rmode, nan\_payload);$
// RTL result must be ready here
$Verilog\_result=*(float *)\&add64.res;$
// check the output
$assert(compareFloat(C\_result, Verilog\_result));$
}
\end{lstlisting}
\\
\hline
\end{tabular}
\end{center}
\vspace{-1.7mm}
\caption{Miter for equivalence checking
of a double precision floating-point adder from ARM}
\label{fig:harness}
\end{figure}
\subsection*{Synthesizable Constructs in \tool{hw-cbmc}}
Our Verilog front-end in \tool{hw-cbmc} supports the IEEE 1364.1 2005 Verilog standard.
This includes the entire synthesizable fragment of Verilog. The detailed list
of synthesizable Verilog constructs supported by our Verilog front-end is
available on our website: www.cprover.org/ebmc/manual/verilog\_language\_features.shtml.
\subsection*{Miter Construction for Equivalence Checking}
Figure~\ref{fig:harness} shows an example miter for checking
equivalence of a 64-bit floating-point adder at the software
level and RTL phase using \tool{VerifOx} and \tool{hw-cbmc} respectively.
For the miter in \tool{VerifOx}, we provide the same floating-point
numbers as inputs to the reference design (built inside \tool{VerifOx})
and an externally provided untimed SW implementation (in C). We
then set the \texttt{rounding mode} of the reference model and
the SW implementation accordingly. Subsequently, the results of
addition from the reference model ($sum\_ref$) and the SW
implementation ($sum\_impl$) are checked for equivalence using
\texttt{assert(compareFloat(sum\_ref, sum\_impl));}.
In a similar way, the miter in \tool{hw-cbmc} is constructed by providing the same
floating-point numbers as input to the SW and HW RTL implementations.
Note that the inputs are set to the HW signals in \tool{hw-cbmc} using a function
\lstinline!set_inputs()!. Since the ARM FPU is a pipelined implementation
with pipeline depth 4, we unwind the HW transition system up to a bound
of 4 using the function \lstinline!next_timeframe()!. Subsequently, the
results computed by the HW design and the C reference model are
compared using the \lstinline!compareFloat()! function.
\subsection*{Miter for Combinational Equivalence Checking in \tool{hw-cbmc}}
\lstset{language=C,basicstyle=\ttfamily}
Figure~\ref{fig:miter} shows an example miter for checking
combinational equivalence of a 32-bit floating-point
adder/subtractor circuit.
\begin{figure}[h]
\begin{center}
\centering \scriptsize
\begin{tabular}{l}
\begin{lstlisting}[mathescape=true]
void miter(float f, float g) {
// setting up the inputs to hardware FPU
$fp\_add\_sub.f=*(unsigned *)\&f;$
$fp\_add\_sub.g=*(unsigned *)\&g;$
$fp\_add\_sub.isAdd=1;$
// propagates inputs of the hardware circuit
$set\_inputs();$
// get result from hardware circuit
float $Verilog\_result=*(float *)\&fp\_add\_sub.result;$
// compute fp-add in Software with rounding mode RNE
float $C\_result=add(RNE, f, g); $
// compare the outputs
$assert(compareFloat(C\_result, Verilog\_result));$
}
\end{lstlisting}
\end{tabular}
\end{center}
\vspace{-1.7mm}
\caption{Miter for combinational equivalence checking
for a 32-bit floating-point adder/subtractor
for the case of addition in \tool{hw-cbmc}}
\label{fig:miter}
\end{figure}
We provide the same floating-point numbers as
inputs to the reference design (in C) and the hardware implementation (in
RTL Verilog) using \lstinline!set_inputs()!. Subsequently, we
indicate that we want to perform a floating-point addition by setting
\lstinline!isAdd=1!. The results computed by the hardware design and the C
reference model are compared using the \lstinline!compareFloat()! function.
Note that this is a combinational circuit, so there is no call to
\lstinline!next_timeframe()!.
\end{document}
\section{Introduction}
One of the most important tasks in Electronic Design Automation (EDA)
is to check whether the low-level implementation (RTL or gate-level)
complies with the system-level specification.
Figure~\ref{fig:eda-flow} illustrates the role of equivalence checking
(EC) in the design process. In this paper, we present
a new EC tool, \tool{VerifOx}, that is used for equivalence checking of an
untimed software (SW) model against a high-level reference model.
Later, a Register Transfer Level (RTL) model is implemented,
either manually by a hardware designer or
automatically by a synthesis tool. To guarantee that
the RTL is consistent with the SW model, we use an existing tool,
\tool{hw-cbmc}~\cite{CKY03}, to check the correctness of the synthesized hardware
RTL against a SW model.
In this paper, we address the most general and thus most difficult variant
of EC: the case where the high-level and the low-level design are
substantially different. State-of-the-art tools, such as
Hector~\cite{DBLP:conf/date/KoelblJJP09} from Synopsys and \tool{slec} from
Calypto,\footnote{http://calypto.com/en/products/slec/} rely on
\emph{equivalence points}~\cite{DBLP:conf/dac/WuH06}, and hence they are
ineffective in this scenario. We present an approach based on bounded
analysis, embodied in the tools \tool{VerifOx} and \tool{hw-cbmc}, that can handle
arbitrary designs.
\tool{VerifOx} is used for equivalence checking of an untimed
software model against a high-level reference model and \tool{hw-cbmc} is used for
equivalence checking of the RTL implementation against a software model.
EC is broadly classified into two separate categories: combinational
equivalence checking (CEC) and sequential equivalence checking (SEC).
CEC~is used for a pair of models that are cycle accurate and have the same
state-holding elements. SEC is used when the
high-level model is not cycle accurate or has a substantially different set
of state-holding elements~\cite{DBLP:conf/date/Eijk98,
DBLP:conf/iccd/BaumgartnerMPKJ06}. It is well-known that EC of
floating-point designs is
difficult~\cite{DBLP:conf/aspdac/XueCS13,DBLP:conf/cav/Fujita96}. So
there is a need for automatic tools that formally validate floating-point
designs at various stages of the synthesis flow, as illustrated by right side
flow of Figure~\ref{fig:eda-flow}.
\begin{figure}[tb]
\scalebox{.55}{\import{figures/}{eda-flow.pspdftex}}
\caption{Electronic Design Automation Flow
\label{fig:eda-flow}}
\end{figure}
\medskip \noindent \textbf{\emph{Contributions: }}
In this paper, we sketch two significant equivalence-verification tools:
\begin{enumerate}
\item {\tool{VerifOx}, a tool for equivalence checking of software models
given as C programs.} We present a path-based symbolic
execution tool, \tool{VerifOx}, for bounded equivalence checking
of floating-point software implementations against an IEEE
754 compliant reference model. \tool{VerifOx} supports the C89 and
C99 standards in the front-end. \tool{VerifOx} also supports
SAT and SMT backends for constraint solving. \tool{VerifOx} is available
at~\url{http://www.cprover.org/verifox}.
\item {\tool{hw-cbmc}, a tool for C versus RTL equivalence checking.}
\tool{hw-cbmc} is used for bounded equivalence checking of Verilog RTL
against C/C++ models. \tool{hw-cbmc} supports the IEEE 1364-2005 Verilog
standard and the C89 and C99 standards. \tool{hw-cbmc} generates a monolithic
formula from the C and RTL description, which is then checked with
SAT/SMT solvers.
\end{enumerate}
\section{\tool{VerifOx}: A tool for equivalence checking of C programs}
\tool{VerifOx} is a path-based symbolic execution tool for equivalence checking of
C programs. The tool architecture is shown on the left side of
Figure~\ref{fig:tools}. \tool{VerifOx} supports the C89 and C99 standards. The
key feature is symbolic reasoning about equivalence between FP
operations. To this end, \tool{VerifOx} implements a model of the core IEEE
754 arithmetic operations---single- and double-precision addition,
subtraction, multiplication, and division---which can be used as reference
designs for equivalence checking.
So \tool{VerifOx} does not require external reference models for
equivalence checking of floating-point designs. This significantly
simplifies the user's effort to perform equivalence checking at the
software level. The reference model in \tool{VerifOx} is equivalent to the
Softfloat
model.\footnote{http://www.jhauser.us/arithmetic/SoftFloat.html}
\tool{VerifOx} also supports SAT and SMT backends for constraint
solving.
\begin{figure}[tb]
\scalebox{.55}{\import{figures/}{tools.pspdftex}}
\caption{\tool{VerifOx} and \tool{hw-cbmc} Tool Architecture
\label{fig:tools}}
\end{figure}
\begin{figure}[b]
\scriptsize
\begin{tabular}{l|l|l|l|l}
\hline
Program & Path & Path & Path & Monolithic\\
& Constraint 1 & Constraint 2 & Constraint 3 & Path Constraint\\
\hline
\begin{lstlisting}[mathescape=true,language=C]
void top(){
if(reset) {
x=0;
y=0; }
else {
if(a > b)
x=a+b;
else
y=(a & 3)<<b; }}
\end{lstlisting}
&
\begin{minipage}{1.5cm}
$\begin{array}[t]{@{}l}
C_1 \; \equiv \\
\;\;\mathit{\mathit{reset}}_1 \neq 0\; \land \\
\;\; x_2 = 0\;\land \\
\;\; y_2 = 0
\end{array}$
\end{minipage}
&
\begin{minipage}{1.5cm}
$\begin{array}[t]{@{}l}
C_2 \; \equiv \\
\;\;\mathit{\mathit{reset}}_1=0\; \land \\
\;\;b_1 \ngeq a_1\; \land \\
\;\;x_3 = a_1 + b_1
\end{array}$
\end{minipage}
&
\begin{minipage}{2cm}
$\begin{array}[t]{@{}l}
C_3\; \equiv \\
\;\;\mathit{\mathit{reset}}_1=0\; \land \\
\;\;b_1 \ge a_1\; \land \\
\;\;y_3 = (a_1\&3) \\
\;\;<\!\!<b_1
\end{array}$
\end{minipage}
&
\begin{minipage}{3.75cm}
$\begin{array}{l@{\,\,}c@{\,\,}l}
C &\iff& ((\mathit{\mathit{guard}}_1 = \neg(\mathit{\mathit{reset}}_1 = 0)) \land \\
& & (x_2 = 0) \land (y_2 = 0) \land \\
& & (x_3 = x_1) \land (y_3 = y_1) \land \\
& & (\mathit{guard}_2 = \neg(b_1 >= a_1)) \land \\
& & (x_4 = a_1 + b_1) \land (x_5 = x_3) \land \\
& & (y_4 = (a_1 \& 3) <\!\!< b_1) \land \\
& & (x_6 = \mathit{ite}(\mathit{\mathit{guard}}_2, x_4, x_5)) \land \\
& & (y_5 = \mathit{ite}(\mathit{\mathit{guard}}_2, y_3, y_4)) \land \\
& & (x_7 = \mathit{ite}(\mathit{\mathit{guard}}_1, 0, x_6)) \land \\
& & (y_6 = \mathit{ite}(\mathit{\mathit{guard}}_1, 0, y_5)))
\end{array}$
\end{minipage}
\\
\hline
\end{tabular}
\caption{Single-path and Monolithic Symbolic Execution}
\label{figure:simulation}
\end{figure}
Given a reference model, an implementation model in C and a set of partition
constraints, \tool{VerifOx} performs depth-first exploration of program paths with
certain optimizations, such as eager infeasible path pruning and incremental
constraint solving. This enables automatic decomposition of the verification
state-space into subproblems, by input-space and/or state-space decomposition.
The decomposition is done in tandem in both models, exploiting the structure
present in the high-level model. The approach generates many but simpler SAT/SMT
queries, similar to the technique followed in \tool{klee}~\cite{DBLP:conf/osdi/CadarDE08}.
The main focus of our technique is to pass only those verification
conditions to the underlying solver for which the corresponding path conditions
are feasible with respect to the property under consideration and the
partitioning constraints such as case splitting.
Figure~\ref{figure:simulation} shows three feasible path
constraints corresponding to the three paths in the program on the
left. In contrast, the last column of Figure~\ref{figure:simulation} shows the
monolithic path constraint generated by \tool{hw-cbmc}.
\medskip \noindent \textbf{\emph{Incremental solving in \tool{VerifOx}.}} \tool{VerifOx} can be run in two
different modes: partial incremental and
full incremental. In partial incremental mode, only one solver instance is maintained
while going down a single path. So when making a feasibility
check from one branch $b_1$ to another branch $b_2$ along a single path, only the program
segment from $b_1$ to $b_2$ is encoded as a constraint and added to the existing solver
instance. Internal solver states and the information that the solver gathers during the search
remain valid as long as all the queries that are posed to the solver in succession are
monotonically stronger. If the solver solves a formula $\phi$, then posing $\phi \wedge \psi$ as
a query to the same solver instance allows one to reuse the knowledge the solver
has already acquired, because any assignment that falsifies $\phi$ also
falsifies $\phi \wedge \psi$. Thus the solver need not revisit the assignments that
it has already ruled out. This speeds up the feasibility
check of the symbolic state at $b_2$, given that the feasibility check at $b_1$ succeeded.
A new solver instance is used to explore a different path once the current path is
detected as infeasible.
In full incremental mode, only one solver instance is maintained throughout the whole
symbolic execution. Let $\phi_{b_1b_2}$ denote the encoding of the path fragment
from $b_1$ to $b_2$. It is added to the solver as $B_{b_1b_2} \Rightarrow \phi_{b_1b_2}$.
Then, $B_{b_1b_2}$ is passed to the solver as a \textit{blocking variable}\footnote{The SAT community uses the
term \textit{assumption variables} or \textit{assumptions}, but we will use the
term blocking variable to avoid ambiguity with assumptions in the program.}
to enforce the constraints specified by $\phi_{b_1b_2}$. Blocking variables are
treated specially inside the solver: unlike regular variables or clauses,
the blocking can be removed in subsequent queries without invalidating the
solver instance. To back-track the symbolic execution, the
blocking variable $B_{b_1b_2}$ is removed and a unit clause $\neg B_{b_1b_2}$ is
added to the solver, thus effectively removing $\phi_{b_1b_2}$.
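To make the blocking-variable mechanism concrete, the following is a minimal C
sketch of the assume--check--retract cycle. It is ours and not part of
\tool{VerifOx} (whose backend is MiniSAT); we use the C API of the PicoSAT
solver, and the two-clause formula is a toy stand-in for a path fragment
$\phi_{b_1b_2}$, chosen only so that the retraction step is visible.
\begin{lstlisting}[language=C]
#include <stdio.h>
#include "picosat.h" /* incremental SAT API of PicoSAT */

int main(void)
{
  PicoSAT *ps = picosat_init();
  /* Variable 1 plays the role of a program variable x;
     variable 2 is the blocking variable B guarding the fragment. */
  int x = 1, B = 2;

  /* A constraint added earlier on the path: (not x). */
  picosat_add(ps, -x); picosat_add(ps, 0);

  /* Fragment phi := (x), added guarded as B => x, i.e. (not B or x). */
  picosat_add(ps, -B); picosat_add(ps, x); picosat_add(ps, 0);

  /* Feasibility check: assume B for the next sat call only. */
  picosat_assume(ps, B);
  if (picosat_sat(ps, -1) == PICOSAT_UNSATISFIABLE) {
    /* Back-track: retract phi permanently via the unit clause (not B);
       the solver instance remains valid for the remaining paths. */
    picosat_add(ps, -B); picosat_add(ps, 0);
    printf("fragment infeasible, retracted\n");
  }

  /* Later queries reuse the same instance without the fragment. */
  printf("%s\n", picosat_sat(ps, -1) == PICOSAT_SATISFIABLE
                     ? "SAT" : "UNSAT");
  picosat_reset(ps);
  return 0;
}
\end{lstlisting}
The same pattern scales to one blocking variable per path fragment: assuming
the guards of all fragments on the current path checks its feasibility, and
asserting a negated guard removes a single fragment without restarting the
solver.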
\section{\tool{hw-cbmc}: A tool for equivalence checking of C and RTL}
\tool{hw-cbmc} is used for bounded equivalence checking of C and
Verilog RTL. The tool architecture is shown on the right side
of Figure~\ref{fig:tools}. \tool{hw-cbmc} supports the IEEE 1364-2005 Verilog
standard and the C89 and C99 standards. \tool{hw-cbmc} maintains two
separate flows for hardware and software. The top flow in
Figure~\ref{fig:tools} uses synthesis to obtain either a bit-level
or a word-level netlist from Verilog RTL. The bottom flow
illustrates the translation of the C program into static single
assignment (SSA) form~\cite{Cytron:1989:EMC:75277.75280}.
These two flows meet only at the solver.
Thus, \tool{hw-cbmc} generates a monolithic formula from the C and RTL
description, which is then checked with SAT/SMT solvers. \tool{hw-cbmc}
provides specific handshake primitives, $set\_inputs()$
and $next\_timeframe()$, which direct the tool to set the inputs to the
hardware signals and to advance the clock, respectively. The details of \tool{hw-cbmc} are
available online.\footnote{http://www.cprover.org/hardware/sequential-equivalence/}
\section{Experimental Results}
In this section, we report experimental results for equivalence
checking of difficult floating-point designs. All our
experiments were performed on an Intel\textsuperscript{\textregistered} Xeon\textsuperscript{\textregistered} machine with 3.07\,GHz clock speed
and 48\,GB RAM. All times reported are in seconds.
MiniSAT-2.2.0~\cite{DBLP:conf/sat/EenB05} was used as the underlying
SAT solver with \tool{VerifOx 0.1} and \tool{hw-cbmc 5.4}. The timeout
for all our experiments was set to 2~hours.
\medskip\noindent\textbf{\emph{Proprietary Floating-point Arithmetic Core: }}
We verified parts of a floating-point arithmetic unit (FPU) of a next
generation ARM\textsuperscript{\textregistered} GPU. The FP core is primarily composed of single-
and double-precision {\em ADD}, {\em SUB}, {\em FMA}
and {\em TBL} functional units, the register files,
and interface logic. The pipelined computation unit implements
FP operations on a 128-bit data-path. In this paper, we
verified the single-precision addition ({\em FP-ADD}),
rounding ({\em FP-ROUND}), minimum ({\em FP-MIN})
and maximum ({\em FP-MAX}) operations. The FP-ADD unit
can perform two operations in parallel by using two 64-bit
adders over multiple pipeline stages.
Each 64-bit unit can also perform operations with
smaller bit widths. The FPU decodes the incoming
instruction, applies the input modifiers and
provides properly modified input data to the respective
sub-unit. The implementation is around~38000 LOC,
generating tens of thousands of gates. We obtained the SW
model (in C) and the Verilog RTL model of the FPU core from ARM.
(Due to the proprietary nature of the FPU design, we cannot
share the commercial ARM IP.)
\medskip\noindent\textbf{\emph{Open-source Dual-path Floating-point Adder: }}
We have developed both a C and a Verilog implementation of an
\mbox{IEEE-754} 32-bit single-precision dual-path floating-point
adder/subtractor. This floating-point
design includes various modules for packing, unpacking, normalizing,
rounding and handling of infinite, normal, subnormal, zero and NaN
(Not-a-Number) cases. We distribute the C and RTL implementation of
the dual-path FP adder at~\url{http://www.cprover.org/verifox}.
\medskip\noindent\textbf{\emph{Reference Model: }}
The IEEE 754 compliant floating-point implementations
in \tool{VerifOx} are used as the golden reference model
for equivalence checking at the software level. For
equivalence checking at the RTL phase, we used the untimed software model
from ARM as the reference model, as shown on the right side of
Figure~\ref{fig:eda-flow}.
\medskip\noindent\textbf{\emph{Miters for Equivalence Checking: }}
\lstset{language=C,basicstyle=\ttfamily}
A miter circuit~\cite{Brand} is built from two given circuits $A$ and $B$ as follows:
identical inputs are fed into $A$ and $B$, and the outputs of $A$ and $B$
are compared using a comparator. For equivalence checking at software level,
one of the circuits is a SW program and the other is a high-level
reference model. For the RTL phase, one of the circuits is a
SW program treated as the reference model and the other is an RTL implementation.
\medskip\noindent\textbf{\emph{Case-splitting for Equivalence Checking: }}
Case-splitting is a common practice to scale up formal
verification~\cite{DBLP:conf/cav/Fujita96, DBLP:conf/aspdac/XueCS13,
DBLP:conf/date/KoelblJJP09} and is often performed
by user-specified assumptions. The \texttt{CPROVER\_assume(c)}
statement instructs \tool{hw-cbmc} and \tool{VerifOx} to restrict the analysis
to only those paths satisfying a given condition \texttt{c}. For
example, we can limit the analysis to those paths that are exercised
by inputs where the rounding mode is nearest-even (RNE) and both
input numbers are NaNs by adding the following line:
\medskip
\noindent\quad\texttt{CPROVER\_assume(roundingMode==RNE \&\& uf\_nan \&\& ug\_nan);}
\medskip\noindent\textbf{\emph{Discussion of Results: }}
Table~\ref{table:result} reports the run times for equivalence
checking of the ARM FPU and the dual-path FP adder. Column~1
gives the name of the FP design and columns~2--6 show the runtimes
for the partition modes INF, ZERO, NaN, SUBNORMAL, and NORMAL respectively.
For example, the partition constraint `INF' means the addition of two
infinite numbers. Column~7 reports the total time for equivalence
checking without any partitioning.
\tool{VerifOx} successfully proved the
equivalence of all FP operations in the SW implementation of the ARM FPU
against the built-in reference model. However, a bug in FP-MIN and
FP-MAX (reported as ERROR in Table~\ref{table:result}) is detected by
\tool{hw-cbmc} in the RTL implementation of the ARM FPU when checked
against the SW model of the ARM FPU for the case when both input
numbers are NaN. Such discrepancies arise mostly from bugs in the high-level
synthesis tool or from the manual translation of the SW model to RTL.
\tool{VerifOx} and \tool{hw-cbmc} are able to detect bugs in the SW
and RTL models of these designs respectively -- thereby
emphasizing the need for equivalence checking to validate
the synthesis process during the EDA flow.
Further, we investigated the reason for the higher verification times for
subnormal numbers compared to normal numbers, infinities, NaNs,
and zeros. This is attributed to the higher number of paths in the
subnormal case compared to the INF, NaN, and zero cases. Closest to
our floating-point symbolic execution technique in \tool{VerifOx}
is the tool \textsc{KLEE-FP}~\cite{DBLP:conf/eurosys/CollingbourneCK11}.
We could not, however, run \textsc{KLEE-FP} on the software models because
the front-end of \textsc{KLEE-FP} failed to parse the ARM models.
\begin{table}[t]
\begin{center}
{
\begin{scriptsize}
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
& \multicolumn{5}{c|}{Case-splitting} & \multicolumn{1}{c|}{No-partition} \\
\cline{2-7}
Design & INF & ZERO & NaN & SUBNORMAL & NORMAL & Total \\
\cline{1-7}
\multicolumn{7}{|c|}{Equivalence checking at Software Level (\tool{VerifOx})} \\
\hline
FP-ADD & 9.56 & 11.54 & 9.95 & 1124.18 & 77.74 & 1566.72 \\ \hline
FP-ROUND & 1.24 & 1.36 & 1.32 & 3.78 & 1.63 & 4.71 \\ \hline
FP-MIN & 9.76 & 9.85 & 9.78 & 28.67 & 9.86 & 48.70 \\ \hline
FP-MAX & 9.80 & 9.88 & 9.97 & 28.70 & 9.90 & 35.81 \\ \hline
DUAL-PATH ADDER & 3.15 & 3.11 & 2.14 & 88.12 & 55.28 & 497.67 \\ \hline
\multicolumn{7}{|c|}{Equivalence checking at RTL (\tool{hw-cbmc})} \\ \hline
FP-ADD & 18.12 & 18.02 & 17.87 & 18.73 & 39.60 & 40.72 \\ \hline
FP-ROUND & 11.87 & 12.73 & 13.44 & 13.67 & 14.03 & 14.11 \\ \hline
FP-MIN & 13.72 & 13.62 & ERROR & 14.10 & 14.08 & 14.15 \\ \hline
FP-MAX & 13.70 & 13.58 & ERROR & 14.09 & 14.06 & 14.05 \\ \hline
DUAL-PATH ADDER & 0.88 & 0.87 & 0.99 & 169.49 & 22.42 & 668.61 \\ \hline
\end{tabular}
\end{scriptsize}
}
\end{center}
\vspace{-1.3mm}
\caption{Equivalence checking of ARM FPU and DUAL-PATH Adder (All time in seconds)
\label{table:result}}
\end{table}
\section{Related work}
The concept of symbolic execution~\cite{DBLP:journals/tse/Clarke76,
DBLP:conf/pldi/GodefroidKS05,DBLP:conf/osdi/CadarDE08} is prevalent in the
software domain for automated test generation as well as bug finding.
Tools such as Dart~\cite{DBLP:conf/pldi/GodefroidKS05}, Klee~\cite{DBLP:conf/osdi/CadarDE08},
EXE~\cite{DBLP:journals/tissec/CadarGPDE08},
Cloud9~\cite{DBLP:conf/pldi/KuznetsovKBC12} employ such a technique for
efficient test case generation and bug finding. By contrast, we used
path-wise symbolic execution for equivalence checking of software models
against a reference model. A user-provided assumption specifies certain testability
criteria that render the majority of the design logic
irrelevant~\cite{DBLP:conf/cav/Fujita96, DBLP:conf/aspdac/XueCS13,
DBLP:conf/date/KoelblJJP09}, thus giving rise to a
large number of infeasible paths in the design.
Conventional SAT-based bounded model checking~\cite{biere,CKY03,
Clarke:2003:HVU:1119772.1119831} cannot exploit this infeasibility
because these techniques create a monolithic formula by unrolling
the entire transition system up to a given bound, which is then
passed to a SAT/SMT solver. These tools perform case-splitting at
the level of the solver through the effect of constant propagation.
Optimizations such as eager path pruning
combined with incremental encoding enable \tool{VerifOx} to
address this limitation.
\section{Concluding Remarks}
In this paper we presented \tool{VerifOx}, our path-based symbolic execution
tool, which is used for equivalence checking of arbitrary software
models in C. The key feature of \tool{VerifOx} is symbolic reasoning on
the equivalence between floating-point operations. To this end,
\tool{VerifOx} implements a model of the core IEEE 754 arithmetic
operations, which can be used as reference models.
Further, to validate the synthesis of RTL from software model, we used
our existing tool, \tool{hw-cbmc}, for equivalence checking of RTL designs
against the software model used as reference. We successfully
demonstrated the utility of our equivalence checking tool chain,
\tool{VerifOx} and \tool{hw-cbmc}, on a large commercial FPU core from ARM and
a dual-path FP adder. Experience suggests that the synthesis of
software models to RTL is often error-prone---this emphasizes
the need for automated equivalence checking tools at various
stages of the EDA flow. In the future, we plan to investigate
various path exploration strategies and path-merging techniques
in \tool{VerifOx} to further scale equivalence checking to complex
data and control intensive designs.
\section*{Acknowledgements}
Part of the presented work was conducted during an internship at ARM.
The authors want to thank in particular Luka Dejanovic, Joe Tapply,
and Ian Clifford for their help with setting up the experiments.
\bibliographystyle{splncs03}
\section{Reconstruction of Bounded-Degree Graphs (Proof of \cref{thm:general})}
\label{sec:general}
In this section, we analyze \textsc{Simple}\xspace (\cref{algo:main}) on general graphs of bounded degree in the distance query model.
Recall that a set $B$ of vertex pairs $\{a,b\}\subseteq V$ is defined in \cref{lem:B}.
For every vertex $a\in V$, we define the set of vertices $B(a)\subseteq V$ as
\[B(a)=\big\{ b\in V \mid \{a,b\}\in B\big\}.\]
Intuitively, $B(a)$ consists of the vertices $b\in V$ that have few distinguishers with respect to $a$.
We bound the size of the set $B(a)$ for any vertex $a$, in \cref{lem:Ba}.
\begin{lemma}
\label{lem:Ba}
Let $G$ be a general graph of bounded degree $\Delta$.
For any vertex $a\in V$, $|B(a)|\leq 9\Delta^3\cdot n^2\cdot(\log^2 n)/s^2$.
\end{lemma}
We defer the proof of \cref{lem:Ba} for the moment and first show how it implies \cref{thm:general}.
\begin{proof}[Proof of \cref{thm:general} using \cref{lem:Ba}]
By \cref{lem:query}, \textsc{Simple}\xspace is a reconstruction algorithm using $n\cdot s+|\hat E|$ distance queries, and in addition, \textsc{Simple}\xspace can be parallelized using 2 rounds.
It remains to further analyze the query complexity.
From \cref{fact:hat-E}, $|\hat E|=|E|+|\hat E\setminus E|$.
Since the graph has bounded degree $\Delta$, $|E|\leq \Delta n$.
From \cref{lem:B}, $\expect{S}{|\hat E\setminus E|}\leq |B|+o(1)$.
Therefore, the expected number of distance queries in \textsc{Simple}\xspace is at most $n\cdot s+\Delta n+|B|+o(1)$.
It suffices to analyze $|B|$.
Observe that $|B|\leq \sum_{a\in V} |B(a)|$ by definition of $\{B(a)\}_{a\in V}$.
From \cref{lem:Ba}, $|B(a)|\leq 9\Delta^3\cdot n^2\cdot(\log^2 n)/s^2$, for any vertex $a\in V$.
Hence $|B|\leq (9\Delta^3\cdot n^2\cdot(\log^2 n)/s^2)\cdot n$.
Thus the expected number of distance queries in \textsc{Simple}\xspace is at most $n\cdot s+ \Delta n+(9\Delta^3\cdot n^2\cdot(\log^2 n)/s^2)\cdot n+o(1)$, which is $\tilde O(n^{5/3})$ since $s=n^{2/3}$ and $\Delta=O(\polylog n)$.
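Explicitly, substituting $s=n^{2/3}$ balances the two dominant terms:
\[
n\cdot s=n^{5/3}
\qquad\text{and}\qquad
\big(9\Delta^3\cdot n^2\cdot(\log^2 n)/s^2\big)\cdot n=9\Delta^3\cdot n^{5/3}\cdot\log^2 n,
\]
both of which are $\tilde O(n^{5/3})$ for $\Delta=O(\polylog n)$.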
\end{proof}
The rest of the section is dedicated to prove Lemma~\ref{lem:Ba}.
Let $a$ be any vertex in $V$.
Let $T$ be an (arbitrary) shortest-path tree rooted at $a$ and spanning all vertices in $V$.
For any vertex $b\in V$, let \emph{the shortest $a$-to-$b$ path} denote the path between $a$ and $b$ in the tree $T$.
To simplify the presentation, we assume that, for any $b\in B(a)$, $\delta(a,b)$ is even, so that the \emph{midpoint vertex} of the shortest $a$-to-$b$ path is uniquely defined.
We extend our analysis to the general setting at the end of the section.
For any vertex $m\in V$, define the set $B(a,m)\subseteq B(a)$ as
\[B(a,m)=\big\{b\in B(a) \mid \text{the midpoint vertex of the shortest $a$-to-$b$ path is $m$}\big\}.\]
Define the set $M(a)\subseteq V$ as
\[M(a)=\big\{m\in V\mid B(a,m)\neq \emptyset\big\}.\]
In other words, $M(a)$ consists of the vertices $m\in V$ such that $m$ is the midpoint vertex of the shortest $a$-to-$b$ path for some $b\in B(a)$.
From the construction, we have
\begin{equation}
\label{eqn:Ba}
B(a)=\bigcup_{m\in M(a)} B(a,m).
\end{equation}
In order to bound the size of $B(a)$, first we bound the size of $B(a,m)$ for any midpoint $m\in M(a)$, in \cref{lem:simple-1}, and then we bound the number of distinct midpoints, in \cref{lem:simple-2}.
\begin{lemma}
\label{lem:simple-1}
For any $m\in M(a)$, $|B(a,m)|\leq 3\Delta\cdot n\cdot(\log n)/s$.
\end{lemma}
\begin{proof}
For any $b\in B(a,m)$, the vertex $m$ is the midpoint vertex of the shortest $a$-to-$b$ path by definition.
From the assumption, $\delta(a,b)$ is even for any $b\in B(a,m)$, so there exists a positive integer $\ell$ such that $\delta(m,a)=\ell$ and $\delta(m,b)=\ell$ for every $b\in B(a,m)$.
For every neighbor $m'$ of $m$ such that $\delta(a,m')=\delta(a,m)+1$, define a set $Y(m')\subseteq B(a,m)$ that consists of the vertices $b\in B(a,m)$ such that $m'$ is on the shortest $a$-to-$b$ path.
Let $\hat m$ be a neighbor of $m$ such that $\delta(a,\hat m)=\delta(a,m)+1$ and that $|Y(\hat m)|$ is maximized, see \cref{fig:simple-1}.
Since the graph has bounded degree $\Delta$, we have
$|B(a,m)|\leq \Delta\cdot|Y(\hat m)|$.
It suffices to bound $|Y(\hat m)|$.
The main observation is that any vertex of $Y(\hat m)$ distinguishes $a$ and any other vertex of $Y(\hat m)$.
To see this, let $b_0$ be any vertex in $Y(\hat m)$.
By definition, $\delta(a,\hat m)=\delta(a,m)+1=\ell+1$.
Since $\hat m$ is on the shortest $a$-to-$b_0$ path, we have $\delta(\hat m,b_0)=\delta(a,b_0)-\delta(a,\hat m)=\ell-1$, thus $\delta(\hat m,b_0)=\delta(\hat m,a)-2$.
For any vertex $b_1\in Y(\hat m)$, from the triangle inequalities on $\delta$, we have
\[\delta(b_1,b_0) \leq \delta(b_1,\hat m)+\delta(\hat m,b_0) =\delta(b_1,\hat m)+\delta(\hat m,a)-2 = \delta(b_1,a)-2.\]
According to \cref{def:distinguish}, the vertex $b_1$ distinguishes $a$ and $b_0$, and equivalently, $b_1\in D(a,b_0)$.
Thus we have $Y(\hat m)\subseteq D(a,b_0)$, hence $|Y(\hat m)|\leq |D(a,b_0)|\leq 3n\cdot (\log n)/s$ using the fact that $b_0\in Y(\hat m)\subseteq B(a)$ and the definition of $B$ in \cref{lem:B}.
We conclude that
$|B(a,m)|\leq \Delta\cdot |Y(\hat m)|\leq 3\Delta\cdot n\cdot(\log n)/s.$
\end{proof}
\begin{figure}[t]
\centering
\includegraphics[scale=0.675]{fig_B}
\caption{
The vertex $m$ is the midpoint of the shortest path between $a$ and any vertex in $B(a,m)$.
The vertex $\hat m$ is a well-chosen neighbor of $m$.
Consider any vertex $b_0\in Y(\hat m)$.
We can show that any vertex $b_1\in Y(\hat m)$ distinguishes $a$ and $b_0$.
}
\label{fig:simple-1}
\end{figure}
\begin{lemma}
\label{lem:simple-2}
$|M(a)|\leq 3\Delta\cdot n\cdot(\log n)/s$.
\end{lemma}
\begin{proof}
For each vertex $m\in M(a)$, denote $x_m$ as the second-to-last vertex on the shortest $a$-to-$m$ path.
Denote $X(a)\subseteq V$ as the set of vertices $x_m$ for all $m\in M(a)$.
See \cref{fig:simple-2}.
Since $G$ has bounded degree $\Delta$, we have
$|M(a)|\leq \Delta\cdot |X(a)|$.
It suffices to bound $|X(a)|$.
Let $b^*$ be a vertex in $B(a)$ such that $\delta(a,b^*)$ is maximized.
From the assumption, $\delta(a,b^*)$ is even, so we denote $\delta(a,b^*)=2\ell$ for some positive integer $\ell$.
The main observation is that any vertex of $X(a)$ distinguishes $a$ and $b^*$.
To see this, let $x$ be any vertex in $X(a)$.
Let $m$ be any vertex in $M(a)$ such that $x$ is the second-to-last vertex on the shortest $a$-to-$m$ path.\footnote{Such a vertex $m$ exists according to the construction of $X(a)$.}
We have $\delta(a,m)\leq \ell$ and $\delta(a,x)=\delta(a,m)-1\leq \ell-1.$
By the triangle inequality on the distances, $\delta(b^*,x)\geq \delta(a,b^*)-\delta(a,x)\geq 2\ell-(\ell-1)=\ell+1$.
Thus $\delta(b^*,x)-\delta(a,x)\geq 2$.
According to \cref{def:distinguish}, the vertex $x$ distinguishes $a$ and $b^*$, and equivalently, $x\in D(a,b^*)$.
Thus $X(a)\subseteq D(a,b^*)$, hence $|X(a)|\leq |D(a,b^*)|\leq 3n\cdot (\log n)/s$ using the fact that $b^*\in B(a)$ and the definition of $B$ in \cref{lem:B}.
We conclude that $|M(a)|\leq \Delta\cdot |X(a)|\leq 3\Delta\cdot n \cdot (\log n)/s.$
\end{proof}
\begin{figure}[t]
\centering
\includegraphics[scale=0.675]{fig_M.pdf}
\caption{
Solid circular nodes represent the vertices $m\in M(a)$.
Solid curves represent the shortest $a$-to-$m$ paths.
Solid square nodes represent the vertices in $X(a)$.
Denote $b^*$ as a vertex in $B(a)$ that is farthest from $a$.
We can show that any vertex $x\in X(a)$ distinguishes $a$ and $b^*$.
}
\label{fig:simple-2}
\end{figure}
From \cref{eqn:Ba}, $|B(a)|\leq \sum_{m\in M(a)}|B(a,m)|$.
From \cref{lem:simple-1}, $|B(a,m)|\leq 3\Delta\cdot n\cdot(\log n)/s$ for every $m\in M(a)$.
From \cref{lem:simple-2}, $|M(a)|\leq 3\Delta\cdot n\cdot(\log n)/s$.
Therefore, $|B(a)|\leq 9\Delta^2\cdot n^2\cdot(\log^2 n)/s^2$.
Finally, consider the general setting in which $\delta(a,b)$ is not necessarily even for any $b\in B(a)$.
For a vertex $m$ on the shortest $a$-to-$b$ path, we say that $m$ is the \emph{midpoint vertex} of that path if $\delta(a,m)=\lfloor\delta(a,b)/2\rfloor$.
The definitions of $B(a,m)$ and $M(a)$ remain the same.
\cref{lem:simple-2} holds in the same way.
In \cref{lem:simple-1}, the upper bound of $|B(a,m)|$ is replaced by $3\Delta^2\cdot n\cdot(\log n)/s$. Indeed, to extend the proof of \cref{lem:simple-1}, instead of considering vertex $m'$ (resp., vertex $\hat m$) that is a neighbor of $m$, we consider $m'$ (resp., $\hat m$) that is at distance 2 from $m$.
We have $|B(a,m)|\leq \Delta^2\cdot|Y(\hat m)|$.
The bound $|Y(\hat m)|\leq 3n\cdot (\log n)/s$ remains the same, so we have $|B(a,m)|\leq 3\Delta^2\cdot n\cdot(\log n)/s.$
Hence $|B(a)|\leq 9\Delta^3\cdot n^2\cdot(\log^2 n)/s^2$.
We complete the proof of \cref{lem:Ba}.
Therefore, we obtain \cref{thm:general}.
\section{Introduction}
Discovering the topology of the Internet is a crucial step for building accurate network models and designing efficient algorithms for Internet applications.
The topology of Internet networks is typically investigated at the router level, using \texttt{traceroute}.
It is a common and reasonably accurate assumption that \texttt{traceroute} generates paths that are shortest in the network.
Unfortunately, sometimes routers block \texttt{traceroute} requests due to privacy and security concerns.
As a consequence, the inference of the network topology is rather based on the end-to-end delay information on those requests, which is roughly proportional to the shortest-path distances in the network.
In the \emph{graph reconstruction} problem, we are given the vertex set $V$ of a hidden connected, undirected, and unweighted graph and have access to information about the topology of the graph via an oracle, and the goal is to find every edge in $E$.
Henceforth, unless explicitly mentioned, all graphs studied are assumed to be connected.
This assumption is standard and shared by almost all references on the subject, e.g., \cite{beerliova2006network,erlebach2006network,kannan2018graph,rong2021reconstruction,sen2010covert}.
The efficiency of an algorithm is measured by the \emph{query complexity}, i.e., the number of queries to the oracle.
Motivated by \texttt{traceroute}, the literature has explored several types of query oracles.
\begin{itemize}
\item
One type consists of \emph{all-shortest-paths} and \emph{all-distances} queries, when querying a vertex yields either shortest paths from that vertex to all other vertices~\cite{beerliova2006network,sen2010covert} or distances from that vertex to all other vertices~\cite{erlebach2006network}. The latter, of course, is less informative.
\item A more refined type of query oracles, suggested in~\cite{beerliova2006network,erlebach2006network}, consists of \emph{shortest-path} and \emph{distance} queries, when querying a pair of vertices yields either a shortest path or the distance between them~\cite{kannan2018graph,reyzin2007learning,rong2021reconstruction}. Again, the latter is less informative.
\end{itemize}
In this work, we focus on the weakest of those four query oracles, that takes as input a pair of vertices $a$ and $b$ and returns the distance $\delta(a,b)$ between them.
Reyzin and Srivastava~\cite{reyzin2007learning} showed that graph reconstruction requires $\Omega(n^2)$ distance queries on general graphs, so we focus on the bounded degree case.
For graphs of bounded degree, Kannan, Mathieu, and Zhou~\cite{kannan2018graph} gave a reconstruction algorithm based on multi-phase Voronoi-cell decomposition and using $\tilde O(n^{3/2})$ distance queries, and raised an open question of whether $\tilde O(n)$ is achievable.\footnote{The notation $\tilde O(f(n))$ stands for $O(f(n)\cdot \polylog f(n))$.}
We provide a partial answer to that open question by analyzing a simple reconstruction algorithm (\cref{algo:main}).
We show that, on (uniformly) random $\Delta$-regular graphs, where every vertex has the same degree $\Delta$, our reconstruction algorithm uses $\tilde O(n)$ distance queries (\cref{thm:random}).
As by-products, we can reconstruct those graphs using $O(\log^2 n)$ queries to an all-distances oracle (\cref{cor:all-distances}) or using $\tilde O(n)$ queries to a betweenness oracle (\cref{cor:betweenness}), and we bound the metric dimension of those graphs by at most $\log^2 n$ (\cref{cor:metric_dimention}).
Our analysis exploits the \emph{locally tree-like} property of random $\Delta$-regular graphs, meaning that these graphs contain a small number of short cycles.
Our method might be applicable to other locally tree-like graphs, such as Erd\"os-R\'enyi random graphs and \emph{scale-free} graphs.
In particular, many real world networks, such as Internet networks, social networks, and peer-to-peer networks, are believed to have scale-free properties~\cite{barabasi1999emergence,jovanovic2001modeling,newman2002random}.
We defer the reconstruction of those networks for future work.
Our reconstruction algorithm has a very simple structure, and is highly parallelizable (\cref{cor:parallel}).
On general graphs of bounded degree, the same reconstruction algorithm has subquadratic query complexity (\cref{thm:general}).
\subsection{Related Work}
The problem of reconstructing a graph using queries that reveal partial information has been extensively studied in different contexts and has many applications.
\paragraph*{Reconstruction of Random Graphs}
The gist of our paper deals with random graphs.
The graph reconstruction problem has already attracted much interest in the setting of random graphs.
On Erd\"os-R\'enyi random graphs, Erlebach, Hall, and Mihal’{\'a}k~\cite{erlebach2007approximate} studied the approximate network reconstruction using all-shortest-paths queries;
Anandkumar, Hassidim, and Kelner~\cite{anandkumar2013topology} used end-to-end measurements between a subset of vertices to approximate the network structure.
Experimental results to reconstruct random graphs using shortest-path queries were given in~\cite{blondel2007distance,guillaume2005complex}.
On random $\Delta$-regular graphs, Achlioptas~et~al.~\cite{achlioptas2009bias} studied the bias of \texttt{traceroute} sampling in the context of the network reconstruction.
They showed that the structure revealed by \texttt{traceroute} sampling on random $\Delta$-regular graphs admits a power-law degree distribution~\cite{achlioptas2009bias}, a common phenomenon as in Erd\"os-R\'enyi random graphs~\cite{lakhina2003sampling} and Internet networks~\cite{faloutsos1999}.
\paragraph*{Metric Dimension and Related Problems}
Our work yields an upper bound on the \emph{metric dimension} of random $\Delta$-regular graphs.
The metric dimension problem was first introduced by Slater~\cite{slater1975leaves} and Harary and Melter~\cite{harary1976metric}, see also \cite{bailey2011base,caceres2007metric,chartrand2000resolvability,hernando2010extremal,khuller1996landmarks,oellermann2007strong,ramirez2016simultaneous,sebHo2004metric}.
The metric dimension of a graph is the cardinality of a smallest subset $S$ of vertices such that every vertex in the graph has a unique vector of distances to the vertices in $S$.
On regular graphs, the metric dimension problem was studied in special cases~\cite{chartrand2000resolvability,javaid2008families}.
In Erd\"os-R\'enyi random graphs, the metric dimension problem was studied by Bollob{\'a}s, Mitsche, and Pra{\l}at~\cite{bollobas2013metric}.
Mitsche and Ru\'e~\cite{mitsche2015limiting} also considered the random forest model.
A related problem is the \emph{identifying code} of a graph~\cite{karpovsky1998new}, which is a smallest subset of vertices such that every vertex of the graph is uniquely determined by its neighbourhood within this subset.
The identifying code problem was studied on random $\Delta$-regular graphs~\cite{foucaud2012bounds} and on Erd\"os-R\'enyi random graphs~\cite{frieze2007}.
Other related problems received attentions on random graphs as well, such as the
\emph{sequential metric dimension}~\cite{odor2020sequential} and the \emph{seeded graph matching}~\cite{mossel2020seeded}.
\paragraph*{Betweenness Oracle}
There exists an oracle that is even weaker than the distance oracle: the \emph{betweenness} oracle~\cite{abrahamsen2016graph}, which receives three vertices $u$, $v$, and $w$ and returns whether $w$ lies on a shortest path between $u$ and $v$.
Our work yields a reconstruction algorithm using $\tilde O(n)$ betweenness queries on random $\Delta$-regular graphs.
For graphs of bounded degree, Abrahamsen et al.~\cite{abrahamsen2016graph} generalized the $\tilde O(n^{3/2})$ result in the distance oracle model from \cite{kannan2018graph} to the betweenness oracle model.
\paragraph*{Tree Reconstruction and Parallel Setting}
Our paper focuses on the distance oracle and bounded degree, and considers the parallel setting.
All of those aspects were previously raised in the special case of the \emph{tree reconstruction}.
Indeed, motivated by the reconstruction of a phylogenetic tree in evolutionary biology, the tree reconstruction problem using a distance oracle is well-studied~\cite{hein1989optimal,king2003complexity,waterman1977additive}, in particular assuming bounded degree~\cite{hein1989optimal}.
Afshar et al.~\cite{afshar2020reconstructing} studied the tree reconstruction in the parallel setting, analyzing both the \emph{round complexity} and the \emph{query complexity} in the relative distance query model~\cite{kannan1996determining}.
\subsection{Our Results}
\label{sec:results}
Our reconstruction algorithm, called \textsc{Simple}\xspace, is given in \cref{algo:main}. It takes as input the vertex set $V$ of size $n$ and an integer parameter $s\in[1,n]$.
\begin{algorithm}
\caption{\textsc{Simple}\xspace$(V,s)$}
\label{algo:main}
\begin{algorithmic}[1]
\State $S\gets$ sample of $s$ vertices selected uniformly and independently at random from $V$
\For{$u\in S$ and $v\in V$}
\textsc{Query}$(u,v)$
\EndFor
\State $\hat E\gets$ set of vertex pairs $\{a,b\}\subseteq V$ such that, for all $u\in S$, $|\delta(u,a)-\delta(u,b)|\leq 1$
\For{$\{a,b\}\in \hat E$}
\textsc{Query}$(a,b)$
\EndFor
\State \Return set of vertex pairs $\{a,b\}\in \hat E$ such that $\delta(a,b)=1$ \label{line:return}
\end{algorithmic}
\end{algorithm}
Intuitively, the set $\hat E$ constructed in \textsc{Simple}\xspace consists of all vertex pairs $\{a,b\}\subseteq V$ that \emph{might} be an edge in $E$.
In order to obtain the edge set~$E$, it suffices to query only the vertex pairs in~$\hat E$.
We remark that \textsc{Simple}\xspace correctly reconstructs the graph for any parameter $s \in [1,n]$, and that choosing an appropriate $s$ only affects the query complexity, see \cref{lem:query}.
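As an illustration only (this sketch is ours and not part of the formal
development), the following C function implements \textsc{Simple}\xspace against a
hypothetical distance oracle \texttt{query(a,b)}; each call to \texttt{query}
counts as one distance query, and the output arrays are assumed large enough
to hold all edges.
\begin{verbatim}
#include <stdlib.h>

/* Hypothetical distance oracle for the hidden graph;
   each call counts as one distance query. */
extern int query(int a, int b);

/* SIMPLE(V, s) on vertices 0..n-1; writes the m edges found
   into (ea[i], eb[i]) for i < m and returns m. */
int simple_reconstruct(int n, int s, int *ea, int *eb)
{
    int *d = malloc((size_t)s * n * sizeof *d);
    int i, u, a, b, m = 0;

    /* Stage 1: distances from s uniform random landmarks,
       sampled with replacement as in the algorithm. */
    for (i = 0; i < s; i++) {
        int landmark = rand() % n;
        for (u = 0; u < n; u++)
            d[i * n + u] = query(landmark, u);
    }

    /* Stage 2: {a,b} stays in E-hat iff no landmark
       distinguishes a and b; only those pairs are queried. */
    for (a = 0; a < n; a++)
        for (b = a + 1; b < n; b++) {
            int cand = 1;
            for (i = 0; i < s && cand; i++) {
                int diff = d[i * n + a] - d[i * n + b];
                if (diff < -1 || diff > 1)
                    cand = 0;   /* landmark i distinguishes a,b */
            }
            if (cand && query(a, b) == 1) {
                ea[m] = a; eb[m] = b; m++;
            }
        }
    free(d);
    return m;
}
\end{verbatim}
Only the second stage depends on the size of $\hat E$; the choice of $s$ trades
the $n\cdot s$ landmark queries of the first stage against the number of
candidate pairs queried in the second.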
\subsubsection{Random Regular Graphs}
Our first main result shows that \textsc{Simple}\xspace (\cref{algo:main}) uses $\tilde O(n)$ distance queries on random $\Delta$-regular graphs for an appropriately chosen $s$ (\cref{thm:random}).
The analysis exploits the \emph{locally tree-like} property of random $\Delta$-regular graphs.
The proof of \cref{thm:random} consists of several technical novelties, based on a new concept of \emph{interesting vertices} (\cref{def:interesting}). See \cref{sec:random}.
\begin{theorem}
\label{thm:random}
Consider a uniformly random $\Delta$-regular graph with $\Delta=O(1)$.
Let $s=\log^2 n$.
In the distance query model, \textsc{Simple}\xspace (\cref{algo:main}) is a reconstruction algorithm using $\tilde O(n)$ queries in expectation.
\end{theorem}
We extend \textsc{Simple}\xspace and its analysis to reconstruct random $\Delta$-regular graphs in the all-distances query model (\cref{cor:all-distances}), in the betweenness query model (\cref{cor:betweenness}), as well as in the parallel setting (\cref{cor:parallel}).
These extensions are based on the observation that the set $\hat E$ constructed in \textsc{Simple}\xspace equals the edge set $E$ with high probability (\cref{lem:single-iteration}),\footnote{This property (i.e., $\hat E=E$ with high probability) does not hold on general graphs of bounded degree.}
see \cref{sec:variant}.
\begin{corollary}
\label{cor:all-distances}
Consider a uniformly random $\Delta$-regular graph with $\Delta=O(1)$.
In the all-distances query model, there is a reconstruction algorithm using $O(\log^2 n)$ queries in expectation.
\end{corollary}
\begin{corollary}
\label{cor:betweenness}
Consider a uniformly random $\Delta$-regular graph with $\Delta=O(1)$.
In the betweenness query model, there is a reconstruction algorithm using $\tilde O(n)$ queries in expectation.
\end{corollary}
\begin{corollary}
\label{cor:parallel}
Consider a uniformly random $\Delta$-regular graph with $\Delta=O(1)$.
In the parallel setting of the distance query model, there is a reconstruction algorithm using $1+o(1)$ rounds and $\tilde O(n)$ queries in expectation.
\end{corollary}
We further extend the analysis of \textsc{Simple}\xspace to study the metric dimension of random $\Delta$-regular graphs (\cref{cor:metric_dimention}), by showing (in \cref{lem:landmarks}) that a random subset of $\log^2 n$ vertices is almost surely a \emph{resolving set} (\cref{def:resolving}) for those graphs, see \cref{sec:metric_dimension}.
\begin{corollary}
\label{cor:metric_dimention}
Consider a uniformly random $\Delta$-regular graph with $\Delta=O(1)$.
With probability $1-o(1)$, the metric dimension of the graph is at most $\log^2 n$.
\end{corollary}
With extra work, the parameter $s=\log^2 n$ in \cref{thm:random} can be reduced to $\log n\cdot (\log \log n)^{2+\epsilon}$, for any $\epsilon>0$, see \apref{remark:extra-work}.
As a consequence, the query complexity in the all-distances query model (\cref{cor:all-distances}) and the upper bound on the metric dimension (\cref{cor:metric_dimention}) can both be improved to $O(\log n\cdot (\log \log n)^{2+\epsilon})$.
\subsubsection{Bounded-Degree Graphs}
On general graphs of bounded degree, \textsc{Simple}\xspace (\cref{algo:main}) has subquadratic query complexity and is highly parallelizable (\cref{thm:general}), see \cref{sec:general}.
\begin{theorem}
\label{thm:general}
Consider a general graph of bounded degree $\Delta=O(\polylog n)$.
Let $s=n^{2/3}$.
In the distance query model, \textsc{Simple}\xspace (\cref{algo:main}) is a reconstruction algorithm using $\tilde O(n^{5/3})$ queries in expectation.
In addition, \textsc{Simple}\xspace can be parallelized using 2 rounds.
\end{theorem}
We note that the \textsc{Multi-Phase}\xspace algorithm\footnote{Algorithm~3 in \cite{kannan2018graph}.} from \cite{kannan2018graph} also reconstructs graphs of bounded degree in the distance query model.
How does \textsc{Simple}\xspace compare to \textsc{Multi-Phase}\xspace?
In terms of query complexity, on general graphs of bounded degree, \textsc{Simple}\xspace uses $\tilde O(n^{5/3})$ queries, so is not as good as \textsc{Multi-Phase}\xspace using $\tilde O(n^{3/2})$ queries; on random $\Delta$-regular graphs, \textsc{Simple}\xspace is more efficient than \textsc{Multi-Phase}\xspace: $\tilde O(n)$ versus $\tilde O(n^{3/2})$.
In terms of round complexity, \textsc{Simple}\xspace can be parallelized using 2 rounds on general graphs of bounded degree, and even $1+o(1)$ rounds on random $\Delta$-regular graphs; while \textsc{Multi-Phase}\xspace requires up to $3\log n$ rounds due to a multi-phase selection process for centers.\footnote{The number of rounds in \textsc{Multi-Phase}\xspace is implicit in the proof of Lemma~2.3 from \cite{kannan2018graph}.}
In terms of structure, \textsc{Simple}\xspace is much simpler than \textsc{Multi-Phase}\xspace, which is based on multi-phase Voronoi-cell decomposition.
In worst-case instances of graphs of bounded degree, the query complexity of \textsc{Simple}\xspace is higher than linear.
For example, when the graph is a complete binary tree, \textsc{Simple}\xspace would require $\Omega(n\sqrt{n})$ queries (the complexity of \textsc{Simple}\xspace is minimized when $s$ is roughly $\sqrt{n}$).
Thus the open question from \cite{kannan2018graph} of whether general graphs of bounded degree can be reconstructed using $\tilde O(n)$ distance queries remains open and answering it positively would require further algorithmic ideas.
\section{Notations and Preliminary Analysis}
\label{sec:preliminary}
Let $G=(V,E)$ be a connected, undirected, and unweighted graph, where $V$ is the set of vertices such that $|V|=n$ and $E$ is the set of edges.
We say that $\{a,b\}\subseteq V$ is a \emph{vertex pair} if $a$ and $b$ are distinct vertices of $V$.
The \emph{distance} between a vertex pair $\{a,b\}\subseteq V$, denoted by $\delta(a,b)$, is the number of edges on a shortest $a$-to-$b$ path.
\begin{definition}[Distinguishing]
\label{def:distinguish}
For a vertex pair $\{a,b\}\subseteq V$, we say that a vertex $u\in V$ \emph{distinguishes} $a$~and~$b$, or equivalently that $u$ is a \emph{distinguisher} of $\{a,b\}$, if $|\delta(u,a)-\delta(u,b)|>1$.
Let $D(a,b)\subseteq V$ denote the set of vertices $u\in V$ distinguishing $a$ and $b$.
\end{definition}
Let $s\in[1,n]$ be an integer parameter.
The set $S$ constructed in \textsc{Simple}\xspace consists of $s$ vertices selected uniformly and independently at random from $V$.
The set $\hat E$ constructed in \textsc{Simple}\xspace consists of the vertex pairs $\{a,b\}\subseteq V$ such that $a$ and $b$ are not distinguished by any vertex in $S$, i.e., $D(a,b)\cap S= \emptyset$, or equivalently, $|\delta(u,a)-\delta(u,b)|\leq 1$ for all $u\in S$.
For any edge $(a,b)\in E$, it is easy to see that $|\delta(u,a)-\delta(u,b)|\leq 1$ for all $u\in V$, which implies that $\{a,b\}\in \hat E$.
Hence the following inclusion property.
\begin{fact}
\label{fact:hat-E}
$E\subseteq \hat E$.
\end{fact}
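To make the two-stage structure concrete, the following Python sketch (ours) implements \textsc{Simple}\xspace against a hypothetical distance oracle \texttt{query(u, v)}; all names are illustrative and do not come from an existing implementation.
\begin{verbatim}
# Sketch of Simple (ours). `query(u, v)` is an assumed oracle returning
# the graph distance delta(u, v); one extra query per surviving pair
# decides adjacency, so the total cost is n*s + |E_hat|.
import itertools
import random

def simple(V, s, query):
    S = [random.choice(V) for _ in range(s)]   # s centers, with replacement
    dist = {u: {v: query(u, v) for v in V} for u in S}
    E_hat = [(a, b) for a, b in itertools.combinations(V, 2)
             if all(abs(dist[u][a] - dist[u][b]) <= 1 for u in S)]
    return [(a, b) for a, b in E_hat if query(a, b) == 1]
\end{verbatim}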
We show that \textsc{Simple}\xspace is correct and we give a preliminary analysis on its query complexity as well as on its round complexity, in \cref{lem:query}.
\begin{lemma}
\label{lem:query}
The output of \textsc{Simple}\xspace (\cref{algo:main}) equals the edge set $E$.
The number of distance queries in \textsc{Simple}\xspace is $n\cdot s+|\hat E|$.
In addition, \textsc{Simple}\xspace can be parallelized using 2 rounds.
\end{lemma}
\begin{proof}
The output of \textsc{Simple}\xspace consists of the vertex pairs $\{a,b\}\in \hat E$ such that $\{a,b\}$ is an edge in $E$.
Since $E\subseteq \hat E$ (\cref{fact:hat-E}), the output of \textsc{Simple}\xspace equals the edge set $E$.
Observe that the distance queries in \textsc{Simple}\xspace are performed in two stages.
The number of distance queries in the first stage is $|V|\cdot |S|=n\cdot s$.
The number of distance queries in the second stage is $|\hat E|$.
Thus the query complexity of \textsc{Simple}\xspace is $n\cdot s+|\hat E|$.
The distance queries in each of the two stages can be performed in parallel, so \textsc{Simple}\xspace can be parallelized using 2 rounds.
\end{proof}
From \cref{lem:query}, in order to further study the query complexity of \textsc{Simple}\xspace, it suffices to analyze $|\hat E|$, which equals $|E|+|\hat E\setminus E|$ according to \cref{fact:hat-E}.
Since $|E|\leq \Delta n$ in a graph of bounded degree $\Delta$, our focus in the subsequent analysis is $|\hat E\setminus E|$.
\begin{lemma}
\label{lem:B}
Let $s=\omega(\log n)$ be an integer parameter.
Let $B$ be the set of vertex pairs $\{a,b\}\subseteq V$ such that $\delta(a,b)\geq 2$ and $|D(a,b)|\leq 3n\cdot (\log n)/s$.
We have $\expect{S}{|\hat E\setminus E|}\leq |B|+o(1)$.
\end{lemma}
\begin{proof}
Denote $Z$ as the set $\hat E\setminus E$.
Observe that $|Z|\leq |B|+|Z\setminus B|$.
Since $B$ is independent of the random set $S$, we have $\expect{S}{|Z|}\leq |B|+\expect{S}{|Z\setminus B|}$.
It suffices to show that $\expect{S}{|Z\setminus B|}=o(1)$.
We claim that for any vertex pair $\{a,b\}\subseteq V$ such that $\{a,b\}\notin B$, the probability that $\{a,b\}\in Z$ is $o(n^{-2})$.
To see this, fix a vertex pair $\{a,b\}\notin B$.
By definition of $B$, either $\delta(a,b)=1$, or $|D(a,b)|>3n\cdot (\log n)/s$.
In the first case, $\{a,b\}\notin Z$ since $Z$ does not contain any edge of $E$.
In the second case, the event $\{a,b\}\in Z$ would imply that $\{a,b\}\in \hat E$, hence $D(a,b)\cap S= \emptyset$. Therefore,
\begin{align*}
&\proba{S}{\{a,b\}\in Z\mid\{a,b\}\notin B}\\
\leq & \proba{S}{D(a,b)\cap S= \emptyset\mid\{a,b\}\notin B}\\
< &\left(1-\frac{3n\cdot (\log n)/s}{n}\right)^s\\
=& o(n^{-2}),
\end{align*}
where the second inequality follows since $|D(a,b)|>3n\cdot (\log n)/s$ and the set $S$ consists of $s$ vertices selected uniformly and independently at random, and the last step follows since $s=\omega(\log n)$.
There are at most $n(n-1)/2$ vertex pairs $\{a,b\}\notin B$.
By the linearity of expectation, the expected number of vertex pairs $\{a,b\}\notin B$ such that $\{a,b\}\in Z$ is at most $o(n^{-2})\cdot n(n-1)/2=o(1)$, so $\expect{S}{|Z\setminus B|}=o(1)$.
Therefore, $\expect{S}{|Z|}\leq |B|+\expect{S}{|Z\setminus B|}= |B|+o(1)$.
\end{proof}
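As a quick numerical illustration (ours, not part of the proof), the bound used above can be evaluated directly: with $s=\log^2 n$, the quantity $(1-3\log n/s)^s\cdot n(n-1)/2$ that dominates $\expect{S}{|Z\setminus B|}$ vanishes as $n$ grows.
\begin{verbatim}
# Illustrative check (ours) of the union bound with s = log^2 n.
import math

for n in [10**3, 10**4, 10**5, 10**6]:
    s = math.log(n) ** 2
    miss = (1 - 3 * math.log(n) / s) ** s   # P[S misses a large D(a,b)]
    print(n, miss * n * (n - 1) / 2)        # decreases towards 0
\end{verbatim}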
\section{Reconstruction of Random Regular Graphs (Proof of \cref{thm:random})}
\label{sec:random}
In this section, we analyze \textsc{Simple}\xspace (\cref{algo:main}) on random $\Delta$-regular graphs in the distance query model.
We assume that $\Delta\geq 2$ and that $\Delta n$ is even since otherwise those graphs do not exist.
We bound the expectation of $|\hat E\setminus E|$ on random $\Delta$-regular graphs, in \cref{lem:random}.
\begin{lemma}
\label{lem:random}
Let $G$ be a uniformly random $\Delta$-regular graph with $\Delta=O(1)$.
Let $s=\log^2 n$.
Let $S\subseteq V$ be a set of $s$ vertices selected uniformly and independently at random from $V$.
We have $\expect{G,S}{|\hat E\setminus E|}=o(1)$.
\end{lemma}
\begin{proof}[Proof of \cref{thm:random} using \cref{lem:random}]
By \cref{lem:query}, \textsc{Simple}\xspace is a reconstruction algorithm using $n\cdot s+|\hat E|=n\cdot\log^2 n+|\hat E|$ distance queries.
From \cref{fact:hat-E}, $|\hat E|=|E|+|\hat E\setminus E|$.
Since $G$ is $\Delta$-regular, $|E|=\Delta n/2$.
By \cref{lem:random}, $\expect{G,S}{|\hat E\setminus E|}=o(1)$.
Therefore, the expected number of distance queries in \textsc{Simple}\xspace is $n\cdot\log^2 n+\Delta n/2+o(1)$, which is $\tilde O(n)$ since $\Delta=O(1)$.
\end{proof}
It remains to prove \cref{lem:random} in the rest of this section.
\subsection{Configuration Model and the Structural Lemma}
We consider a random $\Delta$-regular graph generated according to the \emph{configuration model}~\cite{bollobas1980probabilistic,wormald1999models}.
Given a partition of a set of $\Delta n$ points into $n$ cells $v_1, v_2, \dots, v_n$ of $\Delta$ points, a \emph{configuration} is a perfect matching of the points into $\Delta n/2$ pairs. It corresponds to a (not necessarily connected) \emph{multigraph} $G'$ in which the cells are regarded as vertices and the pairs as edges: a pair of points $\{x, y\}$ in the configuration corresponds to an edge $(v_i, v_j)$ of $G'$ where $x\in v_i$ and $y\in v_j$.
Since each $\Delta$-regular graph has exactly $(\Delta!)^n$ corresponding configurations, a $\Delta$-regular graph can be generated uniformly at random by rejection sampling: choose a configuration uniformly at random,\footnote{To generate a random configuration, the points in a pair can be chosen sequentially: the first point can be selected using any rule, as long as the second point in that pair is chosen uniformly from the remaining points.} and reject the result if the corresponding multigraph $G'$ is not simple or not connected.
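The rejection-sampling procedure just described is short to implement; the sketch below (ours) uses \texttt{networkx} only for the simplicity and connectivity tests.
\begin{verbatim}
# Sketch (ours) of the configuration model with rejection sampling.
# Requires Delta*n even. Pairing off a uniformly shuffled list of
# points yields a uniformly random configuration.
import random
import networkx as nx

def random_regular(n, delta):
    assert (n * delta) % 2 == 0
    while True:
        points = [v for v in range(n) for _ in range(delta)]
        random.shuffle(points)
        G = nx.MultiGraph()
        G.add_nodes_from(range(n))
        G.add_edges_from(zip(points[0::2], points[1::2]))
        simple = (nx.number_of_selfloops(G) == 0 and
                  all(G.number_of_edges(u, v) == 1 for u, v in G.edges()))
        if simple and nx.is_connected(G):
            return nx.Graph(G)   # uniform over connected Delta-regular graphs
\end{verbatim}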
The configuration model enables us to show properties of a random $\Delta$-regular graph by analyzing a multigraph $G'$ corresponding to a random configuration.
Based on the configuration model, we are ready to state the following \emph{Structural Lemma}, which is central in our analysis.
\begin{lemma}[Structural Lemma]
\label{lem:Fvw}
Let $\Delta=O(1)$ be such that $\Delta\geq 3$.
Let $G'$ be a multigraph corresponding to a uniformly random configuration.
Let $\{v,w\}$ be a vertex pair in $G'$ such that $\delta(v,w)\geq 2$.
With probability $1-o(n^{-2})$, we have $|D(v,w)|>3n/\log n$.
\end{lemma}
In \cref{sec:proof:Fvw}, we prove the Structural Lemma, and in \cref{sec:proof-random}, we show \cref{lem:random} using the Structural Lemma.
\subsection{Proof of the Structural Lemma (\cref{lem:Fvw})}
\label{sec:proof:Fvw}
Let $G'$ be a multigraph corresponding to a uniformly random configuration, and let $V$ be the vertex set of $G'$.
Let $\{v,w\}\subseteq V$ be a vertex pair such that $\delta(v,w)\geq 2$.
For a vertex $x\in V$, denote $\ell(x)\in \mathbb{Z}$ as the distance in $G'$ between $x$ and the vertex pair $\{v,w\}$, i.e., $\ell(x)=\min(\delta(x,v),\delta(x,w))$.
For any integer $k\geq 0$, denote $U_k\subseteq V$ as the set of vertices $x\in V$ such that $\ell(x)=k$.
Denote $U_{\leq k}=\bigcup_{j\leq k}U_j$.
To construct the multigraph $G'$ from a random configuration, we borrow the approach from~\cite{bollobas1982distinguishing}, which proceeds in $n$ phases to construct the edges in $G'$, exploring vertices $x\in V$ in non-decreasing order of $\ell(x)$.
We start at the vertices of $U_0=\{v,w\}$.
Initially, i.e., in the $0$-th phase, we construct all the edges incident to $v$ or incident to $w$.
Suppose at the beginning of the $k$-th phase, for each $k\in[1,n-1]$, we have constructed all the edges with at least one endpoint belonging to $U_{\leq k-1}$.
During the $k$-th phase, we construct the edges incident to the vertices in $U_k$ one by one, till the degree of all the vertices in $U_k$ reaches $\Delta$.
The ordering of the edge construction within the same phase is arbitrary.
Let $G'$ be the resulting multigraph at the end of the construction.\footnote{When a multigraph corresponding to a random configuration is not connected, the resulting $G'$ consists of the union of the components of $v$ and of $w$, respectively, in that multigraph.
Note that any vertex $x\in V$ outside those two components cannot distinguish $v$ and $w$ (i.e., $x\notin D(v,w)$), thus $x$ is irrelevant to $|D(v,w)|$ in the statement of \cref{lem:Fvw}.}
The ordering of the edges in $G'$ is defined according to the above edge construction.
An edge $(a,b)$ in $G'$ is \emph{indispensable} if it explores either the vertex $a$ or the vertex $b$ for the first time in the edge construction.
In the first case, $b$ is the \emph{predecessor} of $a$; and in the second case, $a$ is the \emph{predecessor} of $b$.
An edge is \emph{dispensable} if it is not indispensable, in other words, if each of its endpoints either belongs to $\{v,w\}$ or is an endpoint of an edge constructed previously.
\begin{fact}
\label{fact:dispensable}
Neither $v$ nor $w$ has a predecessor.
For any vertex in $V$, its predecessor, if it exists, is unique.
If vertex $a$ is the predecessor of vertex $b$, then $\ell(b)=\ell(a)+1$.
\end{fact}
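Since the phases above explore $G'$ layer by layer from $\{v,w\}$, the indispensable edges form a BFS forest rooted at $v$ and $w$, and the dispensable edges are exactly the remaining ones. A sketch (ours; any tie-breaking within a phase is an admissible construction order):
\begin{verbatim}
# Counting dispensable edges (our sketch). `adj` maps each vertex to a
# list of (neighbor, edge_id) pairs of the multigraph; an edge is
# indispensable iff it explores one of its endpoints for the first time.
from collections import deque

def dispensable_edges(adj, v, w):
    predecessor = {v: None, w: None}
    dispensable, seen = [], set()
    queue = deque([v, w])
    while queue:
        a = queue.popleft()
        for b, eid in adj[a]:
            if eid in seen:
                continue
            seen.add(eid)
            if b not in predecessor:      # explores b: indispensable
                predecessor[b] = a
                queue.append(b)
            else:                         # both endpoints already reached
                dispensable.append(eid)
    return dispensable
\end{verbatim}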
We introduce the concept of \emph{interesting vertices}, which is a key idea in the analysis.
\begin{definition}[Interesting Vertices]
\label{def:interesting}
A vertex $x\in V$ is \emph{$v$-interesting} if, for all vertices $z\in V\setminus\{v\}$ with $\delta(v,z)+\delta(z,x)=\delta(v,x)$, the edges incident to $z$ are indispensable.
Similarly, a vertex $x\in V$ is \emph{$w$-interesting} if, for all vertices $z\in V\setminus\{w\}$ with $\delta(w,z)+\delta(z,x)=\delta(w,x)$, the edges incident to $z$ are indispensable.
\end{definition}
For any finite integer $k\geq 1$, let $I_k(v)\subseteq V$ denote the set of $v$-interesting vertices $x\in V$ such that $\delta(v,x)=k$, and let $I_k(w)\subseteq V$ denote the set of $w$-interesting vertices $x\in V$ such that $\delta(w,x)=k$.
We show in \cref{lem:interesting-distinguish} that interesting vertices distinguish the vertex pair $\{v,w\}$, and we provide a lower bound on the number of interesting vertices in \cref{lem:number-interesting}.
These two lemmas are the main technical novelties of our work.
Their proofs are in \cref{sec:proof-interesting-distinguish,sec:proof:number-interesting}, respectively.
\begin{lemma}
\label{lem:interesting-distinguish}
For any finite integer $k\geq 1$, we have $I_k(v)\cup I_k(w)\subseteq D(v,w)$.
\end{lemma}
\begin{lemma}
\label{lem:number-interesting}
Let $\Delta=O(1)$ be such that $\Delta\geq 3$.
Let $k$ be any positive integer such that $k\leq \lceil \log_{\Delta-1} (3n/\log n) \rceil+2$.
With probability $1-o(n^{-2})$, we have $|I_k(v)\cup I_k(w)|> (\Delta-2-o(1))(\Delta-1)^{k-1}.$
\end{lemma}
The Structural Lemma (\cref{lem:Fvw}) follows easily from \cref{lem:interesting-distinguish,lem:number-interesting}, see \cref{sec:proof-structral-lemma}.
\subsubsection{Proof of \cref{lem:interesting-distinguish}}
\label{sec:proof-interesting-distinguish}
Fix a finite integer $k\geq 1$.
From the symmetry of $v$ and $w$, it suffices to prove $I_k(v)\subseteq D(v,w)$.
Let $x$ be any vertex in $I_k(v)$.
By definition, $x$ is $v$-interesting and $\delta(v,x)=k$.
Let $a_0 =v,a_1,\dots,a_k=x$ be any shortest $v$-to-$x$ path.
For any vertex $a_i$ with $i\in[1,k]$, the edges incident to $a_i$ are indispensable according to \cref{def:interesting}.
We claim that, for any $i\in[1,k]$, $a_{i-1}$ is the predecessor of $a_i$, and in addition, $\ell(a_i)=i$.
The proof is by induction.
First, consider the case when $i=1$.
The edge $(a_0,a_1)$ is incident to the vertex $a_1$, so is indispensable.
Thus either $a_0$ is the predecessor of $a_1$, or $a_1$ is the predecessor of $a_0$.
Since $a_0$ $(=v)$ has no predecessor (\cref{fact:dispensable}), $a_1$ cannot be the predecessor of $a_0$, so $a_0$ is the predecessor of $a_1$.
Again using \cref{fact:dispensable}, we have $\ell(a_1)=\ell(a_0)+1$.
Since $\ell(a_0)=\ell(v)=0$, we have $\ell(a_1)=1$.
Next, consider the case when $i\geq 2$, and assume that the claim holds already for $1,\dots,i-1$.
The edge $(a_{i-1},a_i)$ is incident to the vertex $a_i$, so is indispensable.
Thus either $a_{i-1}$ is the predecessor of $a_i$, or $a_i$ is the predecessor of $a_{i-1}$.
By induction, $a_{i-2}$ is the predecessor of $a_{i-1}$.
Since the predecessor of $a_{i-1}$ is unique (\cref{fact:dispensable}), $a_i$ cannot be the predecessor of $a_{i-1}$, so $a_{i-1}$ is the predecessor of $a_i$.
Again using \cref{fact:dispensable}, we have $\ell(a_i)=\ell(a_{i-1})+1$.
Since $\ell(a_{i-1})=i-1$ by induction, we have $\ell(a_i)=i$.
In order to show that $x\in D(v,w)$, we prove in the following that $\delta(w,x)\geq k+2$.
Indeed, since $\delta(v,x)=k$, the event $\delta(w,x)\geq k+2$ implies that $x\in D(v,w)$ by \cref{def:distinguish}.\footnote{When $\delta(w,x)$ is infinite (i.e., $w$ and $x$ are not connected in $G'$), it is trivial that $x\in D(v,w)$, since $\delta(v,x)$ is finite. Therefore, it suffices to consider the case when $\delta(w,x)$ is finite in the rest of the proof.}
Let $b_0 =w,b_1,\dots,b_{k'}=x$ be any shortest $w$-to-$x$ path, for some integer $k'$.
See \cref{fig:dispensable}.
Let $i^*\in [0,k]$ be the largest integer such that $a_{k-j}=b_{k'-j}$ for all $j\in[0,i^*]$.
Let $z$ denote the vertex $a_{k-i^*}$, which equals $b_{k'-i^*}$.
If $i^*=k$, the $v$-to-$x$ path $a_0,a_1,\dots,a_k$ is a subpath of the $w$-to-$x$ path $b_0,b_1,\dots,b_{k'}$.
Since $\delta(w,v)\geq 2$, we have $\delta(w,x)=\delta(w,v)+\delta(v,x)\geq 2+k$, which implies that $x\in D(v,w)$.
From now on, it suffices to consider the case when $i^*<k$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{fig_dispensable}
\caption{
$a_0,a_1,\dots,a_k$ is a shortest $v$-to-$x$ path, and $b_0,b_1,\dots,b_{k'}$ is
a shortest $w$-to-$x$ path.
The vertex $z$ represents the branching point of these two paths.
Since the vertex $x$ is $v$-interesting, the highlighted edges are indispensable.
}
\label{fig:dispensable}
\end{figure}
Let $y$ denote the vertex $b_{k'-i^*-1}$.
Since $y$ is on a shortest $w$-to-$x$ path, we have
\begin{equation}
\label{eqn-wx}
\delta(w,x)=\delta(w,y)+\delta(y,x)=\delta(w,y)+(i^*+1)\geq \ell(y)+(i^*+1),
\end{equation}
where the inequality follows from the definition of $\ell(y)$.
It remains to analyze the value of $\ell(y)$.
The edge $(z,y)$ is incident to the vertex $z$ $(=a_{k-i^*})$, so is indispensable.
Thus either $y$ is the predecessor of $z$, or $z$ is the predecessor of $y$.
From the previous claim, $a_{k-i^*-1}$ is the predecessor of $z$.
Since the predecessor of $z$ is unique (\cref{fact:dispensable}) and $y\neq a_{k-i^*-1}$ (by definition of $i^*$), $y$ cannot be the predecessor of $z$, so $z$ is the predecessor of $y$.
Again by \cref{fact:dispensable}, $\ell(y)=\ell(z)+1$.
Since $\ell(z)=\ell(a_{k-i^*})=k-i^*$ by the previous claim, we have $\ell(y)=k-i^*+1$.
We conclude from \cref{eqn-wx} that \[\delta(w,x)\geq (k-i^*+1)+(i^*+1)=k+2,\] which implies that $x\in D(v,w)$.
We proved that $I_k(v)\subseteq D(v,w)$.
Similarly, $I_k(w)\subseteq D(v,w)$.
Therefore, $I_k(v)\cup I_k(w)\subseteq D(v,w)$.
We complete the proof of \cref{lem:interesting-distinguish}.
\subsubsection{Proof of \cref{lem:number-interesting}}
\label{sec:proof:number-interesting}
To begin with, we show that there are relatively few dispensable edges within a neighborhood of $\{v,w\}$.
This property, also called the \emph{locally tree-like} property, was previously exploited by Bollob{\'a}s~\cite{bollobas1982distinguishing} for three levels of neighborhoods on random $\Delta$-regular graphs in the context of automorphisms of those graphs.
In \cref{lem:dispensable}, we extend the analysis from \cite{bollobas1982distinguishing} to show the locally tree-like property for $M=\lceil\log\log n\rceil$ levels of neighborhoods.
\begin{lemma}
\label{lem:dispensable}
Let $M=\lceil\log\log n\rceil$.
We can construct two non-decreasing sequences $\{g_i\}_{1\leq i\leq M}$ and $\{L_i\}_{1\leq i\leq M}$, such that all of the following properties hold when $n$ is large enough:
\begin{enumerate}
\item \label{prop:g_i} $g_1=3$; and for any $i\in [2,M]$, $g_{i}=o\left((\Delta-1)^{L_{i-1}}/M\right)$;
\item \label{prop:ell_M} $L_M\geq \lceil \log_{\Delta-1} (3n/\log n) \rceil+2$;
\item \label{prop:dispensable} With probability $1-o(n^{-2})$, for all $i\in [1,M]$, strictly less than $g_i$ edges are dispensable among the edges incident to vertices in $U_{\leq L_i}$.
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of \cref{lem:dispensable}]
First, we define two sequences $\{g_i\}_{1\leq i\leq M}$ and $\{f_i\}_{1\leq i\leq M}$ as follows: $g_1=3$, $f_1=\left\lceil n^{1/8}\right\rceil$, and for each $i\in [2,M]$, let
\[g_i=\left\lceil n^{1-7/2^{i+1}}/(\log n)^{1/2}\right\rceil,\]
\[f_i=\left\lceil n^{1-7/2^{i+2}}/(\log n)^{1/3}\right\rceil.\]
Next, we define the sequence $\{L_i\}_{1\leq i\leq M}$ as follows: for each $i\in[1,M]$, let \[L_i=\left\lceil\log_{\Delta-1} f_i\right\rceil-6.\]
It is easy to see that all of the three sequences $\{g_i\}_{1\leq i\leq M}$, $\{f_i\}_{1\leq i\leq M}$, and $\{L_i\}_{1\leq i\leq M}$ are non-decreasing.
To show Property~\ref{prop:g_i} of the statement, observe that for any $i\in[2,M]$,
\[g_i\cdot M=\left\lceil n^{1-7/2^{i+1}}/(\log n)^{1/2}\right\rceil\cdot \lceil\log\log n\rceil=o\left( n^{1-7/2^{i+1}}/(\log n)^{1/3}\right).\]
Thus $g_i\cdot M=o(f_{i-1})$ by definition of $f_{i-1}$.
From the definition of $L_{i-1}$ and the fact that $\Delta=O(1)$, we have $f_{i-1}=\Theta\left((\Delta-1)^{L_{i-1}}\right)$.
Therefore, $g_i\cdot M =o\left((\Delta-1)^{L_{i-1}}\right)$, hence $g_i =o\left((\Delta-1)^{L_{i-1}}/M\right)$.
To show Property~\ref{prop:ell_M} of the statement, observe that
\[f_M\geq n^{1-7/2^{(\log\log n)+2}}/(\log n)^{1/3}=2^{-7/4}\cdot n/(\log n)^{1/3}>(\Delta-1)^8\cdot 3n/\log n,\]
where the last inequality follows since $\Delta=O(1)$ and $n$ is large enough. Therefore,
$L_M=\lceil\log_{\Delta-1} f_M\rceil-6\geq \lceil \log_{\Delta-1} (3n/\log n) \rceil+2$.
It remains to show Property~\ref{prop:dispensable} of the statement.
Consider any integer $i\in[1,M]$.
Since the graph is $\Delta$-regular, the number of vertices in $U_{\leq L_i}$ is at most \[2+2\Delta\cdot\sum_{j=0}^{L_i-1}(\Delta-1)^j=2+\frac{2\Delta(\Delta-1)^{L_i}-2\Delta}{\Delta-2}<\frac{2\Delta(\Delta-1)^{L_i}}{\Delta-2}.\]
Let $n_i$ be the number of edges incident to vertices in $U_{\leq L_i}$.
Since each vertex is incident to $\Delta$ edges, we have $n_i< \frac{2\Delta^2(\Delta-1)^{L_i}}{\Delta-2}$.
Since $L_i=\left\lceil\log_{\Delta-1} f_i\right\rceil-6<\left(\log_{\Delta-1} f_i\right)-5$, we have
$n_i<\frac{2\Delta^2}{(\Delta-2)(\Delta-1)^5} \cdot f_i$, which is less than $f_i$ since $\Delta\geq 3$.
In order to bound the number of dispensable edges incident to vertices in $U_{\leq L_i}$, it suffices to bound the number of dispensable edges
among the first $f_i$ edges in the ordering of edge construction.
For any integer $t\in [1,\Delta n/2]$, denote $p(t)$ as the probability that the $t$-th edge in the construction is dispensable.
We use the argument of Bollob{\'a}s~\cite{bollobas1982distinguishing} to bound $p(t)$ as follows.
Before constructing the $t$-th edge, the previously constructed $t-1$ edges are incident to at most $t+1$ vertices.
For each of these $t+1$ vertices, at most $\Delta-1$ incident edges are not yet constructed.
Thus $p(t)\leq \frac{(\Delta-1) (t+1)}{\Delta n-2(t-1)}$, which is less than $\frac{2t}{n}$ as soon as $t=o(n)$.
From the definition of $f_i$, we have $f_i\leq n/(\log n)^{1/3}=o(n)$, thus $p(f_i)<\frac{2f_i}{n}$.
The probability that there exist $g_i$ dispensable edges among the first $f_i$ edges is at most
\[\binom{f_i}{g_i}\cdot \left(\frac{2f_i}{n}\right)^{g_i}<\left(\frac{e\cdot f_i}{g_i}\right)^{g_i}\cdot \left(\frac{2f_i}{n}\right)^{g_i},\]
where the inequality follows from Stirling's formula.
When $i=1$, we have
\[\left(\frac{e\cdot f_1}{g_1}\right)^{g_1}\cdot \left(\frac{2f_1}{n}\right)^{g_1}= \left(\frac{2e\cdot \left(\lceil n^{1/8}\rceil\right)^2}{3n}\right)^3= o(n^{-17/8}),\] and when $i\geq 2$, we have
\[\left(\frac{e\cdot f_i}{g_i}\right)^{g_i}\cdot \left(\frac{2f_i}{n}\right)^{g_i}=O\left(\left(\frac{2e}{(\log n)^{1/6}}\right)^{g_i}\right)=o(n^{-17/8}),\] by definition of $g_i$ and $f_i$ and by observing that $g_i\geq n^{1/8}/(\log n)^{1/2}$ for any $i\geq 2$.
Thus for any $i\in [1,M]$, with probability $1-o(n^{-17/8})$, strictly less than $g_i$ edges are dispensable among the first $f_i$ edges, hence strictly less than $g_i$ edges are dispensable among the edges incident to vertices in $U_{\leq L_i}$.
Therefore, with probability $1-o(M\cdot n^{-17/8})=1-o(n^{-2})$, for all $i\in [1,M]$, strictly less than $g_i$ edges are dispensable among the edges incident to vertices in $U_{\leq L_i}$.
This completes the proof for Property~\ref{prop:dispensable} of the statement.
\end{proof}
We condition on the occurrence of the high probability event in Property~\ref{prop:dispensable} of \cref{lem:dispensable}.
Let $\mathcal{E}$ denote this event.
We say that a dispensable edge is \emph{trivial} if it is incident to $v$ or incident to $w$, and \emph{non-trivial} otherwise.
Let $E_0$ be the set of trivial dispensable edges.
Let $E_1$ be the set of non-trivial dispensable edges that are incident to vertices in $U_{\leq L_1}$.
The event $\mathcal{E}$ implies that strictly less than $g_1(=3)$ edges are dispensable among the edges incident to vertices in $U_{\leq L_1}$.
Hence $|E_0|+|E_1|\leq 2$.
Let $F_0\subseteq U_1$ be the set of vertices $u\in U_1$ such that $u$ is not incident to any trivial dispensable edge.
We claim that $|F_0|\geq 2\Delta-2|E_0|$.
If $E_0=\emptyset$, it is clear that $|F_0|=2\Delta$.
If $E_0\neq \emptyset$, there are three cases for each trivial dispensable edge in $E_0$: (1) a self-loop at $v$ or at $w$, (2) a parallel edge incident to $v$ or incident to $w$, and (3) an edge $(v,u)$ when $u$ is a neighbor of $w$, or an edge $(w,u)$ when $u$ is a neighbor of $v$.
In all the three cases, the existence of each trivial dispensable edge in $E_0$ decreases the size of $F_0$ by at most 2.
Hence $|F_0|\geq 2\Delta-2|E_0|$.
For each $u\in F_0$, define \[T(u)=\{x\in U_{\leq L_1}\mid \ell(x)=\delta(x,u)+1\}.\]
Let $F\subseteq F_0$ be the set of vertices $u\in F_0$ such that $T(u)$ contains no vertex incident to a dispensable edge in $E_1$.
Since each dispensable edge in $E_1$ is incident to two vertices, we have \[|F|\geq |F_0|-2|E_1|\geq 2\Delta-2|E_0|-2|E_1|\geq 2(\Delta-2).\]
Since $F\subseteq F_0\subseteq U_1$, one of $v$ and $w$ has at least $|F|/2\geq \Delta-2$ neighbors in $F$.
Without loss of generality, we assume that $v$ has at least $\Delta-2$ neighbors in $F$.
We show that, under this assumption, $|I_k(v)|\geq (\Delta-2-o(1))(\Delta-1)^{k-1}$.
Our proof proceeds in increasing order on $k\geq 1$.
First, consider any integer $k\in[1,L_1]$.
Let $u$ be any neighbor of $v$ in $F$.
Since $T(u)$ contains no vertex incident to a dispensable edge, $T(u)$ corresponds to a complete $(\Delta-1)$-ary tree.
Consider any vertex $x\in T(u)$ such that $\delta(v,x)=k$.
Any vertex $z\in V\setminus\{v\}$ such that $\delta(v,z)+\delta(z,x)=\delta(v,x)$ belongs to the (unique) shortest $x$-to-$u$ path.
Since the shortest $x$-to-$u$ path is completely within $T(u)$, we have $z\in T(u)$, thus the edges incident to $z$ are indispensable.
Hence $x$ is $v$-interesting according to \cref{def:interesting}.
Since $\delta(v,x)=k$, we have $x\in I_k(v)$.
There are at least $\Delta-2$ choices of $u$, and for a fixed $u$, there are $(\Delta-1)^{k-1}$ choices of $x$.
Therefore, the size of $I_k(v)$ is at least $(\Delta-2)(\Delta-1)^{k-1}$.
Next, consider any integer $k\in [L_1+1,L_2]$.
For any vertex $x\in I_{L_1}(v)$, define
\[T'(x)=\{y\in U_{\leq L_2}\mid \ell(y)=\delta(y,x)+L_1\}.\]
Let $F'\subseteq I_{L_1}(v)$ be the set of vertices $x\in I_{L_1}(v)$ such that $T'(x)$ contains no vertex incident to a dispensable edge.
The event $\mathcal{E}$ implies that strictly less than $g_2$ dispensable edges are incident to vertices in $U_{\leq L_2}$.
Since each dispensable edge is incident to two vertices, we have $|F'|> |I_{L_1}(v)|-2g_2$.
Let $x$ be any vertex in $F'$.
Since $T'(x)$ contains no vertex incident to a dispensable edge, $T'(x)$ corresponds to a complete $(\Delta-1)$-ary tree.
Consider any vertex $y\in T'(x)$ such that $\delta(v,y)=k$.
Any vertex $z\in V\setminus\{v\}$ such that $\delta(v,z)+\delta(z,y)=\delta(v,y)$ belongs either to the (unique) shortest $x$-to-$v$ path or to the (unique) shortest $y$-to-$x$ path.
In the first case, since $x$ is $v$-interesting, the edges incident to $z$ are indispensable by \cref{def:interesting}.
In the second case, since the shortest $y$-to-$x$ path is completely within $T'(x)$, we have $z\in T'(x)$, thus the edges incident to $z$ are indispensable.
Hence $y$ is $v$-interesting according to \cref{def:interesting}.
Since $\delta(v,y)=k$, we have $y\in I_k(v)$.
There are $|F'|> |I_{L_1}(v)|-2g_2$ choices of $x$, and for a fixed $x$, there are $(\Delta-1)^{k-L_1}$ choices of $y$.
Therefore,
\begin{align*}
|I_k(v)|&> (|I_{L_1}(v)|-2g_2)\cdot (\Delta-1)^{k-L_1}\\
&\geq ((\Delta-2)(\Delta-1)^{L_1-1}-2g_2)\cdot (\Delta-1)^{k-L_1}\\
&= (\Delta-2-o(1/M))(\Delta-1)^{k-1},
\end{align*}
where the equality follows because $g_2=o((\Delta-1)^{L_1}/M)$ from \cref{lem:dispensable} and since $\Delta=O(1)$.
We move on to larger values of $k$.
Let $i$ be any integer in $[3,M]$.
From \cref{lem:dispensable}, strictly less than $g_i$ edges are dispensable among the edges incident to vertices in $U_{\leq L_i}$, and $g_{i}=o\left((\Delta-1)^{L_{i-1}}/M\right)$.
For any integer $k\in[L_{i-1}+1,L_{i}]$, by extending the previous argument, we have
\[|I_k(v)|> (\Delta-2-i\cdot o(1/M))(\Delta-1)^{k-1}=(\Delta-2-o(1))(\Delta-1)^{k-1}.\]
We conclude that for any $k\in[1,L_M]$, we have $|I_k(v)|\geq (\Delta-2-o(1))(\Delta-1)^{k-1}$.
In the other case that $w$ has at least $\Delta-2$ neighbors in $F$, similarly, we have $|I_k(w)|\geq (\Delta-2-o(1))(\Delta-1)^{k-1}.$
Hence $|I_k(v)\cup I_k(w)|\geq (\Delta-2-o(1))(\Delta-1)^{k-1}$.
The event $\mathcal{E}$, on which the above analysis is conditioned, occurs with probability $1-o(n^{-2})$ according to \cref{lem:dispensable}.
Therefore, with probability $1-o(n^{-2})$, we have
\[|I_k(v)\cup I_k(w)|\geq (\Delta-2-o(1))(\Delta-1)^{k-1}, \text{ for any } k\in[1,L_M].\]
Again by \cref{lem:dispensable}, we have $L_M\geq \lceil \log_{\Delta-1} (3n/\log n) \rceil+2$.
Thus the above inequality holds for any positive integer $k\leq \lceil \log_{\Delta-1} (3n/\log n) \rceil+2$.
We complete the proof of \cref{lem:number-interesting}.
\subsubsection{Proof of the Structural Lemma (\cref{lem:Fvw}) using \cref{lem:interesting-distinguish,lem:number-interesting}}
\label{sec:proof-structral-lemma}
We set $k=\lceil \log_{\Delta-1} (3n/\log n) \rceil+2$.
By \cref{lem:interesting-distinguish}, $|D(v,w)|\geq |I_{k}(v)\cup I_{k}(w)|.$
By \cref{lem:number-interesting}, with probability $1-o(n^{-2})$, we have \[|I_{k}(v)\cup I_{k}(w)|>(\Delta-2-o(1))(\Delta-1)^{k-1}\geq (\Delta-2-o(1))(\Delta-1)\cdot(3n/\log n),\]
where the last inequality follows from the definition of $k$.
Since $\Delta\geq 3$, we have $(\Delta-2-o(1))(\Delta-1)>1$.
Thus with probability $1-o(n^{-2})$, we have $|I_{k}(v)\cup I_{k}(w)|>3n/\log n$, which implies that $|D(v,w)|>3n/\log n$.
We complete the proof of \cref{lem:Fvw}.
\subsection{Proof of \cref{lem:random} using the Structural Lemma}
\label{sec:proof-random}
Let $G$ be a random graph and let $S$ be a random subset of vertices, both defined in the statement of \cref{lem:random}.
According to \cref{lem:B}, $\expect{G,S}{|\hat E\setminus E|}\leq \expect{G}{|B|}+o(1)$.
It suffices to prove that $\expect{G}{|B|}=o(1)$.
First, we consider the case when $\Delta=O(1)$ is such that $\Delta\geq 3$.
Our analysis is based on the configuration model.
Let $G'$ be a multigraph corresponding to a uniformly random configuration.
Let $\expect{G'}{|B|}$ denote the expected size of the set $B$ defined on $G'$.
Since each $\Delta$-regular graph corresponds to the same number of configurations and because the probability spaces of configurations and of $\Delta$-regular graphs, respectively, are uniform, we have
$\expect{G}{|B|}\leq \expect{G'}{|B|}/p$, where $p$ is the probability that $G'$ is both simple and connected.
According to~\cite{wormald1999models}, when $\Delta\geq 3$, $p\sim e^{(1-\Delta^2)/4}$, which is constant since $\Delta=O(1)$.
Thus $\expect{G}{|B|}=O(\expect{G'}{|B|})$.
In order to bound $\expect{G'}{|B|}$, consider any vertex pair $\{v,w\}$ in $G'$ such that $\delta(v,w)\geq 2$.
From the Structural Lemma (\cref{lem:Fvw}), the event $|D(v,w)|\leq 3n/\log n$ occurs with probability $o(n^{-2})$.
Equivalently, the event $|D(v,w)|\leq 3n\cdot (\log n)/s$ occurs with probability $o(n^{-2})$, since $s=\log^2 n$.
Thus the event $\{v,w\}\in B$ occurs with probability $o(n^{-2})$ according to the definition of $B$ in \cref{lem:B}.
There are $n(n-1)/2$ vertex pairs $\{v,w\}$ in $G'$.
By linearity of expectation, $\expect{G'}{|B|}$ is at most $o(n^{-2})\cdot n(n-1)/2=o(1)$.
Hence $\expect{G}{|B|}=O(\expect{G'}{|B|})=o(1)$.
In the special case when $\Delta=2$, a 2-regular graph $G$ is a ring.
Consider any vertex pair $\{v,w\}$ in $G$ such that $\delta(v,w)\geq 2$.
It is easy to see that at least $n-4$ vertices $u$ in the ring $G$ are such that $|\delta(u,v)-\delta(u,w)|> 1$, so $|D(v,w)|\geq n-4$ by \cref{def:distinguish}.
When $n$ is large enough, $n-4>3n/\log n$, so $|D(v,w)|>3n/\log n$.
Equivalently, we have $|D(v,w)|>3n\cdot (\log n)/s$, since $s=\log^2 n$.
Thus $\{v,w\}\notin B$ according to the definition of $B$ in \cref{lem:B}.
Therefore, $B=\emptyset$ and $\expect{G}{|B|}=0$.
We conclude that $\expect{G}{|B|}=o(1)$ for any $\Delta=O(1)$. Thus $\expect{G,S}{|\hat E\setminus E|}\leq \expect{G}{|B|}+o(1)=o(1)$.
We complete the proof of \cref{lem:random}.
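An end-to-end illustration (ours) ties the sketches together: sample a $\Delta$-regular graph with the configuration-model code above, run the first stage of \textsc{Simple}\xspace with $s=\lceil\log^2 n\rceil$, and count the surviving candidate pairs. Since the guarantee is asymptotic, at moderate $n$ a few spurious pairs may survive.
\begin{verbatim}
# End-to-end illustration (ours), reusing random_regular() from the
# configuration-model sketch above.
import itertools, math, random
import networkx as nx

G = random_regular(n=500, delta=3)
V = list(G.nodes())
s = math.ceil(math.log(len(V)) ** 2)
S = [random.choice(V) for _ in range(s)]
dist = {u: nx.single_source_shortest_path_length(G, u) for u in S}
E_hat = [(a, b) for a, b in itertools.combinations(V, 2)
         if all(abs(dist[u][a] - dist[u][b]) <= 1 for u in S)]
print(len(E_hat), G.number_of_edges())   # equal when no non-edge survives
\end{verbatim}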
\begin{remark}
\label{remark:extra-work}
With more care in the construction of the sequences in \cref{lem:dispensable}, we can improve the bound in Property~\ref{prop:ell_M} of \cref{lem:dispensable} to
\[L_M\geq \lceil \log_{\Delta-1} (3n/(\log \log n)^{2+\epsilon}) \rceil+2,\] for any $\epsilon>0$.
As a result, the range of $k$ in \cref{lem:number-interesting} can be extended to
$k\leq \lceil \log_{\Delta-1} (3n/(\log \log n)^{2+\epsilon}) \rceil+2$, and consequently, the event in \cref{lem:Fvw} can be replaced by $|D(v,w)|>3n/(\log \log n)^{2+\epsilon}$.
Therefore, \cref{lem:random} holds for $s=\log n \cdot(\log \log n)^{2+\epsilon}$.
This implies that the parameter $s$ in \cref{thm:random} can be reduced to $\log n \cdot(\log \log n)^{2+\epsilon}$.
\end{remark}
\section{Other Reconstruction Models (Proofs of \cref{cor:betweenness,cor:all-distances,cor:parallel})}
\label{sec:variant}
In this section, we study the reconstruction of random $\Delta$-regular graphs in the all-distances query model, in the betweenness query model, as well as in the parallel setting.
By extending the analysis from \cref{sec:random}, we observe that the set $\hat E$ constructed in \textsc{Simple}\xspace (\cref{algo:main}) equals the edge set $E$ with high probability, in \cref{lem:single-iteration}.
\begin{lemma}
\label{lem:single-iteration}
Let $G$ be a uniformly random $\Delta$-regular graph with $\Delta=O(1)$.
Let $s=\log^2 n$.
Let $S\subseteq V$ be a set of $s$ vertices selected uniformly and independently at random from $V$.
With probability $1-o(1)$, $|\hat E|=\Delta n/2$.
In addition, the event $|\hat E|=\Delta n/2$ implies $\hat E=E$.
\end{lemma}
\begin{proof}
From \cref{lem:random}, $\expect{G,S}{|\hat E\setminus E|}=o(1)$.
By Markov's inequality, the event that $|\hat E\setminus E|\geq 1$ occurs with probability $o(1)$.
Thus with probability $1-o(1)$, we have $\hat E\subseteq E$.
On the other hand, $E\subseteq \hat E$ by \cref{fact:hat-E}.
Therefore, the event that $\hat E= E$ occurs with probability $1-o(1)$, and this event occurs if and only if $|\hat E|=|E|$.
The statement follows since $|E|=\Delta n/2$ in a $\Delta$-regular graph.
\end{proof}
\subsection{A Modified Algorithm}
\cref{lem:single-iteration} enables us to design another reconstruction algorithm in the distance query model, called \textsc{Simple-Modified}\xspace, which is a modified version of \textsc{Simple}\xspace, see \cref{algo:variant}.
\textsc{Simple-Modified}\xspace repeatedly computes a set $\hat E$ as in \textsc{Simple}\xspace, until the size of $\hat E$ equals $\Delta n/2$.
The parameter $s$ is fixed to $\log^2 n$.
\begin{algorithm}
\caption{\textsc{Simple-Modified}\xspace$(V)$}
\label{algo:variant}
\begin{algorithmic}[1]
\Repeat
\State $S\gets$ sample of $s=\log^2 n$ vertices selected uniformly and independently at random from $V$
\For{$u\in S$ and $v\in V$}
\textsc{Query}$(u,v)$
\EndFor
\State $\hat E\gets$ set of vertex pairs $\{a,b\}\subseteq V$ such that, for all $u\in S$, $|\delta(u,a)-\delta(u,b)|\leq 1$
\Until $|\hat E|=\Delta n/2$
\State \Return $\hat E$
\end{algorithmic}
\end{algorithm}
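A sketch of \cref{algo:variant} (ours), reusing the hypothetical \texttt{query} oracle from the earlier sketch:
\begin{verbatim}
# Sketch of Simple-Modified (ours). `query(u, v)` is the assumed
# distance oracle; |E_hat| = delta*n/2 certifies E_hat = E, because E
# is always contained in E_hat and |E| = delta*n/2 for regular graphs.
import itertools, math, random

def simple_modified(V, delta, query):
    n = len(V)
    s = math.ceil(math.log(n) ** 2)
    while True:
        S = [random.choice(V) for _ in range(s)]
        dist = {u: {v: query(u, v) for v in V} for u in S}
        E_hat = [(a, b) for a, b in itertools.combinations(V, 2)
                 if all(abs(dist[u][a] - dist[u][b]) <= 1 for u in S)]
        if len(E_hat) == delta * n // 2:
            return E_hat
\end{verbatim}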
\begin{lemma}
\label{lem:variant}
Let $G$ be a uniformly random $\Delta$-regular graph with $\Delta=O(1)$.
In the distance query model, \textsc{Simple-Modified}\xspace (\cref{algo:variant}) is a reconstruction algorithm, i.e., its output equals the edge set $E$.
The expected number of iterations of the \textbf{repeat} loop in \textsc{Simple-Modified}\xspace is $1+o(1)$.
\end{lemma}
\begin{proof}
Upon termination of the \textbf{repeat} loop in \textsc{Simple-Modified}\xspace, we have $|\hat E|=\Delta n/2$, which implies $\hat E=E$ by \cref{lem:single-iteration}.
Thus the output of \textsc{Simple-Modified}\xspace equals the edge set $E$.
In each iteration of the \textbf{repeat} loop, the event that $|\hat E|=\Delta n/2$ occurs with probability $1-o(1)$ by \cref{lem:single-iteration}.
Thus the expected number of iterations of the \textbf{repeat} loop is $1+o(1)$.
\end{proof}
\subsection{All-Distances Query Model (Proof of \cref{cor:all-distances})}
By \cref{lem:variant}, \textsc{Simple-Modified}\xspace is a reconstruction algorithm in the distance query model.
We extend \textsc{Simple-Modified}\xspace to the all-distances query model.
Observe that in \textsc{Simple-Modified}\xspace, the distance queries are performed between each sampled vertex $u\in S$ and all vertices in the graph.
This is equivalent to a single query at each sampled vertex $u\in S$ in the all-distances query model. Hence each iteration of the \textbf{repeat} loop in \textsc{Simple-Modified}\xspace corresponds to $|S|=\log^2 n$ all-distances queries.
Again by \cref{lem:variant}, the expected number of iterations of the \textbf{repeat} loop in \textsc{Simple-Modified}\xspace is $1+o(1)$.
Therefore, in the all-distances query model, an algorithm equivalent to \textsc{Simple-Modified}\xspace reconstructs the graph using $(1+o(1))\cdot \log^2 n=O(\log^2 n)$ all-distances queries in expectation.
\subsection{Betweenness Query Model (Proof of \cref{cor:betweenness})}
In the betweenness query model, Abrahamsen~et~al.~\cite{abrahamsen2016graph} showed that $\tilde O(\Delta^2\cdot n)$ betweenness queries suffice to compute the distances from a given vertex to all vertices in the graph (it is implicit in Lemma~16 from~\cite{abrahamsen2016graph}), so an all-distances query can be simulated by $\tilde O(\Delta^2\cdot n)$ betweenness queries.
As a consequence of \cref{cor:all-distances}, we achieve a reconstruction algorithm using $\tilde O(\Delta^2\cdot n\cdot \log^2 n)=\tilde O(n)$ betweenness queries in expectation, since $\Delta=O(1)$.
\subsection{Parallel Setting (Proof of \cref{cor:parallel})}
By \cref{lem:variant}, \textsc{Simple-Modified}\xspace is a reconstruction algorithm in the distance query model.
We analyze \textsc{Simple-Modified}\xspace in the parallel setting.
Each iteration of the \textbf{repeat} loop consists of $n\cdot \log^2 n$ distance queries, and the distance queries within the same iteration of the \textbf{repeat} loop can be performed in parallel.
Again by \cref{lem:variant}, the expected number of iterations of the \textbf{repeat} loop in \textsc{Simple-Modified}\xspace is $1+o(1)$.
Thus the expected number of rounds in \textsc{Simple-Modified}\xspace is $1+o(1)$, and the expected number of distance queries in \textsc{Simple-Modified}\xspace is $(1+o(1))\cdot n \cdot\log^2 n=\tilde O(n)$.
\section{Metric Dimension (Proof of \cref{cor:metric_dimention})}
\label{sec:metric_dimension}
In this section, we study the metric dimension of random $\Delta$-regular graphs.
To begin with, we show an elementary structural property of random $\Delta$-regular graphs, in \cref{lem:metric-dimension}, based on a classical result on those graphs.
\begin{lemma}
\label{lem:metric-dimension}
Let $G=(V,E)$ be a uniformly random $\Delta$-regular graph with $\Delta=O(1)$.
With probability $1-o(1)$, for any edge $(a,b)$ of the graph $G$, there exists a vertex $c\in V\setminus \{a,b\}$ that is adjacent to $b$ but is not adjacent to $a$.
\end{lemma}
\begin{proof}
First, consider the case when $\Delta=2$.
A $2$-regular graph is a ring.
Let $(a,b)$ be any edge of the graph.
The vertex $b$ has two neighbors, the vertex $a$ and another vertex, let it be $c$.
We have $c\in V\setminus \{a,b\}$ and $c$ is not adjacent to $a$ (as soon as $n>3$).
The statement of the lemma follows.
Next, consider the case when $\Delta=O(1)$ is such that $\Delta\geq 3$.
Let $\mathcal{E}$ denote the event that, for any edge $(a,b)$ of $G$, there do not exist two vertices $c_1$ and $c_2$ in $G$, such that all of the 4 edges $(a,c_1),(a,c_2),(b,c_1),(b,c_2)$ belong to $G$.
We show that $\mathcal{E}$ occurs with probability $1-o(1)$.
Indeed, if for some edge $(a,b)$ of $G$, there exist two vertices $c_1$ and $c_2$ such that $(a,c_1),(a,c_2),(b,c_1),(b,c_2)$ are edges of $G$, then the induced subgraph on $\{a,b,c_1,c_2\}$ consists of at least 5 edges.
A classical result on random $\Delta$-regular graphs shows that, for any constant integer $k$, the probability that there exists an induced subgraph of $k$ vertices with at least $k+1$ edges is $o(1)$, see, e.g., Lemma~11.12 in \cite{frieze_karonski_2015}.
Therefore, $\mathcal{E}$ occurs with probability $1-o(1)$.
We condition on the occurrence of $\mathcal{E}$.
For any edge $(a,b)$ of $G$, let $N(a)$ be the set of $\Delta-1$ neighbors of $a$ that are different from $b$, and let $N(b)$ be the set of $\Delta-1$ neighbors of $b$ that are different from $a$.
Since $\Delta\geq 3$, we have $|N(a)|=|N(b)|\geq 2$.
The event $\mathcal{E}$ implies that $N(a)\neq N(b)$, so there exists a vertex $c\in N(b)\setminus N(a)$.
By definition, $c$ is adjacent to $b$ but is not adjacent to $a$, and $c\in V\setminus \{a,b\}$.
Since $\mathcal{E}$ occurs with probability $1-o(1)$, we conclude that, with probability $1-o(1)$, for any edge $(a,b)$ of the graph $G$, there exists a vertex $c\in V\setminus\{a,b\}$ that is adjacent to $b$ but is not adjacent to $a$.
\end{proof}
\begin{definition}[e.g., \cite{bailey2011base,chartrand2000resolvability}]
\label{def:resolving}
A subset of vertices $S\subseteq V$ is a \emph{resolving set} for a graph $G=(V,E)$ if, for any pair of vertices $\{a,b\}\subseteq V$, there is a vertex $u\in S$ such that $\delta(u,a)\neq \delta(u,b)$.
The \emph{metric dimension} of $G$ is the smallest size of a resolving set for $G$.
\end{definition}
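Checking \cref{def:resolving} directly is straightforward; a sketch (ours), using \texttt{networkx} shortest-path distances:
\begin{verbatim}
# Resolving-set test (our sketch): S resolves G iff no vertex pair is
# equidistant from every vertex of S.
import itertools
import networkx as nx

def is_resolving(G, S):
    dist = {u: nx.single_source_shortest_path_length(G, u) for u in S}
    return all(any(dist[u][a] != dist[u][b] for u in S)
               for a, b in itertools.combinations(G.nodes(), 2))
\end{verbatim}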
Based on the analysis of \textsc{Simple}\xspace from \cref{lem:single-iteration} and the structural property from \cref{lem:metric-dimension}, we show that, with high probability, a random subset of $\log^2 n$ vertices is a resolving set for a random $\Delta$-regular graph, in \cref{lem:landmarks}.
\begin{lemma}
\label{lem:landmarks}
Let $G=(V,E)$ be a uniformly random $\Delta$-regular graph with $\Delta=O(1)$.
Let $S\subseteq V$ be a sample of $s=\log^2 n$ vertices selected uniformly and independently at random from $V$.
With probability $1-o(1)$, the set $S$ is a resolving set for the graph $G$.
\end{lemma}
\begin{proof}
Let $\mathcal{E}_1$ denote the event that, for any edge $(a,b)$ of the graph $G$, there exists a vertex $c\in V\setminus\{a,b\}$ that is adjacent to $b$ but is not adjacent to $a$.
By \cref{lem:metric-dimension}, the event $\mathcal{E}_1$ occurs with probability $1-o(1)$.
Let $\mathcal{E}_2$ denote the event $\hat E=E$.
By \cref{lem:single-iteration}, the event $\mathcal{E}_2$ occurs with probability $1-o(1)$.
Thus with probability $1-o(1)$, both events $\mathcal{E}_1$ and $\mathcal{E}_2$ occur simultaneously.
We condition on the occurrences of both events $\mathcal{E}_1$ and $\mathcal{E}_2$ in the subsequent analysis.
First, consider any vertex pair $\{a,b\}\subseteq V$ such that $\delta(a,b)\geq 2$.
The event $\mathcal{E}_2$ implies that $\{a,b\}\notin \hat E$.
By definition, there exists some vertex $u\in S$ such that $|\delta(u,a)-\delta(u,b)|\geq 2$, which implies that $\delta(u,a)\neq \delta(u,b)$.
Next, consider any vertex pair $\{a,b\}\subseteq V$ such that $\delta(a,b)=1$.
The event $\mathcal{E}_1$ implies that there exists a vertex $c\in V\setminus\{a,b\}$ that is adjacent to $b$ but is not adjacent to $a$. Since $\delta(a,c)\geq 2$, the event $\mathcal{E}_2$ implies that $\{a,c\}\notin \hat E$.
By definition, there exists some vertex $u\in S$ such that $|\delta(u,a)-\delta(u,c)|\geq 2$.
Using the elementary inequality $|x-z|\leq|x-y|+|y-z|$ for any three real numbers $x$, $y$, and $z$, we have
\begin{align*}
|\delta(u,a)-\delta(u,b)|&\geq |\delta(u,a)-\delta(u,c)|-|\delta(u,b)-\delta(u,c)|\\
&\geq |\delta(u,a)-\delta(u,c)|-\delta(b,c)&\text{(by the triangle inequality)}\\
&\geq 2-\delta(b,c) & \text{(by definition of $u$)} \\
&\geq 1 & \text{(since $(b,c)$ is an edge).}
\end{align*}
Thus $\delta(u,a)\neq \delta(u,b)$.
Therefore, conditioned on the occurrences of both events $\mathcal{E}_1$ and $\mathcal{E}_2$, for any vertex pair $\{a,b\}\subseteq V$, there exists a vertex $u\in S$ such that $\delta(u,a)\neq \delta(u,b)$.
We conclude that, with probability $1-o(1)$, the set $S$ is a resolving set for $G$.
\end{proof}
From \cref{lem:landmarks}, with probability $1-o(1)$, the metric dimension of a random $\Delta$-regular graph is at most $\log^2 n$.
This completes the proof of \cref{cor:metric_dimention}. |
\section{Introduction}
The accurate tunability of time-dependent fields has allowed quantum systems to be accessed and manipulated through the resonant illumination of atoms, revealing interesting effects such as the possibility of trapping the atom in a non-absorbing coherent superposition ({\it dark state}), an effect known as Coherent Population Trapping
\cite{doubleres,gray,darkst}.
This effect has been applied to non-conducting states in quantum dots (QD) -- also known as {\it artificial atoms} -- for spinless electrons \cite{tobiasRenzoni,chu}, revealing several advantages for practical issues such as electronic current switching\cite{tobiasRenzoni} or decoherence probing\cite{michaelis}.
Great interest has recently been focused on the coherent control of electron spin states in the search for qubit candidates.
Within this scope, optical trapping of localized spins has been treated in self-assembled quantum dots\cite{economou} and achieved in diamond defects\cite{santori}. Electron spin states in QD's have been proposed as qubits because of their long spin decoherence and relaxation times\cite{loss,burkard}. The controlled
rotation of a single electron spin is one of the challenges for
quantum computation purposes. In combination with the recently
measured controlled exchange gate between two neighboring spins,
driven coherent spin rotations would permit universal quantum
operations.
Recently, experimental and theoretical efforts have been devoted to describing Electron Spin Resonance (ESR) in single\cite{engel}
and double quantum dots (DQD's)\cite{kopp2,laird}. There, an AC magnetic
field, $B_{\rm AC}$, with a frequency resonant with the Zeeman
splitting $\Delta$ induced by a DC magnetic field, $B_{\rm DC}$,
drives electrons to perform coherent spin rotations, which can be perturbed by electron spin flips induced by scattering processes such as spin-orbit or hyperfine interactions. These are
manifested as a damping of the oscillations. In particular,
hyperfine interaction between electron and nuclear spins induces
flip-flop transitions and an effective Zeeman splitting which adds
to the one induced by $B_{\rm DC}$\cite{naza,fran,kopp2}. The ESR mechanism also gives access to spin-orbit physics in the presence of AC electric fields\cite{meier,nowack} or to vibrational degrees of freedom in nano-mechanical resonators\cite{lambert}.
In the experiments of Ref. \cite{kopp2}, fast electric field
switching was required in order to reach the Coulomb blockade
regime and to manipulate the electron spin system. In the present
work we analyze theoretically a simpler configuration, easier to
perform experimentally than the one proposed in \cite{kopp2},
which does not require bringing the doubly occupied electronic state in the right dot into the Coulomb blockade configuration, and which consists of conventional tunnel spectroscopy in a DQD under crossed DC and AC magnetic fields, {\it without additional electric pulses}.
The main purpose of this paper is to analyze the spin dynamics and the tunneling current, and to propose for the first time how to trap electrons in a DQD performing coherent spin rotations driven by a resonant AC magnetic field, in a way that can be unambiguously detected by conventional tunneling spectroscopy measurements.
We also show how to trap electrons by means of resonant bichromatic magnetic fields in the case where the Zeeman splitting is different in the two QD's (as usually happens in the presence of hyperfine interaction).
We consider a DQD in the {\it spin blockade} regime\cite{weinmann}, i.e., inter-dot tunneling is suppressed due to the Pauli exclusion principle\cite{ono} when the electrons in the DQD have parallel spins.
This effect may be lifted, under certain conditions, by rotating the electron spins through the introduction of crossed $B_{\rm DC}$ and $B_{\rm AC}$. Then, when $B_{\rm AC}$ is resonant with the Zeeman-split levels, the electrons both rotate their spins within each QD and tunnel, performing spatial oscillations between the left and right QD.
The electronic current through such a system performs coherent
oscillations which depend non-trivially on both the AC intensity
and the inter-dot coupling.
We will see that, when the effective $B_{\rm DC}$ is
homogeneous throughout the sample, the current is quenched since
the system is coherently trapped in the triplet subspace ({\it dark subspace}) in spite of the driving field.
However, a finite current may flow as a consequence of spin
relaxation processes.
If $\Delta$ is different within each QD (which can be due to an inhomogeneous $B_{\rm DC}$, different g factors, or the presence of hyperfine interaction\cite{naza} with different intensity within each QD), $B_{\rm AC}$ is resonant only in one of them and the trapping is lifted.
Then, the off-resonance dynamics of the other electron should in principle affect the total dynamics of the system, and it should be included in a theoretical description not restricted to the Rotating Wave Approximation\cite{hanggi}, which is valid only at resonance. Finally, we will show that it is possible to trap the
electrons also in this configuration, where $\Delta$ is different
within each QD, by applying a bichromatic $B_{\rm AC}$, such that each
frequency matches the Zeeman splitting in each QD.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth,clip]{esr.ps}
\end{center}
\caption{\label{esquema}\small Schematic diagram of the DQD in the presence of crossed DC and AC magnetic fields.
}
\end{figure}
\section{Model}\label{model}
Our system consists of two weakly coupled QD's connected to two
fermionic leads, described by the model Hamiltonian:
\begin{equation}
\hat H(t)=\hat H_0+\hat{H}_{\rm LR}+\hat H_{\rm T}(t)+\hat H_{\rm
leads},
\end{equation}
where $\hat{H}_0=\sum_{i\sigma}\varepsilon_{i}\hat
c_{i\sigma}^\dagger\hat c_{i\sigma}+\sum_{i}U_{i}\hat
n_{i\uparrow}\hat n_{i\downarrow}+V\hat n_{\rm L}\hat n_{\rm R}$
describes the uncoupled DQD, $\hat{H}_{\rm
LR}=-\sum_{\sigma}\left(t_{\rm LR}\hat{c}_{\rm
L\sigma}^\dagger\hat{c}_{\rm R\sigma}+h.c.\right)$ is the
inter-dot coupling and $\hat H_{\rm T}=\sum_{l\in\{\rm
L,R\}k\sigma}(\gamma_{l}\hat{d}_{lk\sigma}^\dagger\hat{c}_{l\sigma}+h.c.)$
gives the tunneling between the DQD and the leads, described by:
$\hat{H}_{\rm leads}=\sum_{lk\sigma}\varepsilon_{lk}\hat
d_{lk\sigma}^\dagger\hat d_{lk\sigma}$, where $\varepsilon_i$ is
the energy of an electron located in dot $i$ and $U_i$ ($V$) is
the intra-dot (inter-dot) Coulomb repulsion. For simplicity, we
disregard the Heisenberg exchange interaction\cite{fran,ono}.
A finite exchange would slightly split the inter-dot singlet-triplet energy separation without qualitatively modifying the results presented here. The chemical potentials of the leads,
$\mu_i$, are such that only two electrons (one in each dot) are
allowed in the system:
$\varepsilon_i<\mu_i-V<\varepsilon_i+U_i$ and
$\mu_i<\varepsilon_i+2V$. In this configuration, the spin blockade
is manifested when a bias voltage is applied such that the state
with two electrons in the right dot (the one which contributes to
the current) is in resonance with those with one electron in each
dot. The current is then quenched when the electrons in each QD
have the same spin polarization and the Pauli exclusion principle prevents inter-dot tunneling\cite{ono}.
We now introduce a magnetic field with a
DC component along the $Z$ axis (which breaks the spin degeneracy by a Zeeman splitting $\Delta_i=g_iB_{z,i}$) and a
circularly polarized AC component in the perpendicular plane $XY$
that rotates the $Z$ component of the electron spin when its
frequency satisfies the resonance condition, $\omega=\Delta_i$:
\begin{equation}
\hat{H}_{\rm B}(t)=\sum_i\left[\Delta_iS_z^i+B_{\rm AC}\left(S_x^i\cos\omega
t+S_y^i\sin\omega t\right)\right],
\end{equation}
where ${\bf S}_i=(1/2)\sum_{\sigma \sigma'} c^\dagger_{i\sigma} {\bf
\sigma}_{\sigma \sigma'} c_{i\sigma'}$ are the spin operators of
each dot (see Fig. \ref{esquema}).
The dynamics of the system is given by the time evolution of the reduced density matrix elements,
whose equation of motion, within the Born-Markov approximation\cite{blum}, reads:
\begin{eqnarray}
\dot\rho_{ln}(t)&=&-i\langle l|[H_0+H_{\rm LR}+H_{\rm B}(t),\rho]|n\rangle\\
&&+\sum_{k\ne n}\left(\Gamma_{nk}\rho_{kk}-\Gamma_{kn}\rho_{nn}\right)\delta_{ln}-\Lambda_{ln}\rho_{ln}(1-\delta_{ln}).\nonumber
\end{eqnarray}
where the first term on the right-hand side accounts for the coherent dynamics within the double quantum dot. $\Gamma_{ln}$ is the transition rate from state $|n\rangle$ to $|l\rangle$, including the rates induced by the coupling to the leads, with $\Gamma_i=2\pi|\gamma_i|^2$ for processes through lead $i\in\{{\rm L,R}\}$, and eventual spin scattering processes (introduced phenomenologically through the spin relaxation rate $T_{1}^{-1}$~\cite{prb}). Decoherence appears through the term $\Lambda_{ln}=\frac{1}{2}\sum_k(\Gamma_{kl}+\Gamma_{kn})+T_2^{-1}$, where $T_2=0.1\,T_{1}$ is the intrinsic spin decoherence time. The evolution of the occupation probabilities is given by the diagonal elements of the density matrix.
In our configuration, the states relevant to the dynamics are:
$|0,\uparrow\rangle$, $|0,\downarrow\rangle$,
$|T_+\rangle=|\uparrow,\uparrow\rangle$,
$|T_-\rangle=|\downarrow,\downarrow\rangle$,
$|\uparrow,\downarrow\rangle$, $|\downarrow,\uparrow\rangle$,
$|S_R\rangle=|0,\uparrow\downarrow\rangle$. This latter state is the only one that contributes to tunneling to the right lead, so the current is given by:
\begin{equation}
I(t)=2e\Gamma_{\rm R}\rho_{S_{\rm R},S_{\rm R}}(t).
\end{equation}
Each coherent process is described by a {\it
Rabi-like} frequency. For instance, in the case of two {\it
isolated} spins, one in each QD, which are in resonance with
$B_{\rm AC}$
($\Delta_{\rm L}=\Delta_{\rm R}$), the oscillation
frequency is: $\Omega_{\rm AC}=2B_{\rm AC}$, see Appendix \ref{appesr}.
On the other hand, the inter-dot tunneling events can be
described by the resonance transitions between the states
$|\uparrow,\downarrow\rangle$, $|\downarrow,\uparrow\rangle$ and
$|S_{\rm R}\rangle$,
whose populations oscillate with a frequency $\Omega_{\rm T}=2\sqrt{2}t_{\rm LR}$, as shown in Appendix \ref{apptun}.
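To illustrate how the master equation and the current formula above can be integrated numerically, we include a minimal Python sketch. It is only a sketch under simplifying assumptions of ours: $\hbar=e=1$; the seven-state basis ordered as in the code comments; only the Zeeman contributions kept on the diagonal (the bias is assumed to align the one-electron-per-dot states with $|S_{\rm R}\rangle$); unidirectional lead rates (the right lead only empties $|S_{\rm R}\rangle$, the left lead only refills the left dot); no spin relaxation ($T_1^{-1}=0$); and a sign convention for $t_{\rm LR}$ chosen so that only the singlet combination couples to $|S_{\rm R}\rangle$. A bichromatic drive is obtained by simply adding a second term to \texttt{f} with its own frequency.
\begin{verbatim}
# Minimal sketch (ours) of the density-matrix dynamics; hbar = e = 1.
# Basis: 0:|0,up> 1:|0,dn> 2:|T+> 3:|T-> 4:|up,dn> 5:|dn,up> 6:|S_R>
import numpy as np

DIM = 7

def hamiltonian(t, dL, dR, bac, w, tlr):
    diag = [dR/2, -dR/2, (dL+dR)/2, -(dL+dR)/2, (dL-dR)/2, -(dL-dR)/2, 0.0]
    H = np.diag(np.array(diag, dtype=complex))
    f = 0.5 * bac * np.exp(-1j * w * t)   # S_+ part of the circular drive
    for up, dn in [(0, 1), (2, 5), (2, 4), (4, 3), (5, 3)]:
        H[up, dn] += f                    # |dn> has one spin flipped down
    H[4, 6] += -tlr                       # |up,dn> <-> |S_R>
    H[5, 6] += +tlr                       # sign: only the singlet couples
    return H + H.conj().T - np.diag(diag)

def evolve(rho, T, dt, gl, gr, t2inv, hpars):
    G = np.zeros((DIM, DIM))              # G[m, k]: rate of |k> -> |m>
    G[0, 6] = G[1, 6] = gr                # |S_R> empties to the right lead
    G[2, 0] = G[5, 0] = gl                # left lead refills the left dot
    G[4, 1] = G[3, 1] = gl
    out = G.sum(axis=0)                   # total escape rate of each state
    lam = 0.5 * (out[:, None] + out[None, :]) + t2inv
    mask = 1.0 - np.eye(DIM)
    current = []
    for k in range(int(T / dt)):
        H = hamiltonian(k * dt, **hpars)
        pops = rho.diagonal().real
        drho = -1j * (H @ rho - rho @ H)
        drho += np.diag(G @ pops - out * pops)
        drho -= lam * rho * mask
        rho = rho + dt * drho             # forward Euler; use RK4 in production
        current.append(2 * gr * rho[6, 6].real)
    return np.array(current)

# Example: start in |up,dn>; illustrative parameter values in meV.
hpars = dict(dL=0.026, dR=0.026, bac=0.002, w=0.026, tlr=0.002)
rho0 = np.zeros((DIM, DIM), complex); rho0[4, 4] = 1.0
I_t = evolve(rho0, T=5.0e3, dt=0.05, gl=1e-3, gr=1e-3, t2inv=0.0, hpars=hpars)
\end{verbatim}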
\subsection{$\Delta_{\rm L}=\Delta_{\rm R}$}
We consider initially the case where
$B_{\rm DC}$ is homogeneous, so that
$\Delta_{\rm R}=\Delta_{\rm L}$ and both spins rotate simultaneously.
Then, the dynamics of the system is properly described in terms of
the dynamics of the total spin of the DQD. $B_{\rm AC}$ acts only
on the states with a finite total magnetic moment: $|T_\pm\rangle$
and
$|T_0\rangle=\frac{1}{\sqrt{2}}(|\uparrow,\downarrow\rangle+|\downarrow,\uparrow\rangle)$,
while inter-dot tunneling, which does not change the spin, is
only possible between $|S_{\rm R}\rangle$ and
$|S_0\rangle=\frac{1}{\sqrt{2}}(|\uparrow,\downarrow\rangle-|\downarrow,\uparrow\rangle)$.
Therefore, in the absence of spin relaxation, spin rotation and inter-dot hopping are independent processes, so any eventual singlet component will decay by tunneling to the contacts. This produces a finite current in the transient regime, which drops to zero at longer times. This process is independent of $B_{\rm AC}$, as manifested in the frequency of the current oscillations, $\Omega_{\rm T}$, cf. Fig. \ref{ds}a. Thus, for large enough times ($t\gg \Gamma_i^{-1}$), transport is suppressed and one electron will be confined in each QD. The electrons will be coherently trapped in the inter-dot triplet subspace, $T_\pm$, $T_0$ (dark subspace), and behave as an isolated single particle of angular momentum $S=1$ performing coherent spin rotations with a frequency $\Omega_{\rm AC}$ (Fig. \ref{ds}b).
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth,clip]{Itransient.ps}
\end{center}
\caption{\label{ds}\small (a) I(t) for initial state
$|\uparrow,\downarrow\rangle$ in the absence of spin relaxation
for $\Delta_{\rm L}=\Delta_{\rm R}=\Delta$ and $\Omega_{\rm
AC}=\Omega_{\rm T}/2$. (b) The corresponding occupation
probabilities: $|\uparrow,\downarrow\rangle$ (solid),
$|\downarrow,\uparrow\rangle$ (dash-dotted),
$|0,\uparrow\downarrow\rangle$ (dotted) and
$|\uparrow,\uparrow\rangle$ and $|\downarrow,\downarrow\rangle$
(dashed). Parameters ($e=\hbar=1$): $\Gamma_L=\Gamma_R=\Gamma=10^{-3} {\rm
meV}$, $T_{1(2)}^{-1}=0$, $\Omega_{\rm T}={\rm 11.2 GHz}$ and
holding for the rest of the plots (in meV):
$\varepsilon_{\rm L}=1.5$, $\varepsilon_{\rm R}=0.45$, $\Delta=0.026$ ($B_{\rm DC}\sim1T$), $U_{\rm L}=1$,
$U_{\rm R}=1.45$, $V=0.4$, $\mu_{\rm L}=2$ and $\mu_{\rm R}=1.1$.
}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=\linewidth,clip]{DLrelw.eps}
\end{center}
\caption{\label{DLrelw}\small Effect of (a) finite spin relaxation rates, $T_1^{-1}$, and (b) the Zeeman inhomogeneity, $\Delta_{\rm L}/\Delta_{\rm R}$, on the stationary current when tuning the frequency of the magnetic field. In (a), $\Delta_{\rm L}=\Delta_{\rm R}$; in (b), $T_1^{-1}=0$. (Same parameters as in Fig. \ref{ds} but $\Gamma=10^{-2}{\rm meV}$).
}
\end{figure}
A finite spin relaxation time mixes the dynamics of the singlet
and the triplet subspaces, so that inter-dot tunneling is
allowed and a finite current appears, cf. Fig. \ref{DLrelw}a. The shorter the spin relaxation time, the larger the singlet-triplet mixing and, therefore, the higher the current, cf. Fig. \ref{irel}a, up to relaxation rates fast enough to dominate the electron dynamics ($T_1^{-1}\gg\Omega_{\rm AC}$). In this case, ESR is no longer effective in rotating the spins and spin blockade is recovered, cf. Fig. \ref{irel}b.
Since both spin rotations and spatial delocalization are resonant
processes,
this singlet-triplet mixing produces complicated dynamics in the current that shows oscillations with a
frequency that depends both on the inter-dot coupling and the AC
field intensity, cf. Fig. \ref{irel}c.
When $B_{\rm AC}$
increases, the frequency of the current oscillations increases but
not linearly due to the interplay with the hopping. This effect is
small for long spin relaxation times.
\par
\subsection{$\Delta_{\rm L}\ne\Delta_{\rm R}$}
However, if one introduces an inhomogeneous $B_{\rm DC}$, so that
only one of the electrons is in resonance with $B_{\rm AC}$ (for
instance, $\omega=\Delta_{\rm R}\ne\Delta_{\rm L}$), the total spin symmetry is broken and then
the electron in each QD behaves differently. In fact, the states
$|\downarrow,\uparrow\rangle$ and $|\uparrow,\downarrow\rangle$
have different occupation probabilities
and
inter-dot hopping induces
the delocalization of the individual spins. This populates the state
$|S_{\rm R}\rangle$ and a finite current appears showing a double peak whose position shifts following the inhomogenity, cf. Fig. \ref{DLrelw}b. This double peak may be the origin of the under-resolved structure measured in Ref. \cite{kopp2}. By tuning the Zeeman splittings difference, the current presents an
anti-resonance of depth $\sim 0.1{\rm nA}$ near $\Delta_{\rm
L}=\Delta_{\rm R}$, cf. Fig. \ref{izl}a, pretty similar to the coherently trapped atom spectrum in quantum optics\cite{gray}. As expected, taking one
electron slightly out of resonance, the frequency of the current
oscillation is modified in comparison with the double resonance
situation. If one electron is far enough from resonance, the
frequency of the current oscillation becomes roughly half of its
value at the double resonance, as expected for the rotation of a single electron
spin, cf. Fig. \ref{izl}b. Otherwise, the off-resonant electron
complicated way depending on $B_{\rm AC}$, $t_{\rm LR}$ and how
much both dynamics are mixed (which is related to $\Delta_{\rm
L}-\Delta_{\rm R}$), cf. Fig. \ref{izl}c.
The limiting case when
$\Delta_{\rm L}$ and $\Delta_{\rm R}$ are very different and
only the electron in the right QD is affected effectively by $B_{\rm AC}$ is analyzed in Appendix \ref{appesrtun}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth,clip]{deltasIg.ps}
\end{center}
\caption{\label{irel}\small (a) I(t) for different spin-flip times (in $\mu{\rm s}$), with
$\Omega_{\rm AC}=\Omega_{\rm T}=11.2 {\rm GHz}$ and $\Delta_{\rm L}=\Delta_{\rm R}=\Delta$. The initial state here is
$|\uparrow,\uparrow\rangle$, then, for $T_1^{-1}=0$, there is no mixing
of the triplet and singlet subspaces and therefore, no current flows through the system. Spin relaxation
processes contribute to populate the singlet, producing a finite current. (b) Stationary
current as a function of spin relaxation time. For long $T_1$, electrons remain in the dark space. As $T_1$ decreases,
I begins to flow, being again suppressed for short enough $T_1$, as discussed in the text.
(c) I(t) for different ratios between the AC field intensity and the inter-dot hopping, i.e.,
between $\Omega_{\rm AC} $ and $\Omega_{\rm T}$, with $T_1\sim 0.1 \mu{\rm s}$. (Same parameters as in Fig. \ref{DLrelw}).
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth,clip]{deltasdist.ps}
\end{center}
\caption{\label{izl}\small (a) Dependence of the stationary
current on the DC field inhomogeneity $\Delta_{\rm L}-\Delta_{\rm
R}$ for different relaxation times (in $\mu{\rm s}$). The
quenching of the current for $\Delta_{\rm L}=\Delta_{\rm R}$ is
lifted by spin relaxation. (b) I(t) for different values of
$\Delta_{\rm L}/\Delta_{\rm R}$ when the electron in the right QD
is kept in resonance, in the absence of relaxation. A crossover to
the one electron spin resonance is observed by increasing the
difference between $\Delta_{\rm L}$ and $\Delta_{\rm R}=\Delta$.
(c) Dependence of the current oscillations on $B_{\rm AC}$ for
$\Delta_{\rm L}=0.99\Delta_{\rm R}$ and $T_1\sim 0.1\,\mu{\rm s}$. Same parameters as in Fig. \ref{irel}.
}
\end{figure}
\section{Bichromatic field}
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth,clip]{doscampos2.ps}
\end{center}
\caption{\label{2campos}\small (a) Transient current in the
presence of a bichromatic B, when $\omega_{1(2)}=\Delta_{\rm
L(R)}$ for: $\Delta_{\rm L}=\Delta_{\rm R}/2$ (solid) and
$\Delta_{\rm L}=0.9\Delta_{\rm R}$ (dotted) and $T^{-1}_1=0$. Left
inset: detail of the current oscillation. In the case where
$\Delta_{\rm L}=\Delta_{\rm R}/2$, I oscillates with $\Omega_{\rm
T}$ and presents faster superimposed oscillations (more
important for $\Delta_{\rm L}=0.9\Delta_{\rm R}$) coming from the
effect of each frequency on its off-resonance electron.
Right Inset: occupation probabilities for $\Delta_{\rm
L}=\Delta_{\rm R}/2$: $|\uparrow,\uparrow\rangle$ (solid),
$|\downarrow,\downarrow\rangle$ (dotted),
$|\uparrow,\downarrow\rangle\sim|\downarrow,\uparrow\rangle$
(dashed) and $|0,\uparrow\downarrow\rangle$ (dash-dotted, remaining very close to zero). The
occupation of $|0,\uparrow\downarrow\rangle$ drops to zero, and
therefore, I drops as well. At long times the electrons fall in a
coherent superposition of $|\uparrow,\uparrow\rangle$ and
$|\downarrow,\downarrow\rangle$. (b) Stationary current
as a function of $\omega_1$ when $\omega_2=\Delta_{\rm R}$, for
different relaxation rates, $T_1^{-1}$. I drops at $\omega_1=\Delta_{\rm L}$. ($\Delta_{\rm R}=\Delta$, $\Gamma=10^{-3}{\rm
meV}$, $\Omega_{\rm T}=1.12{\rm GHz}$).}
\end{figure}
There is a way of trapping the system in a dark state even for
different Zeeman splittings: introducing a bichromatic $B_{\rm
AC}$ with a second frequency that also brings the electron in the
left QD into resonance:
\begin{equation}\label{hb2}
\hat{H}_{\rm B}^{(2)}(t)=\sum_{{i={\rm L,R}}\atop{j={1,2}}}\left[\Delta_i\hat{S}_z^i+B_{\rm AC}\left(\hat{S}_x^i\cos\omega_j t+\hat{S}_y^i\sin\omega_j t\right)\right],
\end{equation}
with $\omega_1=\Delta_{\rm L}$ and
$\omega_2=\Delta_{\rm R}$. Then, each electron is resonant with
one of the field frequencies.
In this case, as $\Delta_i$ differs between the two QDs, $|T_0\rangle$ mixes
with $|S_0\rangle$ and a finite current flows until the electrons
fall in the superposition:
$|S_2\rangle=\frac{1}{\sqrt{2}}(|\uparrow,\uparrow\rangle-|\downarrow,\downarrow\rangle)$
which is not affected by the magnetic field except for
off-resonant oscillations that can be averaged out. In effect, if the non-resonant terms are disregarded, Eq. (\ref{hb2}) is reduced to $\hat{\tilde H}_{\rm
B,0}^{(2)}(t)=\sum_{i}\left[\Delta_i\hat{S}_z^i+B_{\rm AC}\left(\hat{S}_x^i\cos\Delta_it+\hat{S}_y^i\sin\Delta_i t\right)\right]$ and $\hat{\tilde H}_{\rm B,0}^{(2)}|S_2\rangle=0$. Then the
population of the states $|\uparrow,\downarrow\rangle$,
$|\downarrow,\uparrow\rangle$ and $|S_{\rm R}\rangle$ and, therefore, the current drop to zero, see Fig.~\ref{2campos}a. This transport
quenching also allows one to operate the system as a {\it current
switch} by tuning the frequencies of the AC fields (Fig.\ref{2campos}b), as well as to prepare the system in a definite superposition for further manipulation.
The application of a bichromatic magnetic field
provides a direct measurement of the Zeeman splittings of
the dots
by tuning the frequencies until the current is brought to a
minimum as in Fig. \ref{2campos}b.
Then, by {\it switching} one of the frequencies off
and tuning the Zeeman splitting by an additional $B_{\rm DC}$ in one
of the dots, the antiresonance configuration of Fig. \ref{izl}a
could be achieved. In this case, electrons in both QD's perform
coherent spin rotations, as shown in Fig. \ref{ds}b.
\section{Conclusions}
In summary, we present for the first time the complete electron spin dynamics in a
DQD, in the spin blockade regime, with up to two
extra electrons, where crossed DC and AC magnetic fields and a DC voltage are
applied. In the experimental setup that we propose, different Rabi oscillations
(due to the AC magnetic field and the inter-dot tunneling) compete:
The time dependent magnetic field produces coherent spin
rotations between spin up and down states while resonant inter-dot
hopping allows the spatial delocalization of the electrons. We show how the interplay between
coherent oscillations coming from inter-dot tunneling and those due to $B_{\rm AC}$ gives rise to
nontrivial electron dynamics which strongly depends on the
ratio between the different Rabi frequencies involved. We show as
well that if $\Delta$ has the same value for the left and the
right QD, electrons remain performing coherent spin rotations in
the $S=1$ subspace and current is quenched. This electron
trapping is removed by spin relaxation or inhomogeneous $B_{\rm
DC}$ and a finite current flows. {\it Measuring the current allows one to control coherent spin rotations} and also to
extract information on the spin relaxation time.
We propose as well how to block the current by a bichromatic magnetic field in a
DQD where the effective Zeeman splitting is different within each dot
(and where current would otherwise flow due to singlet-triplet mixing).
We demonstrate that the bichromatic field induces spin blockade
in this configuration and that the system evolves to
a stationary superposition of states, thus serving for spin rectification and state preparation.
Our results thus show that tunneling spectroscopy experiments in
DQDs under tunable mono- and bichromatic magnetic fields allow
one to drive electrons to perform coherent spin rotations which
{\it can be unambiguously detected} by measuring the tunneling
current.
We acknowledge J. I\~narrea and C. Emary for fruitful discussions. This
work has been supported by the MEC (Spain) under grant
MAT2005-06444 and by the EU Marie Curie Network: Project number
504574.
In this paper, we are concerned with some qualitative properties for nonnegative {\it $p$-map} solutions $\mathcal{U}=(u_1,\dots,u_p): \mathbb{R}^{n}\setminus\{0\}\rightarrow \mathbb{R}^p$ to the following fourth order system in the {\it punctured space},
\begin{equation}\label{oursystem}
\Delta^{2} u_{i}=c(n)|\mathcal{U}|^{2^{**}-2}u_{i} \quad {\rm in} \quad \mathbb{R}^{n}\setminus\{0\},
\end{equation}
where $n\geqslant5$,
$\Delta^{2}$ is the bi-Laplacian and $|\mathcal{U}|$ is the Euclidean norm, that is, $|\mathcal{U}|=(\sum_{i=1}^{p} u_i^2)^{1/2}$.
System \eqref{oursystem} is strongly coupled by the {\it Gross--Pitaevskii nonlinearity} $f_i(\mathcal{U})=c(n)|\mathcal{U}|^{2^{**}-2}u_i$ with associated potential $F(\mathcal{U})=(f_1(\mathcal{U}),\dots,f_p(\mathcal{U}))$, where $2^{**}=2n/(n-4)$ is the {\it critical Sobolev exponent} and $c(n)=[n(n-4)(n^2-4)]/{16}$ is a normalizing constant.
By a {\it classical solution} to \eqref{oursystem}, we mean a $p$-map $\mathcal{U}$ such that each component $u_i \in C^{4,\zeta}(\mathbb{R}^{n}\setminus\{0\})$, for some $\zeta\in(0,1)$, and it satisfies \eqref{oursystem} in the classical sense.
A solution may develop an isolated singularity at $x=0$, that is, some components may have a non-removable singularity at the origin.
More accurately, a solution $\mathcal{U}$ to \eqref{oursystem} is said to be {\it singular}, if there exists $i\in I:=\{1,\dots,p\}$ such that the origin is a {\it non-removable singularity} for $u_{i}$. Otherwise, if the origin is a {\it removable singularity} for all components $u_i$, a solution $\mathcal{U}$ is called {\it non-singular}, and $u_i$ can be extended continuously to the whole domain.
Let us notice that when $p=1$, \eqref{oursystem} becomes the following fourth order equation,
\begin{equation}\label{scalarsystem}
\Delta^{2}u=c(n)u^{2^{**}-1} \quad {\rm in} \quad \mathbb{R}^n\setminus\{0\}.
\end{equation}
In this sense, the Gross--Pitaevskii nonlinearity is the natural strong coupling term such that \eqref{oursystem} generalizes \eqref{scalarsystem}. Our objective is to present classification results for both non-singular and singular solutions to our conformally invariant system.
Our first main result on \eqref{oursystem} is motivated by the fundamental classification theorem due to C. S. Lin \cite[Theorem~1.3]{MR1611691} for solutions to \eqref{scalarsystem} with a removable singularity at the origin.
\begin{theoremletter}\label{theoremA}
Let $u$ be a nonnegative non-singular solution to \eqref{scalarsystem}. Then, there exist $x_0\in\mathbb{R}^n$ and $\mu>0$ such that $u$ is radially symmetric about $x_0$ and
\begin{equation}\label{sphericalfunctions}
u_{x_0,\mu}(x)=\left(\frac{2\mu}{1+\mu^{2}|x-x_0|^{2}}\right)^{\frac{n-4}{2}}.
\end{equation}
Let us call $u_{x_0,\mu}$ a fourth order spherical solution.
\end{theoremletter}
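As a quick sanity check, independent of the argument in \cite{MR1611691}, one can verify symbolically that $u_{x_0,\mu}$ solves \eqref{scalarsystem}: for radial functions the bi-Laplacian reduces to iterating $\Delta f=f''+\frac{n-1}{r}f'$. The following sympy snippet does so for the illustrative choice $n=6$ and $x_0=0$ (any fixed $n\geqslant5$ works):
\begin{verbatim}
import sympy as sp

r, mu = sp.symbols('r mu', positive=True)
n = 6                                   # any fixed dimension n >= 5
u = (2*mu/(1 + mu**2*r**2))**sp.Rational(n - 4, 2)

lap = lambda f: sp.diff(f, r, 2) + (n - 1)/r*sp.diff(f, r)
c = sp.Rational(n*(n - 4)*(n**2 - 4), 16)

residual = lap(lap(u)) - c*u**sp.Rational(n + 4, n - 4)
print(sp.simplify(residual))            # prints 0
\end{verbatim}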
This $(n+1)$-parameter family of solutions can also be regarded as maximizers for the Sobolev embedding theorem $\mathcal{D}^{2,2}(\mathbb{R}^n)\hookrightarrow L^{2^{**}}(\mathbb{R}^n)$, that is,
\begin{equation}\label{Sobolev-emb}
\|u_{x_0,\mu}\|_{L^{2^{**}}(\mathbb{R}^n)}= S(2,2,n)\|u_{x_0,\mu}\|_{\mathcal{D}^{2,2}(\mathbb{R}^n)},
\end{equation}
where $S(2,2,n)$ is the best Sobolev constant (see \eqref{fourthtalaub} below).
The existence of extremal functions for \eqref{Sobolev-emb} was obtained by P.-L. Lions \cite[Section~V.3]{MR778970}.
Besides, these optimizers were found in a more general setting by E. Lieb \cite[Theorem~3.1]{MR717827}, using an equivalent dual formulation.
Subsequently, in the fourth order case, D. E. Edmunds et al. \cite[Theorem~2.1]{MR1076074} completed this classification.
We also mention that X. Xu \cite[Theorem~1.1]{MR1769247} reproved this result using a moving spheres method, which is based on a superharmonicity property for non-singular solutions to \eqref{scalarsystem}.
On the second main result, we provide a classification theorem for singular solutions to \eqref{oursystem}.
On this subject, we should mention that when the origin is a non-removable singularity, C. S. Lin \cite[Theorem~1.4]{MR1611691} obtained radial symmetry for solutions to \eqref{scalarsystem} using the asymptotic moving planes technique. Recently, Z. Guo et al. \cite[Theorem~1.3]{MR4094467} proved the existence of periodic solutions applying a mountain pass theorem and conjectured that all solutions should be periodic.
Later, R. L. Frank and T. K\"onig \cite[Theorem~2]{MR3869387} answered this conjecture, obtaining more accurate results concerning the classification of global singular solutions to \eqref{scalarsystem}. More precisely, they used the Emden--Fowler change of coordinates (see Section~\ref{sec:cylindricaltransform}) to transform \eqref{scalarsystem} into the fourth order Cauchy problem,
\begin{equation}\label{fowler4order}
\begin{cases}
v^{(4)}-K_2v^{(2)}+K_0v=c(n)v^{2^{**}-1}\\
v(0)=a,\ v^{(1)}(0)=0,\ v^{(2)}(0)=b(a),\ v^{(3)}(0)=0,
\end{cases}
\end{equation}
where $K_2,K_0$ are constants depending on the dimension (see \eqref{coefficients}).
In that work, positive $T$-periodic solutions $v_{a,T}$ to \eqref{fowler4order} are obtained by a topological shooting method based on the parameter $b(a)$.
For the matter of existence, one needs to restrict to $a\in (0,a_0]$, where $a_0=[n(n-4)/(n^2-4)]^{(n-4)/8}$, and to $T\in (0,T_{a}]$, where $T_{a}\in\mathbb{R}$ is the fundamental period of $v_a$.
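The shooting procedure can be illustrated numerically. The sketch below (ours, included only as an illustration) integrates \eqref{fowler4order} with scipy and bisects on $b=v^{(2)}(0)$ between the two escaping behaviors, trapping a bounded orbit; for the coefficients we take $K_2=(n^2-4n+8)/2$ and $K_0=n^2(n-4)^2/16$, values consistent with the constant solution $v\equiv a_0$ above (the reader should compare with \eqref{coefficients}):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

n = 5
c_n = n*(n - 4)*(n**2 - 4)/16
K2, K0 = (n**2 - 4*n + 8)/2, n**2*(n - 4)**2/16   # assumed coefficients
s = (n + 4)/(n - 4)                               # = 2** - 1

def rhs(t, y):                                    # y = (v, v', v'', v''')
    return [y[1], y[2], y[3], K2*y[2] - K0*y[0] + c_n*max(y[0], 0.0)**s]

hit_zero = lambda t, y: y[0];        hit_zero.terminal = True
blow_up  = lambda t, y: y[0] - 10.0; blow_up.terminal = True

def classify(b, a, T=50.0):
    sol = solve_ivp(rhs, (0, T), [a, 0.0, b, 0.0],
                    events=(hit_zero, blow_up), rtol=1e-10, atol=1e-12)
    if sol.t_events[0].size: return -1            # v hits zero: b too small
    if sol.t_events[1].size: return +1            # v escapes: b too large
    return 0                                      # bounded on the window

a = 0.3                         # necksize, below a_0 ~ 0.836 for n = 5
lo, hi = -1.0, 1.0              # bracket for b(a), chosen by hand here
for _ in range(60):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if classify(mid, a) == -1 else (lo, mid)
print("b(a) ~", 0.5*(lo + hi))
\end{verbatim}
The surviving trajectory oscillates around the constant solution $a_0$, in agreement with the periodicity statement of Theorem~\ref{theoremB} below.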
\begin{theoremletter}\label{theoremB}
Let $u$ be a singular solution to \eqref{scalarsystem}. Then, $u$ is radially symmetric about the origin.
Moreover, there exist $a \in (0,a_0]$ and $T\in (0,T_{a}]$ such that
\begin{equation}\label{emden-folwersolution}
u_{a,T}(x)=|x|^{\frac{4-n}{2}}v_{a}(\ln|x|+T),
\end{equation}
where $v_{a}$ is the unique $T$-periodic bounded solution to \eqref{fowler4order} and $T_a\in\mathbb{R}$ its fundamental period. Let us call both $u_{a,T}$ and $v_{a,T}$ Emden--Fowler (or Delaunay-type) solutions.
\end{theoremletter}
Let us remark that, differently from Theorem~\ref{theoremA}, where the solutions are classified by an $(n+1)$-parameter family, in Theorem~\ref{theoremB} we have a two-parameter family of solutions.
However, we should mention that it is possible to construct an $n$-parameter family of deformations of \eqref{emden-folwersolution}, which are called the {\it deformed Emden--Fowler solutions} \cite[page 241]{MR1666838}, and are produced by composing three conformal transformations.
In this sense, the {\it necksize} $a\in (0,a_0]$ of a singular solution to \eqref{scalarsystem} plays a similar role as the parameter $\mu>0$ for the non-singular solutions to \eqref{scalarsystem}.
In the light of Theorems~\ref{theoremA} and \ref{theoremB}, we present our main results.
\begin{theorem}[Liouville--type]\label{theorem1}
Let $\mathcal{U}$ be a nonnegative non-singular solution to \eqref{oursystem}. Then, there exist $\Lambda \in\mathbb{S}^{p-1}_{+}=\{ x \in \mathbb{S}^{p-1} : x_i \geqslant 0 \}$ and a fourth order spherical solution $u_{x_0,\mu}$ given by \eqref{sphericalfunctions} such that
\begin{equation*}
\mathcal{U}=\Lambda u_{x_0,\mu}.
\end{equation*}
\end{theorem}
As an application, we show that the non-singular solutions classified above are the extremal maps for a higher order Sobolev-type inequality. Moreover, the best constant associated with this embedding coincides with that of the case $p=1$ \cite{MR2221095,MR2852264}.
\begin{theorem}[Classification]\label{theorem2}
Let $\mathcal{U}$ be a nonnegative singular solution to \eqref{oursystem}. Then, there exist $\Lambda^*\in\mathbb{S}^{p-1}_{+,*}=\{ x \in \mathbb{S}^{p-1} : x_i > 0 \}$ and an Emden--Fowler solution $u_{a,T}$ given by \eqref{emden-folwersolution} such that
\begin{equation*}
\mathcal{U}=\Lambda^* u_{a,T}.
\end{equation*}
\end{theorem}
The proofs of Theorems~\ref{theorem1} and \ref{theorem2} are based on sliding techniques, which show that all component solutions are rotationally invariant and radially monotonically decreasing.
The first step in our argument relies on the fact that no component solution can vanish unless it is identically zero.
In the singular case, we can prove that all components are strictly positive and blow up at the origin with the same prescribed rate.
Since singular solutions to the blow-up limit equation \eqref{oursystem} are the natural candidates for asymptotic models of the same system in the punctured ball, the last theorem is the first step in describing the local asymptotic behavior for positive singular solutions to
\begin{equation*}
\Delta^{2} u_{i}=c(n)|\mathcal{U}|^{2^{**}-2}u_{i} \quad {\rm in} \quad B^{n}_{1}\setminus\{0\}.
\end{equation*}
This asymptotic analysis would be a version of the celebrated Caffarelli--Gidas--Spruck \cite{MR982351} or Korevaar--Mazzeo--Pacard--Schoen \cite{MR1666838} result for the context of fourth order systems.
When $p=1$, we should mention that the subcritical cases of \eqref{scalarsystem} were addressed in \cite{MR1436822,MR4123335}.
However, the question about the asymptotic local behavior for singular solutions to \eqref{scalarsystem} near the isolated singularity remains unsolved; this problem is posed by R. L. Frank and T. K\"{o}nig \cite[page 1103]{MR3869387}.
We will pursue these asymptotics in an upcoming paper.
\begin{remark}\label{existence}
The existence of non-singular $($singular$)$ solutions to \eqref{oursystem} follows directly from Theorem~\ref{theoremA} $($Theorem~\ref{theoremB}$)$.
In fact, for any $\Lambda\in\mathbb{S}^{p-1}_{+}$ $($$\Lambda^*\in\mathbb{S}^{p-1}_{+,*}$$)$, we observe that $\mathcal{U}=\Lambda u_{x_0,\mu}$ $($$\mathcal{U}=\Lambda^* u_{a,T}$$)$ is a non-singular $($singular$)$ solution to \eqref{oursystem}.
Roughly speaking, our results classify these solutions as the only possible expressions for nontrivial solutions to \eqref{oursystem}.
We should also mention that the existence of nontrivial solutions for a more general class of potentials involving polyharmonic operators on Riemannian manifolds is proved in \cite{MR2528497}.
\end{remark}
The primary sources of difficulties in establishing qualitative properties for \eqref{oursystem} are the lack of a maximum principle and the failure of truncation methods caused by the bi-Laplacian on the left-hand side of \eqref{oursystem}, the strongly coupled setting caused by the Gross--Pitaevskii nonlinearity on the right-hand side of \eqref{oursystem}, and the non-removable singularity at the origin of the punctured space.
One of the first results on the classification for solutions to second order equations dates back to the seminal work of L. A. Caffarelli et al. \cite{MR982351} (see also \cite{MR1333503,MR1666838,MR1374197}). This challenging analysis for singular PDEs has been motivated by the classical papers \cite{lane,emden,fowler} regarding the {\it Lane--Emden--Fowler equation}
\begin{equation}\label{lane-emden}
-\Delta u=u^{s} \quad {\rm in} \quad \mathbb{R}^n\setminus\{0\},
\end{equation}
for $n\geqslant3$ and $s>1$, which models the distribution of mass density of a spherical polytropic star in hydrostatic equilibrium \cite{MR0092663}. In addition, when $s=2^{*}-1$, where $2^{*}:=2n/(n-2)$ is the critical Sobolev exponent, \eqref{lane-emden} corresponds, up to a constant, to the conformal scalar curvature equation, a famous problem in differential geometry, which can be set as
\begin{equation}\label{secscalarsystem}
-\Delta u =\frac{n(n-2)}{4}u^{2^{*}-1} \quad {\rm in} \quad \mathbb{R}^n\setminus\{0\}.
\end{equation}
It is well known that \eqref{secscalarsystem} is a particular case of the Yamabe problem on a non-compact complete Riemannian manifold $(M^n,g)$ with {\it simple structure at infinity}, that is, there exists $\widetilde{M}$ containing $M^n$ such that $M^n=\widetilde{M}\setminus \mathcal{Z}$, where $\mathcal{Z}$ is a closed subset called {\it the ends} of $M^n$. Thus, this problem can be reduced to obtaining a positive solution to the {\it singular Yamabe equation}
\begin{equation}\label{yamabe}
\begin{cases}
-\Delta_g u+\frac{(n-2)}{4(n-1)}R_gu=\frac{n(n-2)}{4}u^{2^{*}-1} \quad {\rm on} \quad \widetilde{M}\setminus \mathcal{Z}\\
\displaystyle\lim_{d_g(x,\mathcal{Z})\rightarrow0}u(x)=\infty,
\end{cases}
\end{equation}
where $-\Delta_g$ is the Laplace--Beltrami operator and $R_g$ is the scalar curvature.
In this way, \eqref{secscalarsystem} is related to \eqref{yamabe} when $\widetilde{M}=\mathbb{S}^{n-1}$ is the round sphere, and $\mathcal{Z}$ is a unique point.
The geometric operator $L_g:=-\Delta_g+\frac{(n-2)}{4(n-1)}R_g$ on the left-hand side of \eqref{yamabe} is the so-called {\it conformal Laplacian}.
The study of singular solutions to geometric equations like \eqref{yamabe} is related to the characterization of the size of the limit set of the image domain in $\mathbb{S}^{n}$ of the developing map for a locally conformally flat $n$-dimensional manifold, which was highlighted by the works of R. Schoen and S.-T. Yau \cite{MR929283,MR994021,MR931204}.
More specifically, conformal metrics $\bar{g}=u^{4/(n-2)}g$ with constant scalar curvature are generated by positive solutions to \eqref{yamabe}. These conformal metrics are complete whenever singular solutions to \eqref{yamabe} have a suitable blow-up rate close to the singular set. Then, from the geometric point of view, studying the local behavior of singular solutions to \eqref{yamabe} near the singular set is essential to understanding the asymptotic behavior of these conformal metrics at the ends of $M^n$.
In \cite{MR982351}, using ODE methods, it was proved that if $u$ is a non-singular solution to \eqref{secscalarsystem}, then there exist $x_0\in\mathbb{R}^n$ and $\mu>0$ such that
\begin{equation}\label{secaubintalenti}
u(x)=\left(\frac{2\mu}{1+\mu^2|x-x_0|^2}\right)^{\frac{n-2}{2}}.
\end{equation}
This classification result can be seen as a complement to the works of T. Aubin \cite{MR0448404} and G. Talenti \cite{MR0463908} (see also \cite{MR0303464,MR634248,MR544879}). Moreover, they also dealt with the singular case, proving that if $u$ is a singular solution to \eqref{secscalarsystem}, then there exist $a \in (0,[(n-2)/n]^{(n-2)/4}]$ and $T\in (0,T_a]$ such that
\begin{equation}\label{secfowler}
u(x)=|x|^{\frac{2-n}{2}}v_{a}(-\ln|x|+T),
\end{equation}
where $v_{a}$ is the unique $T$-periodic bounded solution to the following second order problem
\begin{equation}\label{secondorderfowler}
\begin{cases}
v^{(2)}-\displaystyle\frac{(n-2)^2}{4}v+\frac{n(n-2)}{4}v^{2^{*}-1}=0\\
v(0)=a,\ v^{(1)}(0)=0,
\end{cases}
\end{equation}
where $T_a\in\mathbb{R}$ is the fundamental period of $v_a$. In this situation, asymptotic properties for global solutions to \eqref{secondorderfowler} can be inferred using standard ODE methods, such as conservation of energy, phase-plane analysis and Floquet theory (or Bloch wave theory).
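For the reader's convenience, let us record the conserved quantity behind this phase-plane analysis: multiplying the equation in \eqref{secondorderfowler} by $v^{(1)}$ and integrating (a standard computation) yields the Hamiltonian energy
\begin{equation*}
H\left(v,v^{(1)}\right)=\frac{1}{2}\left(v^{(1)}\right)^{2}-\frac{(n-2)^{2}}{8}v^{2}+\frac{(n-2)^{2}}{8}v^{2^{*}},
\end{equation*}
which is constant along solutions to \eqref{secondorderfowler}; the bounded orbits $v_a$ correspond to the closed level sets of $H$ surrounding the constant solution.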
Now let us consider nonnegative $p$-map solutions $\mathcal{U}=(u_1,\dots,u_p): \mathbb{R}^{n}\setminus\{0\}\rightarrow \mathbb{R}^p$ to the following critical second order Gross--Pitaevskii system extending \eqref{secscalarsystem},
\begin{equation}\label{secondordersystem}
-\Delta u_{i}=\frac{n(n-2)}{4}|\mathcal{U}|^{2^{*}-2}u_{i} \quad {\rm in} \quad \mathbb{R}^{n}\setminus\{0\}.
\end{equation}
As in Remark~\ref{existence}, we observe that the existence of non-singular (singular) solutions to \eqref{secondordersystem} is a direct consequence of the results due to P.-L. Lions \cite{MR778970} (R. Fowler \cite{fowler}). Indeed, for every unit vector $\Lambda\in \mathbb{S}^{p-1}_+$ ($\Lambda^*\in\mathbb{S}^{p-1}_{+,*}$) with nonnegative (positive) coordinates and $u$ a non-singular (singular) solution to \eqref{secscalarsystem}, we have that $\mathcal{U}=\Lambda u$ is a non-singular (singular) solution to \eqref{secondordersystem}. Moreover, O. Druet et al. \cite[Proposition~1.1]{MR2558186} proved the Liouville-type theorem for System \eqref{secondordersystem} stated below. We also refer to \cite[Theorem~1]{MR2510000} for related results on integral systems with critical exponents.
\begin{theoremletter}\label{theoremC}
Let $\mathcal{U}$ be a non-singular solution to \eqref{secondordersystem}. Then, $\mathcal{U}=\Lambda u$ for some $\Lambda\in\mathbb{S}^{p-1}_{+}$, where $u$ is given by \eqref{secaubintalenti}.
\end{theoremletter}
At this point, a natural question that arises is whether Theorem~\ref{theoremC} still holds in the singular case. Recently, R. Caju et al. \cite[Theorem~1.2]{MR4002167} gave an affirmative answer to this question.
\begin{theoremletter}
Let $\mathcal{U}$ be a singular solution to \eqref{secondordersystem}. Then, $\mathcal{U}=\Lambda^* u$ for some $\Lambda^*\in\mathbb{S}^{p-1}_{+,*}$, where $u$ is given by \eqref{secfowler}.
\end{theoremletter}
Strongly coupled fourth order systems appear in several important branches of mathematical physics. For instance, in hydrodynamics, they are used to model the behavior of deep-water and rogue waves in the ocean \cite{dysthe,lo-mei}.
Also, in the Hartree--Fock theory for Bose--Einstein double condensates \cite{MR2040621,PhysRevLett.78.3594}.
Moreover, in conformal geometry \eqref{scalarsystem} can be seen as the limit equation to the $Q$-curvature problem for a non-compact complete Riemannian manifold $(M^n,g)$ with simple structure at infinity.
Hence, in the same way as for the singular Yamabe problem, solutions to \eqref{scalarsystem} give rise to complete conformal metrics on $M^n$ with constant $Q$-curvature.
For more details on the $Q$-curvature problem and its applications in general relativity, see, for instance, \cite{MR3618119,MR2149088}.
Motivated by its applications in nonlinear analysis, minimal surface theory, and differential geometry, classification for singular solutions to PDEs has been a topic of intense study in recent years. We should mention that there also exists a vast literature for related problems arising in conformal geometry. For instance, in prescribing different types of curvature, such as the $Q^k$-curvature \cite{MR1679783,MR1432813}, the fractional curvature \cite{MR3198648,MR3694655,arXiv:1804.00817}, and the $\sigma_k$-curvature \cite{MR2165306,MR2214582,MR2737708}. We emphasize that for each type above, the transformation law relating the curvature of two conformal metrics involves respectively: higher order operators (poly-Laplacian), nonlocal operators (fractional Laplacian) and fully nonlinear operators ($k$-Hessian).
Here is a brief description of our plan for the remaining part of the paper.
In Section~\ref{section2}, we summarize some basic definitions.
In Section~\ref{section3}, we prove that non-singular solutions to \eqref{oursystem} are classical and weakly positive.
In addition, we show that Theorem~\ref{theorem1} holds for weak solutions to \eqref{oursystem}.
Hence, we apply a moving spheres method to show that classical solutions are weak solutions as well. Besides, we also prove that the solutions from Theorem~\ref{theorem1} are extremal functions for a Sobolev embedding theorem.
In Section~\ref{section4}, we obtain that singular solutions are classical as well. Thus, we employ an asymptotic moving planes method to show that they are rotationally invariant about the origin. Therefore, in the singular case \eqref{oursystem} is equivalent to a fourth order ODE system on the real line.
In this direction, we use its Hamiltonian energy to define a suitable Pohozaev-type invariant. Finally, we perform a delicate ODE analysis to prove a removable-singularity classification for solutions to \eqref{oursystem} based on the sign of the Pohozaev invariant.
Then, as a direct consequence, we provide the proof of Theorem~\ref{theorem2}.
\section{Basic definitions}\label{section2}
In this section, let us introduce some basic definitions used in the remaining part of the text.
Here and subsequently, we always deal with nonnegative solutions $\mathcal{U}$ of \eqref{oursystem}, that is, $u_i \geqslant 0$ for all $i\in I$, where we recall the notation $I=\{1,\dots,p\}$.
Let us split the index set $I$ into two parts $I_0=\{i : u_i\equiv0\}$ and $I_{+}=\{i : u_i>0\}$.
Then, following standard notation for elliptic systems, we divide solutions to \eqref{oursystem} into two types:
\begin{definition}
Let $\mathcal{U}$ be a nonnegative solution to \eqref{oursystem}. We call $\mathcal{U}$ {\it strongly positive} if $I_+=I$. On the other hand, when $I_0\neq\emptyset$, we say that $\mathcal{U}$ is {\it weakly positive}.
\end{definition}
\begin{remark}
For the proof of Theorems~\ref{theorem1} and \ref{theorem2}, it is crucial to show that solutions to \eqref{oursystem} are weakly positive.
We need to guarantee that nontrivial solutions to \eqref{oursystem} do not develop zeros in the domain.
Namely, our strategy is to prove that the so-called {\it quotient function} $q_{ij}=u_i/u_j$ is constant for all $i,j\in I_+$. First, for the quotient to be well defined, it is necessary for the denominator to be strictly positive.
It was observed by E. Hebey \cite[Remark~1.1]{MR2237439} that, contrary to the case $p=1$, nonnegative solutions to some inhomogeneous elliptic coupled systems are not necessarily weakly positive, and thus not strongly positive either.
\end{remark}
\begin{definition}
Let $\Omega=\mathbb{R}^n$ be the whole space ($\Omega=\mathbb{R}^n\setminus\{0\}$ the punctured space), and let $\mathcal{U}$ be a non-singular (singular) solution to \eqref{oursystem}. We say that $\mathcal{U}$ is a {\it weak solution} if it belongs to $\mathcal{D}^{2,2}(\Omega,\mathbb{R}^p)$ and satisfies \eqref{oursystem} in the weak sense, that is,
\begin{equation}\label{weakformulattion}
\displaystyle\int_{\Omega}\Delta u_i\Delta \phi_i \mathrm{d} x=\displaystyle c(n)\int_{\Omega}|\mathcal{U}|^{2^{**}-2}u_{i}\phi_i \mathrm{d} x, \quad {\rm for \ all} \quad \Phi=(\phi_1,\dots,\phi_p)\in C^{\infty}_c(\Omega,\mathbb{R}^p).
\end{equation}
Here $\mathcal{D}^{2,2}(\Omega,\mathbb{R}^p)$ is the classical Beppo--Levi space, defined as the completion of the space of compactly supported smooth $p$-maps, denoted by $C^{\infty}_{c}(\Omega,\mathbb{R}^p)$, under the Dirichlet norm $\|\mathcal{U}\|_{\mathcal{D}^{2,2}(\Omega,\mathbb{R}^p)}^2=\sum_{i=1}^p\|\Delta u_i\|_{L^{2}(\Omega)}^2$.
\end{definition}
\begin{remark}\label{classical/weak}
In what follows, we use classical regularity theory to prove that any weak non-singular (singular) solution to \eqref{oursystem} is also a classical non-singular (singular) solution.
Since we are working on unbounded domains, it is not direct, though, to verify that classical solutions to \eqref{oursystem} are also weak.
In general, it is true that, by the Green identity, classical solutions $\mathcal{U}\in C^{4,\zeta}(\Omega,\mathbb{R}^p)$ also satisfy \eqref{weakformulattion}. Nevertheless, to show that $\mathcal{U}\in\mathcal{D}^{2,2}(\Omega,\mathbb{R}^p)$ is an entire solution to \eqref{oursystem}, one needs to prove some suitable decay at both the origin and infinity (see Subsection~\ref{proofoftheorem1}).
\end{remark}
\section{Liouville-type theorem for non-singular solutions}\label{section3}
This section is devoted to present the proof of Theorem~\ref{theorem1}.
Using the regularity lifting theorem based on \cite{MR1338474}, we aim to obtain regularity results for solutions to \eqref{oursystem} with a removable singularity at the origin.
Hence, employing an iteration argument from \cite{MR1769247}, we show that non-singular solutions to \eqref{oursystem} are weakly positive.
Besides, adopting a variational technique from O. Druet and E. Hebey \cite{MR2603801}, we prove that the Liouville-type result holds for weak solutions to \eqref{oursystem}.
Then, we perform a moving spheres technique from O. Druet et al. \cite{MR2558186} and Y. Li and L. Zhang \cite{MR2001065} to obtain that non-singular solutions to \eqref{oursystem} are rotationally invariant about some point.
This argument provides as a by-product an estimate for the Sobolev norm of solutions to \eqref{oursystem}, yielding that classical solutions to \eqref{oursystem} are also weak (see Remark~\ref{classical/weak}).
Finally, as an application of our main result, we show that non-singular solutions to \eqref{oursystem} are indeed extremal maps for the Sobolev embedding of the space $\mathcal{D}^{2,2}(\mathbb{R}^n,\mathbb{R}^p)$ into $L^{2^{**}}(\mathbb{R}^n,\mathbb{R}^p)$.
Since the origin is a removable singularity, System \eqref{oursystem} can be modeled in the entire space, in the sense that solutions can be smoothly extended to be defined in $\mathbb{R}^n$. In this situation, \eqref{oursystem} is reduced to
\begin{equation}\label{regularsystem}
\Delta^{2}u_i=c(n)|\mathcal{U}|^{2^{**}-2}u_i \quad {\rm in} \quad \mathbb{R}^n.
\end{equation}
Subsequently, the idea is to provide some properties for solutions to \eqref{regularsystem} by writing this system as a nonlinear fourth order Schr\"{o}dinger equation
with potential $V: \mathbb{R}^n\rightarrow\mathbb{R}$ defined by
\begin{equation*}
V(x)=c(n)|\mathcal{U}(x)|^{2^{**}-2}.
\end{equation*}
\subsection{Regularity}
Initially, we prove that weak solutions to \eqref{oursystem} are classical solutions as well.
We should mention that De Giorgi--Nash--Moser bootstrap techniques combined with the Br\'{e}zis--Kato method are standard strategies to produce regularity results for second order elliptic PDEs involving critical growth.
Unfortunately, this tool does not work in our critical fourth order setting.
More precisely, the nonlinearity on the right-hand side of \eqref{oursystem} has critical growth, so $|\mathcal{U}|^{2^{**}-2}u_i\in L^{{2n}/{(n+4)}}(\mathbb{R}^n)$.
Notice that we cannot conclude, using the Sobolev embedding theorem, that $|\mathcal{U}|^{2^{**}-2}u_i$ belongs to $ L^{q}(\mathbb{R}^n)$ for some $q>{2n}/{(n+4)}$ and any $i\in I$.
We can overcome this lack of integrability by applying the lifting method due to W. X. Chen and C. Li \cite{MR1338474}.
\begin{propositionletter}\label{reglift}
Let $Z$ be a Hausdorff topological space, $\|\cdot\|_{X},\|\cdot\|_{Y}:Z\rightarrow[0,\infty]$ be extended norms in $Z$ and $X,Y$ be subspaces defined by $X=\{z \in Z: \|z\|_{X}<\infty\}$ and $Y=\{z \in Z: \|z\|_{Y}<\infty\}$.
Suppose that $T$ is a contraction map from $X$ into itself and from $Y$ into itself, and that for $u\in X$, there exists $\widetilde{u}\in X\cap Y$ such that $u=Tu+\widetilde{u}$.
Then, $u\in X\cap Y$.
\end{propositionletter}
In the next step, we apply Proposition~\ref{reglift} to show that it is possible to improve the Lebesgue class in which solutions to \eqref{regularsystem} lie.
Here our strategy is to prove that they indeed belong to the Lebesgue space $L^{s}(\mathbb{R}^n,\mathbb{R}^p)$ for any $s>2^{**}$.
\begin{proposition}
Let $\mathcal{U}\in\mathcal{D}^{2,2}(\mathbb{R}^n,\mathbb{R}^p)$ be a nonnegative weak non-singular solution to \eqref{regularsystem}.
Then, $\mathcal{U}\in L^{s}(\mathbb{R}^n,\mathbb{R}^p)$ for all $s>2^{**}$.
\end{proposition}
\begin{proof}
Let us consider $Z=C^{\infty}_c(\mathbb{R}^n)$, $X=L^{{2n}/{(n-4)}}(\mathbb{R}^n)$ and $Y=L^{q}(\mathbb{R}^n)$ for $q>2n/(n-4)$. Let $\Gamma_2(x,y)=C_n|x-y|^{4-n}$ be the fundamental solution to $\Delta^2$ in $\mathbb{R}^n$. Thus, the inverse operator below is well defined,
\begin{equation*}
(Tu)(x)=\int_{\mathbb{R}^n}\Gamma_2(x,y)u(y)\mathrm{d} y.
\end{equation*}
Hence, using the Hardy--Littlewood--Sobolev inequality (see \cite{MR717827}), we get that for any $q\in(1,n/4)$, there exists $C>0$ such that
\begin{equation*}
\|Tu\|_{L^{\frac{nq}{n-4q}}(\mathbb{R}^n)}=\|\Gamma_2\ast u\|_{L^{\frac{nq}{n-4q}}(\mathbb{R}^n)}\leqslant C\|u\|_{L^{q}(\mathbb{R}^n)}.
\end{equation*}
For $M>0$, let us define $\widetilde{V}_M(x)=V(x)-V_M(x)$, where
\begin{equation*}
V_M(x)=
\begin{cases}
V(x), \ {\rm if} \ |V(x)|\geqslant M,\\
0,\ {\rm otherwise}.
\end{cases}
\end{equation*}
Applying the integral operator $T_Mu:=\Gamma_2\ast V_Mu$ to \eqref{regularsystem}, we obtain that $u_i=T_Mu_i+\widetilde{T}_Mu_i$, where
\begin{equation*}
(T_M u_i)(x)=\int_{\mathbb{R}^n}\Gamma_2(x,y)V_{M}(y)u_i(y)\mathrm{d} y \quad \text{and} \quad \widetilde{T}_Mu_i(x)=\int_{\mathbb{R}^n}\Gamma_2(x,y)\widetilde{V}_M(y)u_i(y)\mathrm{d} y.
\end{equation*}
\noindent{\bf Claim 1:} For $n/(n-4)<q<\infty$, there exists $M\gg1$ large such that $T_M:L^{q}(\mathbb{R}^n)\rightarrow L^{q}(\mathbb{R}^n)$ is a contraction.
\noindent In fact, for any $q\in(n/(n-4),\infty)$, there exists $m\in (1,n/4)$ such that $q=nm/(n-4m)$. Then, by the H\"{o}lder inequality, for any $u\in L^{q}(\mathbb{R}^n)$, we get that there exists $C>0$ satisfying
\begin{equation*}
\|T_Mu\|_{L^{q}(\mathbb{R}^n)}\leqslant\|\Gamma_2\ast V_{M}u\|_{L^{q}(\mathbb{R}^n)}\leqslant C\|V_M\|_{L^{{n}/{4}}(\mathbb{R}^n)}\|u\|_{L^{q}(\mathbb{R}^n)}.
\end{equation*}
Since $V_{M}\in L^{n/4}(\mathbb{R}^n)$, it is possible to choose $M\gg1$ such that $\|V_M\|_{L^{{n}/{4}}(\mathbb{R}^n)}<{1}/{(2C)}$. Therefore, we arrive at $\|T_Mu\|_{L^{q}(\mathbb{R}^n)}\leqslant({1}/{2})\|u\|_{L^{q}(\mathbb{R}^n)}$, which yields that $T_M$ is a contraction.
\noindent{\bf Claim 2:} For any $n/(n-4)<q<\infty$, it follows that $\widetilde{T}_Mu_i\in L^{q}(\mathbb{R}^n)$.
\noindent Indeed, for any $n/(n-4)<q<\infty$, choose $1<m<n/4$ satisfying $q=nm/(n-4m)$. Since $\widetilde{V}_M$ is bounded, we obtain
\begin{equation*}
\|\widetilde{T}_Mu_i\|_{L^{q}(\mathbb{R}^n)}=\|\Gamma_2\ast\widetilde{V}_Mu_i\|_{L^{q}(\mathbb{R}^n)}\leqslant C_1\|\widetilde{V}_Mu_i\|_{L^{m}(\mathbb{R}^n)}\leqslant C_2\|u_i\|_{L^{m}(\mathbb{R}^n)}.
\end{equation*}
However, by the Sobolev embedding theorem, we have that $u_i\in L^{m}(\mathbb{R}^n)$ for $m=2n/(n-4)$, which yields $q=2n/(n-12)$ whenever $n\geqslant13$ (for $5\leqslant n\leqslant12$, every admissible exponent $m<n/4$ can be used). Thus, we find that $u_i\in L^{q}(\mathbb{R}^n)$ when
\begin{equation*}
\begin{cases}
1<q<\infty,& {\rm if} \ 5\leqslant n\leqslant 12\\
1<q\leqslant\frac{2n}{n-12},& {\rm if} \ n\geqslant13.
\end{cases}
\end{equation*}
Now we can repeat the argument for $m=2n/(n-12)$ to obtain that $u_i\in L^{q}(\mathbb{R}^n)$ for
\begin{equation*}
\begin{cases}
1<q<\infty,& {\rm if} \ 5\leqslant n\leqslant 20\\
1<q\leqslant\frac{2n}{n-20},& {\rm if} \ n\geqslant21.
\end{cases}
\end{equation*}
Therefore, proceeding inductively, after $k$ steps we obtain $u_i\in L^{q}(\mathbb{R}^n)$ for all $1<q\leqslant 2n/(n-4-8k)$ whenever $n>4+8k$, and for all $q\in(1,\infty)$ otherwise, which proves the claim.
Combining Claims 1 and 2, we can apply Proposition~\ref{reglift} to show that $u_i \in L^{q}(\mathbb{R}^n)$ for all $q>2^{**}$ and $i\in I$. In particular, the proof of the proposition is concluded.
\end{proof}
\begin{corollary}[Regularity] \label{regregularity}
Let $\mathcal{U}\in\mathcal{D}^{2,2}(\mathbb{R}^n,\mathbb{R}^p)$ be a nonnegative weak non-singular solution to \eqref{oursystem}.
Then, $\mathcal{U}\in C^{4,\zeta}(\mathbb{R}^n,\mathbb{R}^p)$ is a classical non-singular solution to \eqref{oursystem}.
\end{corollary}
\begin{proof}
By the Morrey embedding theorem, we have that $u_i\in C^{0,\zeta}(\mathbb{R}^n)$ for some $\zeta\in(0,1)$. Finally, using Schauder estimates, one concludes that $u_i\in C^{4,\zeta}(\mathbb{R}^n)$, which provides $\mathcal{U}\in C^{4,\zeta}(\mathbb{R}^n,\mathbb{R}^p)$.
\end{proof}
\begin{remark} In \cite[Proposition~3.1]{MR1809291}, using a different approach, K. Uhlenbeck and J. Viaclovsky proved regularity for solutions to a class of general geometric fourth order PDEs, which could also be used to prove that $\mathcal{U}\in L^{q}(\mathbb{R}^n,\mathbb{R}^p)$ for some $q>2^{**}$.
\end{remark}
\subsection{Superharmonicity}
We aim to obtain a strong maximum principle for nonnegative solutions to \eqref{oursystem}. In this direction, we prove that any component solution to \eqref{oursystem} is superharmonic. We are inspired by \cite[Theorem~2.1]{MR1769247}. The main difference in our approach is the appearance of the strong coupling term on the right-hand side of \eqref{regularsystem}. This coupled nonlinearity could imply the failure of the method for some components. However, we can overcome this issue thanks to an inequality involving the norm of the $p$-map solution. Before proving the superharmonicity result, we need to establish two technical lemmas, whose proofs are elementary calculus arguments and can be found in \cite[Lemma~2.2 and Lemma~2.3]{MR1769247}, respectively.
\begin{lemmaletter}\label{lemmaxu1}
Suppose that $l_0=2$ and $\{l_k\}_{k\in\mathbb{N}}$ is given by the formula $l_{k+1}=sl_k+4$ for some $s>1$. Then, for all $k\in\mathbb{N}$,
\noindent {{\rm (i)} Closed formula:} $l_{k+1}=\frac{2s^{k+2}+2s^{k+1}-4}{s-1}$;\\
\noindent {{\rm (ii)} Upper estimate:} $(n+sl_k)(2+sl_k)(n+2+sl_k)(4+sl_k)\leqslant(n+2+2s)^{4(s+1)}$.
\end{lemmaletter}
\begin{lemmaletter}\label{lemmaxu2}
Suppose that $b_0=0$ and define $\{b_k\}_{k\in\mathbb{N}}$ by $b_{k+1}=sb_k+4(k+1)$. Then, for all $k\in\mathbb{N}$,
\begin{equation*}
b_{k+1}=4\left[\frac{s^{k+2}-(k+2)s+k+1}{(s-1)^2}\right].
\end{equation*}
\end{lemmaletter}
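Both closed formulas are easily confirmed with a computer algebra system; the snippet below (an auxiliary check we include for convenience) verifies symbolically that they satisfy the corresponding recursions:
\begin{verbatim}
import sympy as sp

s, k = sp.symbols('s k', positive=True)

l = (2*s**(k + 2) + 2*s**(k + 1) - 4)/(s - 1)        # Lemma A, item (i)
b = 4*(s**(k + 2) - (k + 2)*s + k + 1)/(s - 1)**2    # Lemma B

print(sp.simplify(l.subs(k, k + 1) - (s*l + 4)))          # prints 0
print(sp.simplify(b.subs(k, k + 1) - (s*b + 4*(k + 2))))  # prints 0
\end{verbatim}
Here l and b stand for $l_{k+1}$ and $b_{k+1}$, so the recursions read $l_{k+2}=sl_{k+1}+4$ and $b_{k+2}=sb_{k+1}+4(k+2)$.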
The superharmonicity result can be stated as follows.
\begin{proposition}\label{superharmonicity}
Let $\mathcal{U}$ be a nonnegative non-singular solution to \eqref{oursystem}. Then, $-\Delta u_i\geqslant0$ in $\mathbb{R}^n$ for all $i\in I$.
\end{proposition}
\begin{proof}
Supposing by contradiction that the proposition does not hold, there exist $i\in I$ and $x_0\in\mathbb{R}^n$ satisfying $-\Delta u_{i}(x_0)<0$. Since the Laplacian is invariant under translations, we may suppose without loss of generality that $x_0=0$. Let us reformulate \eqref{oursystem} as the following system in the whole space
\begin{align}\label{lane-emden-system}
\begin{cases}
-\Delta {u}_i&={h}_i\\
-\Delta {h}_i&=c(n)|{\mathcal{U}}|^{2^{**}-2}{u}_i.
\end{cases}
\end{align}
Let $B_r\subseteq\mathbb{R}^n$ be the ball of radius $r>0$ centered at the origin, and let $\omega_{n-1}$ be the volume of the unit ball, so that
the surface measure of $\partial B_r$ is $n\omega_{n-1}r^{n-1}$; we consider
\begin{equation*}
\overline{u}_i=\frac{1}{n\omega_{n-1}r^{n-1}}\displaystyle\int_{\partial{B_r}}u_i\mathrm{d} \sigma_r \quad {\rm and} \quad \overline{h}_i=\frac{1}{n\omega_{n-1}r^{n-1}}\displaystyle\int_{\partial{B_r}}h_i\mathrm{d} \sigma_r,
\end{equation*}
the spherical averages of $u_i$ and $h_i$, respectively. Now, taking the spherical average of the first line of \eqref{lane-emden-system} and using that $\overline{\Delta u_i}=\Delta\overline{u}_i$, we obtain
\begin{equation}\label{portugal}
\Delta\overline{u}_i+\overline{h}_i=0.
\end{equation}
Furthermore, we rewrite the second equality of \eqref{lane-emden-system} to get $\Delta h_i+c(n)|\mathcal{U}|^{2^{**}-2}u_i=0$, from which, by taking again the spherical average in both sides, provides
\begin{equation*}
0=\frac{1}{n\omega_{n-1}r^{n-1}}\displaystyle\int_{\partial{B_r}}\left(\Delta h_i+c(n)|\mathcal{U}|^{2^{**}-2}u_i\right)\mathrm{d}\sigma_r
=\Delta\overline{h}_i+\frac{c(n)}{n\omega_{n-1}r^{n-1}}\displaystyle\int_{\partial{B_r}}|\mathcal{U}|^{{2^{**}-2}}{u}_i \mathrm{d} \sigma_r.
\end{equation*}
Hence,
\begin{equation}\label{greece}
\Delta\overline{h}_i=-\frac{c(n)}{n\omega_{n-1}r^{n-1}}\displaystyle\int_{\partial{B_r}}|\mathcal{U}(x)|^{{2^{**}-2}}{u}_i(x) \mathrm{d} \sigma_r,
\end{equation}
which, by using that $0\leqslant u_i(x)\leqslant|\mathcal{U}(x)|$, for any $x\in\mathbb{R}^n$, implies
\begin{align}\label{revision}
-\frac{c(n)}{n\omega_{n-1}r^{n-1}}\displaystyle\int_{\partial{B_r}}|\mathcal{U}(x)|^{{2^{**}-2}}{u}_i(x) \mathrm{d} \sigma_r
&\leqslant-\frac{c(n)}{n\omega_{n-1}r^{n-1}}\displaystyle\int_{\partial{B_r}}|u_i(x)|^{{2^{**}-1}}\mathrm{d} \sigma_r&\\\nonumber
&\leqslant-c(n)\left(\frac{1}{n\omega_{n-1}r^{n-1}}\displaystyle\int_{\partial{B_r}}|u_i(x)|\mathrm{d} \sigma_r\right)^{{2^{**}-1}}&\\\nonumber
&=-c(n)\overline{u}_i^{2^{**}-1},&
\end{align}
where in the second inequality we used the Jensen inequality for the convex function $t\mapsto t^{2^{**}-1}$. Finally, combining \eqref{greece} and \eqref{revision}, we get
\begin{equation}\label{correctinequality}
\Delta \overline{h}_i+c(n)\overline{u}_i^{2^{**}-1}\leqslant0.
\end{equation}
By the definition of spherical average, we have that $\overline{h}_i(0)=h_i(0)<0$. In addition, by \eqref{correctinequality}, we find
\begin{equation}\label{spain}
\Delta \overline{h}_i\leqslant0.
\end{equation}
Then, multiplying equation \eqref{spain} by $r^{n-1}$, and integrating, we arrive at
\begin{equation*}
r^{n-1}\frac{\mathrm{d}}{\mathrm{d} r}\overline{h}_i\leqslant0.
\end{equation*}
It clearly implies that $\overline{h}_i$ is monotonically decreasing, so that for all $r>0$, we obtain
\begin{equation}\label{colombia}
\overline{h}_i(r)\leqslant h_i(0).
\end{equation}
Substituting \eqref{colombia} into \eqref{portugal}, and integrating, it follows that
\begin{equation}\label{belgium}
\overline{u}_i(r)\geqslant-\frac{h_i(0)}{2n}r^2.
\end{equation}
Putting \eqref{belgium} in \eqref{correctinequality}, multiplying both side of inequality by $r^{n-1}$, and integrating, we have
\begin{equation}\label{poland}
\overline{h}_i(r)\leqslant-\frac{c_0^{s^2}r^{2s+2}}{(n+2+2s)(2s+4)},
\end{equation}
where $c_0=-{h_i(0)}/{2n}>0$ and $s=(n+4)/(n-4)$. Then, combining \eqref{poland} with \eqref{portugal}, and repeating the same procedure, it provides
\begin{equation}\label{ukraine}
\overline{u}_i(r)\geqslant\frac{c_0^{s^2}r^{2s+4}}{(n+2s)(s+2)(n+2+2s)(2s+4)}.
\end{equation}
Based on \eqref{ukraine} and thanks to Lemma~\ref{lemmaxu2}, we may assume that for some $k\in\mathbb{N}$ and $l_k,b_k \in \mathbb{R}$, it holds
\begin{equation}\label{netherlands}
\overline{u}_i(r)\geqslant\frac{c_0^{s^k}r^{l_k}}{(n+2+2s)^{b_k}}.
\end{equation}
Again, we can use estimate \eqref{netherlands} combined with \eqref{portugal} and \eqref{correctinequality} to obtain
\begin{equation*}
\overline{h}_i(r)\leqslant-\frac{c_0^{s^{k+1}}r^{sl_k+2}}{(n+2+2s)^{sb_k}(n+sl_k)(sl_k+2)}
\end{equation*}
and
\begin{equation}\label{israel}
\overline{u}_i(r)\geqslant\frac{c_0^{s^{k+1}}r^{sl_k+4}}{(n+2+2s)^{sb_k}(n+sl_k)(sl_k+2)(n+2+sl_k)(sl_k+4)}.
\end{equation}
Setting $l_{k+1}=sl_k+4$ and $b_{k+1}=sb_k+4(k+1)$, we have by (ii) of Lemma~\ref{lemmaxu1} that \eqref{netherlands} remains true with $k+1$ in place of $k$. In other words, it follows that
\begin{equation*}
\overline{u}_i(r)\geqslant\frac{c_{0}^{s^{k+1}}r^{l_{k+1}}}{(n+2s+2)^{b_{k+1}}}.
\end{equation*}
Assuming that $c_0\geqslant1$, we can choose $r_0=(n+2s+2)^{4/(s-1)}$ and, by Lemmas~\ref{lemmaxu1} and \ref{lemmaxu2}, the following estimate holds
\begin{equation}\label{wales}
\overline{u}_i(r_0)\geqslant c_{0}^{s^{k+1}}\left[(n+2s+2)^\frac{4}{(s-1)^2}\right]^{s^{k+2}+2s^{k+1}+(k+2)s-k-5}.
\end{equation}
Taking the limit as $k\rightarrow\infty$ in \eqref{wales}, we find a contradiction, since the right-hand side blows up. Therefore, $\Delta u_i\leqslant0$ for all $i\in I$. When $c_0<1$, choosing $r_0=c_0^{-1}(n+2s+2)^{4/(s-1)}$, the same argument can be applied.
\end{proof}
As a consequence of the superharmonicity result, we can prove that solutions to \eqref{oursystem} are weakly positive.
\begin{corollary}\label{positivity}
Let $\mathcal{U}$ be a non-singular solution to \eqref{oursystem}. Then, for any $i\in I$ we have that either $u_i\equiv 0$ or $u_i>0$. In other terms, $I=I_0\cup I_+$ is a disjoint union.
\end{corollary}
\subsection{Lower bound estimates}
The main result of the last subsection asserting that component solutions to \eqref{oursystem} are superharmonic is extremely useful to provide essential properties required to start the moving spheres method.
More precisely, we obtain a lower bound estimate for any component solution.
The idea is to use Proposition~\ref{superharmonicity} and the three spheres theorem for the bi-Laplacian.
\begin{lemma}\label{3S}
Let $\Omega\subset\mathbb{R}^{n}$ be a region containing two concentric spheres of radii $r_1$ and $r_2$ and the closed region between them, and let $u:\Omega\to\mathbb{R}$ be a superharmonic function in $\Omega$. Then, for every $r$ such that $0<r_1<r<r_2$, it follows that
\begin{equation*}
\min_{\partial B_{r}}u\geqslant\frac{\left[\left(\displaystyle\min_{\partial B_{r_1}}u\right) \left(r_{2}^{4-n}-r^{4-n}\right)+\left(\displaystyle\min_{\partial B_{r_2}}u\right) \left(r^{4-n}-r_{1}^{4-n}\right)\right]}{r_{2}^{4-n}-r_{1}^{4-n}}.
\end{equation*}
Moreover, equality only occurs if for some $A,B\in\mathbb{R}$, it holds $u(|x|)=A+B|x|^{4-n}$.
\end{lemma}
\begin{proof}
We include the proof in Appendix~\ref{appendixA}.
\end{proof}
\begin{corollary}\label{lowerestimate}
Let $\mathcal{U}$ be a nonnegative non-singular solution to \eqref{oursystem}. Then, given $0<r_0<r$, it follows
\begin{equation*}
u_i(x)\geqslant\left(\frac{r_0}{|x|}\right)^{n-4}\min_{\partial B_{r_0}}u_i \quad {\rm for \ any} \quad x\in B_r\setminus B_{r_0}.
\end{equation*}
\end{corollary}
\begin{proof}
Fix $0<r_0<r$. By applying Lemma~\ref{3S} with $r_1=r_0$ and $r_2=r$, and discarding the nonnegative term involving $\min_{\partial B_{r}}u_i$, we get
\begin{align*}
\left(r_{0}^{4-n}-r^{4-n}\right)u_i(x)\geqslant\left(|x|^{4-n}-r^{4-n}\right)\min_{\partial B_{r_0}}u_i,
\end{align*}
which, by letting $r\rightarrow\infty$, gives us the desired conclusion.
\end{proof}
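Let us observe that the estimate above is sharp: by the equality case in Lemma~\ref{3S}, it becomes an equality for $u_i(x)=|x|^{4-n}$.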
\subsection{Kelvin transform}\label{subsec:kelvintransfform}
We define a transform suited to exploring the symmetries of \eqref{oursystem}, which is called the {\it fourth order Kelvin transform} of a $p$-map.
The Kelvin transform is a device to extend the concept of harmonic (superharmonic or subharmonic) functions by allowing the definition of a function which is harmonic (superharmonic or subharmonic) at infinity.
This map is a key ingredient for developing a sliding method, namely the moving spheres or the moving planes techniques \cite{MR1159383}.
For $\Omega=\mathbb{R}^n$ or $\Omega=\mathbb{R}^n\setminus\{0\}$, we define the Kelvin transform.
To this end, we need to establish the concept of {\it inversion about a sphere} $\partial B_{\mu}(x_0)$, which is a map $\mathcal{I}_{x_0,\mu}:\Omega\rightarrow\Omega_{x_0,\mu}$ given by $\mathcal{I}_{x_0,\mu}(x)=x_0+K_{x_0,\mu}(x)^2(x-x_0)$, where $K_{x_0,\mu}(x)=\mu/|x-x_0|$ and $\Omega_{x_0,\mu}:=\mathcal{I}_{x_0,\mu}(\Omega)$ is the domain of the Kelvin transform.
In particular, when $x_0=0$ and $\mu=1$, we denote it simply by $\mathcal{I}_{0,1}(x)=x^{*}$ and $K_{0,1}(x)=x|x|^{-2}$. For easy reference, let us summarize some well-known facts about the inversion map.
\begin{proposition}
The map $\mathcal{I}_{x_0,\mu}$ has the properties:
\noindent {\rm (i)} It maps $B_{\mu}(x_0)$ onto its complement $\Omega\setminus \bar{B}_{\mu}(x_0)$, sending $x_0$ to $\infty$;\\
\noindent {\rm (ii)} It leaves the boundary $\partial B_{\mu}(x_0)$ invariant, that is, $\mathcal{I}_{x_0,\mu}(\partial B_{\mu}(x_0))=\partial B_{\mu}(x_0)$;\\
\noindent {\rm (iii)} It is a conformal map; for instance, when $x_0=0$ and $\mu=1$, one has $\langle x^{*},y^{*}\rangle=\langle x,y\rangle|x|^{-2}|y|^{-2}$ for all $x,y\in\Omega$.
\end{proposition}
The next step is a generalization of the Kelvin transform for $m$th order operators applied to $p$-maps.
Nonetheless, we often work with the fourth order Kelvin transform.
\begin{definition}
For any $\mathcal{U}:\Omega\rightarrow\mathbb{R}^p$ and $m\in\mathbb{N}$, let us consider the $m$th order Kelvin transform about the sphere with center $x_0\in\mathbb{R}^n$ and radius $\mu>0$, defined as $\mathcal{U}_{x_0,\mu}:\Omega_{x_0,\mu}\rightarrow\mathbb{R}^p$ with
\begin{equation*}
\mathcal{U}_{x_0,\mu}(x)=K_{x_0,\mu}(x)^{n-2m}\mathcal{U}\left(\mathcal{I}_{x_0,\mu}(x)\right).
\end{equation*}
In particular, when $p=1$ we fix the notation $u_{x_0,\mu}$.
\end{definition}
Now we need to understand how \eqref{oursystem} behaves under the action of the Kelvin transform.
\begin{proposition}\label{confinva}
System \eqref{oursystem} is conformally invariant, in the sense that it is invariant under the action of Kelvin transform, {\it i.e.}, if $\mathcal{U}$ is a non-singular solution to \eqref{oursystem}, then $\mathcal{U}_{x_0,\mu}$ is a solution to
\begin{equation}\label{revised}
\Delta^{2}(u_{i})_{x_0,\mu}=c(n)|\mathcal{U}_{x_0,\mu}|^{2^{**}-2}(u_{i})_{x_0,\mu} \quad {\rm in} \quad \mathbb{R}^{n}\setminus\{x_0\}\quad {\rm for} \quad i\in I,
\end{equation}
where $\mathcal{U}_{x_0,\mu}=((u_1)_{x_0,\mu},\dots,(u_p)_{x_0,\mu})$.
\end{proposition}
\begin{proof}
For all $x\in\mathbb{R}^n\setminus\{x_0\}$, let us state the formulas below, whose proofs are included in Appendix~\ref{appendixA}. For the second order Kelvin transform ($m=1$), one has
\begin{equation}\label{laplaciankelvin}
\Delta\left(K_{x_0,\mu}^{n-2}\,u\circ\mathcal{I}_{x_0,\mu}\right)(x)=K_{x_0,\mu}(x)^{n+2}\Delta u\left(\mathcal{I}_{x_0,\mu}(x)\right),
\end{equation}
whereas for the fourth order Kelvin transform ($m=2$),
\begin{equation}\label{biharmonickelvin}
\Delta^2 u_{x_0,\mu}(x)=K_{x_0,\mu}(x)^{n+4}\Delta^{2}u\left(\mathcal{I}_{x_0,\mu}(x)\right).
\end{equation}
Next, expanding the right-hand side of \eqref{revised}, we observe
\begin{equation}\label{honduras}
|\mathcal{U}_{x_0,\mu}(x)|^{2^{**}-2}(u_{i})_{x_0,\mu}(x)=K_{x_0,\mu}(x)^{n+4}\left(|\mathcal{U}|^{2^{**}-2}u_i\right)\left(\mathcal{I}_{x_0,\mu}(x)\right).
\end{equation}
Therefore, the proposition follows as a combination of \eqref{biharmonickelvin} and \eqref{honduras}.
\end{proof}
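Identities \eqref{laplaciankelvin} and \eqref{biharmonickelvin} can also be confirmed quickly on radial functions with a computer algebra system. The snippet below (an auxiliary check, with an arbitrary rational test function and a fixed dimension) does so for \eqref{biharmonickelvin} with $x_0=0$ and $\mu=1$:
\begin{verbatim}
import sympy as sp

r = sp.symbols('r', positive=True)
n = 5
f = 1/(1 + r**2)                         # arbitrary radial test function

lap = lambda g: sp.diff(g, r, 2) + (n - 1)/r*sp.diff(g, r)
kelvin = lambda g: r**(4 - n)*g.subs(r, 1/r)   # fourth order transform

lhs = lap(lap(kelvin(f)))                # Delta^2 u_{0,1}
rhs = r**(-n - 4)*lap(lap(f)).subs(r, 1/r)
print(sp.simplify(lhs - rhs))            # prints 0
\end{verbatim}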
\begin{remark}
Proposition~\ref{confinva} is not a surprising conclusion, since the Gross--Pitaevskii-type nonlinearity preserves the same {\it conformal invariance} enjoyed by the scalar case. Namely, in the case $p=1$, \eqref{scalarsystem} is invariant under the action of the conformal Euclidean group.
\end{remark}
\subsection{Variational technique}
As an independent part, we use a variational approach to prove a weak version of Theorem~\ref{theorem1}, which is based on the analysis of quotient functions $q_{ij}=u_i/u_j$ \cite{MR2603801}.
Before starting our method, we must make sure that the quotient is well defined, since solutions may have zeros in the domain, or even vanish identically.
By Corollary~\ref{positivity}, a component cannot vanish at a point unless it vanishes identically, which rules out the former situation. Moreover, we avoid the latter by restricting to the strictly positive components, that is, $i\in I_{+}$.
Notice that Theorem~\ref{theorem1} is now equivalent to proving that all quotient functions are identically constant, or equivalently, that the component solutions are proportional to each other.
For more results on proportionality of components, see \cite{MR3198645} and the references therein.
We state the main result of this part, which is a fourth order version of \cite[Proposition~3.1]{MR2603801}.
\begin{theoremtio}\label{hebey-druet}
{\it
Let $\mathcal{U}$ be a nonnegative weak non-singular solution to \eqref{oursystem}.
Then, there exists $\Lambda\in\mathbb{S}^{p-1}_{+}$ such that $\mathcal{U}=\Lambda u_{x_0,\mu}$, where $u_{x_0,\mu}$ is a fourth order spherical solution given by \eqref{sphericalfunctions}.
}
\end{theoremtio}
\begin{proof}
Let $\mathcal{U}$ be a weak nonnegative non-singular solution to \eqref{oursystem}, and for $i,j\in I_{+}$, let us consider the quotient function $q_{ij}:\mathbb{R}^n\rightarrow(0,\infty)$ given by $q_{ij}:=u_i/u_j$.
Besides, by the smoothness result in Corollary~\ref{smoothness}, we get $q_{ij}\in C^{\infty}(\mathbb{R}^n)$ for all $i,j\in I_{+}$.
In what follows, we divide the argument into two claims.
The first one provides a universal estimate for any quotient function.
\noindent{\bf Claim 1:} $\min_{\partial B_R(0)}q_{ij}\leqslant q_{ij}\leqslant\max_{\partial B_R(0)}q_{ij}$ in $B_R(0)$.
\noindent The proof of the claim is provided in the following two steps:
\noindent{\bf Step 1:} $\Delta q_{ij}\geqslant0$ in $B_R$.
\noindent It is straightforward to check that
\begin{equation*}
\Delta^2 q_{ij}=\frac{u_j\Delta^2 u_i-u_i\Delta^2 u_j}{u_j^2}-\frac{4}{u_j}\nabla\Delta q_{ij}\nabla u_j-\frac{6}{u_j}\Delta q_{ij}\Delta u_j-\frac{4}{u_j}\nabla q_{ij}\nabla\Delta u_j.
\end{equation*}
Notice that, by \eqref{oursystem}, the first term on the right-hand side of the last equation vanishes. Thus, we are left with
\begin{equation}\label{celular}
-\Delta^{2}q_{ij}-\frac{6}{u_j}\Delta u_j\Delta q_{ij}=\frac{4}{u_j}\left[\nabla\Delta q_{ij}\nabla u_j+\nabla q_{ij}\nabla\Delta u_j\right].
\end{equation}
On the other hand, observe that
\begin{equation*}
\frac{4}{u_j}\left[\nabla\Delta q_{ij}\nabla u_j+\nabla q_{ij}\nabla\Delta u_j\right]\geqslant0.
\end{equation*}
Hence, setting $c(x)=-6u_j^{-1}\Delta u_j$, \eqref{celular} can be reformulated as $-\Delta(\Delta q_{ij})+c(x)\Delta q_{ij}\geqslant0$, which implies that $\Delta q_{ij}$ is a supersolution to $L=-\Delta+c$. In addition, using Proposition~\ref{superharmonicity} and the weak formulation \eqref{weakformulattion}, $L$ is a nonnegative operator, and the weak maximum principle from \cite[Theorem~3.1]{MR1814364} can be applied to show that $\min_{\partial B_R}\Delta q_{ij}\leqslant\Delta q_{ij}$. Therefore, by the definition of $\Delta q_{ij}$, we get that $\min_{\partial B_R}\Delta q_{ij}=0$, which concludes the proof of the first step.
\noindent{\bf Step 2:} $\Delta q_{ij}\equiv0$ in $B_R$.
\noindent It is straightforward to observe that $q_{ij}$ satisfies the following uniformly elliptic second order equation $-\Delta q_{ij}+\widetilde{c}(x)q_{ij}=0$ in $B_R$,
where $\widetilde{c}(x)=q_{ij}^{-1} \Delta q_{ij}$. Then, using Step 1 we have that $\widetilde{c}(x)\geqslant0$, which again, by using the weak maximum principle, concludes the proof.
\noindent{\bf Claim 2:} $\min_{\partial B_R(0)}q_{ij}\rightarrow\Lambda_{ij}$ and $\max_{\partial B_R(0)}q_{ij}\rightarrow\Lambda_{ij}$ as $R\rightarrow\infty$,
where
\begin{equation}\label{jurubeba}
\Lambda_{ij}=\frac{\displaystyle\int_{\mathbb{R}^n}|\mathcal{U}|^{2^{**}-2}u_i\mathrm{d} x}{\displaystyle\int_{\mathbb{R}^n}|\mathcal{U}|^{2^{**}-2}u_j\mathrm{d} x}.
\end{equation}
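Notice that the limit values \eqref{jurubeba} automatically satisfy the consistency relations $\Lambda_{ii}=1$ and $\Lambda_{ij}\Lambda_{jk}=\Lambda_{ik}$ for all $i,j,k\in I_{+}$, as expected for limits of the quotients, since $q_{ij}=q_{ik}q_{kj}$.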
\noindent In fact, we divide the proof into three steps. The first one concerns the behavior at infinity of component solutions to \eqref{regularsystem}.
\noindent{\bf Step 1:} $|x|^{{(n-4)}/{2}}u_i(x)=o(1)$ as $|x|\rightarrow\infty$.
\noindent For $R>0$, let us consider the rescaled $p$-map $\mathcal{W}_R(x)=R^{{(n-4)}/{2}}\mathcal{U}(Rx)$, whose components are given by $(w_R)_i(x)=R^{(n-4)/2}u_i(Rx)$. Since $u_i\in L^{2^{**}}(\mathbb{R}^n)$, we get
\begin{equation*}
\Delta^2 (w_R)_i=c(n)|\mathcal{W}_R|^{2^{**}-2}(w_R)_i \quad {\rm and} \quad \displaystyle\int_{B_{2}(0)\setminus B_{1/2}(0)}|\mathcal{W}_R|^{2^{**}}\mathrm{d} x=o_{R}(1) \quad {\rm as} \quad R\rightarrow\infty.
\end{equation*}
Thus, $(w_R)_i\rightarrow0$ in $C^{\infty}_{\mathrm{loc}}(B_{3/2}(0)\setminus B_{3/4}(0))$ as $R\rightarrow\infty$.
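For the record, the invariance of the system under this rescaling follows from the exponent bookkeeping
\begin{equation*}
\Delta^{2}(w_R)_i(x)=R^{\frac{n-4}{2}+4}\left(\Delta^{2}u_i\right)(Rx) \quad {\rm and} \quad \frac{n-4}{2}\left(2^{**}-1\right)=\frac{n-4}{2}+4,
\end{equation*}
so that the right-hand side $c(n)|\mathcal{W}_R|^{2^{**}-2}(w_R)_i$ carries exactly the same power of $R$.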
In the next step, we obtain a precise upper bound for component solutions to \eqref{regularsystem}, which, combined with an interpolation argument, shows that $u_i\in L^{s}(\mathbb{R}^n)$ for $2<s<2^{**}$.
\noindent{\bf Step 2:} For any $0<\varepsilon<1/2$, there exists $C_{\varepsilon}>0$ such that $u_i(x)\leqslant C_{\varepsilon}|x|^{(4-n)(1-\varepsilon)}$ for all $x\in\mathbb{R}^n$.
\noindent First, by {Step 1} for a given $0<\varepsilon<1/2$, there exists $R_{\varepsilon}\gg1$ sufficiently large satisfying
\begin{equation}\label{luxemborg}
\displaystyle\sup_{\mathbb{R}^n\setminus B_{R_{\varepsilon}}(0)}|x|^{2}|\mathcal{U}(x)|^{2^{**}-2}<\frac{(n-4)^2}{2}\varepsilon(1-\varepsilon).
\end{equation}
For $R\geqslant R_{\varepsilon}$, let us consider $\sigma(R)=\displaystyle\max_{i\in I_{+}}\max_{\partial{B_{R}(0)}}u_i$ and the auxiliary function
\begin{equation*}
G_{\varepsilon}(x)=\sigma(R_{\varepsilon})\left(\frac{|x|}{R_{\varepsilon}}\right)^{(4-n)(1-\varepsilon)}+\sigma(R)\left(\frac{|x|}{R}\right)^{(4-n)\varepsilon}.
\end{equation*}
Notice that, by construction, we clearly have that $u_i\leqslant G_{\varepsilon}$ on $\partial B_{R}(0)\cup\partial B_{R_{\varepsilon}}(0)$.
Let us suppose that there exists $\bar{x}\in B_{R}(0)\setminus \bar{B}_{R_{\varepsilon}}(0)$, an interior maximum point of $u_iG^{-1}_{\varepsilon}$, which would imply that $\Delta(u_iG^{-1}_{\varepsilon})(\bar{x})\leqslant0$, and then
\begin{equation}\label{eslovenia}
\frac{\Delta u_i(\bar{x})}{u_i(\bar{x})}\geqslant\frac{\Delta G^{-1}_{\varepsilon}(\bar{x})}{G^{-1}_{\varepsilon}(\bar{x})}.
\end{equation}
Furthermore, a direct computation implies
\begin{equation}\label{crocia}
\Delta G^{-1}_\varepsilon(x)=G^{-1}_{\varepsilon}(x)\frac{(n-4)^2}{2}\varepsilon(1-\varepsilon)|x|^{-2}.
\end{equation}
Therefore, by Proposition~\ref{superharmonicity} we obtain that $\Delta^{2}u_i-\Delta u_i\geqslant0$, which combined with \eqref{eslovenia}--\eqref{crocia} yields, at the point $\bar{x}$,
\begin{equation*}
c(n)|\mathcal{U}(\bar{x})|^{2^{**}-2}=\frac{\Delta^2 u_i(\bar{x})}{u_i(\bar{x})}\geqslant\frac{\Delta u_i(\bar{x})}{u_i(\bar{x})}\geqslant\frac{\Delta G^{-1}_{\varepsilon}(\bar{x})}{G^{-1}_{\varepsilon}(\bar{x})}=\frac{(n-4)^2}{2}\varepsilon(1-\varepsilon)|\bar{x}|^{-2}.
\end{equation*}
This is a contradiction with \eqref{luxemborg} by our choice of $R_{\varepsilon}>0$. Therefore, the maximum of $u_iG^{-1}_{\varepsilon}$ over $\bar{B}_{R}(0)\setminus B_{R_{\varepsilon}}(0)$ is attained on the boundary, where $u_i\leqslant G_{\varepsilon}$, and hence
\begin{equation}\label{nicaragua}
u_i(x)\leqslant\sigma(R_{\varepsilon})\left(\frac{|x|}{R_{\varepsilon}}\right)^{(4-n)(1-\varepsilon)}+\sigma(R)\left(\frac{|x|}{R}\right)^{(4-n)\varepsilon} \quad {\rm in} \quad B_{R}(0)\setminus\bar{B}_{R_{\varepsilon}}(0),
\end{equation}
for all $R>R_{\varepsilon}$.
Thus, using \eqref{nicaragua} combined with Step 1, and taking the limit as $R\rightarrow\infty$, we get
\begin{equation*}
u_i(x)\leqslant\sigma(R_{\varepsilon})\left(\frac{|x|}{R_{\varepsilon}}\right)^{(4-n)(1-\varepsilon)} \quad {\rm in} \quad \mathbb{R}^n\setminus B_{R_{\varepsilon}}(0).
\end{equation*}
Since $u_i$ is bounded and $|x|^{(4-n)(1-\varepsilon)}$ blows up near the origin, the estimate in the statement of Step 2 follows in all of $\mathbb{R}^n$ after possibly enlarging $C_{\varepsilon}$.
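This decay quantifies the interpolation estimate mentioned before Step 2: in radial coordinates, it yields
\begin{equation*}
\int_{\mathbb{R}^n\setminus B_{R_{\varepsilon}}(0)}u_i^{s}\mathrm{d} x\leqslant C\int_{R_{\varepsilon}}^{\infty}r^{n-1-(n-4)(1-\varepsilon)s}\mathrm{d} r<\infty \quad {\rm whenever} \quad s>\frac{n}{(n-4)(1-\varepsilon)},
\end{equation*}
and the threshold on the right-hand side is strictly smaller than $2^{**}=2n/(n-4)$ exactly because $\varepsilon<1/2$.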
\noindent{\bf Step 3:} $|x|^{n-4}u_i(x)=C_n\int_{\mathbb{R}^n}|\mathcal{U}(y)|^{2^{**}-2}u_i(y)\mathrm{d} y+o_{R}(1)$ as $R\rightarrow\infty$ and $|x|\rightarrow\infty$.\\
\noindent First, since $u_i\in L^{2^{**}}(\mathbb{R}^n)$, we have $|\mathcal{U}|^{2^{**}-2}u_i\in L^{2n/(n+4)}(\mathbb{R}^n)$ for all $i\in I$, which provides $|\mathcal{U}|^{2^{**}-2}u_i\in W^{-2,2}(\mathbb{R}^n)$. Hence, we get that \eqref{oursystem} can be reduced to the following integral system,
\begin{equation*}
u_i(x)=C_n\displaystyle\int_{\mathbb{R}^n}|x-y|^{4-n}|\mathcal{U}(y)|^{2^{**}-2}u_i(y)\mathrm{d} y,
\end{equation*}
from which follows
\begin{align*}
|x|^{n-4}u_i(x)
=C_n\displaystyle\int_{\mathbb{R}^n}\left(\frac{|x|}{|x-y|}\right)^{n-4}|\mathcal{U}(y)|^{2^{**}-2}u_i(y)\mathrm{d} y
=C_n(I_1+I_2),
\end{align*}
where
\begin{equation*}
I_1=\int_{B_{R}(0)}\left(\frac{|x|}{|x-y|}\right)^{n-4}|\mathcal{U}(y)|^{2^{**}-2}u_i(y)\mathrm{d} y, \;
I_2= \displaystyle\int_{\mathbb{R}^n\setminus B_{R}(0)}\left(\frac{|x|}{|x-y|}\right)^{n-4}|\mathcal{U}(y)|^{2^{**}-2}u_i(y)\mathrm{d} y.
\end{equation*}
To control $I_1$, we observe that
\begin{equation*}
\displaystyle\int_{B_R(0)}\left[\left(\frac{|x|}{|x-y|}\right)^{n-4}-1\right]|\mathcal{U}(y)|^{2^{**}-2}u_i(y)\mathrm{d} y=o_{R}(1) \quad {\rm as} \quad R,|x|\rightarrow\infty,
\end{equation*}
hence the following asymptotic identity holds
\begin{align}\label{finland}
I_1=\displaystyle\int_{B_R(0)}|\mathcal{U}(y)|^{2^{**}-2}u_i(y)\mathrm{d} y+o_{R}(1) \quad {\rm as} \quad R\rightarrow\infty.
\end{align}
It remains to estimate $I_2$. Accordingly, using {Step 2}, we can write
\begin{align}\label{iceland}
I_2&=\displaystyle\int_{\mathbb{R}^n\setminus B_{R}(0)}\left(\frac{|x|}{|x-y|}\right)^{n-4}|\mathcal{U}(y)|^{2^{**}-2}u_i(y)\mathrm{d} y&\\\nonumber
&\leqslant\displaystyle\int_{B_{|x|/2}(x)}\left(\frac{|x|}{|x-y|}\right)^{n-4}|\mathcal{U}(y)|^{2^{**}-2}u_i(y)\mathrm{d} y+\displaystyle\int_{\mathbb{R}^n\setminus B_{|x|/2}(x)}\left(\frac{|x|}{|x-y|}\right)^{n-4}|\mathcal{U}(y)|^{2^{**}-2}u_i(y)\mathrm{d} y&\\\nonumber
&\leqslant C_{\varepsilon}^{2^{**}-1}\displaystyle\int_{B_{|x|/2}(x)}\left(\frac{|x|}{|x-y|}\right)^{n-4}\left(\frac{|x|}{2}\right)^{-(n+4)(1-\varepsilon)}\mathrm{d} y+2^{n-4}\displaystyle\int_{\mathbb{R}^n\setminus B_R(0)}|\mathcal{U}(y)|^{2^{**}-2}u_i(y)\mathrm{d} y&\\\nonumber
&\leqslant C_{\varepsilon}^{2^{**}-1}2^{(n+4)(1-\varepsilon)-2}\omega_{n-1}|x|^{n-(n+4)(1-\varepsilon)}+2^{n-4}\displaystyle\int_{\mathbb{R}^n\setminus B_R(0)}|\mathcal{U}(y)|^{2^{**}-2}u_i(y)\mathrm{d} y.&
\end{align}
Choosing $0<\varepsilon<{4}/{(n+4)}$ in \eqref{iceland}, we obtain that $n-(n+4)(1-\varepsilon)<0$, and so
\begin{equation*}
I_2=o_{R}(1) \quad {\rm as} \quad R\rightarrow\infty,
\end{equation*}
which combined with \eqref{iceland} and \eqref{finland}, concludes the proof of Step 3.
Now using Step 3, we obtain that for all $i,j\in I_{+}$, it holds
\begin{equation*}
q_{ij}(x)=\frac{u_i(x)}{u_j(x)}=\frac{|x|^{n-4}u_i(x)}{|x|^{n-4}u_j(x)}=\frac{\int_{\mathbb{R}^n}|\mathcal{U}(y)|^{2^{**}-2}u_i(y)\mathrm{d} y+o_{R}(1)}{\int_{\mathbb{R}^n}|\mathcal{U}(y)|^{2^{**}-2}u_j(y)\mathrm{d} y+o_{R}(1)},
\end{equation*}
which by taking the limit as $R\rightarrow\infty$ yields \eqref{jurubeba}.
Finally, combining Claims 1 and 2, we find that $u_i=\Lambda_{ij}u_j$. In particular, for all $i\in I_{+}$, we have the proportionality $u_i=\Lambda_iu_1$, where $\Lambda_i=\Lambda_{i1}$, which provides $\Delta^2u_1=c(n)|\Lambda^{\prime}|^{2^{**}-2}u^{2^{**}-1}_1$ in $\mathbb{R}^n$, where $\Lambda^{\prime}=(\Lambda_i)_{i\in I_{+}}$. By Theorem~\ref{theoremA}, for some $x_0\in\mathbb{R}^n$ and $\mu>0$, we have that $u_1$ has the following form
\begin{equation*}
u_1(x)=|\Lambda'|^{-1}\left(\frac{2\mu}{1+\mu^{2}|x-x_0|^{2}}\right)^{\frac{n-4}{2}},
\end{equation*}
which implies that our classification holds for $\Lambda=(\Lambda_1|\Lambda'|^{-1},\dots,\Lambda_p|\Lambda'|^{-1})$; thus the proof of Theorem~\ref{hebey-druet}$'$ is completed.
\end{proof}
\subsection{Moving spheres method}
We apply the moving sphere method to show that solutions $\mathcal{U}$ to \eqref{regularsystem} are radially symmetric, and we provide the classification for $|\mathcal{U}|$.
Then, we use this expression to compute the $\mathcal{D}^{2,2}(\mathbb{R}^n,\mathbb{R}^p)$ Sobolev norm of $\mathcal{U}$ directly, and therefore conclude that classical solutions to \eqref{regularsystem} are also weak.
The moving spheres method is a variant of the moving planes method, which can also be used to obtain radial symmetry or more robust Liouville-type results for solutions to more general PDEs \cite{MR2001065}.
Our inspiration is a moving spheres argument due to O. Druet et al. \cite[Proposition~3.1]{MR2558186}.
Initially, we need to state two classification results that will be used later and whose proofs can be found in \cite[Lemma~11.1 and Lemma~11.2]{MR2001065}.
We recall the notation $K_{z,\mu(z)}$ and $\mathcal{I}_{z,\mu(z)}$ from Subsection~\ref{subsec:kelvintransfform}.
\begin{propositionletter}[Weak Liouville-type result]\label{li-zhang1}
Let $u\in C^{1}(\mathbb{R}^n)$ and $\nu>0$. Suppose that for all $z\in\mathbb{R}^n$, there exists $\mu(z)>0$ such that
\begin{equation}\label{hyp1}
K_{z,\mu(z)}(x)^{\nu}\left(u\circ\mathcal{I}_{z,\mu(z)}\right)(x)=u(x) \quad {\rm for \ all} \quad x\in\mathbb{R}^{n}\setminus\{z\}.
\end{equation}
Then, for some $\mu\geqslant0$, $\mu'>0$ and $x_0\in\mathbb{R}^{n}$, it follows that $u(x)=\pm\left(\frac{\mu'}{\mu+|x-x_0|^2}\right)^{{\nu}/{2}}$.
\end{propositionletter}
\begin{propositionletter}[Strong Liouville-type result]\label{li-zhang2}
Let $u\in C^{1}(\mathbb{R}^n)$ and $\nu>0$.
Suppose that for all $z\in\mathbb{R}^n$ and all $\mu>0$, it holds
\begin{equation}\label{hyp2}
K_{z,\mu}(x)^{\nu}\left(u\circ\mathcal{I}_{z,\mu}\right)(x)\leqslant u(x) \quad {\rm for \ all} \quad x\in\mathbb{R}^{n}\setminus \bar{B}_{\mu}(z).
\end{equation}
\end{equation}
Then, $u$ is constant.
\end{propositionletter}
\begin{remark}
In terms of the Kelvin transform, that is, $\nu=n-4$, notice that conditions \eqref{hyp1} and \eqref{hyp2} can be rewritten respectively as $u_{z,\mu(z)}=u$ in $\mathbb{R}^n\setminus\{z\}$, and $u_{z,\mu}\leqslant u$ in $\mathbb{R}^n\setminus B_{\mu}(z)$ for all $\mu>0$.
\end{remark}
In what follows, let us divide the moving spheres process into three parts, namely, Lemmas \ref{lemmaA}, \ref{lemmaB}, and \ref{lemmaC}.
First, we show that it is possible to start the moving spheres process. For this, Corollaries~\ref{positivity} and \ref{lowerestimate} will be crucial.
\begin{lemma}\label{lemmaA}
Let $\mathcal{U}$ be a nonnegative non-singular solution to \eqref{oursystem}. Then, for any $x_0\in\mathbb{R}^{n}$, there exists $\mu_{0}(x_0)>0$ such that for $\mu\in(0,\mu_{0}(x_0))$,
$(u_i)_{x_0,\mu}\leqslant u_i$ in $\mathbb{R}^{n}\setminus B_{\mu}(x_0) $ for all $ i\in I.$
\end{lemma}
\begin{proof}
By translation invariance, we may take $x_0=0$. Let us denote $(u_i)_{0,\mu}=(u_i)_{\mu}$ for $i\in I$.
\noindent{\bf Claim 1:} For any $i\in I_{+}$, there exists $r_0>0$ such that for $r\in (0,r_0]$ and $\theta\in\mathbb{S}^{n-1}$, we have
\begin{equation*}
{\partial_r}\left(r^{\frac{n-4}{2}}u_i(r\theta)\right)>0.
\end{equation*}
\noindent In fact, since $u_i$ is a continuously differentiable function for each $i\in I_{+}$, there exists $\widetilde{r}_i>0$ satisfying $\inf_{0<|y|\leqslant\widetilde{r}_i}u_i>0$ and $\sup_{0<|y|\leqslant\widetilde{r}_i}|\nabla u_i|<\infty$. Then, choosing
\begin{equation*}
r_i=\min\left\{\widetilde{r}_i,\frac{(n-4)\displaystyle\inf_{0<|y|\leqslant\widetilde{r}_i}u_i}{2\displaystyle\sup_{0<|y|\leqslant\widetilde{r}_i}|\nabla u_i|}\right\},
\end{equation*}
for $0<r<r_i$, we have
\begin{align*}
{\partial_r}\left(r^{\frac{n-4}{2}}u_i(r\theta)\right)\geqslant r^{\frac{n-6}{2}}\left(\frac{n-4}{2}u_i(r\theta)-r\left|\nabla u_i(r\theta)\right|\right).
\end{align*}
By our choice of $r_i>0$, we obtain ${\partial_r}\left(r^{\frac{n-4}{2}}u_i(r\theta)\right)>0$, which, by taking $r_0=\min_{i\in I_+}r_i$, concludes the proof of the claim.
\noindent {\bf Claim 2:} For $\mu\in(0,r_0]$, it follows that $(u_i)_{\mu}\leqslant u_i$ in $\bar{B}_{r_0}\setminus B_{\mu}$.
\noindent Indeed, using Claim 1, we observe that $\rho(r)=r^{(n-4)/2}u_i(r\theta)$ is increasing in $(0,r_0]$ for any $\theta\in\mathbb{S}^{n-1}$. Hence, for $x\in\bar{B}_{r_0}\setminus B_{\mu}$, taking $r=|x|$ and $r'={\mu^2}/{|x|}\leqslant r$, we have $\rho(r')\leqslant\rho(r)$, which is equivalent to $(u_i)_{\mu}(x)\leqslant u_i(x)$ and completes the proof.
By Claim 2 and Proposition~\ref{superharmonicity}, the hypotheses of Corollary~\ref{lowerestimate} are satisfied. Consequently, for any $r>r_0$ and $i\in I$, we find
\begin{equation*}
u_i(x)\geqslant\left(\frac{r_0}{|x|}\right)^{n-4}\min_{\partial B_{r_0}}u_i \quad {\rm in} \quad B_r\setminus B_{r_0}.
\end{equation*}
Setting $\mu_0=r_0\min_{i\in I_{+}}\left(\frac{\min_{\partial{B_{r_0}}}u_i}{\max_{\bar{B}_{r_0}}u_i}\right)^{{1}/{(n-4)}}$, we find
\begin{equation*}
(u_i)_{\mu}(x)\leqslant\left(\frac{\mu_0}{|x|}\right)^{n-4}\max_{\bar{B}_{r_0}}u_i\leqslant\left(\frac{r_0}{|x|}\right)^{n-4}\min_{\partial B_{r_0}}u_i,
\end{equation*}
for any $\mu\in(0,\mu_0)$, $x\in\mathbb{R}^n\setminus B_{r_0}$ and $i\in I$.
This combined with Claim 2 completes the proof.
\end{proof}
After this lemma, the following quantity is well defined, namely the supremum of the radii up to which the moving spheres process can run.
\begin{definition}For any $x_0\in\mathbb{R}^n$, let us define
\begin{equation}\label{coincidence}
\mu^*(x_0)=\sup\{\mu>0 : (u_i)_{x_0,\mu}\leqslant u_i \quad {\rm in} \quad \mathbb{R}^{n}\setminus B_{\mu}(x_0) \}.
\end{equation}
\end{definition}
Notice that, by Lemma~\ref{lemmaA}, $\mu^{*}(x_0)\in(0,\infty]$. The second lemma states that if \eqref{coincidence} is finite, then the moving spheres process must stop, and the Euclidean norm of the solution to \eqref{oursystem} is invariant under the Kelvin transform.
\begin{lemma}\label{lemmaB}
Let $\mathcal{U}$ be a nonnegative non-singular solution to \eqref{oursystem}. If $\mu^*(x_0)<\infty$, then
$|\mathcal{U}_{x_0,\mu^*(x_0)}|\equiv|\mathcal{U}|$ in $\mathbb{R}^{n}\setminus\{x_0\}$.
\end{lemma}
\begin{proof}
Without loss of generality, we may take $x_0=0$ and denote $\mu^*(0)=\mu^*$. By the definition of $\mu^*$, when $\mu^*<\infty$, we get that for any $\mu\in(0,\mu^*]$ and $i\in I$, it holds
\begin{equation}\label{eua}
(u_i)_{\mu}\leqslant u_i \quad {\rm in} \quad \mathbb{R}^{n}\setminus B_{\mu}(0).
\end{equation}
Moreover, there exist $i_0\in I$ and a sequence $(\mu_{k})_{k\in\mathbb{N}}$ in $(\mu^*,\infty)$ satisfying $\mu_{k}\rightarrow\mu^*$ and such that \eqref{eua} does not hold for $i=i_0$ and $\mu=\mu_{k}$. For $\mu>0$, let us define $\omega_{\mu}=u_{i_0}-(u_{i_0})_{\mu}$.
\noindent {\bf Claim 1:} $\omega_{\mu^*}$ is superharmonic.
\noindent Indeed, as a combination of \eqref{regularsystem} and Proposition~\ref{confinva}, we obtain
\begin{equation*}
\begin{cases}
\Delta^{2}\omega_{\mu^*}(x)=c_{\mu^*}(x)\omega_{\mu^*}& {\rm in} \quad \mathbb{R}^n\setminus B_{\mu^*}(0)\\
\Delta \omega_{\mu^*}(x)=\omega_{\mu^*}(x)=0& {\rm on} \quad \partial B_{\mu^*}(0),
\end{cases}
\end{equation*}
where
\begin{equation*}
c_{\mu^*}=\frac{c(n)|{\mathcal{U}}|^{2^{**}-2}u_{i_0}-c(n)|\mathcal{U}_{\mu^*}|^{2^{**}-2}({u}_{i_0})_{\mu^*}}{{u}_{i_0}-({u}_{i_0})_{\mu^*}}>0 \quad {\rm in} \quad \mathbb{R}^n\setminus B_{\mu^*}(0).
\end{equation*}
Therefore, by Claim 1 we can use the strong maximum principle in \cite[Theorem~3.5]{MR1814364} to conclude
\begin{equation*}
\min_{\mathbb{R}^{n}\setminus B_{\mu^*}(0)}\omega_{\mu^*}=\min_{\partial B_{\mu^*}(0)}\omega_{\mu^*}.
\end{equation*}
\noindent {\bf Claim 2:} $\omega_{\mu^*}\equiv0$.
\noindent Supposing that $\omega_{\mu^*}$ is not identically zero in $\mathbb{R}^{n}\setminus B_{\mu^*}(0)$, by the Hopf Lemma \cite[Lemma~3.4]{MR1814364}, we have that ${\partial_\nu}\omega_{\mu^*}>0$ on $\partial B_{\mu^*}(0)$.
Moreover, by the continuity of $\nabla u_{i_0}$, one can find $r_0>\mu^*$ such that for any $\mu\in[\mu^*,r_0)$, we get
\begin{equation}\label{spo}
\omega_{\mu}>0 \quad {\rm in} \quad \bar{B}_{r_0}(0)\setminus B_{\mu}(0).
\end{equation}
Again applying Proposition~\ref{3S}, we obtain
\begin{equation*}
\omega_{\mu^*}(x)\geqslant\left(\frac{r_0}{|x|}\right)^{n-4}\min_{\partial B_{r_0}(0)}\omega_{\mu^*} \quad {\rm in} \quad \mathbb{R}^n\setminus B_{r_0}(0).
\end{equation*}
On the other hand, by the uniform continuity of the $u_{i_0}$ on $B_{r_0}(0)$, there exists $\varepsilon>0$ such that for any $\mu\in[\mu^*,\mu^*+\varepsilon)$ and $x\in\mathbb{R}^{n}\setminus B_{r_0}(0)$, it follows
\begin{equation}\label{san}
|\omega_{\mu^*}(x)-\omega_{\mu}(x)|=|(u_{i_0})_{\mu}(x)-(u_{i_0})_{\mu^*}(x)|\leqslant\frac{1}{2}\left(\frac{r_0}{|x|}\right)^{n-4}\min_{\partial B_{r_0}(0)}\omega_{\mu^*}.
\end{equation}
Therefore, a combination of \eqref{spo}--\eqref{san} yields $\omega_{\mu}\geqslant0$ in $\mathbb{R}^{n}\setminus B_{\mu}(0)$ for any $\mu\in[\mu^*,\mu^*+\varepsilon)$. This is a contradiction with the definition of $\mu^*$, thus $\omega_{\mu^*}\equiv0$ in $\mathbb{R}^{n}\setminus B_{\mu^*}(0)$. Moreover, for $x\in B_{\mu^*}(0)\setminus\{0\}$, it holds
\begin{equation*}
\omega_{\mu^*}(x)=-\left(\frac{\mu^*}{|x|}\right)^{n-4}\omega_{\mu^*}\left(\left(\frac{\mu^*}{|x|}\right)^2x\right).
\end{equation*}
Hence, it follows that $\omega_{\mu^*}\equiv0$ in $\mathbb{R}^{n}\setminus\{0\}$. Since $u_{i_0}$ cannot be identically zero without contradicting the definition of $\mu^*$, by Proposition~\ref{positivity} $u_{i_0}$ is nowhere vanishing. Consequently, we obtain that $|\mathcal{U}_{\mu^*}|\equiv|\mathcal{U}|$ in $\mathbb{R}^{n}\setminus\{0\}$.
\end{proof}
In the last lemma, we show that if the moving spheres process never stops for some center, then $\mathcal{U}$ is the trivial solution.
\begin{lemma}\label{lemmaC}
Let $\mathcal{U}$ be a nonnegative non-singular solution to \eqref{oursystem}. If $\mu^*(x_0)=\infty$ for some $x_0\in\mathbb{R}^n$, then $\mathcal{U}\equiv0$.
\end{lemma}
\begin{proof}
By definition of $\mu^*(x_0)$, if $\mu^*(x_0)=\infty$, we get that for any $\mu>0$ and $i\in I$, $(u_i)_{x_0,\mu}\leqslant u_i$ in $\mathbb{R}^n\setminus B_{\mu}(x_0)$. Moreover, assuming that $x_0=0$, by \eqref{eua}, we have
\begin{equation*}
\mu^{n-4}u_i(0)\leqslant \liminf_{|x|\rightarrow\infty}|x|^{n-4}u_i(x),
\end{equation*}
which by passing to the limit as $\mu\rightarrow\infty$ provides that for $i\in I$, either $u_i(0)=0$ or $|x|^{n-4}u_i(x)\rightarrow\infty$ as $|x|\rightarrow\infty$. If $u_i(0)=0$ for all $i\in I$, by Propositions~\ref{superharmonicity} and \ref{positivity}, we conclude that $u_i\equiv0$. Therefore, we may assume $|x|^{n-4}u_i(x)\rightarrow\infty$ as $|x|\rightarrow\infty$ for all $i\in I_{+}$.
\noindent{\bf Claim 1:} $\mu^*(z)=\infty$ for all $z\in\mathbb{R}^n$.
\noindent Indeed, when $\mu^*(z)<\infty$ for some $z\in\mathbb{R}^n$, using Lemma~\ref{lemmaB}, we obtain
\begin{equation*}
|x|^{n-4}|\mathcal{U}(x)|=|x|^{n-4}|\mathcal{U}_{z,\mu^*(z)}(x)|\rightarrow\mu^*(z)^{n-4}|\mathcal{U}(z)| \quad {\rm as} \quad |x|\rightarrow\infty,
\end{equation*}
which is a contradiction.
Combining Claim 1 and Proposition~\ref{li-zhang2}, we have that $\mathcal{U}$ is constant. Since $\mathcal{U}$ satisfies \eqref{oursystem}, it follows that $\mathcal{U}\equiv0$.
\end{proof}
\subsection{Proof of Theorem~\ref{theorem1}}\label{proofoftheorem1}
Now, using Lemma~\ref{lemmaC} and Proposition~\ref{li-zhang1}, we have enough conditions to classify $|\mathcal{U}|$. From this classification, we can compute the $\mathcal{D}^{2,2}(\mathbb{R}^n,\mathbb{R}^p)$-norm of any classical solution to \eqref{regularsystem}, which enables us to conclude that classical solutions are weak solutions; then Theorem~\ref{hebey-druet}$'$ can be applied to complete the proof (see Remark~\ref{classical/weak}).
\begin{proof}
By Lemma~\ref{lemmaC}, we may assume $\mu^*(z)<\infty$ for any $z\in\mathbb{R}^n$. Hence, by Lemma~\ref{lemmaB}, the hypothesis of Proposition~\ref{li-zhang1} is satisfied by $|\mathcal{U}|$, and there exist $x_0\in\mathbb{R}^n$ and $\mu,\mu' >0$ such that
\begin{equation}\label{shape}
|\mathcal{U}(x)|=\left(\frac{\mu'}{\mu+|x-x_0|^2}\right)^{\frac{n-4}{2}}, \quad {\rm for \ all} \quad x \in \mathbb{R}^n.
\end{equation}
Let us consider a smooth cut-off function $\eta:[0,\infty)\rightarrow[0,1]$ satisfying $\eta\equiv1$ in $[0,1]$ and $\eta\equiv0$ in $[2,\infty)$. For $R>0$, setting $\eta_R(x)=\eta(R^{-1}|x|)$ and multiplying \eqref{oursystem} by $\eta_{R}u_i$, we obtain $\Delta^{2}u_i\eta_{R}u_i=c(n)|\mathcal{U}|^{2^{**}-2}\eta_{R}u^2_i$,
which gives us
\begin{equation*}
\displaystyle\sum_{i=1}^{p}\Delta^{2}u_i\eta_{R}u_i=c(n)\displaystyle\sum_{i=1}^{p}|\mathcal{U}|^{2^{**}-2}\eta_{R}u^2_i=c(n)|\mathcal{U}|^{2^{**}}\eta_R.
\end{equation*}
Thus,
\begin{equation}\label{cansado}
\displaystyle\int_{\mathbb{R}^n}\displaystyle\sum_{i=1}^{p}\Delta^{2}u_i\eta_{R}u_i \ \mathrm{d} x=\displaystyle c(n)\int_{\mathbb{R}^n}|\mathcal{U}|^{2^{**}}\eta_R \ \mathrm{d} x.
\end{equation}
Using integration by parts on the left-hand side,
\begin{equation}\label{chato}
\displaystyle\int_{\mathbb{R}^n}\displaystyle\sum_{i=1}^{p}\Delta^{2}u_i\eta_{R}u_i\mathrm{d} x=\displaystyle\sum_{i=1}^{p}\displaystyle\int_{\mathbb{R}^n}u_i\Delta^2(\eta_{R}u_i)\mathrm{d} x.
\end{equation}
Applying the formula for the bi-Laplacian of the product on the right-hand side of \eqref{chato},
\begin{align*}
\displaystyle\sum_{i=1}^{p}\displaystyle\int_{\mathbb{R}^n}u_i\Delta^2(\eta_{R}u_i)\mathrm{d} x = & \displaystyle\sum_{i=1}^{p}\displaystyle\int_{\mathbb{R}^n}\left[u_i\Delta^2(\eta_R)u_i+4u_i\nabla\Delta\eta_{R}\nabla u_i \right]\mathrm{d} x&\\
+ &\displaystyle\sum_{i=1}^{p}\displaystyle\int_{\mathbb{R}^n}\left[
6u_i\Delta\eta_R\Delta u_i+4u_i\nabla\eta_{R}\nabla\Delta u_i+u_i\eta_R\Delta^2 u_i\right]\mathrm{d} x,&
\end{align*}
which combined with \eqref{chato} provides
\begin{align}\label{demais}
&\displaystyle\sum_{i=1}^{p}\displaystyle\int_{\mathbb{R}^n}\left[u_i^2\Delta^2\eta_R+4u_i\nabla\Delta\eta_{R}\nabla u_i+6u_i\Delta\eta_R\Delta u_i+4u_i\nabla\eta_{R}\nabla\Delta u_i\right]\mathrm{d} x=0.&
\end{align}
Again, we use integration by parts in \eqref{demais} to find
\begin{align*}
&\displaystyle\sum_{i=1}^{p}\left[\displaystyle\int_{\mathbb{R}^n}u_i^2\Delta^2\eta_{R}\mathrm{d} x-4\left(\displaystyle\int_{\mathbb{R}^n}\Delta\eta_R|\nabla u_i|^2\mathrm{d} x+\displaystyle\int_{\mathbb{R}^n}u_i\Delta\eta_R\Delta u_i\mathrm{d} x\right)\right.&\\
&\left.+6\displaystyle\int_{\mathbb{R}^n}u_i\Delta\eta_R\Delta u_i\mathrm{d} x-4\left(\displaystyle\int_{\mathbb{R}^n}u_i\eta_R\Delta^2 u_i\mathrm{d} x+\displaystyle\int_{\mathbb{R}^n}\eta_R\nabla u_i\nabla\Delta u_i\mathrm{d} x\right)\right]=0,&
\end{align*}
which yields
\begin{align}\label{austria}
4\displaystyle\sum_{i=1}^{p}\displaystyle\int_{\mathbb{R}^n}\Delta^{2}u_i\eta_Ru_i\mathrm{d} x
&=\displaystyle\sum_{i=1}^{p}\left[\displaystyle\int_{\mathbb{R}^n}u_i^2\Delta^2\eta_R\mathrm{d} x-4\displaystyle\int_{\mathbb{R}^n}\Delta\eta_R|\nabla u_i|^2\mathrm{d} x\right.\\\nonumber
&\quad\left.+2\displaystyle\int_{\mathbb{R}^n}u_i\Delta\eta_R\Delta u_i\mathrm{d} x+4\displaystyle\int_{\mathbb{R}^n}\Delta u_i\nabla u_i\nabla\eta_{R}\mathrm{d} x+4\displaystyle\int_{\mathbb{R}^n}\eta_R|\Delta u_i|^2\mathrm{d} x\right].
\end{align}
As a result of \eqref{cansado} and \eqref{austria}, we obtain
\begin{align}\label{rosario}
c(n)\displaystyle\int_{\mathbb{R}^{n}}|\mathcal{U}|^{2^{**}}\eta_{R}\mathrm{d} x
&=\frac{1}{4}\displaystyle\int_{\mathbb{R}^{n}}|\mathcal{U}|^2\Delta^2\eta_{R}\mathrm{d} x-\displaystyle\int_{\mathbb{R}^{n}}|\nabla\mathcal{U}|^2\Delta\eta_{R}\mathrm{d} x\\\nonumber
&+\frac{1}{2}\displaystyle\int_{\mathbb{R}^{n}}\langle\mathcal{U},\Delta\mathcal{U}\rangle\Delta\eta_{R}\mathrm{d} x+\displaystyle\int_{\mathbb{R}^{n}}\langle\Delta\mathcal{U},\nabla \mathcal{U}\rangle\nabla\eta_R\mathrm{d} x+\displaystyle\int_{\mathbb{R}^{n}}|\Delta\mathcal{U}|^2\eta_{R}\mathrm{d} x.
\end{align}
Moreover, we have
\begin{equation*}
\displaystyle\int_{\mathbb{R}^{n}}|\mathcal{U}|^2\Delta^2\eta_{R}\mathrm{d} x=\mathcal{O}(R^{4-n}) \quad {\rm as} \quad R\rightarrow\infty.
\end{equation*}
Indeed, since \eqref{shape} yields $|\mathcal{U}(x)|\leqslant C|x|^{4-n}$ for $|x|\geqslant1$, we observe
\begin{align*}
\displaystyle\left|\int_{\mathbb{R}^{n}}|\mathcal{U}|^2\Delta^2\eta_{R}\mathrm{d} x\right|&\leqslant\displaystyle\int_{B_{2R}(0)\setminus B_{R}(0)}|\mathcal{U}|^2|\Delta^2\eta_{R}|\mathrm{d} x\\
&\leqslant\frac{C(\eta)}{R^4}\displaystyle\int_{B_{2R}(0)\setminus B_{R}(0)}|\mathcal{U}|^2\mathrm{d} x\\
&\leqslant\frac{C(n,\eta)}{R^{4}}R^{2(4-n)}R^{n}=C(n,\eta)R^{4-n}.
\end{align*}
Analogously, the remaining cut-off terms satisfy
\begin{gather*}
\displaystyle\int_{\mathbb{R}^{n}}|\nabla\mathcal{U}|^2\Delta\eta_{R}\mathrm{d} x=\mathcal{O}(R^{4-n}) \ \mbox{and} \
\displaystyle\int_{\mathbb{R}^{n}}\langle\mathcal{U},\Delta\mathcal{U}\rangle\Delta\eta_{R}\mathrm{d} x=\displaystyle\int_{\mathbb{R}^{n}}\langle\Delta\mathcal{U},\nabla\mathcal{U}\rangle\nabla\eta_R\mathrm{d} x=\mathcal{O}(R^{4-n})
\end{gather*}
as $R\rightarrow\infty$.
Since $\eta_{R}\rightarrow1$ pointwise in $\mathbb{R}^n$ as $R\rightarrow\infty$, taking the limit in \eqref{rosario} and using the estimates above, we find
\begin{equation*}
\displaystyle\int_{\mathbb{R}^{n}}|\Delta\mathcal{U}|^2\mathrm{d} x=c(n)\displaystyle\int_{\mathbb{R}^{n}}|\mathcal{U}|^{2^{**}}\mathrm{d} x<\infty.
\end{equation*}
Since $|\mathcal{U}|$ has the classification \eqref{shape}, a direct computation yields
\begin{equation*}
\int_{\mathbb{R}^{n}}|\mathcal{U}|^{2^{**}}\mathrm{d} x=S(2,2,n)^{-n},
\end{equation*}
where $S(2,2,n)$ is the best Sobolev constant defined in \eqref{fourthtalaub}. Hence, $\mathcal{U}$ is a weak solution to \eqref{oursystem}, and the proof follows as a direct application of Theorem~\ref{hebey-druet}$'$.
\end{proof}
\begin{remark}
System \eqref{regularsystem} is equivalent to the following integral system
\begin{equation}\label{integralsystem}
u_i(x)=C_n\int_{\mathbb{R}^n}|x-y|^{4-n}f_i(\mathcal{U}(y))\mathrm{d} y \quad {\rm in} \quad \mathbb{R}^n.
\end{equation}
in the sense that every solution to \eqref{regularsystem} is, up to an additive constant, a solution to \eqref{integralsystem}, and conversely. W. X. Chen and C. Li \cite[Theorem~3]{MR2510000} used the moving planes method in its integral form to classify solutions to a class of systems like \eqref{integralsystem} involving more general nonlinearities. Let us mention that this approach can also be extended to study higher order systems.
\end{remark}
\subsection{Maximizers for a vectorial Sobolev inequality} To motivate this part of the work, we observe that ${\mathcal{U}}_{x_0,\mu}=\Lambda u_{x_0,\mu}$ is also a solution to the following non-coupled system
\begin{equation*}
\Delta^2 u_i=\lambda_i c(n)u_i^{2^{**}-1} \quad {\rm in} \quad \mathbb{R}^n \quad {\rm for} \quad i\in I,
\end{equation*}
which arises in the search for extremal functions of the Sobolev inequality $\|u_i\|_{L^{2^{**}}(\mathbb{R}^n)}\leqslant C(n,i)\|u_i\|_{\mathcal{D}^{2,2}(\mathbb{R}^n)}$ for each $i\in I$.
To study this problem, we use a variational framework based on the Sobolev space $\mathcal{D}^{2,2}(\mathbb{R}^n,\mathbb{R}^p)$.
We show that solutions obtained in Theorem \ref{theorem1} are indeed the extremal $p$-maps for a type of Sobolev embedding \cite{MR2852264,MR1992377,MR1911916,MR2221095,MR1967041}.
As usual, let us denote by $\mathcal{D}^{k,q}(\mathbb{R}^n,\mathbb{R}^p)$ the Beppo--Levi space defined as the completion of $C_c^{\infty}(\mathbb{R}^n,\mathbb{R}^p)$ with respect to the norm provided by the highest derivative term.
Notice that if $q=2$, then $\mathcal{D}^{k,2}(\mathbb{R}^n,\mathbb{R}^p)$ is a Hilbert space with scalar product given by $\langle\mathcal{U},\mathcal{V}\rangle=\sum_{i=1}^{p}\langle u_i,v_i\rangle_{\mathcal{D}^{k,2}(\mathbb{R}^n)}$. Moreover, for the {\it higher order critical Sobolev exponent} $q_k^{*}={nq}/({n-kq})$, we have the continuous embedding $\mathcal{D}^{k,q}(\mathbb{R}^n,\mathbb{R}^p)\hookrightarrow L^{q_k^{*}}(\mathbb{R}^n,\mathbb{R}^p)$ with
\begin{equation*}
\|\mathcal{U}\|_{L^{q_k^{*}}(\mathbb{R}^n,\mathbb{R}^p)}\leqslant S(k,q,n,p)\|\mathcal{U}\|_{\mathcal{D}^{k,q}(\mathbb{R}^n,\mathbb{R}^p)}.
\end{equation*}
In this fashion, a natural problem is to obtain and classify the extremal functions and best constants for the inequality above.
For the scalar case, the celebrated papers \cite{MR0448404,MR0463908} contain the sharp Sobolev constant for $k=1$ as follows
\begin{equation*}
S(1,q,n)=
\begin{cases}
\pi^{-\frac{1}{2}}n^{-\frac{1}{q}}\left(\frac{q-1}{n-q}\right)^{1-\frac{1}{q}}\left[\frac{\Gamma\left(1+\frac{n}{2}\right)\Gamma(n)}{\Gamma\left(\frac{n}{q}\right)\Gamma\left(n+1-\frac{n}{q}\right)}\right]^{\frac{1}{n}},& {\rm if} \ 1<q<n\\
\pi^{-\frac{1}{2}}{n^{-1}}\left[\Gamma\left(1+\frac{n}{2}\right)\right]^{\frac{1}{n}},& {\rm if} \ q=1,
\end{cases}
\end{equation*}
with extremals given by the spherical functions, {\it i.e.}, for some $\mu>0$ and $x_0\in\mathbb{R}^n$,
\begin{equation*}
u(x)=\left(\frac{2\mu}{1+\mu^2|x-x_0|^{q/(q-1)}}\right)^{\frac{n-q}{q}}.
\end{equation*}
In particular, when $q=2$, we get
\begin{equation*}
S(1,2,n)=\left(\frac{4}{n(n-2)\omega_n^{2/n}}\right)^{1/2} \quad {\rm and} \quad u_{x_0,\mu}(x)=\left(\frac{2\mu}{1+\mu^2|x-x_0|^2}\right)^{\frac{n-2}{2}}.
\end{equation*}
On the fourth order case, $k=2$ and $q=2$, C. S. Lin \cite{MR1611691} found the best constant and characterized the set of maximizers,
\begin{equation}\label{fourthtalaub}
S(2,2,n)=\left(\frac{16}{n(n-4)(n^2-4)\omega_n^{{4}/{n}}}\right)^{1/2} \quad {\rm and} \quad u_{x_0,\mu}(x)=\left(\frac{2\mu}{1+\mu^2|x-x_0|^2}\right)^{\frac{n-4}{2}}.
\end{equation}
In the vectorial case, we quote the second order Sobolev inequality
\begin{equation}\label{mimosa}
\|\mathcal{U}\|_{L^{2^{*}}(\mathbb{R}^n,\mathbb{R}^p)}\leqslant S(1,2,n,p)\|\mathcal{U}\|_{\mathcal{D}^{1,2}(\mathbb{R}^n,\mathbb{R}^p)},
\end{equation}
where the extremal maps are the multiples of the second order spherical functions and $S(1,2,n,p)=S(1,2,n)$ for all $p>1$. Let us also consider the fourth order case of \eqref{mimosa} as
\begin{equation}\label{fourthvectsob}
\mathcal{D}^{2,2}(\mathbb{R}^n,\mathbb{R}^p)\hookrightarrow L^{2^{**}}(\mathbb{R}^n,\mathbb{R}^p).
\end{equation}
Our main result here states that the solutions to \eqref{regularsystem} are the extremal functions for
\begin{equation}\label{fourthsobine}
\|\mathcal{U}\|_{L^{2^{**}}(\mathbb{R}^n,\mathbb{R}^p)}\leqslant S(2,2,n,p)\|\mathcal{U}\|_{\mathcal{D}^{2,2}(\mathbb{R}^n,\mathbb{R}^p)}.
\end{equation}
Remarkably, the best constant in \eqref{fourthsobine} coincides with the one for $p=1$, that is, $S(2,2,n,1)=S(2,2,n,p)$ for all $p>1$.
In other terms, the number of equations of the system has no effect on the best Sobolev constant for product spaces. In what follows, let us fix the notation $S(2,2,n,p)=S(n,p)$ and $S(2,2,n)=S(n)$.
\begin{proposition}
Let ${\mathcal{U}}_{x_0,\mu}$ be a spherical solution to \eqref{regularsystem}. Then, ${\mathcal{U}}_{x_0,\mu}$ is an extremal function for the Sobolev inequality \eqref{fourthsobine}, that is,
\begin{equation*}
\|{\mathcal{U}}_{x_0,\mu}\|_{L^{2^{**}}(\mathbb{R}^n,\mathbb{R}^p)}= S(n,p)\|{\mathcal{U}}_{x_0,\mu}\|_{\mathcal{D}^{2,2}(\mathbb{R}^n,\mathbb{R}^p)}.
\end{equation*}
Moreover, $S(n,p)=S(n)$ for all $p>1$.
\end{proposition}
\begin{proof}
Initially, we observe
\begin{equation}\label{varsobconst}
S(n,p)^{-2}=\inf_{\mathcal{H}^p(\mathbb{R}^n)}\sum_{i=1}^{p}\int_{\mathbb{R}^n}|\Delta u_i|^{2}\mathrm{d} x, \; {\rm where} \; \mathcal{H}^p(\mathbb{R}^n)=\left\{\mathcal{U}\in\mathcal{D}^{2,2}(\mathbb{R}^n,\mathbb{R}^p) : \|\mathcal{U}\|_{L^{2^{**}}(\mathbb{R}^n,\mathbb{R}^p)}=1\right\}.
\end{equation}
When $p=1$ our result is a consequence of Theorem~\ref{theoremA} with best constant $S(n)$ given by \eqref{fourthtalaub}.
\noindent{\bf Claim 1:} $S(n,p)=S(n)$ for all $p>1$.
\noindent In fact, by taking $u\in \mathcal{D}^{2,2}(\mathbb{R}^n)$ satisfying $\|u\|_{L^{2^{**}}(\mathbb{R}^n)}=1$, we have that $\mathcal{U}=u{\bf e_1}$ belongs to $\mathcal{H}^p(\mathbb{R}^n)$, where ${\bf e_1}=(1,0,\dots,0)$. Substituting $\mathcal{U}$ in \eqref{varsobconst}, we get that $S(n)\leqslant S(n,p)$. Conversely, we have
\begin{align}\label{hebeypot}
\left(\sum_{i=1}^{p}\int_{\mathbb{R}^n}|u_i|^{2^{**}}\mathrm{d} x\right)^{{2}/{2^{**}}}&\leqslant\sum_{i=1}^{p}\left(\int_{\mathbb{R}^n}|u_i|^{2^{**}}\mathrm{d} x\right)^{{2}/{2^{**}}}&\\\nonumber
&\leqslant S(n)^{2}\sum_{i=1}^{p}\int_{\mathbb{R}^n}|\Delta u_i|^{2}\mathrm{d} x.&
\end{align}
Therefore, by \eqref{hebeypot} we find that $S(n,p)\leqslant S(n)$, which gives us the proof of the claim.
Finally, using the direct computation
\begin{equation*}
\frac{\|{\mathcal{U}}_{x_0,\mu}\|_{L^{2^{**}}(\mathbb{R}^n,\mathbb{R}^p)}}{\|{\mathcal{U}}_{x_0,\mu}\|_{\mathcal{D}^{2,2}(\mathbb{R}^n,\mathbb{R}^p)}}=\frac{\|u_{x_0,\mu}\|_{L^{2^{**}}(\mathbb{R}^n)}}{{\|u_{x_0,\mu}}\|_{\mathcal{D}^{2,2}(\mathbb{R}^n)}}=S(n),
\end{equation*}
concludes the proof of the proposition.
\end{proof}
\section{Classification result for singular solutions}\label{section4}
The objective of this section is to present the proof of Theorem~\ref{theorem2}.
First, we show that singular solutions to \eqref{oursystem} are radially symmetric about the origin. Second, using an argument from F. Catrina and Z.-Q. Wang \cite{MR1794994}, we transform \eqref{oursystem} into a PDE system on the cylinder.
Then, we obtain radial symmetry via an asymptotic moving planes technique due to \cite{MR982351} (see also \cite{MR1611691,MR3394387,MR4002167}); this property turns \eqref{oursystem} into a fourth order ODE system.
Eventually, we define a Pohozaev-type invariant by integrating the Hamiltonian energy of the associated Emden-Fowler system \cite{MR1666838,MR2393072,MR4085120,MR3869387,MR4094467,MR4123335}.
Moreover, we prove that the sign of the Pohozaev invariant provides a removable-singularity classification for nonnegative solutions to \eqref{oursystem}, which, combined with a delicate ODE analysis as in \cite{MR3869387}, completes our argument.
Since in this section we are dealing with singular solutions to \eqref{oursystem}, some components might develop a non-removable singularity at the origin.
In what follows, let us divide singular solutions into two classes.
Namely, a solution to \eqref{oursystem} is called {\it fully-singular} if the origin is a {\it non-removable singularity} for every component solution $u_i$. Otherwise, we say that $\mathcal{U}$ is {\it semi-singular}.
Observe that for both types of singular solutions, we have $\liminf_{|x|\rightarrow0}|\mathcal{U}|=\infty$. More precisely, we present the following definitions.
\begin{definition}
Let us define the blow-up set by $I_{\infty}=\{i\in I: \liminf_{r \rightarrow 0 }u_i(r)=\infty\}$, where $I=\{1,\dots,p\}$.
\end{definition}
It is easy to observe that $\mathcal{U}$ being a singular solution to \eqref{oursystem} is equivalent to $I_{\infty}\neq \emptyset$.
Hence, in terms of cardinality of the blow-up set, we divide singular solutions to \eqref{oursystem} as follows
\begin{definition}\label{singset}
Let $\mathcal{U}$ be a nonnegative singular solution to \eqref{oursystem}.
We say that $\mathcal{U}$ is {\it fully-singular} if $I_{\infty}=I$. Otherwise, if $ I_{\infty} \neq I $ we call $\mathcal{U}$ {\it semi-singular}.
\end{definition}
\subsection{Cylindrical transformation}\label{sec:cylindricaltransform}
Let us introduce the so-called {\it cylindrical transformation} \cite{MR1794994}.
Using this device, we convert singular solutions to \eqref{oursystem} in the punctured space into non-singular solutions on a cylinder. In fact, considering {\it spherical coordinates} denoted by $(r,\sigma)$, we can rewrite \eqref{oursystem} as the nonautonomous nonlinear system,
\begin{equation*}
\Delta^2_{\rm sph}u_i=c(n)|\mathcal{U}|^{2^{**}-2}u_i \quad {\rm in} \quad {\mathcal{C}}_0.
\end{equation*}
Here ${\mathcal{C}}_0:=(0,\infty)\times\mathbb{S}^{n-1}$ is the (half-)cylinder and $\Delta^2_{\rm sph}$ is the bi-Laplacian in spherical coordinates given by
\begin{align}\label{Pdespherical}
\Delta^2_{\rm sph}&=\partial_r^{(4)}+ \frac{2(n-1)}{r}\partial_r^{(3)}+\frac{(n-1)(n-3)}{r^2}\partial_r^{(2)}-\frac{(n-1)(n-3)}{r^3}\partial_r&\\\nonumber
&+\frac{1}{r^4}\Delta_{\sigma}^2+\frac{2}{r^2}\partial^{(2)}_r\Delta_{\sigma}+\frac{2(n-3)}{r^3}\partial_r\Delta_{\sigma}-\frac{2(n-4)}{r^4}\Delta_{\sigma},&
\end{align}
where $\Delta_{\sigma}$ denotes the Laplace--Beltrami operator in $\mathbb{S}^{n-1}$.
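As a consistency check, for radial functions all the angular terms in \eqref{Pdespherical} drop, and the remaining part is precisely the square of the radial Laplacian; indeed, a direct computation gives
\begin{equation*}
\left(\partial_r^{(2)}+\frac{n-1}{r}\partial_r\right)^{2}=\partial_r^{(4)}+\frac{2(n-1)}{r}\partial_r^{(3)}+\frac{(n-1)(n-3)}{r^2}\partial_r^{(2)}-\frac{(n-1)(n-3)}{r^3}\partial_r.
\end{equation*}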
Moreover, the vectorial Emden--Fowler change of variables (or logarithmic coordinates) given by $\mathcal{V}(t,\theta)=r^{\gamma}\mathcal{U}(r,\sigma)$, where $r=|x|$, $t=-\ln r$, $\theta=x/|x|$, and $\gamma=({n-4})/{2}$ is the {\it Fowler rescaling exponent}, sends the problem to the entire cylinder $\mathcal{C}_{\infty}=\mathbb{R}\times \mathbb{S}^{n-1}$.
In the geometric setting, this change of variables corresponds to the conformal diffeomorphism $\varphi:(\mathcal{C}_{\infty},g_{{\rm cyl}})\rightarrow(\mathbb{R}^n\setminus\{0\},\delta_0)$ between the cylinder and the punctured space, defined by $\varphi(t,\sigma)=e^{-t}\sigma$. Here $g_{{\rm cyl}}=\mathrm{d} t^2+\mathrm{d}\sigma^2$ stands for the cylindrical metric, $\delta_0$ is the standard flat metric, and the pullback satisfies $\varphi^{*}\delta_0=e^{-2t}(\mathrm{d} t^2+\mathrm{d}\sigma^2)=e^{-2t}g_{{\rm cyl}}$, which shows that $\varphi$ is conformal. Using this coordinate system, and performing a lengthy computation (see \cite{MR4094467,MR4123335}), we arrive at the following fourth order nonlinear PDE on the cylinder,
\begin{equation}\label{sphevectfowlersystem}
\Delta^2_{\rm cyl}v_i=c(n)|\mathcal{V}|^{2^{**}-2}v_i \quad {\rm on} \quad {\mathcal{C}_{\infty}}.
\end{equation}
Here $\mathcal{V}=(v_1,\dots,v_p)$ and $\Delta^2_{\rm cyl}$ is the bi-Laplacian in cylindrical coordinates given by
\begin{equation*}
\Delta^2_{\rm cyl}=\partial_t^{(4)}-K_2\partial_t^{(2)}+K_0+\Delta_{\theta}^{2}+2\partial_t^{(2)}\Delta_{\theta}-J_0\Delta_{\theta},
\end{equation*}
where $K_0,K_2,J_0$ are constants depending only on the dimension, defined by
\begin{equation}\label{coefficients}
K_0=\frac{n^2(n-4)^2}{16}, \quad K_2=\frac{n^2-4n+8}{2} \quad {\rm and} \quad J_0=\frac{n(n-4)}{4}.
\end{equation}
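For instance, \eqref{sphevectfowlersystem} admits the constant solutions $\mathcal{V}\equiv v_0\Lambda$ with $\Lambda\in\mathbb{S}^{p-1}_{+}$, for which all derivative terms vanish and the equation reduces to $K_0v_0=c(n)v_0^{2^{**}-1}$, that is,
\begin{equation*}
v_0=\left(\frac{K_0}{c(n)}\right)^{\frac{n-4}{8}},
\end{equation*}
which, back in the original variables, corresponds to the singular $p$-map $\mathcal{U}(x)=v_0\Lambda|x|^{-\gamma}$.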
Along these lines, let us consider the cylindrical transformation of a $p$-map as follows
\begin{equation*}
\mathfrak{F}:C_c^{\infty}(\mathbb{R}^n\setminus\{0\},\mathbb{R}^p)\rightarrow C_c^{\infty}(\mathcal{C}_{\infty},\mathbb{R}^p)\quad \mbox{given by} \quad
\mathfrak{F}(\mathcal{U})=r^{\gamma}\mathcal{U}(r,\sigma).
\end{equation*}
\begin{remark}
The transformation $\mathfrak{F}$ is a continuous bijection with respect to the Sobolev norms $\|\cdot\|_{\mathcal{D}^{2,2}(\mathbb{R}^n\setminus\{0\},\mathbb{R}^p)} \ {\rm and} \ \|\cdot\|_{H^{2}(\mathcal{C}_{\infty},\mathbb{R}^p)}$, respectively. Furthermore, this transformation sends singular solutions to \eqref{oursystem} into solutions to \eqref{sphevectfowlersystem} and by density, we get $\mathfrak{F}:\mathcal{D}^{2,2}(\mathbb{R}^n\setminus\{0\},\mathbb{R}^p)\rightarrow H^{2}(\mathcal{C}_{\infty},\mathbb{R}^p)$.
\end{remark}
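For orientation, under $\mathfrak{F}$ the fourth order spherical functions \eqref{sphericalfunctions} centered at the origin become explicit homoclinic profiles on the cylinder: a direct substitution with $r=e^{-t}$ gives
\begin{equation*}
\mathfrak{F}(u_{0,\mu})(t)=\left(\frac{2\mu e^{-t}}{1+\mu^{2}e^{-2t}}\right)^{\frac{n-4}{2}}=\left[\cosh(t-\ln\mu)\right]^{-\frac{n-4}{2}},
\end{equation*}
which decays at both ends of the cylinder, in agreement with the removability of the singularity at the origin.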
\subsection{Regularity}
As in Proposition~\ref{regregularity}, an important question is whether weak solutions to \eqref{oursystem} are also classical solutions. Since the method of regularity lifting used in Proposition~\ref{regregularity} does not directly apply in the punctured space, we use $\mathfrak{F}$ to convert \eqref{oursystem} into a PDE on the cylinder. Then, we can perform a regularity method for complete non-compact manifolds to prove our main proposition. Here we use some results from \cite{MR2569498}.
\begin{proposition}\label{singregularity}
Let $\mathcal{U}\in\mathcal{D}^{2,2}(\mathbb{R}^n\setminus\{0\},\mathbb{R}^p)$ be a nonnegative weak singular solution to \eqref{oursystem}. Then, $\mathcal{U}\in C^{4,\zeta}(\mathbb{R}^n\setminus\{0\},\mathbb{R}^p)$ is a classical singular solution to \eqref{oursystem}.
\end{proposition}
\begin{proof}
For some $\zeta\in(0,1)$, notice that $\mathcal{U}\in C^{4,\zeta}(\mathbb{R}^n\setminus\{0\},\mathbb{R}^p)$ is a classical solution to \eqref{oursystem} if, and only if, $\mathcal{V}\in C^{4,\zeta}(\mathcal{C}_{\infty},\mathbb{R}^p)$ is a classical solution to \eqref{sphevectfowlersystem}. Moreover, since the cylinder is a non-compact complete Riemannian manifold with nonnegative Ricci curvature, using the continuous higher order Sobolev embedding $H^{2}(\mathcal{C}_{\infty})\hookrightarrow L^{q}(\mathcal{C}_{\infty})$ for all $2\leqslant q\leqslant 2^{**}$, and adapting the proof of Proposition~\ref{regregularity}, we can conclude the proof of our result.
\end{proof}
\subsection{Asymptotic moving planes technique}
In this subsection, using a variant of {\it moving planes technique}, we prove that singular solutions to \eqref{oursystem} are radially symmetric about the origin.
The first work proving radial symmetry for solutions to PDEs via this method is due to J. Serrin \cite{MR0333220} (see also \cite{MR634248,MR544879}).
His approach was based on the reflection method developed earlier by A. D. Aleksandrov to study embedded surfaces of constant mean curvature.
In our case, solutions are singular at the origin, thus, to show that they are rotationally invariant, we need to perform an adaptation of Aleksandrov's method, which is called {\it asymptotic moving planes technique} \cite{MR982351}. Furthermore, this tool can be extended to fourth order problems as in \cite{MR3394387} and for second order systems \cite{MR1611691,MR4002167} with isolated singularities.
To the best of our knowledge, our result is the first to use this method in the context of strongly coupled fourth order systems.
To prove our main result, we require three background lemmas from \cite[Section~3]{MR982351}.
\begin{lemmaletter}\label{harmasymp}
Let $\vartheta$ be a harmonic function and consider $({\vartheta})_{z,1}(x)=|x|^{2-n}\vartheta\left(z+{x}{|x|^{-2}}\right)$, the second order Kelvin transform of $\vartheta$, which for simplicity is denoted by $\widetilde{\vartheta}$. Then, $\widetilde{\vartheta}$ is harmonic in a neighborhood of infinity, and it satisfies the asymptotic expansion
\begin{equation}\label{harmonicexpansion}
\begin{cases}
\widetilde{\vartheta}(x)=\mathfrak{a}_{0}|x|^{2-n}+\mathfrak{a}_{j}x_j|x|^{-n}+\mathcal{O}(|x|^{-n})\\
\partial_{x_j}\widetilde{\vartheta}(x)=(2-n)\mathfrak{a}_{0}x_j|x|^{-n}+\mathcal{O}(|x|^{-n})\\
\partial_{x_{k}x_{j}}\widetilde{\vartheta}(x)=\mathcal{O}(|x|^{-n}).
\end{cases}
\end{equation}
where $\mathfrak{a}_{0},\mathfrak{a}_{j}\in\mathbb{R}$ are the coefficients of the Taylor expansion of $\vartheta$ at $z$.
\end{lemmaletter}
\begin{lemmaletter}\label{starting}
Let $\vartheta$ be a positive function defined in a neighborhood at infinity satisfying the asymptotic expansion \eqref{harmonicexpansion}.
Then, there exist $\bar{\lambda}<0$ and $R>0$ such that $\vartheta(x)>\vartheta(x_{\lambda})$ for $\lambda\leqslant\bar{\lambda}$, $|x|\geqslant R$ and $x\in\Sigma_{\lambda}$.
\end{lemmaletter}
\begin{lemmaletter}\label{boundedness}
Let $\vartheta$ satisfy the assumptions of Lemma~\ref{starting} with $\vartheta(x)=\vartheta(x_{\lambda})$ for some $x\in\Sigma_{\bar{\lambda}}$. Then, there exist $\varepsilon>0$ and $R>0$ satisfying\\
\noindent{\rm (i)} $\vartheta_{x_n}(x)>0$ in $|x_n-\lambda_{0}|<\varepsilon$ and $|x|>R$;\\
\noindent{\rm (ii)} $\vartheta(x)>\vartheta(x_{\lambda})$ in $x_n\geqslant\lambda_{0}+{\varepsilon}/{2}>\lambda$ and $|x|>R$ for all $x\in\Sigma_{\lambda}$, $\lambda\leqslant\lambda_{0}$ with $|\lambda_{0}-\bar{\lambda}|<c_{0}\varepsilon$, where $c_{0}>0$ is small and depends only on $\bar{\lambda}$ and $\vartheta$.
\end{lemmaletter}
We also require a maximum principle for singular domains, which is the content of \cite[Lemma~2.1]{MR1333503}.
\begin{propositionletter}\label{mp}
Let $\Omega$ be a smooth bounded domain in $\mathbb{R}^n$ and $\mathcal{Z}$ be a compact set in $\mathbb{R}^n$ with ${\mathrm{cap}}(\mathcal{Z})=0$. Assume that $\vartheta(x),h(x)$ are nonnegative continuous functions in ${\Omega}\setminus \mathcal{Z}$ satisfying
\begin{equation*}
\Delta \vartheta(x)+h(x)\leqslant0 \quad {\rm in} \quad \Omega\setminus \mathcal{Z}
\end{equation*}
in the distributional sense. Then,
\begin{equation*}
\vartheta(x)\geqslant\int_{\Omega}G(x,y)h(y)\mathrm{d} y+\int_{\partial \Omega}{\partial_\nu}G(x,y)\vartheta(y)\mathrm{d} s_y \quad {\rm in} \quad \Omega\setminus \mathcal{Z},
\end{equation*}
where $G(x,y)$ is the Green function of $-\Delta$ in $\Omega$ with Dirichlet boundary condition. In particular,
\begin{equation*}
\vartheta(x)\geqslant\inf_{\partial(\Omega\setminus\mathcal{Z})}\vartheta.
\end{equation*}
\end{propositionletter}
\begin{proposition}\label{symmetry}
Let $\mathcal{U}$ be a nonnegative singular solution to equation \eqref{oursystem}. Then, $|\mathcal{U}|$ is radially symmetric about the origin and monotonically decreasing.
\end{proposition}
\begin{proof}
Since $\mathcal{U}$ is a singular solution, we may suppose without loss of generality that the origin is a non-removable singularity of $u_1$. Fixing $z\neq0$ a non-singular point of $\mathcal{U}$, that is, $\lim_{x\rightarrow z}|\mathcal{U}(x)|<\infty$, we perform the fourth order Kelvin transform with center at $z$ and unit radius,
\begin{equation*}
(u_i)_{z,1}(x)=|x|^{4-n}u_i\left(z+\frac{x}{|x|^2}\right) \quad \mbox{for} \quad i\in I.
\end{equation*}
Denoting $\widetilde{u}_{i}=(u_i)_{z,1}$, we observe that $\widetilde{u}_{1}$ is singular at zero and at $z_{0}=-z/|z|^2$, whereas the other components are singular only at zero. Furthermore, using the conformal invariance of \eqref{oursystem}, we get
\begin{equation*}
\Delta^{2}\widetilde{u}_{i}=c(n)|\widetilde{\mathcal{U}}|^{2^{**}-2}\widetilde{u}_i \quad {\rm in} \quad \mathbb{R}^{n}\setminus\{0,z_{0}\}.
\end{equation*}
Let us set $\vartheta_i(x)=-\Delta \widetilde{u}_i(x)$, so that $\vartheta_i(x)=\mathcal{O}(|x|^{2-n})$ as $|x|\rightarrow\infty$. Using Lemma~\ref{harmasymp}, we have that $\vartheta_i$ has the following harmonic asymptotic expansion at infinity,
\begin{equation*}
\begin{cases}
\vartheta_i(x)=a_{i0}|x|^{2-n}+a_{ij}x_j|x|^{-n}+\mathcal{O}(|x|^{-n})\\
\partial_{x_j}\vartheta_i(x)=(2-n)a_{i0}x_j|x|^{-n}+\mathcal{O}(|x|^{-n})\\
\partial_{x_{k}x_{j}}\vartheta_i(x)=\mathcal{O}(|x|^{-n}),
\end{cases}
\end{equation*}
where $a_{i0}=\vartheta_{i}(z)$ and $a_{ij}=\partial_{y_j}\vartheta_{i}(z)$.
Considering the axis determined by $0$ and $z_0$ as the reflection axis, we may assume that it is orthogonal to the $x_n$-direction given by the unit vector ${\bf e}_n=(0,\dots,0,1)$.
For $\lambda\in\mathbb{R}$, we consider the sets
\begin{equation*}
\Sigma_{\lambda}:=\{x\in\mathbb{R}^n : x_n>\lambda\} \quad \mbox{and} \quad T_{\lambda}:=\partial \Sigma_{\lambda},
\end{equation*}
and we define the reflection about the plane $T_{\lambda}$ by
\begin{equation*}
x=(x_1,\dots,x_{n-1},x_{n}) \mapsto x_{\lambda}=(x_1,\dots,x_{n-1},2\lambda-x_{n}).
\end{equation*}
Let us also introduce the notation $(w_{i})_{\lambda}(x)=\widetilde{u}_{i}(x)-(\widetilde{u}_{i})_{\lambda}(x)$, where $(\widetilde{u}_{i})_{\lambda}(x)=\widetilde{u}_{i}(x_{\lambda})$. Then, showing radial symmetry about the origin for singular solutions to \eqref{oursystem} is equivalent to proving that
\begin{equation}\label{cg}
(w_{i})_{\lambda}\equiv0 \quad {\rm for} \quad \lambda=0.
\end{equation}
Subsequently, we divide the proof of \eqref{cg} into three claims.
\noindent{\bf Claim 1:} There exists $\bar{\lambda}_0<0$ such that $(w_{i})_{\lambda}>0$ in $\Sigma_{\lambda}$ for all $\lambda<\bar{\lambda}_0$ and $i\in I$.
\noindent In fact, notice that $(w_{i})_{\lambda}$ satisfies the following Navier problem
\begin{equation}\label{df}
\begin{cases}
\Delta^{2}(w_{i})_{\lambda}=(b_{i})_{\lambda}(w_{i})_{\lambda}& \quad {\rm in} \quad \Sigma_{\lambda}\\
\Delta (w_{i})_{\lambda}=(w_{i})_{\lambda}=0& \quad {\rm on} \quad T_{\lambda},
\end{cases}
\end{equation}
where
\begin{equation*}
(b_{i})_{\lambda}=\frac{c(n)|{\widetilde{\mathcal{U}}}|^{2^{**}-2}\widetilde{u}_i-c(n)|\widetilde{\mathcal{U}}_{\lambda}|^{2^{**}-2}(\widetilde{u}_i)_\lambda}{\widetilde{u}_{i}-(\widetilde{u}_{i})_{\lambda}}>0 \quad {\rm in} \quad \bar{\Sigma}_{\lambda}.
\end{equation*}
Then, as a consequence of Lemma \ref{starting}, there exist $\bar{\lambda}<0$ and $R>|z_0|+10$ such that
\begin{equation}\label{pb}
\Delta (w_{i})_{\lambda}(x)=(\vartheta_{i})_{\lambda}(x)-\vartheta_{i}(x)<0 \quad \hbox{for} \quad x\in\Sigma_{\lambda}, \quad\lambda\leqslant\bar{\lambda} \quad \hbox{and} \quad |x|>R.
\end{equation}
In addition, by Proposition~\ref{mp} we can find $C>0$ satisfying
\begin{equation}\label{pe}
\vartheta_i(x)\geqslant C \quad \hbox{for} \quad x\in \bar{B}_{R}\setminus\{0,z_0\}.
\end{equation}
Since $\vartheta_i(x)\rightarrow 0$ as $|x|\rightarrow\infty$, combining \eqref{pb} and \eqref{pe}, there exists $\bar{\lambda}_{0}<\bar{\lambda}$ such that
\begin{equation}\label{rn}
\Delta (w_{i})_{\lambda}(x)=(\vartheta_{i})_{\lambda}(x)-\vartheta_{i}(x)<0 \quad \hbox{for} \quad x\in\Sigma_{\lambda} \quad \mbox{and} \quad \lambda\leqslant\bar{\lambda}_0.
\end{equation}
Using that $\lim_{|x|\rightarrow\infty}(w_{i})_{\lambda}(x)=0$, we can apply the strong maximum principle to conclude that $(w_{i})_{\lambda}(x)>0$ for all $\lambda\leqslant\bar{\lambda}_0$ and $i\in I$, which implies the proof of the claim.
Now, thanks to Claim 1, we can define the {\it critical sliding parameter} given by
\begin{equation*}
\lambda^{*}=\sup\{\bar{\lambda}\leqslant0 : \eqref{rn} \ \hbox{holds for all} \ \lambda\leqslant\bar{\lambda}\}.
\end{equation*}
\noindent{\bf Claim 2:} $(w_{i})_{\lambda^{*}}\equiv0$ for all $i\in I$.
\noindent Fix $i\in I$ and suppose by contradiction that $(w_{i})_{\lambda^{*}}(x_0)\neq0$ for some $x_0\in\Sigma_{\lambda^{*}}$.
By continuity, we have that $\Delta (w_{i})_{\lambda^{*}}\leqslant0$ in $\Sigma_{\lambda^{*}}$. Since $\lim_{|x|\rightarrow\infty}(w_{i})_{\lambda^{*}}(x)=0$, a strong maximum principle yields that $(w_{i})_{\lambda^{*}}>0$ in $\Sigma_{\lambda^{*}}$.
Also, by \eqref{oursystem}, we get $\Delta^2 (w_{i})_{\lambda^{*}}=c(n)\left(|\widetilde{\mathcal{U}}|^{2^{**}-2}\widetilde{u}_{i}-|\widetilde{\mathcal{U}}_{\lambda^{*}}|^{2^{**}-2}(\widetilde{u}_{i})_{\lambda^{*}}\right)>0$. Hence, $\Delta (w_{i})_{\lambda^{*}}$ is subharmonic.
By employing again a strong maximum principle, we obtain that $\Delta (w_{i})_{\lambda^{*}}<0$ in $\Sigma_{\lambda^{*}}$.
In addition, by the definition of $\lambda^{*}$, there exists a sequence $\{\lambda_k\}_{k\in\mathbb{N}}$ such that, $\lambda_k\nearrow\lambda^{*}$ and $ \sup_{\Sigma_{\lambda_k}}\Delta (w_{i})_{\lambda_k}(x)>0$.
Observing that $\lim_{|x|\rightarrow\infty}\Delta (w_{i})_{\lambda_k}(x)=0$, we can find $x_k\in\Sigma_{\lambda_k}$ satisfying
\begin{equation}\label{peru}
\Delta (w_{i})_{\lambda_k}(x_k)=\sup_{\Sigma_{\lambda_k}}\Delta (w_{i})_{\lambda_k}(x).
\end{equation}
By Lemma~\ref{boundedness}, we observe that $\{x_k\}_{k\in\mathbb{N}}$ is bounded. Thus, up to subsequence, we may assume that $x_k\rightarrow x_0$. If $x_0\in\Sigma_{\lambda^{*}}$, passing to the limit in \eqref{peru}, we obtain $\Delta (w_{i})_{\lambda^{*}}(x_0)=0$, which is a contradiction with $\Delta (w_{i})_{\lambda^{*}}(x_0)\leqslant0$. If $x_0\in T_{\lambda^{*}}$ we have that $\nabla(\Delta (w_{i})_{\lambda^{*}}(x_0))=0$. This contradicts the Hopf boundary Lemma, because $\Delta (w_{i})_{\lambda^{*}}$ is negative and subharmonic in $\Sigma_{\lambda^{*}}$.
\noindent{\bf Claim 3:} $\lambda^{*}=0.$
\noindent Let us assume that the claim is not valid, that is, $\lambda^{*}<0$.
Then, for $\lambda=\lambda^{*}$, it holds $\Delta (w_{i})_{\lambda^{*}}(x)<0$. Since $\lim_{x\rightarrow z_0}\widetilde{u}_1(x)=\infty$, we observe that $\widetilde{u}_1$ cannot be invariant under the reflection $x\mapsto x_{\lambda^{*}}$.
Thus, using a strong maximum principle for \eqref{df}, we conclude
\begin{equation}\label{rj}
\widetilde{u}_i(x_{\lambda^{*}})<\widetilde{u}_i(x) \quad {\rm for} \quad x\in\Sigma_{\lambda^{*}} \quad {\rm with} \quad x_{\lambda^{*}}\notin\{0,z_0\}.
\end{equation}
Notice that, as a consequence of $\lambda^{*}<0$, we have that $0,z_0\notin T_{\lambda^{*}}$. Whence, applying the Hopf boundary Lemma, we get
\begin{equation}\label{rs}
\partial_{x_n}\left(\widetilde{u}_i(x)-\widetilde{u}_i(x_{\lambda^{*}})\right)=2\partial_{x_n}\widetilde{u}_i(x)>0 \quad {\rm on} \quad T_{\lambda^{*}}.
\end{equation}
Now choose $\{\lambda_k\}_{k\in\mathbb{N}}$ such that $\lambda_k\searrow\lambda^{*}$ as $k\rightarrow\infty$ and $x_k\in\Sigma_{\lambda_k}$ such that $\widetilde{u}_{1}(x_k)\leqslant\widetilde{u}_1({x_k}_{\lambda_k})$. Then, by Lemma~\ref{starting}, we obtain that $\{x_k\}_{k\in\mathbb{N}}$ is bounded. Whence, up to a subsequence, $x_k\rightarrow\bar{x}\in\bar{\Sigma}_{\lambda^{*}}$ with $\widetilde{u}_{1}(\bar{x})\leqslant\widetilde{u}_1(\bar{x}_{\lambda^{*}})$. By \eqref{rj}, we know that $\bar{x}\in\partial\Sigma_{\lambda^{*}}$, and then $\partial_{x_n}\widetilde{u}_1(\bar{x})\leqslant0$, a contradiction with \eqref{rs}, which proves \eqref{cg}.
\end{proof}
\subsection{Superharmonicity} Now, we use the radial symmetry to prove that any component of a singular solution to \eqref{oursystem} is superharmonic.
The next result is a version of Proposition~\ref{superharmonicity} for singular solutions to \eqref{oursystem}, which can be found in \cite[Lemma~2.3]{MR4094467} for the scalar case.
We also remark that \cite[Theorem~1.4]{MR4094467} contains an improved version of this result, which is called a Modica estimate.
\begin{proposition}\label{supersingular}
Let $\mathcal{U}$ be a nonnegative singular solution to \eqref{oursystem}. Then, $-\Delta u_i\geqslant0$ in $\mathbb{R}^n\setminus\{0\}$ for all $i\in I$.
\end{proposition}
\begin{proof}
Let us recall that $u_i(r)=r^{-\gamma}v_i(-\ln r)$, thus $u_i(r)\geqslant C_1r^{-\gamma}$, which together with \eqref{oursystem} implies
\begin{equation*}
0<\omega_{n-1}r^{n-1}{\partial_r}\Delta u_i(r)=c(n)\int_{B_r}|\mathcal{U}|^{2^{**}-2}u_i\mathrm{d} x,
\end{equation*}
for $0<r\ll1$ sufficiently small. Then, we get
\begin{equation}\label{supsing2}
\lim_{r\rightarrow0^{+}}r^{n-1}{\partial_r}\Delta u_i(r)=0.
\end{equation}
Moreover, $u_i$ satisfies
\begin{equation*}
{\partial_r}\left[r^{n-1}{\partial_r}\Delta u_i(r)\right]=r^{n-1}c(n)|\mathcal{U}|^{2^{**}-2}u_i,
\end{equation*}
which combined with \eqref{supsing2} gives us that ${\partial_r}\Delta u_i(r)>0$. Therefore, $\Delta u_i(r)$ is strictly increasing, and by the relation between $u_i$ and $v_i$, we find that $\lim_{r\rightarrow\infty}\Delta u_i(r)=0$. Consequently, $\Delta u_i(r)<0$ for all $r>0$, which completes the proof.
\end{proof}
As a direct consequence of Proposition~\ref{supersingular}, we show that singular solutions to \eqref{oursystem} are weakly positive. Again, this property is fundamental to define the quotient function $q_{ij}=u_i/u_j$.
\begin{proposition}\label{singweakpos}
Let $\mathcal{U}$ be a nonnegative singular solution to \eqref{oursystem}. Then, $\mathcal{U}$ is weakly positive.
\end{proposition}
\begin{proof}
It follows directly from Proposition~\ref{supersingular} and the strong maximum principle.
\end{proof}
Later, we will prove that singular solutions are more than weakly positive; indeed, they are strongly positive (see Corollary~\ref{stronglypositive}). In this case, either $I_0=\emptyset$ or $I_+=\emptyset$.
\subsection{Fourth order Emden--Fowler system}
Since we already know that solutions are rotationally invariant, the cylindrical transformation converts \eqref{oursystem} into a fourth order ODE system with constant coefficients. More specifically, using Proposition~\ref{symmetry}, we eliminate the angular components in expression \eqref{Pdespherical}, arriving at
\begin{equation}\label{vectfowlersystem}
v_i^{(4)}-K_2v_i^{(2)}+K_0v_i=c(n)|\mathcal{V}|^{2^{**}-2}v_i \quad {\rm in} \quad \mathbb{R} \quad {\rm for} \quad i\in I
\end{equation}
with initial conditions given by
\begin{equation}\label{initialconditions}
v_i(0)=a_i, \quad v_i^{(1)}(0)=0, \quad v_i^{(2)}(0)=b_i, \quad v_i^{(3)}(0)=0.
\end{equation}
\begin{remark}\label{equalityofinitiadata}
It is essential to ask about the existence of a solution to the IVP \eqref{vectfowlersystem}--\eqref{initialconditions}.
One can see that a necessary condition for this to hold is that the initial values must be the same for all components, that is, there exist $a,b\in\mathbb{R}$ such that $a_i=a$ and $b_i=b$ for all $i\in I$.
Indeed, by \eqref{vectfowlersystem} for all $i,j\in I$, we have
\begin{equation*}
\frac{v_i^{(4)}-K_2v_i^{(2)}+K_0v_i}{v_i}=\frac{v_j^{(4)}-K_2v_j^{(2)}+K_0v_j}{v_j}.
\end{equation*}
Hence, for any $C<0$, both component solutions $v_i$ and $v_j$ satisfy the linear eigenvalue problem
\begin{equation}\label{eigenvalue}
v^{(4)}-K_2v^{(2)}+K_0v=Cv.
\end{equation}
Moreover, since $K_2^2-4K_0>0$, we have that the characteristic equation associated to \eqref{eigenvalue} has four different roots $\lambda_1<\lambda_2<0<\lambda_3<\lambda_4$.
Thus, any solution to \eqref{eigenvalue} has the form
\begin{equation*}
\widetilde{v}(t)=C_1e^{\lambda_1t}+C_2e^{\lambda_2t}+C_3e^{\lambda_3t}+C_4e^{\lambda_4t}
\end{equation*}
for some constants $C_1,C_2,C_3,C_4$ depending on the initial values, which implies that, for any solution $\widetilde{v}$ to \eqref{eigenvalue}, $\mathcal{V}=(1,\dots,1)\widetilde{v}$ is a solution to \eqref{vectfowlersystem}.
Then, $a_i=\widetilde{v}(0)$ and $b_i=\widetilde{v}^{(2)}(0)$ for all $i\in I$.
Later we will see that in fact $I=I_+$.
\end{remark}
\subsection{Pohozaev invariant}
The Pohozaev invariant is a homological constant related to the existence and classification of solutions to a large class of PDEs. Its first appearance dates back to the classical paper of S. Pohozaev \cite{MR0192184}.
After that, N. Korevaar et al. \cite{MR1666838} used this tool together with rescaling analysis to prove removable-singularity theorems for solutions to the singular Yamabe equation in the flat background metric setting.
See also the related works \cite{MR2393072,MR4002167,MR3394387,MR4085120}.
Let us also emphasize that the existence of the Pohozaev-type invariant is closely related to conservation laws for the Hamiltonian energy of the ODE system \eqref{vectfowlersystem}.
In our fourth order vectorial setting, let us define an energy which is conserved in time for all $p$-map solutions $\mathcal{V}$ to \eqref{vectfowlersystem} \cite{MR3869387,MR4094467,MR4123335}.
\begin{definition}
For any $\mathcal{V}$ nonnegative solution to \eqref{vectfowlersystem}, let us consider its {\it Hamiltonian Energy} given by
\begin{equation}\label{vectenergy}
\mathcal{H}(t,\mathcal{V})=-\langle \mathcal{V}^{(3)}(t),\mathcal{V}^{(1)}(t)\rangle+\frac{1}{2}|\mathcal{V}^{(2)}(t)|^{2}+\frac{K_2}{2}|\mathcal{V}^{(1)}(t)|^{2}-\frac{K_0}{2}|\mathcal{V}(t)|^2+\hat{c}(n)|\mathcal{V}(t)|^{2^{**}},
\end{equation}
or more explicitly in components,
\begin{align*}
\mathcal{H}(t,\mathcal{V})&=-\left(\sum_{i=1}^{p}v_{i}^{(3)}(t)v_{i}^{(1)}(t)\right)+\frac{1}{2}\sum_{i=1}^{p}\left(v_{i}^{(2)}(t)\right)^2+\frac{K_2}{2}\sum_{i=1}^{p}\left(v_{i}^{(1)}(t)\right)^2&\\\nonumber
&\quad -\frac{K_0}{2}\sum_{i=1}^{p}v_{i}(t)^2+\hat{c}(n)\left(\sum_{i=1}^{p}v_{i}(t)^2\right)^{2^{**}/2},&
\end{align*}
where $\hat{c}(n)=(2^{**})^{-1}c(n)$.
\end{definition}
Let us remark that this quantity satisfies
\begin{equation}\label{conservation}
{\partial_t}\mathcal{H}(t,\mathcal{V})=0.
\end{equation}
In other words, the Hamiltonian energy is invariant on the variable $t$. In addition, we can integrate \eqref{vectenergy} over $\mathbb{S}_t^{n-1}$ to define another conserved quantity.
\begin{definition}
For any $\mathcal{V}$ nonnegative solution to \eqref{vectfowlersystem}, let us define its {\it cylindrical Pohozaev integral} by
\begin{align*}
\mathcal{P}_{\rm cyl}(t,\mathcal{V})&=\displaystyle\int_{\mathbb{S}_t^{n-1}}\mathcal{H}(t,\mathcal{V})\mathrm{d}\theta.&
\end{align*}
Here $\mathbb{S}_t^{n-1}=\{t\}\times\mathbb{S}^{n-1}$ is the cylindrical slice with volume element given by $\mathrm{d}\theta=e^{-2t}\mathrm{d}\sigma_r$, where $\mathrm{d}\sigma_r$ is the volume element of the Euclidean sphere of radius $r>0$.
\end{definition}
By definition, $\mathcal{P}_{\rm cyl}$ also does not depend on $t\in\mathbb{R}$.
Then, let us consider the {\it cylindrical Pohozaev invariant} $\mathcal{P}_{\rm cyl}(\mathcal{V}):=\mathcal{P}_{\rm cyl}(t,\mathcal{V})$.
Thus, by applying the inverse of cylindrical transformation, we can recover the classical {\it spherical Pohozaev integral} defined by $\mathcal{P}_{\rm sph}(r,\mathcal{U}):=\mathcal{P}_{\rm cyl}\circ\mathfrak{F}^{-1}\left(\mathcal{V}\right)$.
\begin{remark}
We are not providing the formula explicitly for the spherical Pohozaev, because it is too lengthy and is not required in the rest of this manuscript.
The cylindrical Pohozaev invariant suffices for our methods. Indeed, fixing $\mathcal{H}(t,\mathcal{V})\equiv H$ and $\mathcal{P}_{\rm sph}(\mathcal{U})=P$, we have that $\omega_{n-1}H=P$.
In other words, the Hamiltonian energy $H$ and spherical Pohozaev invariant $ P $ have the same sign.
For an expression of the Pohozaev invariant in the spherical case, we refer the reader to \cite[Proposition~3.3]{arXiv:1503.06412}.
\end{remark}
\begin{remark}
There exists a natural relation between the derivatives of $\mathcal{P}_{\rm sph}$ and $\mathcal{H}$ respectively,
\begin{equation*}
{\partial_r}\mathcal{P}_{\rm sph}(r,\mathcal{U})=r{\partial_t}\mathcal{H}(t,\mathcal{V}).
\end{equation*}
Thus, for any solution $\mathcal{U}$, the value $\mathcal{P}_{\rm sph}(r,\mathcal{U})$ is also radially invariant.
\end{remark}
Now it is convenient to introduce an important ingredient of our next results.
\begin{definition}
For any $\mathcal{U}$ nonnegative solution to \eqref{oursystem}, let us define its {\it spherical Pohozaev invariant} given by $\mathcal{P}_{\rm sph}(\mathcal{U}):=\mathcal{P}_{\rm sph}(r,\mathcal{U})$.
\end{definition}
\begin{remark} For easy reference, let us summarize the following facts:\\
\noindent{\rm (i)} There exists a type of equivalence between the cylindrical and spherical Pohozaev invariants, $\mathcal{P}_{\rm sph}(\mathcal{U})=\omega_{n-1}\mathcal{P}_{\rm cyl}(\mathcal{V})$, where $\omega_{n-1}$ is the Lebesgue measure of the unit sphere in $\mathbb{R}^{n}$.\\
\noindent{\rm (ii)} The Pohozaev invariant of a vectorial solution is equal to the Pohozaev invariant in the scalar case, which can be defined in a similar way using the Hamiltonian energy associated to \eqref{scalarsystem}.
More precisely, we define $\mathcal{P}_{\rm sph}(u)=\mathcal{P}_{\rm cyl}(r^{\gamma}u)$, where
\begin{equation*}
\mathcal{P}_{\rm cyl}(v)=\displaystyle\int_{\mathbb{S}^{n-1}_t}\left[-v^{(3)}v^{(1)}+\frac{1}{2}|v^{(2)}|^{2}+\frac{K_2}{2}|v^{(1)}|^{2}-\frac{K_0}{2}|v|^{2}+\hat{c}(n)|v|^{2^{**}}\right]\mathrm{d}\theta.
\end{equation*}
Hence, if the non-singular solution is ${\mathcal{U}}_{x_0,\mu}=\Lambda u_{x_0,\mu}$ for some $\Lambda\in\mathbb{S}^{p-1}_+$ and $u_{x_0,\mu}$ a spherical solution from Theorem~\ref{theoremA}, we obtain that $\mathcal{P}_{\rm sph}({\mathcal{U}}_{x_0,\mu})=\mathcal{P}_{\rm sph}(u_{x_0,\mu})=0$. Analogously, if the singular solution has the form $\mathcal{U}_{a,T}=\Lambda u_{a,T}$ for some $\Lambda\in\mathbb{S}^{p-1}_{+,*}$ and $u_{a,T}$ an Emden--Fowler solution from Theorem~\ref{theoremB}, we get that $\mathcal{P}_{\rm sph}({\mathcal{U}}_{a,T})=\mathcal{P}_{\rm sph}(u_{a,T})<0$.
\end{remark}
\subsection{ODE system analysis}
In this subsection, we perform an asymptotic analysis program due to Z. Chen and C. S. Lin \cite[Section~3]{MR3869387}.
This analysis is based on the Pohozaev invariant sign, which, combined with some results from \cite{MR4094467,MR3394387}, determines whether a solution to \eqref{oursystem} has a removable or a non-removable singularity at the origin. Before studying how this invariant classifies solutions to \eqref{oursystem}, we need to set some background results concerning the asymptotic behavior for solutions to \eqref{vectfowlersystem} and its derivatives.
\begin{definition}
For $\mathcal{V}$ solution to \eqref{vectfowlersystem}, let us define its {\it asymptotic set} given by
\begin{equation*}
\mathcal{A}(\mathcal{V}):=\displaystyle\bigcup_{i=1}^{p}\mathcal{A}(v_i) \subset[0,\infty],
\quad {\rm where} \quad \mathcal{A}(v_i):=\left\{l\in [0,\infty] : \lim_{t\rightarrow\pm\infty}v_i(t)=l\right\}.
\end{equation*}
In other words, $\mathcal{A}(\mathcal{V})$ is the set of all possible limits at infinity of the component solutions $v_i$.
\end{definition}
The first of our lemmas states that the asymptotic set of $\mathcal{V}$ is quite simple, in the sense that it does not depend on $i\in I$, and coincides with the one in the scalar case.
\begin{lemma}\label{asymptotics}
Let $\mathcal{V}$ be a nonnegative solution to \eqref{vectfowlersystem}.
Suppose that for all $i\in I$ there exists $l_i\in[0,\infty]$ such that $\lim_{t\rightarrow\pm\infty}v_i(t)=l_i$.
Thus, $l_i\in\{0,l^{*}\}$, where $l^{*}=p^{-1}{K_0}^{\frac{n-4}{8}}$; in other terms, $\mathcal{A}(\mathcal{V})=\{0,l^{*}\}$.
Moreover, if $\mathcal{P}_{\rm cyl}(\mathcal{V})\geqslant0$, then $l^{*}=0$.
\end{lemma}
\begin{proof}
Here it is only necessary to consider the case $t\rightarrow\infty$ since when $t\rightarrow-\infty$, taking $\tau=-t$, and observing that $\widetilde{\mathcal{V}}(\tau):=\mathcal{V}(t)$ also satisfies \eqref{vectfowlersystem}, the result follows equally.
Suppose by contradiction that the lemma does not hold.
Thus, for some fixed $i\in I$, one of the following two possibilities shall happen: either the asymptotic limit of $v_{i}$ is a finite constant $l_{i}>0$ that does not belong to the asymptotic set $\mathcal{A}$, or the limit blows up, that is, $l_i=+\infty$. Subsequently, we consider these two cases separately:
\noindent{\bf Case 1:} $l_{i}\in [0,\infty)\setminus\{0,l^{*}\}$.
\noindent By assumption, we have
\begin{equation}\label{bound1}
\displaystyle\lim_{t\rightarrow\infty}\left(c(n)|\mathcal{V}|^{\frac{8}{n-4}}v_{i}(t)-K_0v_{i}(t)\right)=\kappa, \quad \hbox{where} \quad \kappa:=c(n)pl_i^{\frac{n+4}{n-4}}-K_0l_i\neq0,
\end{equation}
which implies
\begin{equation}\label{bound2}
c(n)|\mathcal{V}|^{\frac{8}{n-4}}v_{i}(t)-K_0v_{i}(t)=v_i^{(4)}(t)-K_2v_i^{(2)}(t).
\end{equation}
A combination of \eqref{bound1} and \eqref{bound2} implies that for any $\varepsilon>0$ there exists $T_i\gg1$ sufficiently large satisfying
\begin{equation}\label{bound3}
\kappa-\varepsilon<v_i^{(4)}(t)-K_2v_i^{(2)}(t)<\kappa+\varepsilon \quad {\rm for \ all} \quad t>T_i.
\end{equation}
Now, integrating \eqref{bound3}, we obtain
\begin{equation*}
\int_{T_i}^{t}(\kappa-\varepsilon)\mathrm{d}\tau<\int_{T_i}^{t}\left[v_i^{(4)}(\tau)-K_2v_i^{(2)}(\tau)\right]\mathrm{d}\tau<\int_{T_i}^{t}(\kappa+\varepsilon)\mathrm{d}\tau,
\end{equation*}
which provides
\begin{equation}\label{bound5}
(\kappa-\varepsilon)(t-T_i)+C_1(T_i)<v_i^{(3)}(t)-K_2v_i^{(1)}(t)<(\kappa+\varepsilon)(t-T_i)+C_1(T_i),
\end{equation}
where $C_1(T_i)>0$ is a constant. Defining $\delta:=\sup_{t\geqslant T_i}|v_i(t)-v_i(T_i)|<\infty$, we obtain
\begin{equation*}
\left|\int_{T_i}^{t}K_2v_i^{(1)}(\tau)\mathrm{d}\tau\right|\leqslant|K_2|\delta.
\end{equation*}
Hence, integrating $\eqref{bound5}$ provides
\begin{equation}\label{bound10}
\frac{(\kappa-\varepsilon)}{2}(t-T_i)^2+L(t)<v_i^{(2)}(t)<\frac{(\kappa+\varepsilon)}{2}(t-T_i)^2+R(t),
\end{equation}
where $L(t),R(t)\in \mathcal{O}(t)$, namely
\begin{equation*}
L(t)=C_1(T_i)(T_i-t)-|K_2|\delta+C_2(T_i) \quad {\rm and} \quad R(t)=C_1(T_i)(T_i-t)+|K_2|\delta+C_2(T_i).
\end{equation*}
Then, repeating the same integration procedure in \eqref{bound10}, we find
\begin{equation}\label{bound11}
\frac{(\kappa-\varepsilon)}{24}(t-T_i)^4+\mathcal{O}(t^3)<v_i(t)<\frac{(\kappa+\varepsilon)}{24}(t-T_i)^4+\mathcal{O}(t^3) \quad \mbox{as} \quad t\rightarrow\infty.
\end{equation}
Therefore, since $\kappa\neq0$, we can choose $0<\varepsilon\ll1$ sufficiently small such that $\kappa-\varepsilon$ and $\kappa+\varepsilon$ have the same sign. Finally, by passing to the limit as $t\rightarrow\infty$ in inequality \eqref{bound11}, we obtain that $v_i$ blows up, that is, $l_i=\infty$, which is a contradiction.
This concludes the analysis of Case 1.
\noindent{\bf Case 2:} $l_{i}=\infty$.
\noindent This case is more delicate, and it requires a suitable choice of test functions from \cite{MR1879326}.
More precisely, let $\phi_0\in C^{\infty}([0,\infty))$ be a nonnegative function satisfying $\phi_0>0$ in $[0,2)$,
\begin{equation*}
\phi_0(z)=
\begin{cases}
1,&\ {\rm for} \ 0\leqslant z\leqslant1,\\
0,&\ {\rm for} \ z\geqslant2,
\end{cases}
\end{equation*}
and for $j\in \{1,2,3,4\}$, let us fix the constants (which are finite for a suitable choice of $\phi_0$; see \cite{MR1879326})
\begin{equation}\label{miti-pokh}
M_j:=\int_{0}^{2}\frac{|\phi_0^{(j)}(z)|^{\frac{n+4}{8}}}{\phi_0(z)^{\frac{n-4}{8}}}\, \mathrm{d} z.
\end{equation}
Using the contradiction assumption, we may assume that there exists $T_i>0$ such that for $t>T_i$, it follows
\begin{equation}\label{blow1}
v_i^{(4)}(t)-K_2v_i^{(2)}(t)=c(n)|\mathcal{V}(t)|^{\frac{8}{n-4}}v_{i}(t)-K_0v_{i}(t)\geqslant c(n)v_i(t)^{\frac{n+4}{n-4}}-K_0v_{i}(t)\geqslant \frac{c(n)}{2}v_{i}(t)^{\frac{n+4}{n-4}}
\end{equation}
and
\begin{equation}\label{blow2}
v_i^{(3)}(t)-K_2v_i^{(1)}(t)\geqslant\frac{c(n)}{2}\int_{T_i}^{t}v_{i}(\tau)^{\frac{n+4}{n-4}}\mathrm{d}\tau+C_1(T_i).
\end{equation}
Besides, as a consequence of \eqref{blow2}, we can find $T_i^{*}>T_i$ satisfying $\upsilon:=v_i^{(3)}(T_i^{*})-K_2v_i^{(1)}(T_i^{*})>0$.
Moreover, since \eqref{vectfowlersystem} is autonomous, we may suppose without loss of generality that $T_i^{*}=0$.
Then, multiplying inequality \eqref{blow1} by $\phi(\tau)=\phi_0(\tau/T)$ and integrating, we find
\begin{equation*}
\int_{0}^{T'}v_i^{(4)}(\tau)\phi(\tau)\mathrm{d}\tau-K_2\int_{0}^{T'}v_i^{(2)}(\tau)\phi(\tau)\mathrm{d}\tau\geqslant\frac{c(n)}{2}\int_{0}^{T'}v_{i}(\tau)^{\frac{n+4}{n-4}}\phi(\tau)\mathrm{d}\tau,
\end{equation*}
where $T'=2T$. Moreover, integration by parts combined with $\phi^{(j)}(T')=0$ for $j=0,1,2,3$ implies
\begin{equation}\label{blow4}
\int_{0}^{T'}v_i(\tau)\phi^{(4)}(\tau)\mathrm{d}\tau-K_2\int_{0}^{T'}v_i(\tau)\phi^{(2)}(\tau)\mathrm{d}\tau\geqslant\frac{c(n)}{2}\int_{0}^{T'}v_{i}(\tau)^{\frac{n+4}{n-4}}\phi(\tau)\mathrm{d}\tau+\upsilon.
\end{equation}
On the other hand, applying the Young inequality on the left-hand side of \eqref{blow4}, it follows
\begin{equation}\label{blow5}
v_i(\tau)|\phi^{(j)}(\tau)|\leqslant\varepsilon v_i^{\frac{n+4}{n-4}}(\tau)\phi(\tau)+C_{\varepsilon}\frac{|\phi^{(j)}(\tau)|^{\frac{n+4}{8}}}{\phi(\tau)^{\frac{n-4}{8}}}.
\end{equation}
Hence, combining \eqref{blow5} and \eqref{blow4}, we have that for $0<\varepsilon\ll1$ sufficiently small, it follows that there exists $\widetilde{C}_1>0$ satisfying
\begin{equation*}
\widetilde{C}_1\int_{0}^{T'}\left[\frac{|\phi^{(4)}(\tau)|^{\frac{n+4}{8}}}{\phi(\tau)^{\frac{n-4}{8}}}+\frac{|\phi^{(2)}(\tau)|^{\frac{n+4}{8}}}{\phi(\tau)^{\frac{n-4}{8}}}\right]\mathrm{d}\tau\geqslant\frac{c(n)}{4}\int_{0}^{T'}v_{i}(\tau)^{\frac{n+4}{n-4}}\mathrm{d}\tau+\upsilon.
\end{equation*}
Now by \eqref{miti-pokh}, one can find $\widetilde{C}_2>0$ such that
\begin{equation}\label{blow7}
\widetilde{C}_2\left(M_4T^{-\frac{n+2}{2}}+M_2T^{-\frac{n}{4}}\right)\geqslant\frac{c(n)}{4}\int_{0}^{T}v_{i}(\tau)^{\frac{n+4}{n-4}}\mathrm{d}\tau.
\end{equation}
Therefore, passing to the limit as $T\rightarrow\infty$ in \eqref{blow7}, the left-hand side converges to zero, whereas the right-hand side blows up; this is a contradiction.
To prove the second part, let us notice that
\begin{equation*}
\lim_{t\rightarrow\infty}\mathcal{P}_{\rm cyl}(t,\mathcal{V})=\omega_{n-1}\left(\hat{c}(n)|l^{*}|^{\frac{2n}{n-4}}-\frac{K_0}{2}|l^{*}|^{2}\right),
\end{equation*}
which, by the definition of $l^{*}$, is negative whenever $l^{*}\neq0$. Hence, the assumption $\mathcal{P}_{\rm cyl}(\mathcal{V})\geqslant0$ implies $l^{*}=0$ and $\mathcal{P}_{\rm cyl}(\mathcal{V})=0$.
\end{proof}
The next lemma complements the previous one by providing an accurate higher order asymptotic behavior for solutions $\mathcal{V}$ of \eqref{vectfowlersystem} with limits in the asymptotic set, namely, $\bigcup_{j=1}^{\infty}\mathcal{A}\big(\mathcal{V}^{(j)}\big)=\{0\}$.
\begin{lemma}\label{blow-up}
Let $\mathcal{V}$ be a nonnegative solution to \eqref{vectfowlersystem} such that $\lim_{t\rightarrow\pm\infty}v_i(t)\in \mathcal{A}$ for all $i\in I$. Then, for any $j\geqslant1$, we have that $\displaystyle\lim_{t\rightarrow\pm\infty}v_{i}^{(j)}(t)=0$.
\end{lemma}
\begin{proof}
As before, we only consider the case $t\rightarrow\infty$.
Since $\mathcal{A}=\{0,l^{*}\}$ we must divide our approach into two cases:
\noindent{\bf Case 1:} $\lim_{t\rightarrow\pm\infty}v_i(t)=0$.
\noindent For each derivative order $j=1,2,3,4$, we provide one step.
When $j\geqslant5$, the proof follows directly from the previous cases, and it is omitted.
We start with $j=2$.
\noindent{\bf Step 1:} $\mathcal{A}(v_i^{(2)})=\{0\}$.\\
Since, by assumption, $v_i(t)<l^{*}$ for $t\gg1$ large, one has
\begin{equation*}
v_i^{(4)}-K_2v_i^{(2)}=\left(c(n)|\mathcal{V}|^{\frac{8}{n-4}}v_i-K_0v_i\right)<0.
\end{equation*}
Defining $B_i(t)=v_i^{(2)}(t)-K_2v_i(t)$, it holds that $B_i^{(2)}(t)<0$ for $t\gg1$ large, and thus, $B_i$ is concave near infinity, which implies $\mathcal{A}(B_i)\neq\emptyset$. Hence, there exists $b_0^{*}\in[-\infty,\infty]$ such that $b_0^{*}:=\lim_{t\rightarrow\infty}B_i(t)$ and, since $v_i(t)\rightarrow0$, also $b_1^{*}:=\lim_{t\rightarrow\infty}v_i^{(2)}(t)=b_0^{*}$.
Supposing that $b_1^{*}\neq0$, there exist three possibilities: First, if we assume $b_1^{*}=\infty$, then we have that $\lim_{t\rightarrow\infty}v^{(1)}_i(t)=\infty$, which is a contradiction with $\lim_{t\rightarrow\infty}v_i(t)=0$. Second, assuming $0<b_1^{*}<\infty$, it follows that $v_i^{(2)}(t)>{b_1^{*}}/2$ for $t\gg1$ sufficiently large; thus $v_i^{(1)}(t)>{b_1^{*}t}/{4}$, which is also a contradiction with the hypothesis. Third, if $b_1^{*}<0$, then, using the same argument as before, we obtain that $v_i^{(1)}(t)\leqslant{b_1^{*}t}/{4}$, leading to the same contradiction.
Therefore $b_1^{*}=0$, which concludes the proof of Step 1.
\noindent{\bf Step 2:} $\mathcal{A}(v_i^{(1)})=\{0\}$.
\noindent Indeed, for $t\gg1$ large, there exists $\tau\in[t,t+1]$ satisfying $v_i(t+1)-v_i(t)=v_i^{(1)}(t)+\frac{1}{2}v_i^{(2)}(\tau)$. Taking the limit as $t\rightarrow\infty$ (so that also $\tau\rightarrow\infty$), we get that $v_i(t+1)-v_i(t)\rightarrow0$, while Step 1 provides $v_i^{(2)}(\tau)\rightarrow0$.
Consequently, one has that $v_i^{(1)}(t)\rightarrow0$.
\noindent{\bf Step 3:} $\mathcal{A}(v_i^{(3)})=\{0\}$.
\noindent Since $B_{i}$ is concave for large $t\gg1$ and $B_i(t)\rightarrow0$ as $t\rightarrow\infty$, we find $\lim_{t\rightarrow\infty}B_i^{(1)}(t)=0$. Consequently, by Step 2, $v^{(3)}_i(t)\rightarrow0$ as $t\rightarrow\infty$.
\noindent{\bf Step 4:} $\mathcal{A}(v_i^{(4)})=\{0\}$.
\noindent By equation \eqref{vectfowlersystem} and by Step 1, we observe that $v^{(4)}_i(t)\rightarrow0$ as $t\rightarrow\infty$.
As a combination of Steps 1--4, we finish the proof of Case 1.
The second case has an additional difficulty.
Precisely, since $v_i(t)\rightarrow l^{*}$ as $t\rightarrow\infty$, for sufficiently large $t\gg1$ there exist two possibilities: either $v_i$ is eventually decreasing or $v_i$ is eventually increasing.
In both situations, the proofs are similar; thus, we only present the first one.
\noindent{\bf Case 2:} $\lim_{t\rightarrow\infty}v_i(t)=l^{*}$.
Here we proceed as before.
\noindent{\bf Step 1:} $\mathcal{A}(v_i^{(2)})=\{0\}$.
\noindent Since we are assuming that $v_i$ is eventually decreasing, there exists a large $T_i\gg1$ such that $v_i(t)>l^{*}$ for $t>T_i$, and we get that $v_i^{(4)}-K_2v_i^{(2)}=\left(c(n)|\mathcal{V}|^{\frac{8}{n-4}}v_i-K_0v_i\right)\geqslant0$. In this case, $B_i$ is convex for sufficiently large $t\gg1$. Hence, $\mathcal{A}(B_i)\neq\emptyset$ and there exists $b_0^{*}=\lim_{t\rightarrow\infty}B_i(t)$. Since $v_i(t)\rightarrow l^{*}$ as $t\rightarrow\infty$, we get that $\lim_{t\rightarrow\infty}v_i^{(2)}(t)=b_1^{*}$, where $b_1^{*}=b_0^{*}+K_2l^{*}$.
Now repeating the same procedure as before, we obtain that $b_1^{*}=0$ and thus
$\lim_{t\rightarrow\infty}B_i(t)=-K_2l^{*}$, which yields $\mathcal{A}(v_i^{(2)})=\{0\}$.
The remaining steps of the proof follow similarly to Case 1, and so the proof of the lemma is finished.
\end{proof}
Before we continue our analysis, it is essential to show that any solution to \eqref{vectfowlersystem} is bounded, which is the content of the following lemma.
\begin{lemma}\label{fbounded}
Let $\mathcal{V}$ be a nonnegative solution to \eqref{vectfowlersystem}.
Then, $v_i(t)<l^{*}$ for all $i\in I$. In particular, $|\mathcal{V}|$ is bounded.
\end{lemma}
\begin{proof} For $i\in I$, let us define the set $Z_i=\left\{t\geqslant0 : v_i^{(1)}(t)=0\right\}$. We divide the proof of the lemma into two cases:
\noindent{\bf Case 1:} $Z_i$ is bounded.
\noindent In this case, we have that $v_i$ is monotone for large $t\gg1$ and $\mathcal{A}(v_i)\neq\emptyset$. Therefore, using Lemma~\ref{asymptotics}, we obtain that $v_i$ is bounded by $l^{*}$ for $t\gg1$ sufficiently large.
\noindent{\bf Case 2:} $Z_i$ is unbounded.
\noindent Fixing $H:=\mathcal{H}(t,\mathcal{V})$ (which does not depend on $t$), we define $F(\tau)=\hat{c}(n)|\tau|^{2^{**}}-\frac{K_0}{2}|\tau|^2$,
which satisfies $\lim_{\tau\rightarrow\infty}F(\tau)=\infty$.
Therefore, there exists $R_i>|v_i(0)|$ such that $F(\tau)>H$ for $\tau>R_i$.
\noindent{\bf Claim 1:} $|v_i|<R_i$ on $[0,\infty)$.
\noindent Supposing by contradiction that $M_{R_i}=\{t\geqslant0 : |v_i(t)|\geqslant R_i\}$ is non-empty, we can define $t_i^{*}=\inf M_{R_i}$, which is strictly positive by the choice of $R_i$.
Thus, we obtain that $v_i(t_i^{*})=R_i$ and also $v_i^{(1)}(t_i^{*})\geqslant0$. In addition, since $Z_i$ is unbounded, we have that $Z_{i}\cap[t_i^{*},\infty)\neq\emptyset$.
Therefore, we may consider $T_i^{*}=\inf\left(Z_{i}\cap[t_i^{*},\infty)\right)$.
Hence, a combination of $v_i^{(1)}(T_i^{*})=0$ and Proposition~\ref{singregularity} implies that $v_i^{(1)}(t)\geqslant0$ for all $t\in[t_i^{*},T_i^{*}]$.
Eventually, we conclude that $v_i(T_i^{*})>R_i$ and $\mathcal{H}(T_i^{*},\mathcal{V})=\frac{1}{2}|\mathcal{V}^{(2)}(T_i^{*})|^{2}+F(|\mathcal{V}(T_i^{*})|)>H$, which is a contradiction with \eqref{conservation}.
To complete the proof of the lemma, one can check that one may take $R_i=l^{*}$ for all $i\in I$.
\end{proof}
\begin{lemma}\label{signal}
Let $\mathcal{V}$ be a nonnegative solution to \eqref{vectfowlersystem}.
Then, it follows that $v^{(1)}_i(t)<\gamma v_i(t)$
for all $i\in I$ and $t\in\mathbb{R}$, where we recall that $\gamma=\frac{n-4}{2}$ is the Fowler rescaling exponent.
\end{lemma}
\begin{proof}
Let us define
\begin{equation*}
\widetilde{\gamma}=\sqrt{\frac{K_2}{2}-\sqrt{\frac{K_2^2}{4}-K_0}}.
\end{equation*}
Then, by a direct computation, we get that $\widetilde{\gamma}=\gamma$.
Setting
\begin{equation*}
\lambda_1=\frac{K_2}{2}-\sqrt{\frac{K_2^2}{4}-K_0} \quad {\rm and} \quad \lambda_2=\frac{K_2}{2}+\sqrt{\frac{K_2^2}{4}-K_0},
\end{equation*}
we have that $ \lambda_1+\lambda_2=K_2$ and $\lambda_1\lambda_2=K_0$.
Defining the auxiliary function $\phi_i(t)=v^{(2)}_i(t)-\lambda_1v_i(t)$, we observe that $\phi_i^{(2)}-\lambda_2\phi_i=c(n)|\mathcal{V}|^{\frac{8}{n-4}}v_i$ and $-\phi_i^{(2)}+\lambda_2\phi_i\leqslant0$.
Hence, since $\mathcal{V}$ is a nonnegative solution to \eqref{vectfowlersystem}, by a strong maximum principle, we get that $\phi_i<0$, which implies that $w_i=v_i^{(1)}/v_i$ satisfies
\begin{equation}\label{sig1}
w_i^{(1)}=-w_i^2+\lambda_1+\frac{\phi_i}{v_i} \quad {\rm and} \quad \frac{v_i^{(2)}}{v_i}=\lambda_1+\frac{\phi_i}{v_i}.
\end{equation}
Moreover, by Lemma~\ref{asymptotics}, there exists $t_0\in\mathbb{R}$ such that $v^{(1)}_i(t_0)=0$, which provides $w_i(t_0)=0$.
Setting $M:=\left\{t>t_0 : w_i(t)\geqslant\sqrt{\lambda_1}\right\}$, the proof of the lemma is reduced to the next claim.
\noindent{\bf Claim 1:} $M=\emptyset$.
\noindent Indeed, supposing the claim is not true, we set $t_1=\inf M$.
Notice that $t_1>t_0$,
$w_i^{(1)}(t_1)\geqslant0$ and $w_i(t_1)=\sqrt{\lambda_1}$.
On the other hand, by \eqref{sig1}, we obtain that $w_i^{(1)}(t_1)=\frac{\phi_i(t_1)}{v_i(t_1)}<0$, which is a contradiction. This finishes the proof of the claim.
\end{proof}
As an application of Lemma~\ref{signal}, we complete the proof of Proposition \ref{symmetry}, which states that any component of $\mathcal{U}$ is radially monotonically decreasing.
\begin{corollary}
Let $\mathcal{U}$ be a nonnegative singular solution to \eqref{oursystem}.
Then, ${\partial_r u_i}(r)<0$ for all $r>0$ and $i\in I_{+}$.
\end{corollary}
\begin{proof}
By a direct computation, we have that ${\partial_r u_i}(r)=-r^{-\gamma-1}\left[v_i^{(1)}(t)+\gamma v_i(t)\right]$, where $t=-\ln r$. Then, Lemma~\ref{signal}, applied to both $\mathcal{V}$ and the reversed solution $t\mapsto\mathcal{V}(-t)$, yields $|v_i^{(1)}(t)|<\gamma v_i(t)$, whence ${\partial_r u_i}(r)<0$.
\end{proof}
\subsection{Removable singularity classification}
After establishing the previous lemmas concerning the asymptotic behavior of global solutions to the ODE system \eqref{vectfowlersystem}, we can prove the main results of the section, namely, the removable-singularity classification and the non-existence of semi-singular solutions to \eqref{oursystem}.
These results will be employed in the proof of Theorem~\ref{theorem2}. More precisely, we show that the Pohozaev invariant of any solution is always nonpositive, and it is zero if, and only if, the origin is a removable singularity; for singular solutions to \eqref{oursystem}, this invariant is always negative.
To show the removable singularity theorem, we need to define some auxiliary functions. For $i\in I$, let us set $\varphi_i:\mathbb{R}\rightarrow\mathbb{R}$ given by
\begin{equation*}
\varphi_i(t)=v_{i}^{(3)}(t)v_{i}^{(1)}(t)-\frac{1}{2}|v_{i}^{(2)}(t)|^2-\frac{K_2}{2}|v_{i}^{(1)}(t)|^2+\frac{K_0}{2}|v_{i}(t)|^2-\hat{c}(n)|v_{i}(t)|^{\frac{2n}{n-4}}.
\end{equation*}
\begin{remark}\label{auxfunction}
By a direct computation using \eqref{vectfowlersystem}, we observe that
\begin{equation*}
\varphi^{(1)}_i(t)=c(n)\left(|\mathcal{V}(t)|^{\frac{8}{n-4}}-|v_i(t)|^{\frac{8}{n-4}}\right)v_i(t)v^{(1)}_i(t).
\end{equation*}
Since $|\mathcal{V}|\geqslant|v_i|$, we have that $\sgn (\varphi^{(1)}_i)=\sgn(v^{(1)}_i)$.
In other terms, the monotonicity of $\varphi_i$ is the same as that of the component function $v_i$.
Moreover, it holds that
$\sum_{i=1}^{p}\varphi_i(t)=-H$.
\end{remark}
\begin{proposition}\label{Pohozaev}
Let $\mathcal{U}$ be a nonnegative solution to \eqref{oursystem}. Then, $\mathcal{P}_{\rm sph}(\mathcal{U})\leqslant0$ and $\mathcal{P}_{\rm sph}(\mathcal{U})=0$, if, and only if, $\mathcal{U}\in C^{4,\zeta}(\mathbb{R}^{n},\mathbb{R}^p)$, for some $\zeta\in(0,1)$.
\end{proposition}
\begin{proof} Let us divide the proof into two claims as follows. The first one is concerned with the sign of the Pohozaev invariant. Namely, we show it is always nonpositive.
\noindent{\bf Claim 1:} If $\mathcal{P}_{\rm sph}(\mathcal{U})\geqslant0$, then $\mathcal{P}_{\rm sph}(\mathcal{U})=0$.
\noindent Indeed, let us define the sum function $v_{\Sigma}:\mathbb{R}\rightarrow\mathbb{R}$ given by $v_{\Sigma}(t)=\sum_{i=1}^pv_i(t)$.
Hence, by Lemma~\ref{asymptotics}, for any $v_i$ there exists a sufficiently large $\hat{t}_i\gg1$ such that $v^{(1)}_i(\hat{t}_i)=0$. Furthermore, by Lemma~\ref{blow-up}, for any $i\in I$, we can find a sufficiently large $t_i\geqslant \hat{t}_i\gg1$ such that $v^{(1)}_i(t)<0$ for all $t>t_i$.
Then, choosing $t_*>\max_{i\in I}\{t_i\}$, we have that $v^{(1)}_{\Sigma}(t)<0$ for $t>t_*$, which implies $\lim_{t\rightarrow\infty}v_{i}(t)=0$ for all $i\in I$. Consequently, by Lemmas~\ref{signal} and \ref{blow-up}, we conclude that $\mathcal{P}_{\rm sph}(\mathcal{U})=0$.
In the next claim, we use some arguments from \cite[Lemma~2.4]{MR4094467} to show that solutions with zero Pohozaev invariant have a removable singularity at the origin.
\noindent{\bf Claim 2:} If $\mathcal{P}_{\rm sph}(\mathcal{U})=0$, then $\mathcal{U}\in C^{4,\zeta}(\mathbb{R}^{n},\mathbb{R}^p)$, for some $\zeta\in(0,1)$.
\noindent In fact, note that $v_{\Sigma}$ satisfies
\begin{equation}\label{hurry}
v^{(4)}_{\Sigma}-K_2v^{(2)}_{\Sigma}+K_0v_{\Sigma}=c(n)|\mathcal{V}|^{\frac{8}{n-4}}v_{\Sigma}.
\end{equation}
Setting $\widetilde{f}(\mathcal{V})=c(n)|\mathcal{V}|^{\frac{8}{n-4}}v_{\Sigma}$, since $v_i(t)\rightarrow0$ as $t\rightarrow\pm\infty$, it follows that $\lim_{|t|\rightarrow\infty}\widetilde{f}(\mathcal{V}(t))=0$. Then, we define $\tau=-t$ and $\widetilde{v}_{\Sigma}(\tau)=v_{\Sigma}(t)$, which implies that $\widetilde{v}_{\Sigma}$ also satisfies \eqref{hurry}. Moreover, $\lim_{t\rightarrow-\infty}v_{\Sigma}(t)=\lim_{\tau\rightarrow\infty}\widetilde{v}_{\Sigma}(\tau)=0$
and also
\begin{equation}\label{pokho1}
\lim_{\tau\rightarrow\infty}\widetilde{f}(\widetilde{\mathcal{V}}(\tau))=0.
\end{equation}
Consequently, by ODE theory (see for instance \cite{MR0171038,MR3158844}), we can find sufficiently large $T\gg1$ satisfying
\begin{align*}
\widetilde{v}_{\Sigma}(\tau)&=A_1e^{\lambda_1\tau}+A_2e^{\lambda_2\tau}+A_3e^{\lambda_3\tau}+A_4e^{\lambda_4\tau}&\\
&\quad+B_1\int_{T}^{\tau}e^{\lambda_1(\tau-t)}\widetilde{f}(\widetilde{\mathcal{V}}(t))\mathrm{d} t+B_2\int_{T}^{\tau}e^{\lambda_2(\tau-t)}\widetilde{f}(\widetilde{\mathcal{V}}(t))\mathrm{d} t&\\
&\quad-B_3\int_{\tau}^{\infty}e^{\lambda_3(\tau-t)}\widetilde{f}(\widetilde{\mathcal{V}}(t))\mathrm{d} t-B_4\int_{\tau}^{\infty}e^{\lambda_4(\tau-t)}\widetilde{f}(\widetilde{\mathcal{V}}(t))\mathrm{d} t,&
\end{align*}
where $A_1, A_2, A_3, A_4$ are constants depending on $T$, $B_1, B_2, B_3,B_4$ are constants not depending on $T$, and
\begin{equation*}
\lambda_1=-\frac{n}{2}, \ \lambda_2=-\frac{n-4}{2}, \ \lambda_3=\frac{n}{2} \ {\rm and} \ \lambda_4=\frac{n-4}{2}
\end{equation*}
are the solutions to the characteristic equation $\lambda^{4}-K_2\lambda^2+K_0=0$. In addition, by \eqref{pokho1}, we obtain that $A_3=A_4=0$. Hence, we use the same ideas in \cite[Theorem~3.1]{MR3315584} to arrive at
\begin{equation*}
\widetilde{v}_{\Sigma}(\tau)=\mathcal{O}(e^{-\frac{n-4}{2}\tau}) \quad {\rm as} \quad \tau\rightarrow\infty
\quad {\rm or} \quad {v}_{\Sigma}(t)=\mathcal{O}(e^{\frac{n-4}{2}t}) \quad {\rm as} \quad t\rightarrow-\infty.
\end{equation*}
Eventually, undoing the cylindrical transformation, we have that ${u}_{\Sigma}(r)=\mathcal{O}(1)$ as $r\rightarrow0$, which finishes the proof of the claim.
Therefore, using the last claim, we get that $u_{\Sigma}$ is uniformly bounded, which implies $u_i\in C^{0}(\mathbb{R}^n)$ for all $i\in I$.
Finally, standard elliptic regularity theory provides that $\mathcal{U}\in C^{4,\zeta}(\mathbb{R}^n,\mathbb{R}^p)$ for some $\zeta\in(0,1)$ and for all $i\in I$; this concludes the proof of the proposition.
\end{proof}
\begin{proposition}\label{fully-singular}
Let $\mathcal{U}$ be a nonnegative singular solution to \eqref{oursystem}. If $\mathcal{P}_{\rm sph}(\mathcal{U})<0$, then $\mathcal{U}$ is fully-singular.
\end{proposition}
\begin{proof} Suppose by contradiction $\mathcal{U}$ is semi-singular, that is, there exists some $i_0\in I\setminus I_{\infty}$. We may suppose without loss of generality $\{i_0\}=I\setminus I_{\infty}$, which yields
\begin{equation}\label{semi-singular}
\displaystyle\lim_{r\rightarrow 0}u_i(r)=\infty \quad {\rm for} \quad i\neq i_0 \quad {\rm and} \quad \displaystyle\liminf_{r\rightarrow 0}u_{i_0}(r)=C_{i_0}<\infty.
\end{equation}
\noindent{\bf Claim 1:} $\lim_{t\rightarrow\infty}v_{i_0}(t)=\infty$.
\noindent Indeed, using Lemma~\ref{signal}, we have that $\gamma^{-1}|v_{i_0}^{(1)}(t)|\leqslant v_{i_0}(t)\leqslant C_{i_0}e^{-\gamma t}$, which provides $\varphi_{i_0}(t)\rightarrow0$ as $t\rightarrow\infty$.
Hence, since $P<0$, we get that $H<0$, which combined with Remark~\ref{auxfunction} yields $\lim_{t\rightarrow\infty}\sum_{\substack{{i=1}\\ i\neq i_0}}^{p}\varphi_i(t)=-H>0$.
Let us divide the rest of the proof into two steps:
\noindent{\bf Step 1:} For each $i\in I\setminus\{i_0\}$, there exists $C_i>0$ such that $u_i(r)\geqslant C_ir^{-\gamma}$ for all $r\in(0,1]$.
\noindent First, it is equivalent to $\inf_{t\geqslant0}v_i(t)\geqslant C_i$ in cylindrical coordinates.
Assume by contradiction that it does not hold.
Then, there exists $\{t_k\}_{k\in\mathbb{N}}\subset(0,\infty)$ such that $t_k\rightarrow\infty$ and $v_i(t_k)\rightarrow0$ as $k\rightarrow\infty$.
Moreover, using Lemma~\ref{signal} for all $i\in I$ one obtains $0\leqslant\gamma^{-1}|v_{i}^{(1)}(t_k)|\leqslant v_{i}(t_k)\rightarrow0$, which yields that $\varphi_i(t_k)\rightarrow0$.
This is a contradiction, and the proof of Step 1 is finished.
\noindent{\bf Step 2:} There exists $\varrho\in C^{\infty}(\mathbb{R}\setminus\{0\})$ such that $\lim_{r\rightarrow0}\varrho(r)=\infty$ and
\begin{equation*}
u_{i_0}(r)\geqslant \varrho(r) \quad {\rm for all} \quad r\in(0,1].
\end{equation*}
\noindent First, it is easy to check that there exists $C_0>0$ such that $u_{i_0}(r)\geqslant C_0$ for all $r\in(0,1]$.
Second, writing the Laplacian in spherical coordinates, we have
\begin{equation*}
r^{1-n}{\partial_r}\left[r^{n-1}{\partial_r \Delta u_{i_0}}(r)\right]=c(n)|\mathcal{U}|^{2^{**}-2}u_{i_0}.
\end{equation*}
Now, we use the estimates in Step 1 to obtain
\begin{equation*}
{\partial_r}\left[r^{n-1}\partial_r \Delta u_{i_0}(r)\right]\geqslant c_0r^{n-5},
\end{equation*}
which, by integrating, implies
\begin{equation*}
r^{n-1}\partial_r \Delta u_{i_0}(r)\geqslant c_1r^{n-4}+c_2.
\end{equation*}
By proceeding as before, we get
\begin{equation*}
\partial_r\Delta u_{i_0}(r)\geqslant c_1r^{-3}+c_2r^{1-n}.
\end{equation*}
Therefore, by repeating the same integration procedure, we can find $c_1,\ldots,c_5\in\mathbb{R}$ satisfying
\begin{equation*}
u_{i_0}(r)\geqslant c_1r^{2-n}+c_2r^{4-n}+c_3\ln r+c_4r^{2}+c_5,
\end{equation*}
which concludes the proof of Step 2.
Eventually, passing to the limit as $r\rightarrow0$ in Step 2, we obtain that $u_{i_0}$ blows up at the origin.
Hence, Claim 1 holds, which is a contradiction with \eqref{semi-singular}.
Therefore, semi-singular solutions cannot exist, and the proposition is proved.
\end{proof}
\begin{remark}
We highlight that Proposition~\ref{fully-singular} is a rather surprising result since, for the type of singular system considered in \cite{MR3394387}, it is only possible to obtain the same conclusion with some restriction on the dimension. This better behavior is due to the symmetries enjoyed by the Gross--Pitaevskii nonlinearity.
\end{remark}
\begin{corollary}\label{stronglypositive}
Let $\mathcal{U}$ be a nonnegative singular solution to \eqref{oursystem}. Then, $\mathcal{U}$ is strongly positive.
\end{corollary}
\begin{proof} We already know by Proposition~\ref{singweakpos} that $\mathcal{U}$ is weakly positive. Suppose by contradiction that $\mathcal{U}$ is not strongly positive. Then, there exists some $i_0\in I_0$, that is, $u_{i_0}\equiv0$ and so non-singular at the origin. Thus, by Proposition~\ref{fully-singular} all the other components must also be non-singular at the origin. Therefore, $I_{\infty}=\emptyset$, which is contradiction since $\mathcal{U}$ is a singular solution to \eqref{oursystem}.
\end{proof}
\subsection{Proof of Theorem~\ref{theorem2}}
Finally, we have conditions to connect the information we have obtained to prove our classification result.
Our idea is to apply the analysis of the Pohozaev invariant and ODE methods together with Theorem~\ref{theorem1}, and Propositions~\ref{Pohozaev} and \ref{fully-singular}, which can be summarized as follows
\begin{theoremtio}\label{theorem2'}
{\it
Let $\mathcal{U}$ be a nonnegative solution to \eqref{oursystem}.
There exist only two possibilities for the sign of the Pohozaev invariant:\\
\noindent {\rm (i)} If $\mathcal{P}_{\rm sph}(\mathcal{U})=0$, then $\mathcal{U}=\Lambda u_{x_0,\mu}$, where $u_{x_0,\mu}$ is given by \eqref{sphericalfunctions} (spherical solution);\\
\noindent {\rm (ii)} If $\mathcal{P}_{\rm sph}(\mathcal{U})<0$, then $\mathcal{U}=\Lambda^* u_{a,T}$, where $u_{a,T}$ is given by \eqref{emden-folwersolution} (Emden--Fowler solution).
}
\end{theoremtio}
\begin{proof}
(i) It follows directly by Proposition~\ref{Pohozaev} and Theorem~\ref{theorem1}.
\noindent (ii) First, by Corollary~\ref{stronglypositive}, it follows that $I_{+}=I_{\infty}=I$, which makes the quotient functions $q_{ij}=v_i/v_j$ well-defined for all $i,j\in I$.
Moreover, we show that they are constants.
Notice that $v_i$ and $v_j$ satisfy,
\begin{equation*}
\begin{cases}
v_i^{(4)}-K_2v_i^{(2)}+K_0v_i=c(n)|\mathcal{V}|^{2^{**}-2}v_i\\
v_j^{(4)}-K_2v_j^{(2)}+K_0v_j=c(n)|\mathcal{V}|^{2^{**}-2}v_j,
\end{cases}
\end{equation*}
which provides
\begin{equation}\label{argentina}
\left(v_i^{(4)}v_j-v_j^{(4)}v_i\right)=-K_2\left(v_i^{(2)}v_j-v_iv_j^{(2)}\right).
\end{equation}
Furthermore, by a direct computation, it follows
\begin{equation*}
q_{ij}^{(4)}=\frac{v_i^{(4)}v_j-v_iv_j^{(4)}}{v_j^2}-4{v_j^{(1)}}{v_j^{-1}}q^{(3)}_{ij}-6{v_j^{(2)}}{v_j^{-1}}q^{(2)}_{ij}-4{v_j^{(3)}}{v_j^{-1}}q^{(1)}_{ij},
\end{equation*}
which, combined with \eqref{argentina} and Remark~\ref{equalityofinitiadata}, implies that the quotient satisfies the following fourth order homogeneous Cauchy problem,
\begin{align*}
\begin{cases}
q_{ij}^{(4)}+4{v_j^{(1)}}{v_j^{-1}}q^{(3)}_{ij}+\left(6{v_j^{(2)}}{v_j^{-1}}+K_2\right)q^{(2)}_{ij}+\left(4{v_j^{(3)}}{v_j^{-1}}+2K_2{v_j^{(1)}}{v_j^{-1}}\right)q^{(1)}_{ij}=0 \quad {\rm in} \quad \mathbb{R}\\
q_{ij}(0)=1, \quad q_{ij}^{(1)}(0)=q_{ij}^{(2)}(0)=q_{ij}^{(3)}(0)=0.
\end{cases}
\end{align*}
Hence, using Lemma~\ref{blow-up} and the Picard--Lindel\"{o}f uniqueness theorem, it follows that $q_{ij}\equiv1$.
Thus, by the same argument at the end of the proof of Theorem~\ref{hebey-druet}', one can find $\Lambda^*\in\mathbb{S}^{p-1}_{+,*}$ such that $\mathcal{V}(t)=\Lambda^* v_{a,T}(t)$, where $v_{a,T}$ is given by \eqref{emden-folwersolution}.
By undoing the cylindrical transformation, the theorem is proved.
\end{proof}
As an application of Theorem~\ref{theorem2'} and Lemma~\ref{fbounded}, we provide a sharp global estimate for the blow-up rate near the origin for singular solutions to \eqref{oursystem}.
\begin{corollary}
Let $\mathcal{U}$ be a strongly positive singular solution to \eqref{oursystem}. Then, there exist $C_1,C_2>0$ such that
\begin{equation*}
C_1|x|^{-\gamma}\leqslant|\mathcal{U}(x)|\leqslant C_2|x|^{-\gamma} \quad {\rm for \ all} \quad x\in\mathbb{R}^{n}\setminus\{0\}.
\end{equation*}
In other terms, $|\mathcal{U}(x)|=\mathcal{O}(|x|^{-\gamma})$ as $x\rightarrow0$,
where $\gamma=\frac{n-4}{2}$.
\end{corollary}
\section{Introduction}
We consider finite, simple, and undirected graphs, and use standard terminology.
A graph $G$ is {\it locally connected} if for every vertex $u$ of $G$,
the subgraph $G[N_G(u)]$ of $G$ induced by the neighborhood $N_G(u)$ of $u$ in $G$ is connected.
Similarly, $G$ is {\it locally Ore} if
$|N_G(u)\cap N_G(v)|+|N_G(u)\cap N_G(w)|\geq d_G(u)$ for every induced path $vuw$ of order $3$ in $G$,
and $G$ is {\it locally Dirac} if
$2\delta(G[N_G(u)])\geq d_G(u)$ for every vertex $u$ of $G$,
where $\delta(G)$ and $d_G(u)$ are the minimum degree of $G$ and the degree of $u$ in $G$, respectively.
For a vertex $u$ of a graph $G$ and some positive integer $k$,
let $N_G^k(u)$ be set of vertices of $G$ at distance exactly $k$ from $u$.
For a graph $G$,
the {\it girth} $g(G)$ and the {\it circumference} $c(G)$
are the minimum and the maximum order of a cycle in $G$,
respectively.
A graph $G$ is {\it hamiltonian} if $c(G)=n(G)$,
where $n(G)$ is the order of $G$.
A graph $G$ is {\it weakly pancyclic} if it has a cycle of order $\ell$ for every integer $\ell$ between $g(G)$ and $c(G)$.
A cycle $C$ in a graph $G$ is {\it extendable}
if $G$ contains a cycle $C'$ of order $n(C)+1$ with $V(C)\subseteq V(C')$.
A graph $G$ is {\it fully cycle extendable} if every vertex of $G$ lies on a triangle,
and every cycle in $G$ of order less than $n(G)$ is extendable.
For two disjoint graphs $G$ and $H$,
let $G\cup H$ be their union,
let $G+H$ be their join, and
let $\bar{G}$ be the complement of $G$.
Let $K_n$, $P_n$, and $C_n$ be the complete graph, path, and cycle of order $n$, respectively.
Note that the star $K_{1,n-1}$ of order $n$ is $K_1+\bar{K}_{n-1}$.
Let $K_{1,1,n-2}=K_1+K_1+\bar{K}_{n-2}$.
If ${\cal F}$ is a set of graphs, then a graph is {\it ${\cal F}$-free}
if it does not contain a graph in ${\cal F}$ as an induced subgraph.
If ${\cal F}$ contains only one graph $F$,
we write {\it $F$-free} instead of ${\cal F}$-free.
The results in the present paper are motivated by the following conjecture.
\begin{conjecture}[Ryj\'{a}\v{c}ek \cite{r}]\label{conj_1}
Every locally connected graph is weakly pancyclic.
\end{conjecture}
We identify several sets ${\cal F}$ of graphs
such that Conjecture \ref{conj_1} holds for ${\cal F}$-free graphs.
Furthermore, we show that Conjecture \ref{conj_1} holds for graphs
that are locally Ore or locally Dirac.
Before we proceed to our results,
we give a very short summary of previous related work.
Chartrand and Pippert \cite{cp} showed that every connected locally connected graph $G$
with $n(G)\geq 3$ and maximum degree $\Delta(G)\leq 4$ is either hamiltonian or isomorphic to $K_{1,1,3}$.
Extending a result of Kikust \cite{k},
Hendry \cite{h} showed that every connected locally connected graph $G$
with $n(G)\geq 3$, $\Delta(G)\leq 5$, and $\Delta(G)-\delta(G)\leq 1$ is fully cycle extendable.
Gordon et al. \cite{gops} generalized this last result further to graphs $G$
with $n(G)\geq 3$, $\Delta(G)\leq 5$, and $\delta(G)\geq 3$.
Extending earlier results due to Oberly, Sumner \cite{os}, and Clark \cite{c},
Zhang \cite{z} showed that every connected locally connected $K_{1,3}$-free graph $G$
with $n(G)\geq 3$ is fully cycle extendable.
Faudree et al. \cite{frs} weakened the local connectivity requirement for this last result.
Kubicka et al. \cite{kubicka} considered locally Dirac graphs,
and showed that every connected locally Dirac graph $G$
with $n(G)\geq 3$ and $\Delta(G)\leq 11$ is fully cycle extendable.
Furthermore, they observed that results of Hasratian and Khachatrian \cite{hk}
imply that every connected locally Dirac graph of order at least $3$ is hamiltonian.
Our results generalize the mentioned results of
Zhang \cite{z}
and
Kubicka et al. \cite{kubicka}.
\section{Results}
Our first goal is to verify Conjecture \ref{conj_1} for $(K_1+(K_1\cup K_2))$-free graphs.
The next lemma collects several useful observations.
\begin{lemma}\label{lemma1}
Let $G$ be a connected locally connected $(K_1+(K_1\cup K_2))$-free graph of order at least $3$.
\begin{enumerate}[(i)]
\item Every vertex of $G$ lies on a triangle.
\item The diameter of $G$ is at most $2$.
\item $N_G^2(u)$ is independent for every vertex $u$ of $G$.
\item A cycle $C$ in $G$ is extendable if and only if there is some vertex $u$ in $V(G)\setminus V(C)$
that has a neighbor in $V(C)$ such that $N_G(u)\not\subseteq V(C)$ or $2d_G(u)>n(C)$.
\item Every cycle in $G$ of length less than $c(G)$ is extendable.
\end{enumerate}
\end{lemma}
{\it Proof:} (i) Since $G$ is connected, locally connected, and distinct from $K_2$,
the minimum degree of $G$ is at least $2$. Hence, for every vertex $u$ of $G$, the connected graph $G[N_G(u)]$ has at least two vertices and therefore contains an edge, which implies that every vertex of $G$ lies on a triangle.
\medskip
\noindent (ii) For a contradiction, we assume that $G$ has diameter at least $3$, and let $uvxy$ be a shortest path between two vertices $u$ and $y$ at distance $3$ in $G$.
Since the minimum degree of $G$ is at least $2$, and $G$ is locally connected,
the vertices $u$ and $v$ have a common neighbor $w$.
Since $G$ is $(K_1+(K_1\cup K_2))$-free, $x$ is adjacent to $w$.
Since $y$ is not adjacent to $v$ or $w$,
the graph $G[\{ v,w,x,y\}]$ is $K_1+(K_1\cup K_2)$,
which is a contradiction. Hence, $G$ has diameter at most $2$.
\medskip
\noindent (iii) For a contradiction, we assume that $xy$ is an edge between two vertices in $N_G^2(u)$
for some vertex $u$ of $G$.
Since $G$ is $(K_1+(K_1\cup K_2))$-free,
$x$ and $y$ have no common neighbor in $N_G(u)$.
Let $v$ be a common neighbor of $u$ and $x$,
and
let $w$ be a common neighbor of $u$ and $y$.
Note that $x$ is not adjacent to $w$,
and that $y$ is not adjacent to $v$.
Since $G$ is $(K_1+(K_1\cup K_2))$-free,
$v$ and $w$ are not adjacent.
Since $G$ is locally connected,
$v$ and $u$ have a common neighbor $z$.
Since $G$ is $(K_1+(K_1\cup K_2))$-free,
we obtain that
$x$ and $z$ are adjacent,
and that
$y$ and $z$ are adjacent,
that is, $z$ is a common neighbor of $x$ and $y$ in $N_G(u)$,
which is a contradiction.
Hence, $N^2_G(u)$ is independent for every vertex $u$ of $G$.
\medskip
\noindent (iv) Let $C$ be a cycle in $G$.
First, we prove the necessity.
For a contradiction, we may assume that $C$ is extendable but that
$N_G(u)\subseteq V(C)$ and $2d_G(u)\leq n(C)$ for every vertex $u$ in $V(G)\setminus V(C)$.
Since $C$ is extendable, there is some vertex $u$ in $V(G)\setminus V(C)$
such that $G$ contains a cycle $C'$ with $V(C')=V(C)\cup \{ u\}$.
By (ii) and (iii), at least one of every two consecutive vertices of $C$ is adjacent to $u$.
Since $2d_G(u)\leq n(C)$, this implies that exactly one of every two consecutive vertices of $C$ is adjacent to $u$,
that is, $C$ is a cycle of even length that alternates between $N_G(u)$ and the independent set $N_G^2(u)$,
where $|N_G(u)|=|N_G^2(u)|$.
Since $C'$ contains two edges between $u$ and $N_G(u)$,
the independence of $N_G^2(u)$ implies the contradiction $|N_G(u)|=|N_G^2(u)|+1$.
Next, we prove the sufficiency.
Therefore, let $u$ in $V(G)\setminus V(C)$
be such that $u$ has a neighbor in $V(C)$ and $N_G(u)\not\subseteq V(C)$ or $2d_G(u)>n(C)$.
If $N_G(u)\subseteq V(C)$,
then $2d_G(u)>n(C)$ implies that $u$ is adjacent to two consecutive vertices of $C$,
say $v$ and $w$,
and replacing $vw$ with $vuw$ yields a cycle $C'$
of order $n(C)+1$ with $V(C)\subseteq V(C')$.
Hence, we may assume that $N_G(u)\not\subseteq V(C)$.
Since $u$ has a neighbor on $C$ and a neighbor not on $C$,
the local connectivity of $G$ implies that there is a triangle $uvwu$
with $v\in V(G)\setminus V(C)$ and $w\in V(C)$.
Since $G$ is $(K_1+(K_1\cup K_2))$-free,
$u$ or $v$ is adjacent to a neighbor of $w$ on $C$,
and we obtain a cycle $C'$ similarly as above.
\medskip
\noindent (v) For a contradiction, we assume that $C$ is a cycle in $G$ of length less than $c(G)$
such that $C$ is not extendable.
By (iv), $V(G)\setminus V(C)$ is an independent set,
and $2d_G(u)\leq n(C)$ for every vertex $u$ in $V(G)\setminus V(C)$.
Let $u^*\in V(G)\setminus V(C)$.
By (iii), at least one of every two consecutive vertices of $C$ is adjacent to $u^*$.
Since $2d_G(u^*)\leq n(C)$, this implies that exactly one of every two consecutive vertices of $C$ is adjacent to $u^*$,
that is, $C$ is a cycle of even length that alternates between $N_G(u^*)$ and the independent set $N_G^2(u^*)$,
where $|N_G(u^*)|=|N_G^2(u^*)|$.
By (ii) and symmetry, we obtain that $N_G(u)=N_G(u^*)$ for every vertex $u$ in $V(G)\setminus V(C)$,
that is, $V(G)\setminus N_G(u^*)$ is an independent set.
This implies that $c(G)\leq 2|N_G(u^*)|=n(C)$,
which is a contradiction.
$\Box$
\medskip
\noindent With Lemma \ref{lemma1} at hand,
it is easy to verify Conjecture \ref{conj_1} for $(K_1+(K_1\cup K_2))$-free graphs.
\begin{theorem}\label{theorempaw}
Let $G$ be a connected locally connected $(K_1+(K_1\cup K_2))$-free graph of order at least $3$.
\begin{enumerate}[(i)]
\item $G$ is weakly pancyclic.
\item $G$ is fully cycle extendable if and only if $2\delta(G)\geq n(G)$.
\end{enumerate}
\end{theorem}
{\it Proof:} (i) follows immediately from Lemma \ref{lemma1} (i) and (v).
We proceed to the proof of (ii).
If $2\delta(G)\geq n(G)$, then the theorem of Dirac \cite{d} implies $c(G)=n(G)$,
and (ii) follows from Lemma \ref{lemma1} (i) and (v).
Now, let $G$ be fully cycle extendable.
Let $u$ be a vertex of $G$ of minimum degree.
By Lemma \ref{lemma1} (ii) and (iii), $V(G)\setminus N_G(u)$ is an independent set.
Since $G$ has a hamiltonian cycle,
we obtain $n(G)-d_G(u)=|V(G)\setminus N_G(u)|\leq |N_G(u)|=d_G(u)$,
which implies $2d_G(u)\geq n(G)$.
$\Box$
\medskip
\noindent We proceed to identify further classes of connected locally connected graphs that are fully cycle extendable.
In our next result we consider forbidding just one induced subgraph.
Let $X$ be the graph with vertex set $\{ u_0,u_1,u_2,u_3,u_1',u_2',u_3'\}$
and edge set
$$\{ u_0u_1,u_0u_2,u_0u_3,u_0u_1',u_0u_2',u_0u_3',u_1u_2,u_2u_3,u_1u_3,u_1u_1',u_2u_2',u_3u_3'\}.$$
\begin{proposition}\label{proposition1}
Let $F$ be a graph.
Every connected locally connected $F$-free graph of order at least $3$ is fully cycle extendable
if and only if $F$ is an induced subgraph of $K_{1,3}$ or $K_1+P_3$.
\end{proposition}
{\it Proof:} Let $G$ be a connected locally connected $F$-free graph of order at least $3$.
First, we prove the sufficiency.
If $F$ is an induced subgraph of $K_{1,3}$, then Zhang's result \cite{z} implies that $G$ is fully cycle extendable.
If $F$ is an induced subgraph of $K_1+P_3$,
then $N_G(u)$ induces a complete graph for every vertex $u$ of $G$,
which implies that $G$ is complete, and hence $G$ is fully cycle extendable.
Next, we prove the necessity.
Since $K_{1,1,3}$ and $X$ are connected locally connected graphs of order at least $3$ that are not hamiltonian,
$F$ must be an induced subgraph of $K_{1,1,3}$ as well as of $X$.
Since $K_{1,1,3}$ is not an induced subgraph of $X$,
$F$ is a proper induced subgraph of $K_{1,1,3}$,
which implies that $F$ is an induced subgraph of $K_{1,3}$ or $K_1+P_3$.
$\Box$
\medskip
\noindent The arguments used in the previous proof lead to the following result
concerning pairs of forbidden induced subgraphs.
\begin{proposition}\label{proposition2}
Let ${\cal F}$ be a set of two graphs that contains no induced subgraph of $K_{1,3}$ or $K_1+P_3$.
If every connected locally connected ${\cal F}$-free graph of order at least $3$ is fully cycle extendable,
then one graph in ${\cal F}$ is $K_{1,1,3}$,
and the other graph in ${\cal F}$ is an induced subgraph of $X$.
\end{proposition}
{\it Proof:} Since $K_{1,1,3}$ is not hamiltonian,
and ${\cal F}$ contains no proper induced subgraph of $K_{1,1,3}$,
the set ${\cal F}$ must contain $K_{1,1,3}$.
Since $X$ is $K_{1,1,3}$-free and not hamiltonian,
the other graph in ${\cal F}$ is an induced subgraph of $X$.
$\Box$
\medskip
\noindent The next two results yield examples for sets ${\cal F}$ as in Proposition \ref{proposition2}.
Theorem \ref{theorem1} can actually be derived from
Theorem \ref{theorem6} below,
but we include a short independent proof
using the forbidden induced subgraphs.
Note that
Theorem \ref{theorem4}
extends the result of Zhang \cite{z},
because both
$K_1 + (K_1 \cup P_3)$ and $K_{1,1,3}$
contain $K_{1,3}$ as an induced subgraph.
\begin{theorem}\label{theorem1}
Every connected locally connected $\{ K_{1,1,3},K_1+P_4\}$-free graph of order at least $3$
is fully cycle extendable.
\end{theorem}
{\it Proof:}
Let $G$ be a connected locally connected $\{ K_{1,1,3},K_1+P_4\}$-free graph of order at least $3$.
As before,
the minimum degree of $G$ is at least $2$, and every vertex of $G$ lies on a triangle.
For a contradiction, we may assume that $C$ is a cycle in $G$ of order less than $n(G)$
such that $G$ contains no cycle $C'$ of order $n(C)+1$ with $V(C)\subseteq V(C')$.
We fix a cyclic order on $C$.
For every vertex $u$ on $C$,
let $u^-$ and $u^+$ be the predecessor and successor of $u$ on $C$ within the cyclic order.
Since $G$ is connected, some vertex $u$ on $C$ has a neighbor $x$ in $V(G)\setminus V(C)$.
Our assumption implies that $x$ is adjacent to neither $u^-$ nor $u^+$.
Since $G[N_G(u)]$ is connected and $P_4$-free, $G[N_G(u)]$ contains a path $xvu^-$.
Our assumption implies that $v$ lies on $C$.
First, we assume that $z^-$ is adjacent to $z^+$ for every vertex $z$ on $C$ that has a neighbor in $V(G)\setminus V(C)$.
Since $v$ is adjacent to $x$, $v^-$ is adjacent to $v^+$.
Note that $v^+=u^-$ or $v^-=u^+$ is possible.
Since $G[\{ u,u^-,u^+,x,v\}]$ is not $K_1+P_4$, $v$ is adjacent to $u^+$,
and the cycle $uu^+\ldots v^-v^+\ldots u^-vxu$ contradicts our assumption.
Hence, we may assume that $u^-$ is not adjacent to $u^+$.
Since $G[\{ u,u^-,u^+,x,v\}]$ is not $K_{1,1,3}$, $v$ is not adjacent to $u^+$.
Since $G[N_G(u)]$ is connected and $P_4$-free, $G[N_G(u)]$ contains a path $xwu^+$
where $w$ is distinct from $v$.
By symmetry of $v$ and $w$, $w$ is not adjacent to $u^-$.
If $v$ and $w$ are adjacent, then $G[\{ u,u^-,u^+,v,w\}]$ is $K_1+P_4$, and,
if $v$ and $w$ are not adjacent, then $G[\{ u,u^-,x,v,w\}]$ is $K_1+P_4$,
which is a contradiction, and completes the proof. $\Box$
\begin{theorem}\label{theorem4}
Every connected locally connected $\{ K_{1,1,3},K_1+(K_1\cup P_3)\}$-free graph of order at least $3$
is fully cycle extendable.
\end{theorem}
{\it Proof:}
Let $G$ be a connected locally connected $\{ K_{1,1,3},K_1+(K_1\cup P_3)\}$-free graph of order at least $3$.
As before,
the minimum degree of $G$ is at least $2$, and every vertex of $G$ lies on a triangle.
For a contradiction, we may assume that $C$ is a cycle in $G$ of order less than $n(G)$
such that $G$ contains no cycle $C'$ of order $n(C)+1$ with $V(C)\subseteq V(C')$.
We fix a cyclic order on $C$.
For every vertex $u$ on $C$,
let $u^-$ and $u^+$ be the predecessor and successor of $u$ on $C$ within the cyclic order.
Since $G$ is connected, some vertex $u$ on $C$ has a neighbor $x$ in $V(G)\setminus V(C)$.
Our assumption implies that $x$ is adjacent to neither $u^-$ nor $u^+$.
Let $d$ be the minimum distance within the graph $G[N_G(u)]$
between the vertex $x$ and a vertex in $\{ u^-,u^+\}$.
Clearly, $d\geq 2$.
Since $G[N_G(u)]$ is connected and $(K_1\cup P_3)$-free, we have $d\leq 3$.
First, we assume that $u^-$ and $u^+$ are not adjacent.
If $d=2$, then,
by symmetry, we may assume that $u$, $u^-$, and $x$ have a common neighbor $v$.
Now,
if $v$ is not a neighbor of $u^+$, then $G[\{ u,u^-,u^+,v,x\}]$ is $(K_1+(K_1\cup P_3))$,
and,
if $v$ is a neighbor of $u^+$, then $G[\{ u,u^-,u^+,v,x\}]$ is $K_{1,1,3}$,
which is a contradiction.
Hence, $d=3$, and, by symmetry, we may assume that $G[N_G(u)]$ contains a path $xvwu^-$.
Now,
if $w$ is not a neighbor of $u^+$, then $G[\{ u,u^+,v,w,x\}]$ is $(K_1+(K_1\cup P_3))$,
and,
if $w$ is a neighbor of $u^+$, then $G[\{ u,u^-,u^+,v,w\}]$ is $K_{1,1,3}$,
which is a contradiction.
Hence, $u^-$ and $u^+$ are adjacent.
Next, we assume that $d=2$.
By symmetry, we may assume that $u$, $u^-$, and $x$ have a common neighbor $v$.
By our assumption, the vertex $v$ lies on $C$.
By symmetry between $u$ and $v$, $v^-$ and $v^+$ are adjacent.
Note that $v^-=u^+$ or $v^+=u^-$ is possible.
Now the cycle $uu^+\ldots v^-v^+\ldots u^-vxu$ contradicts our assumption.
Hence, $d=3$.
By symmetry, we may assume that $xvwu^-$ is a path in $G[N_G(u)]$.
By our assumption, the vertex $w$ lies on $C$.
Clearly, $v$ is adjacent to neither $u^-$ nor $u^+$.
If $v$ does not lie on $C$,
then the minimum distance
between the vertex $v$ and a vertex in $\{ u^-,u^+\}$ is $2$,
in which case we can argue as above for $d=2$.
Hence, we may assume that $v$ lies on $C$.
By symmetry, $v^-$ and $v^+$ are adjacent.
If $w$ is not a neighbor of $u^+$, then $G[\{ u,u^-,u^+,w,x\}]$ is $(K_1+(K_1\cup P_3))$,
which is a contradiction.
Hence, $w$ is a neighbor of $u^+$.
By symmetry, we may assume that $u$, $v$, and $w$ appear in this order within the cyclic order on $C$.
If $w=v^+$, then the cycle $v^+\ldots u^-uxvv^-\ldots u^+v^+$ contradicts our assumption.
Hence, $w\not=v^+$.
If $w^-$ and $w^+$ are adjacent,
then the cycle $w^-w^+\ldots u^-wvxuu^+\ldots v^-v^+\ldots w^-$ contradicts our assumption.
Hence, $w^-$ and $w^+$ are not adjacent.
If $w^+$ and $u$ are adjacent,
then the cycle $uw^+\ldots u^-u^+\ldots v^-v^+\ldots w^-wvxu$ contradicts our assumption.
Hence, $w^+$ and $u$ are not adjacent,
which implies that $w^+\not=u^-$.
If $w^-$ and $u$ are adjacent,
then the cycle $uw^- \ldots v^+v^-\ldots u^+u^-\ldots w^+wvxu$ contradicts our assumption.
Hence, $w^-$ and $u$ are not adjacent,
which implies that $w^-\not=u^+$.
If $w^+$ and $v$ are adjacent,
then the cycle $uxvw^+\ldots u^-u^+\ldots v^-v^+\ldots wu$ contradicts our assumption.
Hence, $w^+$ and $v$ are not adjacent.
If $w^-$ and $v$ are adjacent,
then the cycle $uxvw^-\ldots v^+v^-\ldots u^+u^-\ldots w^+wu$ contradicts our assumption.
Hence, $w^-$ and $v$ are not adjacent.
If $u'\in \{ u^-,u^+\}$ and $w'\in \{ w^-,w^+\}$,
then $G[\{ u,v,w,u',w'\}]$ is not $(K_1+(K_1\cup P_3))$,
which implies that $u'$ and $w'$ are adjacent.
Hence, every vertex in $\{ u^-,u^+\}$ is adjacent to every vertex in $\{ w^-,w^+\}$.
Now, $G[\{ u,u^-,u^+,w^-,w^+\}]$ is $K_{1,1,3}$, which is a contradiction,
and completes the proof. $\Box$
\medskip
\noindent As we have seen in the previous results, the graph $K_{1,1,3}$ plays a special role.
Excluding this single graph, we obtain the following result.
\begin{theorem}\label{theorem2}
Every connected locally connected $\{ K_1+P_4,K_{1,4}, K_2+(K_1\cup K_2)\}$-free graph of order at least $3$
that is distinct from $K_{1,1,3}$ is fully cycle extendable.
\end{theorem}
{\it Proof:}
Let $G$ be a connected locally connected $\{ K_1+P_4,K_{1,4}, K_2+(K_1\cup K_2)\}$-free graph of order at least $3$.
As before,
the minimum degree of $G$ is at least $2$, and every vertex of $G$ lies on a triangle.
Recall that $G$ is distinct from $K_{1,1,3}$.
For a contradiction, we may assume that $C$ is a cycle in $G$ of order less than $n(G)$
such that $G$ contains no cycle $C'$ of order $n(C)+1$ with $V(C)\subseteq V(C')$.
We fix a cyclic order on $C$.
For every vertex $u$ on $C$,
let $u^-$ and $u^+$ be the predecessor and successor of $u$ on $C$ within the cyclic order.
Since $G$ is connected, some vertex $u$ on $C$ has a neighbor $x$ in $V(G)\setminus V(C)$.
Our assumption implies that $x$ is adjacent to neither $u^-$ nor $u^+$.
Since $G[N_G(u)]$ is connected and $P_4$-free,
$G[N_G(u)]$ contains a path $xvu^-$.
Our assumption implies that $v$ lies on $C$.
First, we assume that $u^-$ is adjacent to $u^+$.
Now,
if $v$ is not adjacent to $u^+$, then $G[\{ u,u^-,u^+,v,x\}]$ is $K_1+P_4$, and,
if $v$ is adjacent to $u^+$, then $G[\{ u,u^-,u^+,v,x\}]$ is $K_2+(K_1\cup K_2)$,
which is a contradiction.
Hence, $u^-$ is not adjacent to $u^+$.
In fact, by symmetry, for every vertex $\tilde{u}$ on $C$ that has a neighbor in $V(G)\setminus V(C)$,
we obtain that $\tilde{u}^-$ is not adjacent to $\tilde{u}^+$.
In particular, $v^-$ is not adjacent to $v^+$.
If $v^-$ is adjacent to $u^-$,
then the cycle $uxvv^+\ldots u^-v^-\ldots u^+u$ contradicts our assumption.
Hence, $v^-$ and $u^-$ are not adjacent.
Next, we assume that $v^+\not=u^-$.
Since $G[\{ u^-,x,v,v^-,v^+\}]$ is not $K_{1,4}$,
$u^-$ and $v^+$ are adjacent.
If $u$ is not adjacent to $v^+$, then $G[\{ u,u^-,v,v^+,x\}]$ is $K_1+P_4$, and,
if $u$ is adjacent to $v^+$, then $G[\{ u,u^-,v,v^+,x\}]$ is $K_2+(K_1\cup K_2)$,
which is a contradiction.
Hence, $v^+=u^-$.
Next, we assume that $v$ is adjacent to $u^+$.
By symmetry between $u^-$ and $u^+$, we obtain $v^-=u^+$,
that is, $C$ has order $4$.
Since $G$ is connected and distinct from $K_{1,1,3}$,
there is a vertex $y\not\in \{ u,v,u^-,u^+,x\}$ with a neighbor in $\{ u,v,u^-,u^+,x\}$.
If $y$ is adjacent to $u$ or $v$,
then, by symmetry, and since $G$ is $K_{1,4}$-free,
we obtain that $N_G(y)\cap \{ u,v,u^-,u^+,x\}=\{ u,v,x\}$,
and $G[\{ u,v,u^+,x,y\}]$ is $K_2+(K_1\cup K_2)$,
which is a contradiction.
Hence, $y$ is not adjacent to $u$ or $v$.
If $y$ is adjacent to $u^+$ or $u^-$,
then we obtain, by symmetry, that $u^+$ is a neighbor of $u^-$,
which is a contradiction.
Hence, the only neighbor of $y$ in $\{ u,v,u^-,u^+,x\}$ is $x$.
Since $G$ is locally connected and $(K_1+P_4)$-free,
this implies the existence of a vertex $z\not\in \{ u,v,u^-,u^+,x\}$
that is adjacent to $x$, $y$, and $u$.
Since $z\not\in \{ u,v,u^-,u^+,x\}$, and $z$ is adjacent to $u$,
we obtain a similar contradiction as above.
Hence, $v$ is not adjacent to $u^+$.
By symmetry, there is a path $xwu^+$ in $G[N_G(u)]$
such that $w^-=u^+$, and $w$ is not adjacent to $u^-$.
If $v$ and $w$ are adjacent,
then $G[\{ u,u^-,u^+,v,w\}]$ is $K_1+P_4$, and,
if $v$ and $w$ are not adjacent,
then $G[\{ u,u^-,v,x,w\}]$ is $K_1+P_4$,
which is a contradiction,
and completes the proof.
$\Box$
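\medskip

\noindent The exclusion of $K_{1,1,3}$ in Theorem~\ref{theorem2} is necessary. Indeed, $K_{1,1,3}$ is connected and locally connected, and, since the three graphs $K_1+P_4$, $K_{1,4}$, and $K_2+(K_1\cup K_2)$ all have order $5$ while the only induced subgraph of $K_{1,1,3}$ of order $5$ is $K_{1,1,3}$ itself, the graph $K_{1,1,3}$ is $\{ K_1+P_4,K_{1,4},K_2+(K_1\cup K_2)\}$-free. Nevertheless, $K_{1,1,3}$ is not fully cycle extendable: it contains a cycle of order $4$, but no Hamiltonian cycle, because on such a cycle each of the three vertices of degree $2$ would need both dominating vertices as cycle-neighbors, which is impossible since each dominating vertex has only two cycle-neighbors.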
\medskip
\noindent We proceed to our results on locally Ore and locally Dirac graphs.
As we will see below,
the neighborhood condition used in the next result is weaker than being locally Ore or locally Dirac.
\begin{theorem}\label{theorem6}
If $G$ is a connected graph of order at least $3$ such that
\begin{eqnarray}\label{e1}
|N_G(u)\cap N_G(v)\cap N_G(w)|
>|N_G(u)\setminus (N_G[v]\cup N_G[w])|
\end{eqnarray}
for every induced path $vuw$ of order $3$ in $G$,
then $G$ is fully cycle extendable.
\end{theorem}
{\it Proof:} Let $G$ be as in the statement.
Clearly, (\ref{e1}) implies that every vertex of $G$ of degree at least $2$ lies on a triangle: if some vertex $u$ has two adjacent neighbors, this is immediate, and, if $v$ and $w$ are two non-adjacent neighbors of $u$, then $vuw$ is an induced path of order $3$, and (\ref{e1}) yields a common neighbor of $u$, $v$, and $w$. If $v$ is a vertex of degree $1$ in $G$, and $u$ is the unique neighbor of $v$, then the connectivity of $G$ and $n(G)\geq 3$ imply that $u$ has another neighbor $w$,
and $vuw$ is an induced path of order $3$ that violates (\ref{e1}). Therefore, $G$ has minimum degree at least $2$, and every vertex of $G$ lies on a triangle.
For a contradiction, we may assume that $C$ is a cycle in $G$ of length less than $n(G)$ such that $G$ does not contain a cycle $C'$ of length $n(C)+1$ with $V(C)\subseteq V(C')$.
We fix a cyclic order on $C$.
For every vertex $u$ on $C$,
let $u^+$ be the successor of $u$ on $C$ within the cyclic order.
Since $G$ is connected, some vertex $z$ in $V(G)\setminus V(C)$ has a neighbor on $C$.
Let $u$ be some neighbor of $z$ on $C$.
By our assumption, $z$ is not adjacent to $u^+$,
and $N_G(u)\cap N_G(u^+)\cap N_G(z)\subseteq V(C)$.
Let
\begin{eqnarray*}
A(u) &=& \Big\{v\in N_G(u)\cap N_G(u^+)\cap N_G(z): v^+\not\in N_G(u)\Big\},\\
B(u) &=& \Big\{v\in N_G(u)\cap N_G(u^+)\cap N_G(z): v^+\in N_G(u)\Big\},\mbox{ and}\\
C(u) &=& \Big\{v\in (N_G(u)\cap N_G(z)\cap V(C))\setminus N_G(u^+): v^+\in N_G(u)\Big\}.
\end{eqnarray*}
Note that $A(u)$, $B(u)$, and $C(u)$
are disjoint subsets of $V(C)$.
If $x$ and $y$ are distinct neighbors of $z$ on $C$,
then our assumption implies that $x^+$ and $y^+$ are not adjacent.
Therefore, if $v\in B(u)\cup C(u)$,
then $v^+$ is not adjacent to $z$ or $u^+$,
and hence
$v^+\in N_G(u)\setminus (N_G[u^+]\cup N_G[z])$,
which implies
$|N_G(u)\setminus (N_G[u^+]\cup N_G[z])|\geq
|B(u)\cup C(u)|.$
Now,
\begin{eqnarray*}
|A(u)|+|B(u)| & = & |N_G(u)\cap N_G(u^+)\cap N(z)|\\
& \stackrel{(\ref{e1})}{>} &
|N_G(u)\setminus (N_G[u^+]\cup N_G[z])|\\
&\geq & |B(u)\cup C(u)|\\
& = & |B(u)|+|C(u)|,
\end{eqnarray*}
which implies
\begin{eqnarray}\label{e2}
|A(u)| &>&|C(u)|
\end{eqnarray}
for every neighbor $u$ of $z$ on $C$.
Now, we define a sequence
$u_1,u_2,u_3,\ldots$
of not necessarily distinct neighbors of $z$ on $C$.
Furthermore, for every neighbor $x$ of $z$ on $C$,
we define two sets
$A_k(x)\subseteq A(x)$
and
$C_k(x)\subseteq C(x)$
for every positive integer $k$
for which $u_k$ is defined
in such a way that
$$A_1(x)\subseteq A_2(x)\subseteq A_3(x)\subseteq\ldots
\subseteq A(x)$$
and
$$C_1(x)\subseteq C_2(x)\subseteq C_3(x)\subseteq\ldots
\subseteq C(x).$$
Let $u_1$ be any neighbor of $z$ on $C$,
and let $A_1(x)=C_1(x)=\emptyset$ for every neighbor $x$ of $z$ on $C$.
Now, we assume that
the vertices $u_1,\ldots,u_k$
as well as
the sets $A_k(x)$
and $C_k(x)$
for the neighbors $x$ of $z$ on $C$
have already been defined for some positive integer $k$.
If $A(u_k)\setminus A_k(u_k)\not=\emptyset$,
then let $u_{k+1}\in A(u_k)\setminus A_k(u_k)$.
Note that, by definition,
$u_{k+1}\in A(u_k)$ implies that $u_k\in C(u_{k+1})$.
Let
$$
A_{k+1}(x)=
\left\{
\begin{array}{ll}
A_k(u_k)\cup \{ u_{k+1}\} &, x=u_k\mbox{ and}\\
A_k(x) &,\mbox{ otherwise},
\end{array}
\right.
$$
and
$$C_{k+1}(x)=
\left\{
\begin{array}{ll}
C_k(u_{k+1})\cup \{ u_k\} &, x=u_{k+1}\mbox{ and}\\
C_k(x) &,\mbox{ otherwise}.
\end{array}
\right.
$$
By the choice of $u_{k+1}$,
we have $u_{k+1}\not\in A_k(u_k)$,
and hence $|A_{k+1}(u_k)|=|A_k(u_k)|+1$.
If $u_k\in C_k(u_{k+1})$,
then the above definitions imply
the existence of some integer $i$ with $1\leq i<k$
such that $u_i=u_k$ and $u_{i+1}=u_{k+1}$.
Now, we obtain the contradiction
$$u_{k+1}=u_{i+1}\in
A_i(u_i)\cup \{ u_{i+1}\}=
A_{i+1}(u_i)=A_{i+1}(u_k)\subseteq A_k(u_k).$$
This implies $u_k\not\in C_k(u_{k+1})$,
and hence $|C_{k+1}(u_{k+1})|=|C_k(u_{k+1})|+1$.
By a simple inductive argument,
we obtain that,
for every positive integer $k$
for which $u_k$ is defined,
\begin{itemize}
\item $$\sum_{x\in N_G(z)\cap V(C)}\left(|A_k(x)|+|C_k(x)|\right)=2(k-1),$$
\item if $u_k=u_1$, then
\begin{eqnarray*}
|A_k(x)| &=& |C_k(x)|\mbox{ for $x\in N_G(z)\cap V(C)$},
\end{eqnarray*}
and,
\item
if $u_k\not=u_1$, then
\begin{eqnarray*}
|A_k(u_1)| &=& |C_k(u_1)|+1,\\
|A_k(u_k)| &=& |C_k(u_k)|-1,\mbox{ and}\\
|A_k(x)| &=& |C_k(x)|\mbox{ for $x\in (N_G(z)\cap V(C))\setminus \{ u_1,u_k\}$}.
\end{eqnarray*}
\end{itemize}
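For instance, at the first step, $u_2\in A(u_1)\subseteq N_G(u_1)$ forces $u_2\not=u_1$, and the above definitions yield $A_2(u_1)=\{ u_2\}$ and $C_2(u_2)=\{ u_1\}$, while all remaining sets $A_2(x)$ and $C_2(x)$ are empty; hence the total sum equals $2=2(2-1)$, and $|A_2(u_1)|=|C_2(u_1)|+1$ as well as $|A_2(u_2)|=|C_2(u_2)|-1$, in accordance with the three statements above.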
Since $|A_k(u_k)|\leq |C_k(u_k)|$ holds in every case,
we obtain
\begin{eqnarray*}
|A(u_k)\setminus A_k(u_k)|
&=& |A(u_k)|-|A_k(u_k)|\\
&\geq & |A(u_k)|-|C_k(u_k)|\\
&\geq & |A(u_k)|-|C(u_k)|\\
& \stackrel{(\ref{e2})}{>} & 0
\end{eqnarray*}
for every positive integer $k$
for which $u_k$ is defined.
This implies that $u_1,u_2,u_3,\ldots$
is actually an infinite sequence,
that is, $u_k$ is defined for every positive integer $k$.
Since
$$
\sum_{x\in N_G(z)\cap V(C)}\left(|A_k(x)|+|C_k(x)|\right)
\leq \sum_{x\in N_G(z)\cap V(C)}|A(x)\cup C(x)|
\leq \sum_{x\in V(C)}n(C)= n(C)^2,$$
we obtain a contradiction for $k>n(C)^2$,
which completes the proof. $\Box$
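\medskip

\noindent To illustrate condition \eqref{e1}: in a complete graph $K_n$ with $n\geq 3$, there is no induced path of order $3$, so \eqref{e1} holds vacuously, in accordance with the fact that complete graphs are fully cycle extendable. On the other hand, for the cycle $C_5$ and any of its paths $vuw$ of order $3$, both sides of \eqref{e1} equal $0$, so \eqref{e1} fails; this is consistent with the fact that no vertex of $C_5$ lies on a triangle.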
\medskip
\noindent Note that
Corollary \ref{corollary1}
extends the main result,
Theorem 3.3,
of Kubicka et al. \cite{kubicka}.
\begin{corollary}\label{corollary1}
Every connected locally Ore graph $G$ of order at least $3$ is fully cycle extendable.
\end{corollary}
{\it Proof:} Let $G$ be as in the statement.
Let $vuw$ be an induced path of order $3$ in $G$.
By inclusion-exclusion,
we obtain
\begin{eqnarray*}
d_G(u) & = &
|\{ v,w\}|+|(N_G(v)\cup N_G(w))\cap N_G(u)|+|N_G(u)\setminus (N_G[v]\cup N_G[w])|\\
&=& 2+\Big(|N_G(u)\cap N_G(v)|+|N_G(u)\cap N_G(w)|-|N_G(u)\cap N_G(v)\cap N_G(w)|\Big)\\
&& +|N_G(u)\setminus (N_G[v]\cup N_G[w])|.
\end{eqnarray*}
Since $G$ is locally Ore, we have
$|N_G(u)\cap N_G(v)|+|N_G(u)\cap N_G(w)|\geq d_G(u)$,
which implies
$|N_G(u)\cap N_G(v)\cap N_G(w)|
\geq 2+|N_G(u)\setminus (N_G[v]\cup N_G[w])|$,
and hence (\ref{e1}).
Now the desired result follows from Theorem \ref{theorem6}. $\Box$
\medskip
\noindent Corollary \ref{corollary1} immediately implies the following.
\begin{corollary}\label{corollary2}
Every connected locally Dirac graph $G$ of order at least $3$ is fully cycle extendable.
\end{corollary}
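\medskip

\noindent Indeed, if $G$ is locally Dirac, then the minimum degree of $G[N_G(u)]$ is at least $\frac{d_G(u)}{2}$ for every vertex $u$ of $G$, so, for every induced path $vuw$ of order $3$ in $G$,
$$|N_G(u)\cap N_G(v)|+|N_G(u)\cap N_G(w)|\geq \frac{d_G(u)}{2}+\frac{d_G(u)}{2}=d_G(u),$$
that is, every locally Dirac graph is locally Ore, and Corollary~\ref{corollary2} follows immediately from Corollary~\ref{corollary1}.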
On a Riemannian manifold, the Ricci flow $g(t)$ is a family of metrics evolving in the direction of its Ricci curvature:
\begin{equation}
\partial_t g(t)=-2\text{\rm Ric}(g(t)).
\end{equation}
Since its introduction by Hamilton \cite{Hamilton1982}, the Ricci flow has been used in a wide variety of settings to regularize metrics. From an analytic point of view, the Ricci flow is only weakly parabolic unless a special gauge is used. It was later discovered by Deturck \cite{Deturck1983} that the Ricci flow is equivalent, modulo diffeomorphisms, to a strictly parabolic system, which is by now called the Ricci-Deturck flow.
For a given smooth Riemannian metric $h$ on $M$, a smooth family $g(t),t\in [a,b]$ is said to be a solution to the Ricci-Deturck $h$-flow if it satisfies
\begin{equation}\label{eqn: h-flow-Ricciform}
\left\{\begin{array}{ll}
\partial_t g_{ij}=-2\text{\rm Ric}_{ij}+\nabla_i W_j +\nabla_jW_i;\\[1mm]
W^k=g^{pq}\left( \Gamma_{pq}^k-\tilde\Gamma_{pq}^k\right)
\end{array}
\right.
\end{equation}
where $\Gamma$ and $\tilde \Gamma$ denote the Christoffel symbols of the Levi-Civita connections of $g(t)$ and $h$ respectively. In local coordinates, \eqref{eqn: h-flow-Ricciform} is equivalent to the following strictly parabolic system:
\begin{equation}\label{eqn: h-flow}
\begin{split}
\partial_t g_{ij}&=g^{pq} \tilde \nabla_p\tilde\nabla_q g_{ij}-g^{kl}g_{ip}h^{pq} \tilde R_{jkql}-g^{kl}g_{jp} h^{pq}\tilde R_{ikql}\\
&\quad +\frac12 g^{kl}g^{pq}\big(\tilde \nabla_i g_{pk}\tilde \nabla_j g_{ql}+2\tilde \nabla_k g_{jp}\tilde \nabla_q g_{il}-2\tilde \nabla_k g_{jp}\tilde \nabla_l g_{iq}\\[1mm]
&\quad -2\tilde \nabla_j g_{pk}\tilde \nabla_l g_{iq}-2\tilde \nabla_i g_{pk}\tilde \nabla_l g_{qj}\big)
\end{split}
\end{equation}
where $\tilde{R}$ and $\tilde \nabla$ denote the curvature and the Levi-Civita connection of $h$. The flow is strictly parabolic and is known to be equivalent to the Ricci flow modulo diffeomorphisms, as seen from \eqref{eqn: h-flow-Ricciform}. We follow the terminology in \cite{LammSimon2021} and call it the Ricci-Deturck $h$-flow in order to emphasize its dependence on the chosen background metric $h$. Since the flow is strictly parabolic, it is expected to improve the regularity of the initial data. When the initial data is sufficiently regular at infinity, it is not difficult to construct a regular solution for a short time using the implicit function theorem on a suitable Banach space. Without regularity assumptions at infinity, however, the existence problem becomes difficult. The first important result is due to Shi \cite{Shi1989}, who established a short-time solution to the Ricci-Deturck flow with $h=g_0$, where $g_0$ is complete with bounded curvature. Using this, Shi was able to construct complete bounded curvature Ricci flows from bounded curvature initial metrics; moreover, the derivatives of the curvature become bounded for positive time. Since then, a large body of work has improved Shi's result, see for example \cite{KochLamm2012,Hochard2016,BCRW2019,Lai2019,ChauLee2020,LeeTam2020,Lai2021,LeeTam2021-GT,SimonTopping2017,ToppingHao2021} and the references therein.
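For orientation, we note that when the background metric $h$ is the flat Euclidean metric, the curvature terms in \eqref{eqn: h-flow} vanish and the flow takes the schematic quasilinear form
\begin{equation*}
\partial_t g_{ij}=g^{pq} \partial_p\partial_q g_{ij}+g^{-1}*g^{-1}*\partial g*\partial g,
\end{equation*}
so that, for metrics close to the Euclidean one, it may be viewed as a perturbation of the linear heat equation $\partial_t g_{ij}=\Delta g_{ij}$; this is the point of view taken in \cite{KochLamm2012}.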
Motivated by questions concerning singular metrics in scalar curvature problems (for instance, see Conjecture~\ref{conj-1}), in this work we are primarily interested in the case when the initial metric is in $L^\infty$. We say that $g$ is a $L^\infty$ metric if $g$ is a measurable section of $\mathrm{Sym}_2(T^*M)$ such that $\Lambda^{-1}h\leq g\leq \Lambda h$ almost everywhere on $M$ for some $\Lambda>1$ and some smooth metric $h$ on $M$. To the best of the authors' knowledge, the first regularization result along this line is by Simon \cite{Simon2002}, where the case when $g_0$ is close to a smooth bounded curvature metric $h$ in the $L^\infty$ sense is considered. In particular, it was shown that if $(1-\varepsilon)h\leq g_0\leq (1+\varepsilon)h$ for some sufficiently small dimensional constant $\varepsilon$, then the Ricci-Deturck $h$-flow admits a short-time solution on $M\times (0,T]$ which is smooth with uniform higher order regularity for $t>0$. Moreover, if $g_0$ is $C^0_{\mathrm{loc}}$, then the resulting solution attains $g_0$ in $C^0_{\mathrm{loc}}$ as $t\to 0$; see also \cite{KochLamm2012} for related work on Euclidean space using the heat kernel. The Ricci flow theory from $C^0$ initial metrics has been developed further by Burkhardt-Guim \cite{Burkhardt2019} in order to study the notion of scalar curvature lower bounds for $C^0$ metrics on compact manifolds. The theory of Ricci-Deturck flow from $C^0$ data turns out to be very powerful in many stability problems related to scalar curvature lower bounds, for instance see \cite{McFeronSze2012,Bamler2016,JiangShengZhang2021,LeeTam2021,ShiTam2018}. It is therefore important to seek extensions to more general $L^\infty$ initial data.
It is, however, hard to obtain estimates under only $L^\infty$ data in general. In a recent work by Lamm and Simon \cite{LammSimon2021}, rough metrics $g_0\in L^\infty\cap W^{2,2}$ on complete four-manifolds $(M^4,h)$ are considered, where $h$ is assumed to have bounded geometry of infinite order. It was shown that the Ricci-Deturck $h$-flow exists for a short time and is uniformly smooth for $t>0$. Moreover, the flow converges back to the initial data in the $W^{2,2}_{\mathrm{loc}}$ sense as $t\to 0$. We stress that a $L^\infty\cap W^{1,n}_{\mathrm{loc}}$ Riemannian metric is in general not continuous, see the example discussed in the introduction of \cite{LammSimon2021}. Motivated by their work, we consider the case when $g_0\in L^\infty\cap W^{1,n}$. The following is the quantitative version of the existence theorem, which generalizes the work in \cite{LammSimon2021} to weaker initial regularity assumptions and higher dimensions. We always assume $n>2$, where $n$ is the dimension of $M$.
\begin{thm}\label{thm-existence}
Let $(M^n,h)$ be a complete Riemannian manifold with $|\text{\rm Rm}(h)|\leq 1$. For any $\Lambda_0>1$, there is $\varepsilon_0(n,\Lambda_0)>0$ such that the following holds. Suppose $g_0$ is a $L^\infty\cap W^{1,n}_{\mathrm{loc}}$ Riemannian metric, not necessarily smooth, on $M$ such that
\begin{enumerate}
\item[(i)] $\Lambda_0^{-1}h\leq g_0\leq \Lambda_0 h$ on $M$;
\item[(ii)] for all $x\in M$,
$$\left(\fint_{B_h(x,1)} |\tilde \nabla g_0|^n \; d\mathrm{vol}_h \right)^{1/n}\leq \varepsilon$$
for some $\varepsilon<\varepsilon_0$.
\end{enumerate}
Then there are $T(n,\Lambda_0),C_0(n,\Lambda_0),C_{n}(n),\Upsilon_n>0$ and a smooth solution $g(t)$ to the Ricci-Deturck $h$-flow on $M\times (0,T]$ such that
\begin{enumerate}\setlength{\itemsep}{0.5mm}
\item[(a)] $(\Upsilon_n\Lambda_0)^{-1} h\leq g(t)\leq \Upsilon_n\Lambda_0 h$ on $M\times (0,T]$;
\item[(b)] For all $x\in M$ and $t\in (0,T]$,
$$\left(\fint_{B_h(x,1)} |\tilde \nabla g(t)|^n \; d\mathrm{vol}_h \right)^{1/n}\leq C_n\varepsilon+C_0 t^{1/n};$$
\item[(c)] For any $k\in \mathbb{N}$, there is $C(k,n,\Lambda_0)>0$ such that for all $t\in (0,T]$,
\begin{equation*}
\sup_M |\tilde\nabla^k g(t)|\leq C(k,n,\Lambda_0) t^{-k/2}.
\end{equation*}
\item[(d)] For all $t\in (0,T]$, $\sup_M t^{1/2} |\tilde\nabla g(t)|\leq C(n,\Lambda_0) \sqrt{\varepsilon+t^{1/n}}$;
\item[(e)] If $g_0\in C^\infty_{\mathrm{loc}}(\Omega)$ for some $\Omega\Subset M$, then $g(t)\to g_0$ in $C^\infty_{\mathrm{loc}}(\Omega)$ as $t\to 0$;
\item[(f)] $g(t)\to g_0$ in $W^{1,n}_{\mathrm{loc}}$ as $t\to 0$.
\end{enumerate}
\end{thm}
Moreover, the solution is unique within a suitable class, see Theorem~\ref{thm-unique}. Here the background metric $h$ serves as a reference which rules out singularities forming by gradient concentration alone. A simple scaling argument shows that one can replace $h$ by any other fixed metric which is uniformly $C^1$ comparable to $h$.
In short, Theorem~\ref{thm-existence} shows that a $L^\infty$ metric with small local gradient concentration can be smoothed with quantitative estimates. In particular, this theorem implies that if $g_0\in L^{\infty}\cap W^{1,n}$ and $h$ is uniformly non-collapsed, then one can deform the metric $g_0$ slightly to a smooth metric. We refer readers to Theorem~\ref{shorttime-compact} for the detailed statement. This is similar in spirit to the work in \cite{ChanChenLee2022}, where smooth metrics with small $L^{n/2}$ curvature are considered.
\begin{rem}
Since the Ricci-Deturck $h$-flow is equivalent to the Ricci flow modulo diffeomorphisms, it is natural to expect that one can produce a Ricci flow starting from $g_0$. If $g_0$ is assumed to be smooth, this was carried out in \cite{HuangTam2018}. For rough initial data, we refer readers to \cite{LammSimon2021} for a detailed exposition. We do not pursue this here.
\end{rem}
When the background metric $(M,h)$ is the standard Euclidean space, we are able to show that the constructed Ricci-Deturck $h$-flow exists for all time. Moreover, it converges to $h$ if it is initially close to $h$ at spatial infinity, see Section~\ref{Sec:stability}.
The most important feature of the Ricci-Deturck flow (or Ricci flow) is that it tends to preserve non-negativity of curvature. In particular, it is well-known that the Ricci-Deturck flow preserves scalar curvature lower bounds in the smooth case. Using this property and the new existence theory above, we study several problems related to metrics with scalar curvature lower bounds.
When $M=\mathbb{T}^n$, the celebrated works of Schoen and Yau \cite{SchoenYau1979,SchoenYau1979-2} and of Gromov and Lawson \cite{GromovLawson1980} show that metrics with $\mathcal{R}\geq 0$ must be flat. In \cite{Gromov2014}, Gromov asked whether a sequence of metrics $g_i$ with $\mathcal{R}(g_i)\geq -i^{-1}$ will sub-converge to a flat metric in some appropriate weak sense. In Theorem~\ref{thm:stability-sigma}, we show that if the $g_i$ are uniformly $L^\infty$ with uniformly small gradient concentration, then they sub-converge to a flat metric in $L^p$ for all $p>0$. For related works, we refer interested readers to \cite{AHPPS2019,Allen2020,CAP2020,ChuLee2021,ChuLee2021-2} and the survey paper of Sormani \cite{Sormani2021} for a comprehensive discussion.
The torus stability problem is, on the other hand, closely related to a rigidity conjecture of Schoen predicting that $L^\infty$ metrics on $\mathbb{T}^n$ with co-dimension three singularities and $\mathcal{R}\geq 0$ on the regular part must have removable singularities, see Conjecture~\ref{conj-1} for the full statement. There are many important works toward the full conjecture. For instance, it was confirmed by Li and Mantoulidis \cite{LiMantoulidis2019} when $n=3$ using the minimal surface method. When the singular metric is continuous, it was confirmed by the second named author and Tam \cite{LeeTam2021}. To this end, in this work we consider singular metrics which are additionally $W^{1,n}$ and show that they must be Ricci-flat on the regular part, see Theorem~\ref{rigidity-FollowingLEETAM}. Using the same argument as in \cite{LeeTam2021}, we also prove a positive mass theorem for metrics with analogous singularities, see Theorem~\ref{rigidity-FollowingLEETAM-PMT}.
Another related problem is whether a scalar curvature lower bound is preserved under weak convergence if the limit is a-priori smooth. The first result along this line is by Gromov \cite{Gromov2014}, saying that the scalar curvature lower bound is preserved under $C^0$ convergence. In \cite{Bamler2016}, a Ricci flow proof was given by Bamler. This was later generalized by Burkhardt-Guim \cite{Burkhardt2019} to the case when the limit is not smooth. Using the Ricci-Deturck flow and uniqueness, we show that the same conclusion holds for uniformly bi-Lipschitz sequences converging to a smooth metric in $W^{1,n}$, see Theorem~\ref{thm:stability-R-lowerbound}.
The paper is organized as follows. In Section~\ref{Sec:apriori}, we will establish a-priori estimates for smooth Ricci-Deturck flows. In Section~\ref{Sec:Existence}, we will use these a-priori estimates to construct short-time solutions from rough initial data; in particular, we prove our main result, Theorem~\ref{thm-existence}. In Section~\ref{Sec:stability}, we will study the stability of rough initial metrics on Euclidean space along the Ricci-Deturck flow. In Section~\ref{Sec:almostRid}, we will use the Ricci-Deturck flow to study the stability problem in scalar curvature related to the Yamabe type of manifolds. In Section~\ref{Sec:singular}, we use the Ricci-Deturck flow to regularize singular metrics and establish rigidity. In Section~\ref{Sec:scalarPres}, we study the scalar curvature persistence problem under $W^{1,n}$ convergence using the new existence theory.
\medskip
{\it Acknowledgement}:
The authors would like to thank Tobias Lamm for suggesting the problem and useful discussion. J. Chu was partially supported by Fundamental Research Funds for the Central Universities (No. 7100603592 and No. 7100603624).
\section{A-priori estimates}\label{Sec:apriori}
In this section, we will assume the Ricci-Deturck $h$-flow to be smooth up to $t=0$ and derive a-priori estimates under a small gradient concentration assumption. To fix our notation, we will use $|\Omega|$ to denote the measure of $\Omega$ with respect to $h$. We will also write $a\wedge b=\min\{a,b\}$ for $a,b\in \mathbb{R}$.
\medskip
Suppose $(M,h)$ is a complete (not necessarily compact) manifold with bounded curvature. The following proposition shows that we may, without loss of generality, assume that all derivatives of the curvature $\text{\rm Rm}(h)$ are bounded.
\begin{prop}\label{prop:RF-smoothing}
Suppose $(M,h_0)$ is a complete manifold with $|\text{\rm Rm}(h_0)|\leq 1$, then for any $\varepsilon>0$, there is a complete metric $\tilde h$ on $M$ such that
\begin{enumerate}
\item[(i)] $\|\tilde h-h_0\|_{C^1(M,h_0)}<\varepsilon$;
\item[(ii)] for all $k\in \mathbb{N}$, there is $C(n,k,\varepsilon)>0$ such that
$$\sup_M|\nabla^k \text{\rm Rm}(\tilde h)|\leq C(n,k,\varepsilon).$$
\end{enumerate}
\end{prop}
\begin{proof}
Since $h_0$ has bounded curvature, the classical existence theory of Ricci flow by Shi \cite{Shi1989} and the doubling time estimate imply that there are $T_n>0$ and a solution to the Ricci flow $h(t)$ starting from $h(0)=h_0$ such that $|\text{\rm Rm}(h(t))|\leq 2$ on $[0,T_n]$. Shi's estimate implies that for all $k\in \mathbb{N}$, there is $C(n,k)>0$ such that for all $t\in (0,T_n]$,
\begin{equation}\label{prop:RF-smoothing eqn 1}
|\nabla^k \text{\rm Rm}(h(t))|\leq \frac{C(n,k)}{t^{k/2}}.
\end{equation}
Since $|\text{\rm Rm}(h(t))|\leq 2$ on $[0,T_n]$, and hence $|\text{\rm Ric}(h(t))|\leq C(n)$, the Ricci flow equation implies
\begin{equation}
\left|\frac{\partial}{\partial t}h(t)\right| = 2|\text{\rm Ric}(h(t))| \leq C(n)
\end{equation}
and so
\begin{equation}\label{prop:RF-smoothing eqn 2}
e^{-C(n)t}h_0\leq h(t)\leq e^{C(n)t}h_0.
\end{equation}
It remains to consider the $C^1$ continuity of $h(t)$. By Shi's estimates,
\begin{equation}
\begin{split}
\frac{\partial}{\partial t}|\nabla^{h_0}h(t)|_{h_0}^2&=-4\langle \nabla^{h_0}\text{\rm Ric}(h(t)), \nabla^{h_0} h(t) \rangle_{h_0}\leq \frac{C_1}{t^{1/2}}|\nabla^{h_0}h(t)|_{h_0}.
\end{split}
\end{equation}
For any $\sigma>0$, we define $Q=\sqrt{|\nabla^{h_0}h(t)|_{h_0}^2+\sigma}$. Then the above shows $\partial_{t}(Q^{2})\leq C_{1}Qt^{-1/2}$ and hence $\partial_{t}Q\leq \frac{C_{1}}{2}t^{-1/2}$. Integrating both sides over $[0,t]$ and using $|\nabla^{h_0}h(0)|_{h_0}=0$,
\begin{equation}
\sqrt{|\nabla^{h_0}h(t)|_{h_0}^2+\sigma}-\sqrt{\sigma} \leq C_{1}t^{1/2}.
\end{equation}
Letting $\sigma\to0$, we conclude that $|\nabla^{h_0}h(t)|_{h_{0}}\leq C_{1}t^{1/2}$ on $M\times [0,T_n]$. Combining this with \eqref{prop:RF-smoothing eqn 1} and \eqref{prop:RF-smoothing eqn 2}, the assertion follows by choosing $\tilde h=h(t_\varepsilon)$ for sufficiently small $t_{\varepsilon}$.
\end{proof}
By Proposition~\ref{prop:RF-smoothing} and scaling, in what follows we may as well assume $h$ to satisfy the following slightly stronger assumption:
\begin{equation*}
(\star):\left\{
\begin{array}{ll}
\forall k\in \mathbb{N},\; \exists\; C(n,k)>0, \; \sup_M|\nabla^k \text{\rm Rm}(h)|\leq C(n,k);\\[1mm]
\mathrm{diam}(M,h)>4.
\end{array}
\right.
\end{equation*}
If $M$ is non-compact, the diameter is understood to be $+\infty$. We introduce this so that the Sobolev inequality below is always local. It is standard (see e.g. \cite[Theorem 14.3]{LiBook}) that a local Sobolev inequality holds with respect to the metric $h$: there is $C_{S}>0$ such that for any $x_0\in M$ and $\phi\in C^\infty_{c}(B_h(x_0,2))$,
\begin{equation}\label{Sobo-ineq}
\left(\fint_{B_h(x_0,2)} \phi^\frac{2n}{n-2} d\mathrm{vol}_h \right)^\frac{n-2}{n}\leq C_S \cdot \left(\fint_{B_h(x_0,2)} |\tilde\nabla \phi|^2 d\mathrm{vol}_h \right).
\end{equation}
Since we have fixed the radius and rescaled the curvature, the Sobolev constant is in fact a dimensional constant. We also remark that the Sobolev inequality is stable under $C^0$ perturbations of the metric.
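Unwinding the volume averages, \eqref{Sobo-ineq} is equivalent to
\begin{equation*}
\left(\int_{B_h(x_0,2)} \phi^\frac{2n}{n-2} \,d\mathrm{vol}_h \right)^\frac{n-2}{n}\leq C_S \,|B_h(x_0,2)|^{-2/n} \int_{B_h(x_0,2)} |\tilde\nabla \phi|^2 \,d\mathrm{vol}_h,
\end{equation*}
which is the form in which it will be applied below.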
\medskip
Before we establish the crucial a-priori estimates, we will collect an important smoothing estimate for the Ricci-Deturck $h$-flow.
\begin{lma}\label{lma:localEstimate-fromC^1}
Suppose $g(t)$ is a smooth solution of the Ricci-Deturck $h$-flow on $M\times [0,T]$ for some background metric $h$ satisfying $(\star)$ so that for $t\in (0,T]$,
\begin{equation}
\left\{
\begin{array}{ll}
\Lambda^{-1} h\leq g(t)\leq \Lambda h;\\[1mm]
\sup_M |\tilde \nabla g(t)|\leq C_1 t^{-1/2}
\end{array}
\right.
\end{equation}
for some $\Lambda,C_1>0$. Then for all $k\in \mathbb{N}$, there is $C(n,k,C_1,\Lambda)>0$ such that for all $t\in (0,T]$,
\begin{equation}
\sup_M|\tilde \nabla^k g(t)|\leq \frac{C(n,k,C_1,\Lambda)}{t^{k/2}}.
\end{equation}
\end{lma}
\begin{proof}
This follows from the proof of \cite[Lemma 4.2, Theorem 4.3]{Simon2002}. We remark that since we have assumed a bound on $|\tilde\nabla g(t)|$ and metric equivalence, the ``$\delta$ fair to $h$" assumption as in \cite[Lemma 4.2, Theorem 4.3]{Simon2002} is not necessary. The proof can be carried out by the Bernstein-Shi trick, see also \cite{Shi1989}.
\end{proof}
In particular, Lemma~\ref{lma:localEstimate-fromC^1} reduces the question of obtaining derivative estimates to obtaining a $C^1$ estimate of the Ricci-Deturck $h$-flow. The following lemma shows that as long as a concentration estimate of $\tilde \nabla g(t)$ is preserved along the flow, the $C^{1}$ estimate will follow.
\begin{lma}\label{lma:localEstimate-fromC^1conc}
For any $\Lambda>1$, there is $c_1(n,\Lambda)>0$ such that if $g(t)$ is a smooth solution of the Ricci-Deturck $h$-flow on $M\times [0,T]$ for some background metric $h$ satisfying the assumption $(\star)$ so that
\begin{enumerate}
\item[(i)] $\Lambda^{-1}h\leq g(t)\leq \Lambda h$ on $M\times [0,T]$;
\item[(ii)] $\forall x\in M$ and $t\in (0,T]$,
$$\left(t^{n/2}\cdot \fint_{B_h(x,\sqrt{t})}|\tilde \nabla g(t)|^n d\mathrm{vol}_h \right)^{1/n}\leq c_1\varepsilon^2 \quad\text{for some}\; \varepsilon\in (0,1);$$
\item[(iii)] $\sup_{M\times [0,T]} |\tilde \nabla^k g(t)|<+\infty$ for all $k\in \mathbb{N}$.
\end{enumerate}
Then for all $t\in (0,T\wedge 1]$, we have
\begin{equation}
\sup_M |\tilde \nabla g(t)|< \frac{\varepsilon}{\sqrt{t}}.
\end{equation}
\end{lma}
\begin{proof}
Since the assumptions are invariant under parabolic scaling: $\hat g(t)=\lambda^2 g(\lambda^{-2}t)$ and $\hat h=\lambda^2 h$ for $\lambda\geq 1$, we may assume $T=1$.
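Indeed, if $\hat g(t)=\lambda^2 g(\lambda^{-2}t)$ and $\hat h=\lambda^2 h$, then $|\nabla^{\hat h}\hat g(t)|_{\hat h}=\lambda^{-1}|\tilde \nabla g(\lambda^{-2}t)|_{h}$ and $B_{\hat h}(x,\sqrt{t})=B_h(x,\sqrt{\lambda^{-2}t})$, so that
\begin{equation*}
t^{n/2}\cdot \fint_{B_{\hat h}(x,\sqrt{t})}|\nabla^{\hat h} \hat g(t)|^n \,d\mathrm{vol}_{\hat h}=\left(\lambda^{-2}t\right)^{n/2}\cdot \fint_{B_{h}(x,\sqrt{\lambda^{-2}t})}|\tilde\nabla g(\lambda^{-2}t)|^n \,d\mathrm{vol}_{h},
\end{equation*}
and assumptions (i) and (ii) are preserved under the rescaling.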
Let $c_1$ be a small constant to be chosen. By assumption (iii), for the constant $c_1$, we may let $T_1\in (0,1]$ be the maximal time such that the conclusion holds on $(0,T_1)$. If $T_1<1$, then the conclusion fails at $t=T_1$. We claim that if $c_1$ is sufficiently small, then this will be impossible for any $\varepsilon\in (0,1)$.
Suppose on the contrary that $T_1<1$, then there is $x_0\in M$ such that
\begin{enumerate}
\item $|\tilde\nabla g(x_0,T_1)|>\frac12 \varepsilon T_1^{-1/2}$;
\item $\sup_{M}|\tilde \nabla g(t)|\leq \varepsilon t^{-1/2}$ for all $t\in (0,T_1]$.
\end{enumerate}
By considering $T_1^{-1}g(T_1t)$ and $T_1^{-1}h$, we may assume that
\begin{enumerate}
\item[(a)] $|\tilde\nabla g(x_0,1)|>\frac12 \varepsilon $;
\item[(b)] $\sup_{M}|\tilde \nabla g(t)|\leq \varepsilon t^{-1/2}$ for all $t\in (0,1]$.
\end{enumerate}
We remark here that since $T_1<1$, the rescaled background metric still satisfies $(\star)$. Applying Lemma~\ref{lma:localEstimate-fromC^1}, we deduce that $|\tilde\nabla^2g(t)|\leq C_2t^{-1}$ for some $C_2(n,\Lambda)>0$. In particular, for all $x\in B_h(x_0,\delta)$ with $\delta<1$,
\begin{equation}
|\tilde \nabla g(x,1)|\geq |\tilde \nabla g(x_0,1)|-C_2 \delta\geq \frac14 \varepsilon,
\end{equation}
provided that we choose $\delta=\frac1{4C_2}\varepsilon=c_3\varepsilon$ for some small $c_3(n,\Lambda)>0$. Hence,
\begin{equation}
\begin{split}
\fint_{B_h(x_0,1)}|\tilde \nabla g(1)|^n d\mathrm{vol}_h
&\geq \frac{|B_h(x_0,\delta)|}{|B_h(x_0,1)|}\fint_{B_h(x_0,\delta)}|\tilde \nabla g(1)|^n d\mathrm{vol}_h \\
&\geq \frac{|B_h(x_0,c_3\varepsilon)|}{|B_h(x_0,1)|} \cdot \frac{\varepsilon^{n}}{4^{n}} \\[1.5mm]
&\geq \left( c_4 \varepsilon^2\right)^n
\end{split}
\end{equation}
for some $c_4(n,\Lambda)>0$. Here we have used the volume comparison theorem to control the volume ratio up to a fixed scale. On the other hand, assumption (ii) implies that
\begin{equation}
\fint_{B_h(x_0,1)}|\tilde \nabla g(1)|^n d\mathrm{vol}_h \leq \left( c_1 \varepsilon^2\right)^n
\end{equation}
which is impossible if we choose $c_1(n,\Lambda)=\frac12 c_4>0$.
\end{proof}
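\medskip

\noindent Combining Lemma~\ref{lma:localEstimate-fromC^1} with Lemma~\ref{lma:localEstimate-fromC^1conc}, we see that, under the hypotheses of Lemma~\ref{lma:localEstimate-fromC^1conc}, for every $k\in \mathbb{N}$ there is $C(n,k,\Lambda)>0$ such that
\begin{equation*}
\sup_M |\tilde \nabla^k g(t)|\leq \frac{C(n,k,\Lambda)}{t^{k/2}}\quad \text{for all } t\in (0,T\wedge 1].
\end{equation*}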
By Lemma~\ref{lma:localEstimate-fromC^1conc}, the existence time can be characterized by the gradient concentration and the bi-Lipschitz estimates along the flow. The following lemma shows that if the gradient concentration is small, then the local $W^{1,n}$ norm can be estimated quantitatively in terms of its initial value.
\begin{lma}\label{borderline-C1}
For any $\Lambda>1$, there is $\delta_1(n,\Lambda)>0$ such that the following holds. Suppose $g(t)$ is a smooth solution to the Ricci-Deturck $h$-flow on $M\times [0,T]$ for some background metric $h$ satisfying $(\star)$ so that
\begin{enumerate}
\item[(i)] $\Lambda^{-1} h\leq g(t)\leq \Lambda h$ on $M\times [0,T]$;
\item[(ii)] $\forall x\in M$ and $t\in (0,T]$,
$$\left(\fint_{B_h(x,1)} |\tilde\nabla g(t)|^n d\mathrm{vol}_h\right)^{1/n} \leq \delta_{1};$$
\end{enumerate}
then there is $L_1(n,\Lambda)>0$ such that for all $x\in M$ and $t\in [0,T]$,
\begin{equation}
\int_{B_h(x,1)} |\tilde\nabla g(t)|^n d\mathrm{vol}_h \leq \int_{B_h(x,2)} |\tilde\nabla g_0|^n d\mathrm{vol}_h+L_1|B_h(x,2)| \cdot t.
\end{equation}
Here $|\Omega|$ denotes the measure of $\Omega$ with respect to $h$.
\end{lma}
\begin{proof}
In what follows, we will use $C_i$ to denote constants depending only on $n$ and $\Lambda$. Along the Ricci-Deturck $h$-flow, direct computation and the Cauchy-Schwarz inequality show that
\begin{equation}
\frac{\partial}{\partial t}|\tilde{\nabla} g|^{2}+\Lambda^{-1}|\tilde{\nabla}^{2}g|^{2}
\leq g^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}|\tilde{\nabla} g|^{2}+C_1|\tilde{\nabla} g|^{4}+C_1.
\end{equation}
Since $|\tilde{\nabla} g|$ may not be smooth, for sufficiently small $\sigma\in(0,\delta_{1})$ we define the smooth function $Q$ by
\begin{equation}
Q = \sqrt{|\tilde{\nabla} g|^{2}+\sigma^{2}}.
\end{equation}
It is clear that $|\tilde{\nabla} Q|\leq|\tilde{\nabla}^{2}g|$. Then
\begin{equation}\label{W 1 n estimate eqn 1}
\frac{\partial}{\partial t}Q^{2}+\Lambda^{-1}|\tilde{\nabla} Q|^{2} \leq g^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}Q^{2}+C_2(Q^{2}+1).
\end{equation}
Let $\eta$ be a cut-off function on $M$ such that $\eta\equiv1$ on $B_{h}(x,1)$, $\eta\equiv0$ on $M\setminus B_{h}(x,2)$ and $|\tilde\nabla \eta|\leq 10^4$. Then \eqref{W 1 n estimate eqn 1} implies that
\begin{equation}\label{W 1 n estimate eqn 2}
\begin{split}
& \int_{M}\eta^{4}Q^{n-2}\cdot \partial_tQ^{2}d\mathrm{vol}_{h}
+\Lambda^{-1}\int_{M}\eta^{4}Q^{n-2}|\tilde{\nabla} Q|^{2}d\mathrm{vol}_{h} \\
\leq {} & \int_{M}\eta^{4}Q^{n-2}g^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}Q^{2}d\mathrm{vol}_{h}+C_3\int_{M}\eta^{4}(Q^{n}+1)d\mathrm{vol}_{h}.
\end{split}
\end{equation}
For the first term on the right hand side, integrating by parts and using the Cauchy-Schwarz inequality,
\begin{equation}
\begin{split}
& \int_{M}\eta^{4}Q^{n-2}g^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}Q^{2}d\mathrm{vol}_{h} \\
= {} & -\int_{M}\tilde{\nabla}_{a}(\eta^{4}Q^{n-2}g^{ab})\tilde{\nabla}_{b}Q^{2}d\mathrm{vol}_{h} \\
\leq {} & -2(n-2)\Lambda^{-1}\int_{M}\eta^{4}Q^{n-2}|\tilde{\nabla} Q|^{2}d\mathrm{vol}_{h}
+C_4\int_{M}(\eta^{4}Q^{n}+\eta^{3}Q^{n-1})|\tilde{\nabla} Q|d\mathrm{vol}_{h} \\
\leq {} & -(n-2)\Lambda^{-1}\int_{M}\eta^{4}Q^{n-2}|\tilde{\nabla} Q|^{2}d\mathrm{vol}_{h}+C_5\int_{M}(\eta^{4}Q^{n+2}+\eta^{2}Q^{n})d\mathrm{vol}_{h}.
\end{split}
\end{equation}
Substituting this into \eqref{W 1 n estimate eqn 2},
\begin{equation}\label{W 1 n estimate eqn 3}
\begin{split}
& \frac{2}{n}\cdot\frac{\partial}{\partial t}\left(\int_{M}\eta^{4}Q^{n}d\mathrm{vol}_{h}\right)+\frac1{2\Lambda}\int_{M}\eta^{4}Q^{n-2}|\tilde{\nabla} Q|^{2}d\mathrm{vol}_{h} \\
\leq {} & C_6\int_{M}(\eta^{4}Q^{n+2}+\eta^{2}Q^{n}+\eta^{4}Q^{n}+\eta^{4})d\mathrm{vol}_{h}.
\end{split}
\end{equation}
Since $\eta$ is compactly supported on $B_h(x,2)$, the Sobolev inequality \eqref{Sobo-ineq} implies
\begin{equation}\label{W 1 n estimate eqn 4}
\begin{split}
& \left(\int_{M}(\eta^{4}Q^{n})^{\frac{n}{n-2}}d\mathrm{vol}_{h}\right)^{\frac{n-2}{n}}\\[1mm]
\leq{}& C_S \cdot |B_h(x,2)|^{-2/n}\cdot \int_{M}|\tilde{\nabla}(\eta^{2}Q^{\frac{n}{2}})|^{2}d\mathrm{vol}_{h} \\
\leq {} & C_7\cdot|B_h(x,2)|^{-2/n}\cdot\left(\int_{M}\eta^{4}Q^{n-2}|\tilde{\nabla} Q|^{2}d\mathrm{vol}_{h}
+\int_{M}\eta^{2}Q^{n}d\mathrm{vol}_{h}\right).
\end{split}
\end{equation}
Combining \eqref{W 1 n estimate eqn 3} and \eqref{W 1 n estimate eqn 4}, we conclude that
\begin{equation}\label{energy-ineq}
\begin{split}
& \frac{2}{n}\cdot\frac{\partial}{\partial t}\left(\int_{M}\eta^{4}Q^{n}d\mathrm{vol}_{h}\right)
+\frac{|B_h(x,2)|^{2/n}}{2C_7\Lambda}\left(\int_{M}(\eta^{4}Q^{n})^{\frac{n}{n-2}}d\mathrm{vol}_{h}\right)^{\frac{n-2}{n}} \\[1.5mm]
\leq {} & C_8\int_{M}(\eta^{4}Q^{n+2}+\eta^{2}Q^{n}+\eta^{4}Q^{n}+\eta^{4})d\mathrm{vol}_{h} \\
\leq {} & C_9\left(\int_{B_{h}(x,2)}Q^{n}d\mathrm{vol}_{h}\right)^{\frac{2}{n}}\left(\int_{M}(\eta^{4}Q^{n})^{\frac{n}{n-2}}d\mathrm{vol}_{h}\right)^{\frac{n-2}{n}}
+C_9\int_{B_{h}(x,2)}(Q^{n}+1)d\mathrm{vol}_{h}.
\end{split}
\end{equation}
Now we are going to control the first term on the right hand side by using assumption (ii). Let $\{z_i\}_{i=1}^N$ be a maximal set in $B_h(x,2)$ so that $d_h(z_i,z_j)>\frac14$ and
\begin{equation}
B_h(x,2)\subset \bigcup_{i=1}^N B_h(z_i,1)\subset B_h(x,4).
\end{equation}
Then the standard volume comparison argument shows that $N\leq C_n$ as $|\text{\rm Rm}(h)|\leq 1$.
Together with the definition of $Q$ and $\sigma\in(0,\delta_{1})$, we conclude that
\begin{equation}
\begin{split}
\int_{B_{h}(x,2)}Q^{n}d\mathrm{vol}_{h} &\leq \sum_{i=1}^N \int_{B_h(z_i,1)} |\tilde \nabla g(t)|^n d\mathrm{vol}_h+C_n\cdot \sigma^{n}\cdot|B_h(x,4)|\\[1mm]
&\leq C_n \cdot\delta_{1}^n \cdot |B_h(x,4)|\\[2mm]
&\leq \hat C_n\cdot\delta_{1}^n \cdot |B_h(x,2)|.
\end{split}
\end{equation}
Here we have used the volume comparison theorem in the last inequality. By putting this into \eqref{energy-ineq}, if we choose $\delta_{1}$ such that $C_{9}\hat{C}_{n}^{2/n}\delta_{1}^{2}\leq (2C_{7}\Lambda)^{-1}$, then
\begin{equation}
\frac{\partial}{\partial t}\left(\int_{M}\eta^{4}Q^{n}d\mathrm{vol}_{h}\right) \leq C_{10} |B_h(x,2)|.
\end{equation}
After integrating both sides on $[0,t]$ and letting $\sigma\to 0$, we obtain
\begin{equation}
\int_{B_{h}(x,1)}|\tilde{\nabla} g(t)|^{n}d\mathrm{vol}_{h}
\leq \int_{B_{h}(x,2)}|\tilde{\nabla} g(0)|^{n}d\mathrm{vol}_{h}+C_{10}|B_h(x,2)|\cdot t
\end{equation}
for all $(x,t)\in M\times[0,T]$. This completes the proof by applying the volume comparison theorem again.
\end{proof}
Next, we consider the trace estimate.
\begin{lma}\label{borderline-trace}
For any $\Lambda_0>1$, there are $\delta_2(n,\Lambda_0),S(n,\Lambda_0)>0$ such that the following holds. Suppose $g(t)$ is a smooth solution to the Ricci-Deturck $h$-flow on $M\times [0,T]$ for some background metric $h$ satisfying $(\star)$ so that
\begin{enumerate}
\item[(i)] $\Lambda_0^{-1} h\leq g_0\leq \Lambda_0 h$ on $M$;
\item[(ii)] $\forall x\in M$ and $t\in (0,T]$,
$$\left(\fint_{B_h(x,1)} |\tilde\nabla g(t)|^n d\mathrm{vol}_h\right)^{1/n} \leq \delta_2;$$
\item[(iii)] $\sup_{M\times [0,T]} |\tilde \nabla^k g(t)|<+\infty$ for all $k\in \mathbb{N}$.
\end{enumerate}
then on $M\times [0,T\wedge S]$,
\begin{equation}
\Lambda^{-1}h\leq g(t)\leq \Lambda h
\end{equation}
where $\Lambda=\Upsilon_n\Lambda_0$ for some large dimensional constant $\Upsilon_n>0$.
\end{lma}
\begin{proof}
We may assume $T\leq 1$. By assumptions (i) and (iii), we may let $T_1\in(0,T]$ be the maximal time such that the conclusion holds for all $t\in[0,T_1)$. If $T_1<T$, then the conclusion fails at $t=T_1$. It suffices to show that if $\delta_{2}(n,\Lambda_0)$ is sufficiently small, then $T_1$ is bounded from below by a constant depending only on $n$ and $\Lambda_0$. We will use $C_i$ to denote constants depending only on $n$ and $\Lambda_0$.
For any $x\in M$ and $t\in (0,T_1]$, assumption (ii) and the volume comparison theorem imply
\begin{equation}
\begin{split}
t^{n/2} \fint_{B_h(x,\sqrt{t})} |\tilde \nabla g(t)|^n d\mathrm{vol}_h
&=\frac{t^{n/2}}{|B_h(x,\sqrt{t})|} \int_{B_h(x,\sqrt{t})} |\tilde \nabla g(t)|^n d\mathrm{vol}_h \\
&\leq t^{n/2}\cdot \frac{|B_h(x,1)|}{|B_h(x,\sqrt{t})|} \fint_{B_h(x,1)} |\tilde \nabla g(t)|^n d\mathrm{vol}_h \\
&\leq t^{n/2} \cdot \frac{V_{-1}(1)}{V_{-1}(\sqrt{t})}\cdot \delta_{2}^{n}\\[1.5mm]
&\leq C_n\delta_{2}^{n} .
\end{split}
\end{equation}
where $V_{-1}(r)$ denotes the volume of a geodesic ball of radius $r$ in the space form $M_{-1}^{n}$. Since $\Lambda^{-1}h\leq g(t)\leq \Lambda h$ on $M\times [0,T_1]$, Lemma~\ref{lma:localEstimate-fromC^1conc} implies that, for all $t\in (0,T_1]$,
\begin{equation}\label{pre-C1}
\sup_M |\tilde \nabla g(t)|\leq \frac{C_0\delta_{2}^{1/2}}{\sqrt{t}}
\end{equation}
for some constant $C_0(n,\Lambda_0)>0$ as long as $\delta_{2}$ is sufficiently small depending only on $n$ and $\Lambda_0$.
We want to consider the upper bound of the metric $g(t)$ at $t=T_1$; it suffices to control $\operatorname{tr}_hg$ from above. Since the conclusion fails at $t=T_1$, there is $x_0\in M$ such that $\operatorname{tr}_hg (x_0,T_1)>\frac12\Lambda$. By \eqref{pre-C1}, for all $x\in B_h(x_0,r)$ with $r\in(0,1)$,
\begin{equation}
\operatorname{tr}_hg (x,T_1)\geq \operatorname{tr}_hg(x_0,T_1)-C_1\delta_{2}^{1/2} T_1^{-1/2}r
\end{equation}
and hence
\begin{equation}\label{inter-trace-1}
\begin{split}
\left(\int_{B_h(x_0,r)} \operatorname{tr}_hg\; d\mathrm{vol}_h\right)\bigg|_{t=T_1}
\geq \left( \frac12 \Lambda-C_1\delta_{2}^{1/2} T_1^{-1/2}r\right)|B_h(x_0,r)|.
\end{split}
\end{equation}
We choose $r$ small enough so that the right hand side is positive; we may assume $r\leq \frac12$ and will fix the precise choice of $r$ later.
On the other hand, taking trace on \eqref{eqn: h-flow} infers that
\begin{equation}\label{equ-trace}
\frac{\partial}{\partial t}\operatorname{tr}_{h}g
\leq g^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}\operatorname{tr}_hg+C_2(|\tilde{\nabla} g|^{2}+1).
\end{equation}
Let $\eta$ be a cut-off function on $M$ such that $\eta \equiv 1$ on $B_{h}(x_{0},r)$, $\eta\equiv0$ on $M\setminus B_{h}(x_{0},2r)$ and $|\tilde{\nabla}\eta|\leq 10^4r^{-1}$. Integrating \eqref{equ-trace} and using integration by parts yield that
\begin{equation}
\begin{split}
& \frac{\partial}{\partial t}\int_{B_{h}(x_{0},2r)} \eta\operatorname{tr}_{h}g(t)\; d\mathrm{vol}_{h} \\
\leq {} & C_3\int_{B_{h}(x_{0},2r)}|\tilde{\nabla} \eta|\cdot|\tilde{\nabla} g(t)|\;d\mathrm{vol}_{h}
+C_2\int_{B_{h}(x_{0},2r)}(|\tilde{\nabla} g(t)|^{2}+1)d\mathrm{vol}_{h} \\
\leq {} & C_3\left(\int_{B_{h}(x_{0},2r)}|\tilde{\nabla}\eta|^{\frac{n}{n-1}}d\mathrm{vol}_{h}\right)^{\frac{n-1}{n}}
\left(\int_{B_{h}(x_{0},2r)}|\tilde{\nabla} g(t)|^{n}d\mathrm{vol}_{h}\right)^{\frac{1}{n}} \\
& +C_2|B_h(x_0,2r)|^{\frac{n-2}{n}}
\left(\int_{B_{h}(x_{0},2r)}|\tilde{\nabla} g(t)|^{n}d\mathrm{vol}_{h}\right)^{\frac{2}{n}}+C_2|B_h(x_0,2r)|\\
\leq {} & C_3r^{-1}\delta_{2}|B_h(x_0,2r)|^\frac{n-1}{n}|B_h(x_0,1)|^\frac1n
+C_2\delta_{2}^2 |B_h(x_0,2r)|^{\frac{n-2}{n}} |B_h(x_0,1)|^\frac{2}{n}\\[2.5mm]
&+C_2|B_h(x_0,2r)|\\[2mm]
\leq {} &C_4 (\delta_{2} r^{-2}+1) |B_h(x_0,2r)|.
\end{split}
\end{equation}
Here we have used the volume comparison theorem in the last inequality. By integrating the above on $[0,T_1]$ together with (i), we conclude that
\begin{equation}
\begin{split}
&\quad \left(\int_{B_h(x_0,r)} \operatorname{tr}_hg\; d\mathrm{vol}_h\right)\bigg|_{t=T_1}\\
&\leq \int_{B_h(x_0,2r)} \operatorname{tr}_hg_0\; d\mathrm{vol}_h+ C_4 T_1(\delta_{2} r^{-2}+1) |B_h(x_0,2r)|\\[2mm]
&\leq |B_h(x_0,2r)| \left( n\Lambda_0+C_4 T_1+C_4 T_1\delta_{2} r^{-2} \right).
\end{split}
\end{equation}
Combining this with \eqref{inter-trace-1} and the volume comparison theorem,
\begin{equation}
\frac12\Upsilon_n \Lambda_0-C_1\delta_{2}^{1/2} T_1^{-1/2}r \leq C_n \left( n\Lambda_0+C_4 T_1+C_4 T_1\delta_{2} r^{-2} \right).
\end{equation}
If we fix $\Upsilon_n=4nC_n$ and choose $r=T_1^{1/2}$, then we see that $T_1\geq C_5^{-1}$ if $\delta_{2}$ is small enough depending only on $n$ and $\Lambda_0$. This completes the proof of the upper bound of $g(t)$.
The lower bound of $g(t)$ can be proved using an identical argument by considering the function $\operatorname{tr}_{g(t)}h$ instead. The evolution equation of $\operatorname{tr}_{g(t)}h$ is almost identical to that of \eqref{equ-trace}, see \cite{Shi1989,Simon2002} for example.
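Indeed, writing $\operatorname{tr}_{g(t)}h=g^{ij}h_{ij}$ and using $\tilde{\nabla} h=0$ together with the metric equivalence on $[0,T_1]$, one checks from \eqref{eqn: h-flow} that
\begin{equation*}
\frac{\partial}{\partial t}\operatorname{tr}_{g}h \leq g^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}\operatorname{tr}_{g}h+C(n,\Lambda_0)\left(|\tilde{\nabla} g|^{2}+1\right),
\end{equation*}
so the integral argument above applies verbatim with $\operatorname{tr}_hg$ replaced by $\operatorname{tr}_{g}h$.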
\end{proof}
The following proposition shows that the solution of the $h$-flow will be uniformly smooth up to $t=0$ on regions where $g_0$ is smooth. This will be useful in studying the regularity of singular metrics which are smooth outside some singular set.
\begin{prop}\label{prop:local-smoothness-Ck}
Suppose $g(t)$ is a smooth solution to the Ricci-Deturck $h$-flow on $M\times [0,T]$ for some background metric $h$ satisfying $(\star)$ so that
\begin{enumerate}
\item[(i)] $\Lambda^{-1}h\leq g(t)\leq \Lambda h$ on $M\times [0,T]$;
\item[(ii)] For all $k\in \mathbb{N}$, there is $C(n,k,\Lambda)>0$ such that for all $t\in (0,T]$,
$$\sup_M |\tilde\nabla^k g(t)|\leq C(n,k,\Lambda)t^{-k/2};$$
\item[(iii)] There is $\Omega\Subset M, k_0\in \mathbb{N}$ so that $\sum_{m=1}^{k_0}\sup_\Omega|\tilde\nabla^mg_0| <\Upsilon_0$;
\end{enumerate}
then for all $\tilde\Omega\Subset \Omega$, there is $L(n,h,\tilde\Omega,\Omega,\Lambda,\Upsilon_0)$ such that for all $t\in [0,T]$,
\begin{equation}
\sup_{\tilde\Omega} \sum_{m=1}^{k_0}|\tilde\nabla^{m} g(t)|\leq L.
\end{equation}
\end{prop}
\begin{proof}
We will use $C_i$ to denote constants depending only on $n$, $\Lambda$ and $\Upsilon_0$. By the proof of \cite[Lemma 4.2]{Shi1989}, it suffices to consider the case of $k_0=1$. Let $x_0\in \tilde\Omega$ and $r_0\in(0,1)$ such that $B_h(x_0,r_0)\Subset \Omega$. We claim that there is $L(n,h,\Omega,\Lambda,\Upsilon_0)>0$ such that
\begin{equation}\label{prop:local-smoothness-Ck claim}
|\tilde\nabla g(x,t)| < L\cdot \rho(x)^{-1}
\end{equation}
for all $(x,t)\in B_h(x_0,r_0)\times [0,T]$ where
\begin{equation}
\rho(x)=\sup\{ r\in (0,r_0): B_h(x,r)\Subset B_h(x_0,r_0)\}.
\end{equation}
Then the conclusion follows immediately from the above claim \eqref{prop:local-smoothness-Ck claim}.
Next, we will prove \eqref{prop:local-smoothness-Ck claim}; the large constant $L$ will be determined later, and we assume $L>\Upsilon_0$. Since $r_{0}<1$, assumption (iii) implies that there is $T_{1}>0$ such that \eqref{prop:local-smoothness-Ck claim} holds on $B_h(x_0,r_0)\times [0,T_{1}]$. If $T_{1}=T$, then we are done. If $T_{1}<T$, then
\begin{equation}\label{prop:local-smoothness-Ck eqn 1}
|\tilde\nabla g(x,t)| < L\cdot \rho(x)^{-1}
\end{equation}
for all $(x,t)\in B_h(x_0,r_0)\times [0,T_{1})$, while there is $x_{1}\in B_{h}(x_{0},r_{0})$ such that
\begin{equation}\label{prop:local-smoothness-Ck eqn 2}
|\tilde\nabla g(x_{1},T_{1})| = L\cdot \rho_{1}^{-1},
\end{equation}
where $\rho_{1}=\rho(x_{1})$. \eqref{prop:local-smoothness-Ck eqn 1} implies that for all $(x,t)\in B_h(x_1,\frac12\rho_1)\times [0,T_1]$,
\begin{equation}
|\tilde\nabla g(x,t)| \leq 2L\rho_{1}^{-1}.
\end{equation}
Combining assumption (ii) and \eqref{prop:local-smoothness-Ck eqn 2}, we obtain $L\rho_{1}^{-1}\leq C_{0}T_{1}^{-1/2}$.
To better illustrate the dependency, we let
\begin{equation}
r=\frac18 \rho_1, \ \ \tilde T_1=64\rho_1^{-2}T_1
\end{equation}
and consider the rescaled metrics $\hat g(t)=r^{-2}g(r^2t)$ and $\hat h=r^{-2}h$ on $B_{\hat h}(x_1,4)\times [0,\tilde T_1]$ which satisfy
\begin{equation}\label{rescaled-conditions}
\left\{
\begin{array}{ll}
\Lambda^{-1} \hat h\leq \hat g(t)\leq \Lambda \hat h;\\[1mm]
|\nabla^{\hat h}\hat g(t)|\leq \frac14 L;\\[1mm]
|\nabla^{\hat h}\hat g(0)|\leq \frac18 \rho_1\Upsilon_0 ;\\[1mm]
|\nabla^{\hat h}\hat g|(x_1,\tilde T_1)=\frac18 L.\\
\end{array}
\right.
\end{equation}
We will work on $B_{\hat h}(x_1,4)\times [0,\tilde T_1]$. Recalling $L\rho_{1}^{-1}\leq C_{0}T_{1}^{-1/2}$, we obtain
\begin{equation}\label{prop:local-smoothness-Ck eqn 3}
L^2\tilde T_1 = 64L^{2}\rho_{1}^{-2}T_{1} \leq 64 C_{0}^{2}.
\end{equation}
Using \eqref{eqn: h-flow} and \eqref{rescaled-conditions}, direct calculation shows that there are $C_1,C_2,C_3>0$ such that
\begin{equation}
\left\{
\begin{array}{ll}
\left( \frac{\partial}{\partial t}-\hat g^{ij}\nabla^{\hat h}_i\nabla^{\hat h}_j\right)\left(e^{-C_1L^2 t}\operatorname{tr}_{\hat h}\hat g\right)
\leq -e^{-C_1L^2 t}|\nabla^{\hat h}\hat g|^2;\\[3mm]
\left( \frac{\partial}{\partial t}-\hat g^{ij}\nabla^{\hat h}_i\nabla^{\hat h}_j\right) \left( e^{-C_2L^2t}|\nabla^{\hat h}\hat g|^2 \right)
\leq -C_3^{-1}e^{-C_2L^2t}|\nabla^{\hat h,2}\hat g|^2.
\end{array}
\right.
\end{equation}
Let $\eta$ be a cut-off function on $M$ such that $\eta\equiv 1$ on $B_{\hat h}(x_1,1)$, $\eta\equiv 0$ on $M\setminus B_{\hat h}(x_1,4)$ and $|\nabla^{\hat h}\eta|\leq10^{4}$. Consider the test function
\begin{equation}
F=e^{-\tilde CL^2t} \eta^{2} |\nabla^{\hat h}\hat g|^2+\tilde C e^{-C_1L^2 t} \operatorname{tr}_{\hat h}\hat g
\end{equation}
where $\tilde C\geq C_2$ is a constant to be determined later. Then \eqref{rescaled-conditions} and the Cauchy-Schwarz inequality imply
\begin{equation}
\begin{split}
& \left( \frac{\partial}{\partial t}-\hat g^{ij}\nabla^{\hat h}_i\nabla^{\hat h}_j\right)F \\[1mm]
\leq {} & 10^{4}e^{-\tilde CL^2t}\eta\big|\nabla^{\hat h}|\nabla^{\hat h}\hat g|^2\big|
-C_3^{-1}e^{-\tilde CL^2t}\eta^{2}|\nabla^{\hat h,2}\hat g|^2
-\tilde C e^{-C_1L^2t} |\nabla^{\hat h}\hat g|^2\\[1mm]
\leq {} & C_4 e^{-\tilde C L^2 t}|\nabla^{\hat h}\hat g|^2-\tilde Ce^{-C_1L^2t} |\nabla^{\hat h}\hat g|^2.
\end{split}
\end{equation}
By \eqref{prop:local-smoothness-Ck eqn 3}, we have $L^2t\leq L^{2}\tilde T_{1}\leq64C_{0}^{2}$ for all $t\in[0,\tilde T_1]$. Then, after choosing $\tilde C$ sufficiently large depending only on $n$ and $\Lambda$,
\begin{equation}
\left( \frac{\partial}{\partial t}-\hat g^{ij}\nabla^{\hat h}_i\nabla^{\hat h}_j\right)F \leq 0.
\end{equation}
Therefore, the maximum principle implies
\begin{equation}
F(x_1,\tilde T_1)
\leq \max\left\{\sup_{B_{\hat h}(x_1,4)} F(\cdot, 0), \sup_{\partial B_{\hat h}(x_1,4)\times [0,\tilde T_1]} F \right\}
\leq \Big(\frac18 \rho_1 \Upsilon_0\Big)^2+n\tilde C \Lambda.
\end{equation}
On the other hand, by \eqref{rescaled-conditions} and \eqref{prop:local-smoothness-Ck eqn 3},
\begin{equation}
F(x_1,\tilde T_1)\geq \frac1{64}e^{-64 C_0^{2}\tilde C} L^2.
\end{equation}
However, the above is impossible if we initially choose $L$ sufficiently large. This shows that, with this choice of $L$, we must have $T_{1}=T$, which proves the claim \eqref{prop:local-smoothness-Ck claim} and completes the proof in the case $k_0=1$. When $k_{0}>1$, the higher order estimates can be proved by a similar argument or by the Bernstein-Shi trick as in \cite{Shi1989}.
\end{proof}
We remark here that the above estimate is far from optimal, but it suffices for our purpose. In particular, using an iteration method one should be able to improve the dependency of some of the constants. Next, we will establish a uniform time-zero regularity of the Ricci-Deturck flow with respect to the $L^p_{\mathrm{loc}}$ topology under metric equivalence and small gradient concentration assumptions.
\begin{prop}\label{timezero-regularity-W1n}
For any $\Lambda>1$, there is $\delta_{3}(n,\Lambda)>0$ such that the following holds. Suppose $g(t)$ is a smooth solution to the Ricci-Deturck $h$-flow on $M\times [0,T]$ for some background metric $h$ satisfying $(\star)$ so that
\begin{enumerate}
\item[(i)] $\Lambda^{-1} h\leq g(t)\leq \Lambda h$ on $M\times [0,T]$;
\item[(ii)] $\forall x\in M$ and $t\in (0,T]$,
$$\left(\fint_{B_h(x,1)} |\tilde\nabla g(t)|^n d\mathrm{vol}_h\right)^{1/n} \leq \delta_{3};$$
\end{enumerate}
then for all $p>0$ there is $L_p(p,n,\Lambda)>0$ such that for $x\in M$ and $t,s\in [0,T]$,
\begin{equation}
\fint_{B_h(x,1)} | g(t)-g(s)|^p d\mathrm{vol}_h \leq L_p|t-s|.
\end{equation}
\end{prop}
\begin{proof}
We use $C_i$ to denote constants depending only on $n$ and $\Lambda$. Fix $x\in M$, and let $\eta$ be a smooth cut-off function on $M$ such that $\eta\equiv 1$ on $B_h(x,1)$, $\eta\equiv 0$ on $M\setminus B_h(x,2)$ and $|\tilde{\nabla}\eta|\leq10^{4}$. We assume without loss of generality that $t>s$ and define $\alpha=g(t)-g(s)$. Then $\partial_{t}\alpha_{ij}=\partial_{t}g_{ij}$ and assumption (i) implies $|\alpha|\leq2\Lambda$. Using \eqref{eqn: h-flow} and integrating by parts,
\begin{equation}\label{timezero-regularity-W1n eqn 1}
\begin{split}
& \frac{\partial}{\partial t}\int_M \eta^{2} |{\alpha}|^2 \; d\mathrm{vol}_h \\
= {} & 2\int_M \eta^{2} \, \langle{\alpha}, g^{pq}\tilde\nabla_p \tilde\nabla_q g+ \widetilde{\mathrm{Rm}}*g^{-1}*g
+ g^{-1}*g^{-1}*\tilde\nabla g*\tilde \nabla g \rangle \; d\mathrm{vol}_h \\
\leq {} & 2\int_M \eta^{2} h^{ik}h^{jl} {\alpha}_{kl}g^{pq}\tilde\nabla_p \tilde\nabla_q g_{ij}d\mathrm{vol}_{h}
+C_{1}\int_{M}\eta^{2}|\alpha|(1+|\tilde{\nabla} g|^{2})d\mathrm{vol}_{h} \\
= {} & -2\int_M h^{ik}h^{jl}\tilde{\nabla}_{p}(\eta^{2}{\alpha}_{kl}g^{pq})\tilde\nabla_q g_{ij}d\mathrm{vol}_{h}
+2C_{1}\Lambda\int_{M}\eta^{2}(1+|\tilde{\nabla} g|^{2})d\mathrm{vol}_{h}.
\end{split}
\end{equation}
For the first term on the right hand side, using $g_{ij}=\alpha_{ij}+g_{ij}(s)$, the Cauchy-Schwarz inequality and $|\alpha|\leq 2\Lambda$,
\begin{equation}
\begin{split}
& -2\int_M h^{ik}h^{jl}\tilde{\nabla}_{p}(\eta^{2}{\alpha}_{kl}g^{pq})\tilde\nabla_q g_{ij}d\mathrm{vol}_{h} \\
= {} & -2\int_M h^{ik}h^{jl}\tilde{\nabla}_{p}(\eta^{2}{\alpha}_{kl}g^{pq})\tilde\nabla_q\alpha_{ij}d\mathrm{vol}_{h}
-2\int_M h^{ik}h^{jl}\tilde{\nabla}_{p}(\eta^{2}{\alpha}_{kl}g^{pq})\tilde\nabla_q g_{ij}(s)d\mathrm{vol}_{h} \\
\leq {} & -2\Lambda^{-1}\int_{M}\eta^{2}|\tilde{\nabla}\alpha|^{2}d\mathrm{vol}_{h}
+C_{2}\int_{M}(\eta|\alpha||\tilde{\nabla}\alpha|+\eta^{2}|\alpha||\tilde{\nabla} g||\tilde{\nabla}\alpha|)d\mathrm{vol}_{h} \\
& +C_{2}\int_M (\eta|\alpha|+\eta^{2}|\tilde{\nabla}\alpha|+\eta^{2}|\alpha||\tilde{\nabla} g|)|\tilde{\nabla} g(s)|d\mathrm{vol}_{h} \\
\leq {} & C_{2}\int_{B_{h}(x,2)}(|\alpha|^{2}+|\alpha|^{2}|\tilde{\nabla} g|^{2}
+|\alpha||\tilde{\nabla} g(s)|+|\tilde{\nabla} g(s)|^{2}+ |\alpha||\tilde{\nabla} g||\tilde{\nabla} g(s)|)d\mathrm{vol}_{h} \\
\leq {} & C_{2}\Lambda^{2}\int_{B_{h}(x,2)}(1+|\tilde{\nabla} g|^{2}+|\tilde{\nabla} g(s)|^{2})d\mathrm{vol}_{h}.
\end{split}
\end{equation}
Substituting this into \eqref{timezero-regularity-W1n eqn 1},
\begin{equation}
\frac{\partial}{\partial t}\int_M \eta^{2}|{\alpha}|^2 \; d\mathrm{vol}_h
\leq C_{3}\int_{B_{h}(x,2)}(1+|\tilde{\nabla} g|^{2}+|\tilde{\nabla} g(s)|^{2})d\mathrm{vol}_{h}.
\end{equation}
Using assumption (ii), the volume comparison theorem and a covering argument, we obtain
\begin{equation}
\begin{split}
\int_{B_{h}(x,2)}(1+|\tilde{\nabla} g|^{2}+|\tilde{\nabla} g(s)|^{2})d\mathrm{vol}_{h}
\leq C_{4}(1+\delta_{3}^{2})|B_{h}(x,2)| \leq C_{5}|B_{h}(x,1)|.
\end{split}
\end{equation}
It then follows that
\begin{equation}
\frac{\partial}{\partial t}\int_M \eta^{2}|{\alpha}|^2 \; d\mathrm{vol}_h
\leq C_{6}|B_{h}(x,1)|.
\end{equation}
Integrating on $[s,t]$ and recalling $\alpha=g(t)-g(s)$, we obtain
\begin{equation}
\fint_{B_{h}(x,1)} |g(t)-g(s)|^{2} \; d\mathrm{vol}_h
\leq \frac{1}{|B_{h}(x,1)|}\int_M \eta^{2}|{\alpha}|^2 \; d\mathrm{vol}_h
\leq C_{6}(t-s).
\end{equation}
This completes the proof of the case $p=2$. When $0<p<2$, the conclusion follows from the H\"older inequality. When $p>2$, using assumption (i), we obtain
\begin{equation}
\fint_{B_{h}(x,1)} |g(t)-g(s)|^{p} \; d\mathrm{vol}_h
\leq \Lambda^{p-2} \fint_{B_{h}(x,1)} |g(t)-g(s)|^{2} \; d\mathrm{vol}_h
\leq C_{6}\Lambda^{p-2}(t-s).
\end{equation}
\end{proof}
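\medskip

\noindent In particular, taking $s=0$ in Proposition~\ref{timezero-regularity-W1n} yields, for every $p>0$ and every $x\in M$,
\begin{equation*}
\left(\fint_{B_h(x,1)} | g(t)-g(0)|^p \,d\mathrm{vol}_h \right)^{1/p}\leq \left(L_p\, t\right)^{1/p},
\end{equation*}
so that $g(t)\to g(0)$ in $L^p_{\mathrm{loc}}$ as $t\to 0$, uniformly on $M$.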
We end this section by establishing a $W^{1,n}_{\mathrm{loc}}$ stability estimate along flows with small gradient concentration.
\begin{prop}\label{W1n stability}
For any $\Lambda>1$, there is $\delta_{4}(n,\Lambda)>0$ such that the following holds.
Suppose $g(t)$ is a smooth solution to the Ricci-Deturck $h$-flow on $M\times [0,T]$ for some background metric $h$ satisfying $(\star)$ so that
\begin{enumerate}
\item[(i)] $\Lambda^{-1} h\leq g(t)\leq \Lambda h$ on $M\times [0,T]$;
\item[(ii)] $\forall x\in M$ and $t\in (0,T]$,
\begin{equation*}
\left(\fint_{B_h(x,1)} |\tilde\nabla g(t)|^n d\mathrm{vol}_h\right)^{1/n} \leq \delta_{4}.
\end{equation*}
\end{enumerate}
Then for any smooth metric $\bar{g}$ on $M$, there is $\ov{L}(n,\Lambda,\bar{g})>0$ such that for all $x\in M$ and $t\in [0,T]$,
\begin{equation}
\int_{B_h(x,1)} |\tilde\nabla(g(t)-\bar{g})|^n d\mathrm{vol}_h \leq \int_{B_h(x,2)} |\tilde\nabla(g(0)-\bar{g})|^n d\mathrm{vol}_h+\ov{L}|B_h(x,2)| \cdot t.
\end{equation}
\end{prop}
\begin{proof}
We will use $C_{i}$ to denote constants depending only on $n$ and $\Lambda$, while $\ov{C}_{i}$ to denote constants depending only on $n$, $\Lambda$ and $\bar{g}$. Define
\begin{equation}
\beta_{ij} = g_{ij}(t) - \bar{g}_{ij}.
\end{equation}
Then along the Ricci-Deturck $h$-flow, we have
\begin{equation}
\frac{\partial}{\partial t}\beta_{ij} = g^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}\beta_{ij}+g^{-1}*\tilde{\nabla}^{2}\bar{g}+\widetilde{\mathrm{Rm}}*g^{-1}*g+g^{-1}*g^{-1}*\tilde{\nabla} g*\tilde{\nabla} g
\end{equation}
and so
\begin{equation}
\begin{split}
& \frac{\partial}{\partial t}|\tilde{\nabla}\beta|^{2}+2\Lambda^{-1}|\tilde{\nabla}^{2}\beta|^{2} \\
\leq {} & g^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}|\tilde{\nabla}\beta|^{2}+C_{1}|\tilde{\nabla}\beta||\tilde{\nabla} g||\tilde{\nabla}^{2}\beta|+C_{1}|\tilde{\nabla}\beta|^{2} \\
&+\ov{C}_{1}(|\tilde{\nabla} g|+1)|\tilde{\nabla}\beta|
+C_{1}|\tilde{\nabla}\beta|\left(|\tilde{\nabla} g||\tilde{\nabla}^{2}g|+|\tilde{\nabla} g|^{3} \right).
\end{split}
\end{equation}
Using $g=\beta+\bar{g}$ and the Cauchy-Schwarz inequality,
\begin{equation}
\begin{split}
& \frac{\partial}{\partial t}|\tilde{\nabla}\beta|^{2}+2\Lambda^{-1}|\tilde{\nabla}^{2}\beta|^{2} \\
\leq {} & g^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}|\tilde{\nabla}\beta|^{2}+C_{2}|\tilde{\nabla}\beta||\tilde{\nabla} g||\tilde{\nabla}^{2}\beta|
+C_{2}|\tilde{\nabla} g|^{2}|\tilde{\nabla}\beta|^{2}+C_{2}|\tilde{\nabla}\beta|^{2}+\ov{C}_{2} \\
&+C_{2}|\tilde{\nabla}\beta|\left(|\tilde{\nabla} g||\tilde{\nabla}^{2}\beta|+|\tilde{\nabla} g||\tilde{\nabla}^{2}\bar g|+|\tilde{\nabla} g|^{2}|\tilde{\nabla}\beta|
+|\tilde{\nabla} g|^{2}|\tilde{\nabla}\bar g| \right) \\
\leq {} & g^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}|\tilde{\nabla}\beta|^{2}+C_{3}|\tilde{\nabla}\beta||\tilde{\nabla} g||\tilde{\nabla}^{2}\beta|
+C_{3}|\tilde{\nabla} g|^{2}|\tilde{\nabla}\beta|^{2}+\ov{C}_{3}|\tilde{\nabla}\beta|^{2}+\ov{C}_{3} \\
\leq {} & g^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}|\tilde{\nabla}\beta|^{2}+\Lambda^{-1}|\tilde{\nabla}^{2}\beta|^{2}
+C_{3}|\tilde{\nabla} g|^{2}|\tilde{\nabla}\beta|^{2}+\ov{C}_{3}|\tilde{\nabla}\beta|^{2}+\ov{C}_{3}.
\end{split}
\end{equation}
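Here the mixed term in the last step is absorbed by the Cauchy-Schwarz inequality in the standard way; explicitly, with constants enlarged if necessary,
\begin{equation}
C_{3}|\tilde{\nabla}\beta||\tilde{\nabla} g||\tilde{\nabla}^{2}\beta|
\leq \Lambda^{-1}|\tilde{\nabla}^{2}\beta|^{2}
+\frac{C_{3}^{2}\Lambda}{4}\,|\tilde{\nabla} g|^{2}|\tilde{\nabla}\beta|^{2}.
\end{equation}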
For $\sigma\in(0,\delta_{4})$, we define the smooth function $H$ by
\begin{equation}
H = \sqrt{|\tilde{\nabla}\beta|^{2}+\sigma^{2}}
\end{equation}
Since $\partial_{t}H^{2}=\partial_{t}|\tilde{\nabla}\beta|^{2}$, $g^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}H^{2}=g^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}|\tilde{\nabla}\beta|^{2}$ and $|\tilde{\nabla}\beta|^{2}\leq H^{2}$, we obtain
\begin{equation}
\frac{\partial}{\partial t}H^{2}+\Lambda^{-1}|\tilde{\nabla}^{2}\beta|^{2} \leq g^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}H^{2}+C_{4}|\tilde{\nabla} g|^{2}H^{2}+\ov{C}_{4}H^{2}+\ov{C}_{4}.
\end{equation}
Let $\eta$ be a cut-off function such that $\eta\equiv1$ on $B_{h}(x,1)$, $\eta\equiv0$ on $M\setminus B_{h}(x,2)$ and $|\tilde\nabla \eta|\leq 10^4$ on $M$. By the similar calculation of \eqref{energy-ineq},
\begin{equation}\label{W 1 n estimate eqn 5}
\begin{split}
& \frac{2}{n}\cdot\frac{\partial}{\partial t}\left(\int_{M}\eta^{4}H^{n}d\mathrm{vol}_{h}\right)
+\frac{|B_{h}(x,2)|^{2/n}}{C_{5}\Lambda}\left(\int_{M}(\eta^{4}H^{n})^{\frac{n}{n-2}}d\mathrm{vol}_{h}\right)^{\frac{n-2}{n}} \\[1.5mm]
\leq {} & C_{5}\int_{M}\eta^{4}|\tilde{\nabla} g|^{2}H^{n}d\mathrm{vol}_{h}
+\ov{C}_{5}\int_{M}(\eta^{2}H^{n}+\eta^{4}H^{n}+\eta^{4})d\mathrm{vol}_{h} \\
\leq {} & C_{5}\left(\int_{B_{h}(x,2)}|\tilde{\nabla} g|^{n}d\mathrm{vol}_{h}\right)^{\frac{2}{n}}\left(\int_{M}(\eta^{4}H^{n})^{\frac{n}{n-2}}d\mathrm{vol}_{h}\right)^{\frac{n-2}{n}}
+\ov{C}_{5}\int_{B_{h}(x,2)}(H^{n}+1)d\mathrm{vol}_{h}.
\end{split}
\end{equation}
Using assumption (ii), we obtain
\begin{equation}\label{W 1 n estimate eqn 6}
\left(\int_{B_{h}(x,2)}|\tilde{\nabla} g|^{n}d\mathrm{vol}_{h}\right)^{\frac{2}{n}} \leq C_{n}\delta_{4}^{2}|B_{h}(x,2)|^{2/n}
\end{equation}
and
\begin{equation}\label{W 1 n estimate eqn 7}
\int_{B_{h}(x,2)}(H^{n}+1)d\mathrm{vol}_{h} \leq \ov{C}_{6}|B_{h}(x,2)|.
\end{equation}
If $\delta_{4}$ is chosen sufficiently small depending only on $n$ and $\Lambda$, then \eqref{W 1 n estimate eqn 5}, \eqref{W 1 n estimate eqn 6} and \eqref{W 1 n estimate eqn 7} imply
\begin{equation}
\frac{\partial}{\partial t}\left(\int_{M}\eta^{4}H^{n}d\mathrm{vol}_{h}\right)
\leq \ov{C}_{7}|B_{h}(x,2)|.
\end{equation}
After integrating both sides on $[0,t]$ and letting $\sigma\to0$, we obtain
\begin{equation}
\int_{B_h(x,1)} |\tilde\nabla(g(t)-\bar{g})|^n d\mathrm{vol}_h \leq \int_{B_h(x,2)} |\tilde\nabla(g(0)-\bar{g})|^n d\mathrm{vol}_h+\ov{C}_{8}|B_h(x,2)| \cdot t.
\end{equation}
\end{proof}
\section{Existence of Ricci-Deturck flow with rough initial data}\label{Sec:Existence}
In this section, we will establish the existence of the Ricci-Deturck $h$-flow starting from a metric $g_0$ which is uniformly bi-Lipschitz to $h$ on $M$ and lies in $W^{1,n}_{\mathrm{loc}}$. We start with the proof of Theorem~\ref{thm-existence}, which is a quantitative version of the existence theory.
\begin{proof}[Proof of Theorem~\ref{thm-existence}]
We assume $(M,h)$ to be complete non-compact. The compact case can be proved using a similar but simpler argument.
By Proposition~\ref{prop:RF-smoothing}, we may assume that the background metric $h$ satisfies $(\star)$ by a slight perturbation. By the result of \cite{Tam2010} (see also \cite{GreenWu1979}) and $|\text{\rm Rm}(h)|\leq 1$, there is $\rho\in C_{\mathrm{loc}}^\infty(M)$ such that $|\tilde\nabla\rho|^2+|\tilde\nabla^2 \rho|\leq 1$ and
\begin{equation}
C_n^{-1}(d_h(\cdot,p)+1)\leq \rho(\cdot) \leq C_n(d_h(\cdot,p)+1)
\end{equation}
for some dimensional constant $C_n$ and fixed point $p\in M$.
Let $\phi$ be a smooth function on $[0,+\infty)$ such that $\phi\equiv 1$ on $[0,1]$, $\phi\equiv 0$ on $[2,+\infty)$ and $0\leq- \phi'\leq 10$. Consider the following sequence of metrics:
\begin{equation}
h_{i}=\phi\left(i^{-1}\rho\right)g_0 +\left(1-\phi(i^{-1}\rho )\right)h
\end{equation}
so that $h_i\equiv h$ at infinity and coincides with $g_0$ on a large compact set. Moreover, $h_i\to g_0$ in $W^{1,n}_{\mathrm{loc}}$ as $i\to +\infty$. Since $h_i=h$ outside a large compact set, we may mollify $h_i$ slightly and obtain a smooth metric $g_{i,0}$ so that $g_{i,0}=h$ outside a compact set and $g_{i,0}\to g_0$ in $W^{1,n}_{\mathrm{loc}}$ as $i\to+\infty$. More precisely, we may assume that for all $i\in\mathbb{N}$ sufficiently large,
\begin{equation}
\left\{
\begin{array}{ll}
\Lambda_0^{-1} h\leq g_{i,0}\leq \Lambda_0 h;\\
\displaystyle \left(\fint_{B_h(x,1)} |\tilde \nabla g_{i,0}|^n \; d\mathrm{vol}_h \right)^{1/n}\leq \varepsilon
\end{array}
\right.
\end{equation}
for all $x\in M$ and for some $\varepsilon<\varepsilon_0$. We will specify the choice of $\varepsilon_0$ later.
\medskip
By \cite[Theorem A.1]{LammSimon2021}, which is a slight modification of Shi's classical existence theory \cite{Shi1989}, there is a short-time solution to the Ricci-Deturck $h$-flow $g_i(t)$ on $M\times [0,S_i]$ for some $S_i>0$ such that $g_i(0)=g_{i,0}$ and $\sup_M |\tilde \nabla^m g_i(t)|<+\infty$ for all $m\in \mathbb{N}$ on $M\times [0,S_i]$. Here we remark that the injectivity radius lower bound assumption in \cite[Theorem A.1]{LammSimon2021} is unnecessary as $h$ has bounded geometry. Alternatively, we can apply the existence result in \cite{Shi1989} to obtain a Ricci flow from $g_{i,0}$ and pull it back to a Ricci-Deturck $h$-flow, as all metrics considered are smooth with bounded curvature.
We may assume $S_i$ is the maximal existence time of the flow $g_{i}(t)$. Let $T_i\in (0,S_{i})$ be the maximal time such that for all $t\in [0,T_i)$,
\begin{enumerate}
\item[(i')] $(2\Lambda)^{-1} h< g_i(t)< 2\Lambda h$ on $M\times [0,T_i)$;
\item[(ii')] $\forall x\in M$ and $t\in (0,T_i)$,
$$\left(\fint_{B_h(x,1)} |\tilde\nabla g_i(t)|^n d\mathrm{vol}_h\right)^{1/n} < \delta;$$
\item[(iii')] for all $m\in \mathbb{N}$, $\sup_M |\tilde \nabla^m g_i(t)|<+\infty$ on $[0,T_i)$
\end{enumerate}
where $\Lambda=\Upsilon_n\Lambda_0$ is obtained in Lemma~\ref{borderline-trace} and $\delta(n,\Lambda_0)=\min\{ \delta_1(n,\Lambda),\delta_2(n,\Lambda_0)\}$ is obtained in Lemma~\ref{borderline-C1} and Lemma~\ref{borderline-trace} using this choice of $\Lambda$.
We will show that $S_i$ is bounded from below by a positive constant depending only on $n$ and $\Lambda_{0}$. It suffices to estimate the lower bound of $T_i$. By Lemma~\ref{lma:localEstimate-fromC^1} and Lemma~\ref{lma:localEstimate-fromC^1conc}, (iii') holds up to $t=T_i$. Therefore the maximality of $T_{i}$ implies that either (i') or (ii') fails at $t=T_i$. Using Lemma~\ref{borderline-C1}, a covering argument and the volume comparison theorem, we deduce that
\begin{equation}
\begin{split}
\int_{B_h(x,1)} |\tilde\nabla g_i(t)|^n d\mathrm{vol}_h &\leq \int_{B_h(x,2)} |\tilde\nabla g_{i,0}|^n d\mathrm{vol}_h+L_1|B_h(x,2)| \cdot t\\[1mm]
&\leq (C_n\varepsilon^n+C_{n}L_1t)\cdot |B_h(x,1)|.
\end{split}
\end{equation}
Therefore, by Lemma~\ref{borderline-trace}, if $\varepsilon_0(n,\Lambda_0)$ is sufficiently small, then $T_i> S$ for some $S(n,\Lambda_0)>0$.
Restricting $g_i(t)$ to $M\times [0,S]$, we may apply Lemma~\ref{lma:localEstimate-fromC^1} and Lemma~\ref{lma:localEstimate-fromC^1conc} to show that $g_i(t)$ is uniformly bounded in $C^k_{\mathrm{loc}}$ on any $[a,b]\subset (0,S]$ and is uniformly bi-Lipschitz on $[0,S]$. By a diagonal subsequence argument and the Arzel\`a-Ascoli theorem, we obtain $g(t)=\lim_{i\to +\infty}g_i(t)$ on $M\times (0,S]$ with
\begin{enumerate}
\item[(a)] $(2\Lambda)^{-1}h\leq g(t)\leq 2\Lambda h$ on $M\times (0,S]$;
\item[(b)] for all $k\in \mathbb{N}$, there is $C(k,n,\Lambda_0)>0$ such that on $(0,S]$,
$$\sup_M |\tilde\nabla^k g(t)|\leq C(k,n,\Lambda_0) t^{-k/2};$$
\item[(c)] for all $x\in M$ and $t\in (0,S]$,
\begin{equation*}
\left(\fint_{B_h(x,1)} |\tilde\nabla g(t)|^n d\mathrm{vol}_h\right)^{1/n} \leq (C_n\varepsilon^n+C_{n}L_1t)^{1/n}.
\end{equation*}
\end{enumerate}
The conclusion (d) follows from Lemma~\ref{lma:localEstimate-fromC^1conc} while the conclusion (e) follows from Proposition~\ref{prop:local-smoothness-Ck}.
\medskip
Now it remains to prove (f). By a covering argument, it is sufficient to work on $B_h(x,1)$ for fixed $x\in M$. Proposition \ref{timezero-regularity-W1n} shows
\begin{equation}\label{Ln convergence}
\left(\fint_{B_h(x,1)} |g_{i}(t)-g_{i,0}|^n d\mathrm{vol}_h\right)^{1/n} \leq L_{n}t.
\end{equation}
For any $\varepsilon>0$, there is a smooth metric $\bar g$ on $M$ such that
\begin{equation}\label{W1n convergence eqn 1}
\left(\fint_{B_h(x,1)} |\tilde\nabla(g_0-\bar g)|^n d\mathrm{vol}_h\right)^{1/n} \leq \varepsilon.
\end{equation}
Recall that $g_{i,0}\to g_{0}$ in $W^{1,n}_{\mathrm{loc}}$. Then for sufficiently large $i$,
\begin{equation}
\left(\fint_{B_h(x,1)} |\tilde\nabla(g_{i,0}-\bar g)|^n d\mathrm{vol}_h\right)^{1/n} \leq 2\varepsilon.
\end{equation}
By Proposition \ref{W1n stability} and a covering argument, there are $\ov{L}(n,\Lambda,\bar{g})$ and $C_{1}(n,\Lambda)$ such that
\begin{equation}
\left(\fint_{B_h(x,1)} |\tilde{\nabla}(g_{i}(t)-\bar g)|^n d\mathrm{vol}_h\right)^{1/n} \leq C_{1}\varepsilon+\ov{L}t.
\end{equation}
Combining this with \eqref{W1n convergence eqn 1},
\begin{equation}\label{W1n convergence}
\left(\fint_{B_h(x,1)} |\tilde{\nabla}(g_{i}(t)-g_{0})|^n d\mathrm{vol}_h\right)^{1/n} \leq (C_{1}+1)\varepsilon+\ov{L}t.
\end{equation}
In \eqref{Ln convergence} and \eqref{W1n convergence}, letting $i\to\infty$, then $t\to0$, and finally $\varepsilon\to0$, we obtain $g(t)\to g_{0}$ in $W^{1,n}_{\mathrm{loc}}$.
\end{proof}
Next, we will show that the solution of the Ricci-Deturck $h$-flow constructed in Theorem~\ref{thm-existence} is unique in the corresponding class. This will be crucial to study the persistence of scalar curvature under $W^{1,n}_{\mathrm{loc}}$ convergence.
\begin{thm}\label{thm-unique}
Suppose $h$ is a background metric satisfying $|\text{\rm Rm}(h)|\leq 1$. Then for any $\Lambda_0>1$, there is $\varepsilon_0(n,\Lambda_{0})>0$ such that the following holds. If $g_0$ is an $L^\infty\cap W^{1,n}_{\mathrm{loc}}$ Riemannian metric on $M$ such that
\begin{enumerate}
\item[(i)] $\Lambda_0^{-1}h\leq g_0\leq \Lambda_0 h$ on $M$;
\item[(ii)] for all $x\in M$,
$$\left(\fint_{B_h(x,1)} |\tilde \nabla g_0|^n \; d\mathrm{vol}_h \right)^{1/n}\leq \varepsilon_0$$
\end{enumerate}
then the solution to the Ricci-Deturck $h$-flow on $M\times (0,T]$ is unique within the class of solutions $g(t)$ satisfying
\begin{enumerate}
\item[(a)] $(\Upsilon_n\Lambda_{0})^{-1} h\leq g(t)\leq \Upsilon_n\Lambda_{0} h$ on $M\times (0,T]$;
\item[(b)] For all $x\in M$ and $t\in (0,T]$,
\[
\left(\fint_{B_h(x,1)} |\tilde \nabla g(t)|^n \; d\mathrm{vol}_h \right)^{1/n}\leq C(n,\Lambda_0) \varepsilon_0;
\]
\item[(c)] $g(t)\to g_0$ in $L^2_{\mathrm{loc}}$ as $t\to 0$,
\end{enumerate}
where $\Upsilon_n$ is the constant obtained in Lemma \ref{borderline-trace}.
\end{thm}
\begin{proof}
Suppose $g(t)$ and $\hat{g}(t)$ are two solutions of $h$-flow satisfying (a), (b) and (c). We will use $C_i$ to denote constants depending only on $n$ and $\Lambda_{0}$. By the Ricci-Deturck $h$-flow equation \eqref{eqn: h-flow}, ${\alpha}(t)=g(t)-\hat g(t)$ satisfies
\begin{equation}
\begin{split}
\partial_t {\alpha}_{ij}&=g^{pq}\tilde\nabla_p \tilde\nabla_q g_{ij}-\hat g^{pq}\tilde\nabla_p \tilde\nabla_q \hat g_{ij}+ \widetilde{\mathrm{Rm}}*g^{-1}*\hat g^{-1}* g*{\alpha} \\
&\quad + \widetilde{\mathrm{Rm}}*\hat g^{-1}*{\alpha} + g^{-1}*\hat g^{-1}*g^{-1}* \tilde\nabla g*\tilde\nabla g* {\alpha}\\
&\quad + \hat g^{-1} *\hat g^{-1} * g^{-1}*\tilde\nabla g*\tilde\nabla g*{\alpha} + \hat g^{-1} *\hat g^{-1} *\tilde\nabla g*\tilde\nabla {\alpha}\\
&\quad + \hat g^{-1} *\hat g^{-1} *\tilde\nabla \hat g*\tilde\nabla {\alpha}.
\end{split}
\end{equation}
Let $\eta$ be a smooth cutoff function on $M$ such that $\eta\equiv 1$ on $B_h(x,1)$, $\eta\equiv0$ on $M\setminus B_h(x,2)$ and $|\tilde\nabla\eta|\leq 10^4$. Using the bounds (a) and the Cauchy-Schwarz inequality, for any $\tau\in(0,1)$,
\begin{equation}
\begin{split}
& \frac{\partial}{\partial t}\int_M \eta^{4} |{\alpha}|^2 \; d\mathrm{vol}_h\\
\leq {} & \int_M 2\eta^{4}h^{ik}h^{jl} {\alpha}_{kl} \left[g^{pq}\tilde\nabla_p \tilde\nabla_q {\alpha}_{ij}+\left(g^{pq}-\hat g^{pq}\right)\tilde\nabla_p \tilde\nabla_q \hat g_{ij}\right]\\
&+C_1\tau^{-1} \int_M \eta^{4} |{\alpha}|^2(1+|\tilde \nabla g|^2+|\tilde \nabla \hat{g}|^2) \; d\mathrm{vol}_h+ \tau \int_M \eta^{4} |\tilde \nabla {\alpha}|^2 \; d\mathrm{vol}_h\\
= {} & -2\int_M h^{ik}h^{jl}\tilde\nabla_p\left( \eta^{4}{\alpha}_{kl} g^{pq} \right) \tilde\nabla_q {\alpha}_{ij}\; d\mathrm{vol}_h
+\int_M \eta^{4} g^{-1}*\hat g^{-1}*{\alpha} * {\alpha} *\tilde\nabla^2 \hat g \, d\mathrm{vol}_h\\
&+C_1\tau^{-1} \int_M \eta^{4} |{\alpha}|^2(1+|\tilde \nabla g|^2+|\tilde \nabla \hat{g}|^2) \; d\mathrm{vol}_h+ \tau \int_M \eta^{4} |\tilde \nabla {\alpha}|^2 \; d\mathrm{vol}_h\\
\leq {} & -C_2^{-1} \int_M \eta^{4}|\tilde\nabla {\alpha}|^2 \;d\mathrm{vol}_h+ \int_M \eta^{4} g^{-1}*\hat g^{-1}*{\alpha} * {\alpha} *\tilde\nabla^2 \hat g \, d\mathrm{vol}_h\\
&+ C_2\int_M \eta^{2} |{\alpha}|^2 \;d\mathrm{vol}_h+C_2\tau^{-1}\int_M \eta^{4} |{\alpha}|^2(1+|\tilde \nabla g|^2+|\tilde \nabla \hat{g}|^2)\, d\mathrm{vol}_h
+\tau \int_M \eta^{4} |\tilde \nabla {\alpha}|^2 \; d\mathrm{vol}_h.
\end{split}
\end{equation}
Choosing $\tau=(2C_{2})^{-1}$, we obtain
\begin{equation}\label{thm-unique eqn 1}
\begin{split}
& \frac{\partial}{\partial t}\int_M \eta^{4} |{\alpha}|^2 \; d\mathrm{vol}_h \\
\leq {} & -C_3^{-1} \int_M \eta^{4}|\tilde\nabla {\alpha}|^2 \;d\mathrm{vol}_h+ \int_M \eta^{4} g^{-1}*\hat g^{-1}*{\alpha} * {\alpha} *\tilde\nabla^2 \hat g \, d\mathrm{vol}_h\\
&+ C_3\int_M \eta^{2} |{\alpha}|^2 \;d\mathrm{vol}_h+C_3\int_M \eta^{4} |{\alpha}|^2(1+|\tilde \nabla g|^2+|\tilde \nabla \hat{g}|^2)\, d\mathrm{vol}_h.
\end{split}
\end{equation}
For the second term on the right hand side of \eqref{thm-unique eqn 1}, integrating by parts and using the Cauchy-Schwarz inequality, for any $\tau\in(0,1)$,
\begin{equation}\label{thm-unique eqn 3}
\begin{split}
&\quad \int_M \eta^{4} g^{-1}*\hat g^{-1}*{\alpha} * {\alpha} *\tilde\nabla^2 \hat g \, d\mathrm{vol}_h\\
&= \int_M \tilde \nabla \hat g * \tilde\nabla (\eta^{4} g^{-1} *\hat g^{-1} * {\alpha} *{\alpha}) \,d\mathrm{vol}_h\\
&\leq C_4 \int_{M} |\tilde\nabla \hat g| \left(\eta^{3}|{\alpha}|^2+\eta^{4}|\tilde\nabla g||{\alpha}|^2+\eta^{4}|\tilde\nabla \hat g||{\alpha}|^2+ \eta^{4} |{\alpha}| |\tilde\nabla {\alpha}| \right)d\mathrm{vol}_{h}\\
&\leq C_5 \int_{M}\left(\tau^{-1}\eta^{4}|{\alpha}|^2 (|\tilde \nabla g|^{2}+|\tilde\nabla \hat g|^{2})+ \eta^{2}|{\alpha}|^2 \right)d\mathrm{vol}_{h}
+ \tau\int_{M} \eta^{4}|\tilde \nabla {\alpha}|^2\;d\mathrm{vol}_{h}
\end{split}
\end{equation}
Choosing $\tau=(2C_{3})^{-1}$ and substituting \eqref{thm-unique eqn 3} into \eqref{thm-unique eqn 1},
\begin{equation}\label{thm-unique eqn 6}
\begin{split}
\frac{\partial}{\partial t}\int_M \eta^{4} |{\alpha}|^2 \; d\mathrm{vol}_h
\leq {} & -C_6^{-1} \int_M \eta^{4}|\tilde\nabla {\alpha}|^2 \;d\mathrm{vol}_h
+C_6\int_M |{\alpha}|^2 \eta^{2} \;d\mathrm{vol}_h \\
& +C_6\int_M \eta^{4}|{\alpha}|^2(1+|\tilde \nabla g|^2+|\tilde\nabla\hat g|^2) \; d\mathrm{vol}_h.
\end{split}
\end{equation}
We squeeze more negativity from the first term by using the Sobolev inequality \eqref{Sobo-ineq} and Kato's inequality:
\begin{equation}
\begin{split}
|B_h(x,2)|^{2/n} \left(\int_M (\eta^{4}|{\alpha}|^2)^\frac{n}{n-2} \right)^\frac{n-2}{n}
\leq {} & C_{7}\int_{M}|\tilde{\nabla}(\eta^{2}|\alpha|)|^{2}d\mathrm{vol}_{h} \\
\leq {} & C_{7}\int_{M}(\eta^{2}|\alpha|^{2}+\eta^{4}|\tilde{\nabla}\alpha|^{2})d\mathrm{vol}_{h}
\end{split}
\end{equation}
and then
\begin{equation}\label{thm-unique eqn 2}
\begin{split}
& -C_6^{-1} \int_M \eta^{4}|\tilde\nabla {\alpha}|^2 \;d\mathrm{vol}_h \\
\leq {} & -C_{8}^{-1}|B_h(x,2)|^{2/n} \left(\int_M (\eta^{4}|{\alpha}|^2)^\frac{n}{n-2} \right)^\frac{n-2}{n}
+ C_8\int_M \eta^{2} |{\alpha}|^2 \;d\mathrm{vol}_h.
\end{split}
\end{equation}
Substituting \eqref{thm-unique eqn 2} into \eqref{thm-unique eqn 6},
\begin{equation}\label{thm-unique eqn 4}
\begin{split}
& \frac{\partial}{\partial t}\int_M \eta^{4} |{\alpha}|^2 \; d\mathrm{vol}_h\\
\leq {} & -C_9^{-1} |B_h(x,2)|^{2/n}\left(\int_M (\eta^{4}|{\alpha}|^2)^\frac{n}{n-2} \;d\mathrm{vol}_h\right)^\frac{n-2}{n}\\
& + C_9\int_M |{\alpha}|^2 \eta^{2} \;d\mathrm{vol}_h+C_9\int_M \eta^{4}|{\alpha}|^2(1+|\tilde \nabla g|^2+|\tilde\nabla\hat g|^2) \; d\mathrm{vol}_h.
\end{split}
\end{equation}
By the bound (b) and a covering argument,
\begin{equation}
\begin{split}
& \int_M \eta^{4} |{\alpha}|^2 |\tilde\nabla g|^2 \;d\mathrm{vol}_h\\
\leq {} & \left(\int_M \left(\eta^{4}|{\alpha}|^2\right)^\frac{n}{n-2} \, d\mathrm{vol}_h \right)^\frac{n-2}{n} \left(\fint_{B_h(x,2)} |\tilde\nabla g|^{n} \, d\mathrm{vol}_h \right)^{2/n} |B_h(x,2)|^{2/n}\\
\leq {} & C_n\varepsilon_0 \left(\int_M \left(\eta^{4} |{\alpha}|^2\right)^\frac{n}{n-2} \, d\mathrm{vol}_h \right)^\frac{n-2}{n} |B_h(x,2)|^{2/n}.
\end{split}
\end{equation}
Similarly,
\begin{equation}
\int_M \eta^{4} |{\alpha}|^2 |\tilde\nabla \hat g|^2 \;d\mathrm{vol}_h
\leq C_n\varepsilon_0 \left(\int_M \left(\eta^{4} |{\alpha}|^2\right)^\frac{n}{n-2} \, d\mathrm{vol}_h \right)^\frac{n-2}{n}\cdot |B_h(x,2)|^{2/n}.
\end{equation}
Therefore, if $\varepsilon_0$ is sufficiently small depending only on $n$ and $\Lambda_{0}$, then \eqref{thm-unique eqn 4} can be reduced to
\begin{equation}\label{evo-difference}
\begin{split}
\frac{\partial}{\partial t}\int_M \eta^{4} |{\alpha}|^2 \; d\mathrm{vol}_h
&\leq C_{10}\int_M \eta^{2}|{\alpha}|^2 \;d\mathrm{vol}_h+C_{10}\int_M \eta^{4} |{\alpha}|^2 \; d\mathrm{vol}_h.
\end{split}
\end{equation}
For any $\varepsilon>0$, let $S_{\varepsilon}$ be the maximal time such that for all $(z,t)\in M\times (0,S_\varepsilon]$,
\begin{equation}\label{ap-diff-bound}
\int_{B_h(z,1)}|{\alpha}|^2\, d\mathrm{vol}_h<\varepsilon.
\end{equation}
Proposition~\ref{timezero-regularity-W1n} shows $S_{\varepsilon}>0$. We may assume $S_\varepsilon<T$; then, by the maximality of $S_\varepsilon$, there is $x_{0}\in M$ such that
\begin{equation}\label{thm-unique eqn 5}
\left(\int_{B_h(x_{0},1)}|{\alpha}|^2\, d\mathrm{vol}_h\right)\bigg|_{t=S_{\varepsilon}} \geq \frac{\varepsilon}{2}.
\end{equation}
We claim that $S_\varepsilon$ is bounded from below uniformly (independent of $\varepsilon$). Combining \eqref{evo-difference} with \eqref{ap-diff-bound}, and using a covering argument, for all $t\in (0,S_\varepsilon]$,
\begin{equation}
\frac{\partial}{\partial t}\left(e^{-C_{10}t}\int_M \eta^{4}|{\alpha}|^2 \; d\mathrm{vol}_h\right) \leq C_{11}\varepsilon e^{-C_{10}t} \leq C_{11}\varepsilon.
\end{equation}
Integrating this on $[s,S_\varepsilon]$, letting $s\to 0$, and using the fact that $g(t),\hat g(t)\to g_0$ in $L^2_{\mathrm{loc}}$ as $t\to 0$, we conclude that
\begin{equation}
\left(\int_{B_h(x,1)} |{\alpha}|^2 \, d\mathrm{vol}_h\right)\bigg|_{t=S_\varepsilon}\leq C_{11}\varepsilon e^{C_{10}S_\varepsilon} S_\varepsilon.
\end{equation}
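Combining this with \eqref{thm-unique eqn 5} yields the elementary relation (recorded explicitly for the reader)
\begin{equation}
\frac{\varepsilon}{2}\leq C_{11}\varepsilon e^{C_{10}S_\varepsilon} S_\varepsilon,
\qquad\text{equivalently}\qquad
S_\varepsilon e^{C_{10}S_\varepsilon}\geq (2C_{11})^{-1}.
\end{equation}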
Since the function $s\mapsto se^{C_{10}s}$ is increasing and vanishes at $s=0$, this forces $S_\varepsilon\geq S_0(n,\Lambda_0)>0$, which is independent of $\varepsilon$. This gives the uniqueness on $M\times (0,S_0]$ as $\varepsilon$ is arbitrary. The uniqueness beyond $S_0$ is standard thanks to the smoothness of $g(t)$ for $t>0$.
\end{proof}
\medskip
We now show that if $(M,h)$ is non-collapsed at infinity, then the small gradient concentration assumption in Theorem~\ref{thm-existence} can always be achieved by scaling if in addition $g_0\in W^{1,n}$ globally.
\begin{prop}\label{scaling-assumption}
Suppose $h$ is a smooth metric satisfying assumption $(\star)$. If in addition $\mathrm{inj}(M,h)>0$ and $g_0\in W^{1,n}$, then for any $\varepsilon>0$, there is $r_\varepsilon\in(0,1)$ such that for all $x\in M$, we have
\begin{equation}
\left( r_{\varepsilon}^n \fint_{B_h(x,r_{\varepsilon})} |\tilde\nabla g_0|^n\; d\mathrm{vol}_h\right)^{1/n}<\varepsilon.
\end{equation}
\end{prop}
\begin{proof}
By the assumption on the injectivity radius of $h$, there is $v_0>0$ such that for all $x\in M$,
\begin{equation}\label{scaling-assumption eqn 1}
\mathrm{Vol}_h\left( B_h(x,r)\right)\geq v_0r^n
\end{equation}
for all $r\in (0,1]$. Let $\varepsilon>0$ be fixed and $x_0\in M$. Since $g_{0}\in W^{1,n}$, there is $R>1$ such that
\begin{equation}\label{scaling-assumption eqn 2}
\left(\int_{M\setminus B_h(x_0,R)} |\tilde\nabla g_0|^n \; d\mathrm{vol}_h\right)^{1/n} <\varepsilon .
\end{equation}
We claim that there is $r_0(\varepsilon)\in(0,1)$ such that for all $x\in B_h(x_0,R+1)$,
\begin{equation}\label{scaling-assumption eqn 3}
\left(\int_{B_h(x,r_0)} |\tilde\nabla g_0|^n \; d\mathrm{vol}_h \right)^{1/n}<\varepsilon.
\end{equation}
Suppose the claim is false. Then there are sequences $x_i\in B_h(x_0,R+1)$ and $r_i\to 0$ such that
\begin{equation}
\left(\int_{B_h(x_i,r_i)} |\tilde\nabla g_0|^n \; d\mathrm{vol}_h \right)^{1/n} \geq \varepsilon.
\end{equation}
We may assume $x_i\to x_\infty\in B_h(x_0,R+2)$. Since $r_i\to 0$, we have $\lim_{i\to +\infty}\chi_{B_h(x_i,r_i)}= \chi_{\{x_\infty\}}$ almost everywhere, and Lebesgue's dominated convergence theorem implies that
\begin{equation}
\lim_{i\to +\infty}\left(\int_{B_h(x_i,r_i)} |\tilde\nabla g_0|^n \; d\mathrm{vol}_h \right)^{1/n} =0,
\end{equation}
which is impossible.
On the other hand, for all $x\in M\setminus B_h(x_0,R+1)$, we have $B_h(x,1)\Subset M\setminus B_h(x_0,R)$. By \eqref{scaling-assumption eqn 2},
\begin{equation}\label{scaling-assumption eqn 4}
\left(\int_{B_h(x,1)} |\tilde\nabla g_0|^n \; d\mathrm{vol}_h \right)^{1/n}
\leq \left(\int_{M\setminus B_h(x_0,R)} |\tilde\nabla g_0|^n \; d\mathrm{vol}_h \right)^{1/n} < \varepsilon.
\end{equation}
Then the conclusion follows from \eqref{scaling-assumption eqn 3} and \eqref{scaling-assumption eqn 4} after relabelling the constant, using \eqref{scaling-assumption eqn 1} and the volume comparison theorem.
\end{proof}
Proposition~\ref{scaling-assumption} in particular says that the rescaled metrics $\hat g_0=r_{\varepsilon}^{-2}g_0$ and $\hat h=r_{\varepsilon}^{-2}h$ remain uniformly bi-Lipschitz, while the small gradient concentration can be achieved everywhere, so that Theorem~\ref{thm-existence} can be applied to obtain a global Ricci-Deturck $\hat h$-flow $\hat g(t)$ starting from $\hat g_0$ (in the $W^{1,n}_{\mathrm{loc}}$ sense) with scaling-invariant estimates.
\begin{thm}\label{shorttime-compact}
Let $(M^n,h)$ be a complete Riemannian manifold with bounded curvature and $\mathrm{inj}(M,h)>0$. Suppose $g_0$ is an $L^\infty\cap W^{1,n}$ Riemannian metric on $M$ such that
\begin{equation}
\Lambda_0^{-1}h\leq g_0\leq \Lambda_0 h
\end{equation}
for some $\Lambda_0>1$. Then there is a smooth solution $g(t)$ to the Ricci-Deturck $h$-flow on $M\times (0,S]$ for some $S>0$ such that
\begin{enumerate}
\item[(a)] $(\Upsilon_n\Lambda_0)^{-1} h\leq g(t)\leq \Upsilon_n\Lambda_0 h$ on $M\times (0,S]$;
\item[(b)] For any $k\in \mathbb{N}$,
\begin{equation*}
\limsup_{t\to 0^+}\left(\sup_M t^{k/2}|\tilde\nabla^k g(t)|\right)= 0;
\end{equation*}
\item[(c)] If $g_0\in C^\infty_{\mathrm{loc}}(\Omega)$ for some $\Omega\Subset M$, then $g(t)\to g_0$ in $C^\infty_{\mathrm{loc}}(\Omega)$ as $t\to 0$;
\item[(d)] $g(t)\to g_0$ in $W^{1,n}_{\mathrm{loc}}$ as $t\to 0$,
\end{enumerate}
where $\Upsilon_n$ is the constant obtained in Lemma \ref{borderline-trace}.
\end{thm}
\begin{proof}
By scaling, the existence and conclusion (a), (c), (d) follow from Proposition~\ref{scaling-assumption} and Theorem~\ref{thm-existence}. It remains to show (b).
For any $\varepsilon>0$, let $r_\varepsilon$ be the constant obtained in Proposition~\ref{scaling-assumption}. Consider the rescaled metric $\hat h=r_\varepsilon^{-2}h$ and $\hat g_0=r_\varepsilon^{-2}g_0$. Since $r_\varepsilon<1$, the new background metric still satisfies $(\star)$ and $\Lambda_0^{-1}\hat h\leq \hat g_0\leq \Lambda_0 \hat h$. In addition, we have
\begin{equation}
\sup_{x\in M} \left(\fint_{B_{\hat h}(x,1)}|\nabla^{\hat h}\hat g_0|^n \; d\mathrm{vol}_{\hat h} \right)^{1/n}<\varepsilon.
\end{equation}
Applying Theorem~\ref{thm-existence} to $\hat g_0$ and $\hat h$ and rescaling back to $g_0$ and $h$, we obtain a short-time solution $g(t)$ on $M\times (0,T\cdot r_\varepsilon^2]$ with
\begin{equation}\label{estimate-compact}
\left\{
\begin{array}{ll}
(\Upsilon_n\Lambda_0)^{-1} h\leq g(t)\leq (\Upsilon_n\Lambda_0) h;\\[1mm]
\limsup_{t\to 0^+}\left(\sup_M t^{1/2}|\tilde \nabla g(t)|\right)\leq C(n,\Lambda_0)\varepsilon^{1/2};\\[2mm]
\sup_M t^{k/2}|\tilde \nabla^k g(t)|\leq C(n,k,\Lambda_0)
\end{array}
\right.
\end{equation}
which converges to $g_0$ as $t\to 0$ in $W_{\mathrm{loc}}^{1,n}$. Here we used the fact that the solution $g(t)$ is unique by Theorem~\ref{thm-unique}, so that all solutions constructed from rescaling coincide. Since $\varepsilon$ is arbitrary, we have
\begin{equation}\label{1st-ORDER}
\limsup_{t\to 0^+}\left(\sup_M t^{1/2}|\tilde \nabla g(t)|\right)=0.
\end{equation}
It remains to show that the same holds for $|\tilde\nabla^kg(t)|$, $k>1$. We start with $k=2$. We claim that
\begin{equation}
\limsup_{t\to 0^+}\left(\sup_M t|\tilde \nabla^2 g(t)|\right)=0.
\end{equation}
Suppose the assertion is false. Then there are sequences $x_i\in M$ and $t_i\to 0$ such that
\begin{equation}\label{shorttime-compact eqn 1}
t_i|\tilde\nabla^2 g|(x_i,t_i)> \varepsilon_1
\end{equation}
for some $\varepsilon_1>0$. Consider $g_i=t_i^{-1}g(t_i)$ and $h_i=t_i^{-1}h$. By the assumptions on $h$ and $t_i\to 0$, $(M,h_i,x_i)$ converges to the standard Euclidean space $(\mathbb{R}^n,g_{\mathrm{flat}},0)$ in the smooth pointed Cheeger-Gromov sense, while \eqref{estimate-compact} shows that $g_i$ converges to some smooth metric $g_\infty$ on $\mathbb{R}^n$ which is uniformly equivalent to $g_{\mathrm{flat}}$. By \eqref{shorttime-compact eqn 1}, we obtain $|D^2 g_\infty(0)|>0$. However, \eqref{1st-ORDER} implies that $Dg_\infty\equiv 0$ on $\mathbb{R}^n$, which is impossible. The higher order asymptotics can be proved similarly by an induction argument. This completes the proof.
\end{proof}
\begin{rem}
One might compare this result with the work of Miles Simon \cite{Simon2002}, where the $C^0$ assumption is now replaced by $W^{1,n}$. Both assumptions are imposed to rule out the possibility of a metric cone as the singular model (the possible limiting case). This is reflected in the curvature estimates (b).
\end{rem}
\section{Stability on Euclidean space}\label{Sec:stability}
In this section, we consider the case when $h$ is flat so that the assumption $(\star)$ is scaling invariant at all scales. We will study the asymptotic behaviour of the constructed Ricci-Deturck $h$-flow as $t\to +\infty$. The following long-time existence result follows directly from a scaling argument.
\begin{thm}\label{Thm:LT}
For any $\Lambda_0>1$, there is $\varepsilon_0(n,\Lambda_{0})>0$ such that the following holds. Suppose $(M,h)$ is a complete non-compact flat manifold. If $g_0\in L^\infty\cap W^{1,n}_{\mathrm{loc}}$ is a Riemannian metric (not necessarily smooth) such that
\begin{enumerate}
\item[(i)] $\displaystyle \Lambda_0^{-1}h \leq g_0\leq \Lambda_0 h$ on $M$;
\item[(ii)]
$\displaystyle \sup_{x\in M,r>1} \left(r^n\fint_{B_h(x,r)} |\tilde \nabla g_0|^n d\mathrm{vol}_h\right)^{1/n}\leq \varepsilon_0$ ;
\end{enumerate}
then there is a long-time solution $g(t)$ to the Ricci-Deturck $h$-flow on $M\times (0,+\infty)$ so that $g(t)\to g_0$ in $W^{1,n}_{\mathrm{loc}}$ as $t\to 0$ and for all $t\in (0,+\infty)$,
\begin{enumerate}
\item[(a)] $(C_n\Lambda_0)^{-1}h \leq g(t)\leq C_n\Lambda_0 h$ for some $C_n>1$;
\item[(b)] for all $k\in \mathbb{N}$, there is $C(n,k,\Lambda_0)>0$ such that for all $t\in (0,+\infty)$,
$$\sup_M |\tilde\nabla^kg(t)|^2\leq \frac{C(n,k,\Lambda_0)}{t^k}.$$
\item[(c)] for all $t\in (0,+\infty)$, $x\in M$ and $r>1$,
$$\left(r^n \fint_{B_h(x,r)} |\tilde\nabla g|^n\, d\mathrm{vol}_h \right)^{1/n} \leq C_n\varepsilon _0+C_0(n,\Lambda_0)(r^{-2}t)^{1/n}.$$
\end{enumerate}
In particular, $g(t)$ sub-converges to some flat metric on $M$ as $t\to +\infty$.
\end{thm}
\begin{proof}
Theorem \ref{thm-existence} guarantees the short-time existence of the solution $g(t)$ to the Ricci-Deturck $h$-flow. We will show $g(t)$ exists on $(0,+\infty)$ and satisfies (a), (b) and (c).
For any $i\in\mathbb{N}$, consider the rescaled metric $g_{i,0}=i^{-2}g_0$ and $h_i=i^{-2}h$. By Theorem~\ref{thm-existence}, there is a short-time solution $g_i(t)$ to the Ricci-Deturck $h_i$-flow on $M\times (0,T]$ with
\begin{enumerate}
\item[(a')] $(C_n\Lambda_0)^{-1}h_i \leq g_i(t)\leq C_n\Lambda_0 h_i$ for some $C_n>1$;
\item[(b')] For all $k\in \mathbb{N}$ and $t\in (0,T]$,
$$\sup_M |\nabla^{h_i,k}g_i(t)|_{h_i}^2\leq \frac{C(n,k,\Lambda_0)}{t^k};$$
\item[(c')] $g_i(t)\to g_{i,0}$ in $W^{1,n}_{\mathrm{loc}}$ as $t\to 0$.
\end{enumerate}
Rescale the flow $g_i(t)$ back to $\hat{g}_{i}(t)=i^2g_i(i^{-2}t)$, which is defined on $(0,i^2T]$. Since the solution is unique by Theorem~\ref{thm-unique}, $g(t)=\hat g_i(t)$ for any $i\in\mathbb{N}$ and hence $g(t)$ exists for all $t>0$. Then (a) and (b) follow from (a') and (b') since these estimates are scaling invariant. To establish (c), we apply Theorem \ref{thm-existence} to the rescaled flow $g_i(t)$ and obtain
\begin{equation}
\begin{split}
\left(i^n \fint_{B_{h}(x,i)} |\tilde\nabla g(i^2t)|^n \, d\mathrm{vol}_{h}\right)^{1/n}
= {} & \left( \fint_{B_{h_i}(x,1)} |\nabla^{h_i}g_i(t)|^n \, d\mathrm{vol}_{h_i}\right)^{1/n}\\[2mm]
\leq {} & C_n\varepsilon_0 +C_0t^{1/n}
\end{split}
\end{equation}
for each $x\in M$, $t\in (0,T]$ and $i\in\mathbb{N}$. Relabelling the parameter and using the volume comparison theorem, we obtain (c). The convergence follows from the Arzel\`a-Ascoli theorem and the derivative estimates.
\end{proof}
\begin{rem}
If $(M,h)=(\mathbb{R}^n,g_{\mathrm{flat}})$ is the standard Euclidean space, then the assumption (ii) is equivalent to the smallness of the global gradient concentration: $ \int_{M} |\tilde\nabla g_0|^n \, d\mathrm{vol}_h<\omega_{n}\varepsilon_{0}^{n}$, where $\omega_{n}$ denotes the volume of the unit ball in $\mathbb{R}^{n}$.
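Indeed, on Euclidean space one has $|B_h(x,r)|=\omega_n r^n$ exactly, so for every $x\in M$ and $r>1$ (a one-line verification),
\begin{equation*}
r^n\fint_{B_h(x,r)} |\tilde \nabla g_0|^n\, d\mathrm{vol}_h
=\frac{1}{\omega_n}\int_{B_h(x,r)} |\tilde \nabla g_0|^n\, d\mathrm{vol}_h
\leq \frac{1}{\omega_n}\int_{M} |\tilde \nabla g_0|^n\, d\mathrm{vol}_h,
\end{equation*}
and the supremum over $x$ and $r>1$ equals the right hand side in the limit $r\to+\infty$.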
\end{rem}
Inspired by the work of \cite{SSS2008}, we now consider the behaviour of $g(t)$ as $t\to +\infty$ if in addition $g_0$ is sufficiently close to $h$ at infinity.
\begin{thm}
Under the assumption of Theorem~\ref{Thm:LT}, if in addition $(M,h)=(\mathbb{R}^n,g_{\mathrm{flat}})$ is the standard Euclidean space and $|g_0-h|\in L^2(\mathbb{R}^n,g_{\mathrm{flat}})$, then $g(t)\to g_{\mathrm{flat}}$ in $C^\infty_{\mathrm{loc}}$ as $t\to +\infty$.
\end{thm}
\begin{proof}
Consider the difference ${\beta}(t)=g(t)-h$ which satisfies
\begin{equation}
\begin{split}
\frac{\partial }{\partial t}{\beta}_{ij}&= g^{pq}\tilde\nabla_p \tilde\nabla_q {\beta}_{ij}+ g^{-1}*g^{-1}* \tilde \nabla {\beta}* \tilde \nabla {\beta}.
\end{split}
\end{equation}
since $\tilde\nabla h=0$. Let $\phi$ be a cut-off function on $[0,+\infty)$ such that $\phi\equiv1$ on $[0,1]$, $\phi\equiv0$ on $[2,+\infty)$ and $|\phi'|\leq10$, and let $\rho$ be a function on $M=\mathbb{R}^n$ which is equivalent to the distance function from a fixed point $p\in M=\mathbb{R}^n$ and satisfies $\|\rho\|_{C^2(M,h)}\leq 1$. Define $\eta_R=\phi^{n}\left(\frac{\rho}{R} \right)$ and consider the energy
$$E_R(t)=\int_M |{\beta}|^2 \eta^2_R\, d\mathrm{vol}_h.$$
Our assumption is then equivalent to saying that there is $E_0>0$ so that $E_R(0)\leq E_0$ for all $R>1$. We will use $C_i$ to denote constants depending only on $n$ and $\Lambda_{0}$.
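We record the gradient bound for the cut-off that will be used below; it is a direct chain-rule computation using $|\phi'|\leq 10$ and $|\tilde\nabla \rho|\leq 1$:
\begin{equation*}
|\tilde\nabla \eta_R|=\frac{n}{R}\,\phi^{n-1}\!\left(\frac{\rho}{R}\right)\left|\phi'\!\left(\frac{\rho}{R}\right)\right| |\tilde\nabla\rho|
\leq \frac{10n}{R}\,\eta_R^{1-\frac1n},
\qquad\text{so}\qquad
|\tilde\nabla\eta_R|^2\leq \frac{100n^2}{R^2}\,\eta_R^{2-\frac2n}.
\end{equation*}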
Differentiating $E_R$ with respect to $t$ and integrating by parts yields
\begin{equation}\label{E R t estimate}
\begin{split}
\frac{d}{dt}E_R(t)&= \int_M \left(2\langle {\beta}, g^{pq}\tilde\nabla_p \tilde\nabla_q {\beta} \rangle + g^{-1}*g^{-1}* \tilde \nabla {\beta}* \tilde \nabla {\beta}*{\beta} \right) \eta^2_R \, d\mathrm{vol}_h\\
&\leq \int_M \left( -C_{1}^{-1}\eta_R^2|\tilde \nabla {\beta}|^2 +C_1|{\beta}| |\tilde\nabla {\beta}|^2 \eta^2_R+C_1|\tilde\nabla \eta_R|^2 |{\beta}|^2\right)d\mathrm{vol}_h\\
&\leq \int_M \left(-C_{2}^{-1}|\tilde \nabla (\eta_R{\beta})|^2 +C_2|{\beta}|^2 |\tilde\nabla {\beta}|^2 \eta^2_R+C_2|\tilde\nabla \eta_R|^2 |{\beta}|^2\right)d\mathrm{vol}_h\\[2mm]
&=-\mathbf{I}+\mathbf{II}+\mathbf{III}.
\end{split}
\end{equation}
We squeeze more negativity from $\mathbf{I}$ by using the Sobolev inequality \eqref{Sobo-ineq} and Kato's inequality:
\begin{equation}
\mathbf{I}\geq C_{3}^{-1} \left(\int_M |{\beta} \eta_R|^\frac{2n}{n-2}\, d\mathrm{vol}_h \right)^\frac{n-2}{n},
\end{equation}
while $\mathbf{II}$ can be estimated using (c) in Theorem~\ref{Thm:LT} (with $r=+\infty$):
\begin{equation}
\begin{split}
\int_M |{\beta}|^2 |\tilde \nabla {\beta}|^2 \eta_R^2 \,d\mathrm{vol}_h
&\leq \left(\int_M |{\beta} \eta_R|^\frac{2n}{n-2}\, d\mathrm{vol}_h \right)^\frac{n-2}{n} \left(\int_M |\tilde\nabla g|^{n}\,d\mathrm{vol}_h \right)^{2/n}\\
&\leq C_n\varepsilon_0^{2}\left(\int_M |{\beta} \eta_R|^\frac{2n}{n-2}\, d\mathrm{vol}_h \right)^\frac{n-2}{n}.
\end{split}
\end{equation}
Therefore, if $\varepsilon_0$ in the assumption is sufficiently small, $-\mathbf{I}+\mathbf{II}\leq 0$ for any $R>1$.
On the other hand, since $g(t)$ is uniformly equivalent to $h$, we have $|\beta|\leq C$. Combining this with $\eta_{R}=\phi^{n}\left(\frac{\rho}{R}\right)$,
\begin{equation}\label{III estimate}
\begin{split}
\mathbf{III}&\leq \frac{C_{4}n^2}{R^2}\int_M |{\beta}|^2 \eta_R^{2-\frac2n} \ d\mathrm{vol}_h\\
&\leq \frac{C_{5}}{R^2}\int_M |{\beta}|^{2-\frac2n} \eta_R^{2-\frac2n} \ d\mathrm{vol}_h\\
&\leq \frac{C_{5}}{R^{2}} E_R^{1-\frac1{n}}|B_{h}(x,2R)|^{\frac{1}{n}} \\
&\leq \frac{C_{6}}{R} E_R^{1-\frac1{n}}.
\end{split}
\end{equation}
Here we used H\"older's inequality and the volume growth of Euclidean space. Substituting $-\mathbf{I}+\mathbf{II}\leq 0$ and \eqref{III estimate} into \eqref{E R t estimate},
\begin{equation}
\frac{d}{dt}E_R(t) \leq \frac{C_{6}}{R} E_R^{1-\frac1{n}}.
\end{equation}
For any $\sigma>0$, the above shows
\begin{equation}
\frac{d}{dt}(E_R(t)+\sigma)^{\frac{1}{n}} = \frac{1}{n}(E_R(t)+\sigma)^{\frac{1}{n}-1}\,\frac{d}{dt}E_R(t) \leq \frac{C_{6}}{nR}(E_R(t)+\sigma)^{\frac{1}{n}-1} E_R^{1-\frac1{n}} \leq \frac{C_{6}}{nR}.
\end{equation}
Integrating on $[s,t]$ and letting $\sigma\to0$,
\begin{equation}
E^{\frac{1}{n}}_R(t) \leq E^{\frac{1}{n}}_R(s)+\frac{C_{6}(t-s)}{nR}.
\end{equation}
Letting $s\to0$ and followed by $R\to+\infty$, we see that for all $t>0$,
\begin{equation}\label{L2-longtime}
\int_M |{\beta}(t)|^2\,d\mathrm{vol}_h\leq \int_M |{\beta}(0)|^2\,d\mathrm{vol}_h.
\end{equation}
By the Arzel\`a-Ascoli theorem and the derivative estimates, passing to a subsequence, $g(t)$ converges to some flat metric $\hat h$ in $C^\infty_{\mathrm{loc}}$. By Fatou's lemma, \eqref{L2-longtime} implies
\begin{equation}
\int_M |h-\hat{h}|^2\,d\mathrm{vol}_h < +\infty.
\end{equation}
Since $(M,h)$ is the standard Euclidean space, we conclude that $\hat h\equiv h$ on $M$. Hence, the above argument shows that $g(t)\to h$ as $t\to +\infty$ in $C^\infty_{\mathrm{loc}}$ without passing to a subsequence. This completes the proof.
\end{proof}
\section{Applications on almost rigidity problems}\label{Sec:almostRid}
In this section, we will use the Ricci-Deturck $h$-flow to study scalar curvature compactness problems under assumptions of uniform bi-Lipschitz bounds and $W^{1,n}_{\mathrm{loc}}$ smallness. This is motivated by the torus stability problems, see \cite{Gromov2014,Sormani2021}. We begin by recalling the notion of the Yamabe invariant $\sigma(M)$, which is given by
\begin{equation}
\sigma(M)=\sup\left\{ \mathcal{Y}(M,[g]): [g] \textit{ is a conformal class of metrics on } M\right\},
\end{equation}
where
\begin{equation}
\mathcal{Y}(M,[g_0])=\inf\left\{ \int_M R(g)\;d\mu_g: g\in[g_0],\; \mathrm{Vol}(M,g)=1\right\}.
\end{equation}
It is well known that if a smooth metric on a compact manifold attains the Yamabe invariant $\sigma(M)$ and if $\sigma(M)\leq 0$, then the metric must be Einstein \cite{Schoen1989}. The basic model of manifolds with $\sigma(M)=0$ is the standard torus $\mathbb{T}^n$, as was shown by Schoen-Yau \cite{SchoenYau1979,SchoenYau1979-2} when $n\leq 7$ and by Gromov-Lawson \cite{GromovLawson1980} in general dimension.
We use the geometric flow smoothing to establish an integral stability result on manifolds with non-positive Yamabe invariant, which in particular is applicable to the torus. This is similar in spirit to the result in \cite{LeeNaberNeumayer2020}.
\begin{thm}\label{thm:stability-sigma}
Let $M$ be a compact manifold with $\sigma(M)\leq 0$ and a background metric $h$ satisfying $(\star)$. For any $\Lambda_0>1$, there is $\varepsilon_0(n,\Lambda_0)>0$ such that the following holds. Suppose $g_{i,0}$ is a sequence of Riemannian metrics on $M$ such that
\begin{enumerate}
\item[(i)] $\Lambda_0^{-1}h\leq g_{i,0}\leq \Lambda_0 h$ on $M$;
\item[(ii)] $\displaystyle \left( \fint_{B_h(x,1)} |\tilde \nabla g_{i,0}|^n \, d\mathrm{vol}_h \right)^{1/n} \leq \varepsilon_0$ for all $x\in M$;
\item[(iii)] $\mathcal{R}(g_{i,0})\geq -i^{-1}$ on $M$.
\end{enumerate}
Then there is a Ricci-flat metric $\hat h$ on $M$ so that, after passing to a subsequence, $g_{i,0}$ converges to $\hat h$ in $L^p$ for all $p>0$ as $i\to +\infty$, modulo diffeomorphisms.
\end{thm}
\begin{proof}
By Theorem~\ref{thm-existence}, there exists a sequence of Ricci-Deturck $h$-flow $g_i(t)$ on $M\times [0,T]$ starting from $g_{i,0}$ such that
\begin{enumerate}
\item[(a)] $\Lambda_{0}^{-1}h\leq g_i(t)\leq \Lambda_{0} h$;
\item[(b)] For any $k\in \mathbb{N}$, there is $C(k,n,\Lambda_0)>0$ such that for all $t\in (0,T]$,
\begin{equation*}
\sup_M |\tilde\nabla^k g_i(t)|\leq C(k,n,\Lambda_0) t^{-k/2};
\end{equation*}
\item[(c)] $\displaystyle \left(\fint_{B_h(x,1)} |\tilde \nabla g_i(t)|^n \,d\mathrm{vol}_h \right)^{1/n}\leq C_0\varepsilon_0$ for all $(x,t)\in M\times [0,T]$.
\end{enumerate}
Moreover, it is well-known that the Ricci-Deturck flow preserves the scalar curvature lower bound and hence for all $t\in [0,T]$ and $i\in \mathbb{N}$,
\begin{equation}
\mathcal{R}(g_i(t))\geq -i^{-1}.
\end{equation}
Therefore, using a diagonal subsequence argument and the Arzel\`a-Ascoli theorem, after passing to a subsequence, $g_i(t)\to g(t)$ on $M\times (0,T]$ in $C^\infty_{\mathrm{loc}}$ with
\begin{enumerate}
\item[(i')] $\Lambda^{-1}h\leq g(t)\leq \Lambda h$;
\item[(ii')] For any $k\in \mathbb{N}$, there is $C(k,n,\Lambda_0)>0$ such that for all $t\in (0,T]$,
\begin{equation*}
\sup_M |\tilde\nabla^k g(t)|\leq C(k,n,\Lambda_0) t^{-k/2};
\end{equation*}
\item[(iii')]$\mathcal{R}(g(t))\geq 0$;
\item[(iv')] $\displaystyle \left(\fint_{B_h(x,1)} |\tilde \nabla g(t)|^n \,d\mathrm{vol}_h \right)^{1/n}\leq C_0\varepsilon_0$ for all $(x,t)\in M\times (0,T]$.
\end{enumerate}
The standard rigidity shows that $g(t)$ is Ricci-flat for all $t\in (0,T]$. By \eqref{eqn: h-flow-Ricciform},
\begin{equation}
\left\{\begin{array}{ll}
\partial_t g_{ij}=\nabla_i W_j +\nabla_jW_i;\\[1mm]
W^k=g^{pq}\left( \Gamma_{pq}^k-\tilde\Gamma_{pq}^k\right)
\end{array}
\right.
\end{equation}
so that $g(t)=\Psi_t^* g(T)$ where $\Psi_t$ is a diffeomorphism of $M$ satisfying
\begin{equation}
\left\{
\begin{array}{ll}
\partial_t\Psi_t(x)=W(\Psi_t(x),t);\\[1mm]
\Psi_T(x)=x
\end{array}
\right.
\end{equation}
for all $(x,t)\in M\times (0,T]$. By (i'), $\Psi_{t}$ and $\Psi_{t}^{-1}$ are uniformly $C^{1}$ bounded (independent of $t$). Using (iv') and the Rellich-Kondrachov theorem, passing to a subsequence, we may assume $g_{i,0}$ converges to some $g_\infty$ in $L^p$ for all $p>0$. By applying Proposition~\ref{timezero-regularity-W1n} to $g_i(t)$ and then letting $i\to +\infty$, a covering argument shows that
\begin{equation}
\int_M |\Psi_t^*g(T)-g_{\infty}|^p \, d\mathrm{vol}_h \leq L_{p}t
\end{equation}
for some $L_{p}>0$ independent of $t$. Then
\begin{equation}
\begin{split}
& \int_{M}|(\Psi_{t}^{-1})^{*}g_{i,0}-g(T)|^{p}\,d\mathrm{vol}_{h} \\
\leq {} & C\int_{M}|(\Psi_{t}^{-1})^{*}g_{i,0}-(\Psi_{t}^{-1})^{*}g_{\infty}|^{p}\,d\mathrm{vol}_{h}
+C\int_{M}|(\Psi_{t}^{-1})^{*}g_{\infty}-g(T)|^{p}\,d\mathrm{vol}_{h} \\
\leq {} & C\int_{M}|g_{i,0}-g_{\infty}|^{p}\,d\mathrm{vol}_{h}
+C\int_M |\Psi_t^*g(T)-g_{\infty}|^{p} \, d\mathrm{vol}_h \\
\leq {} & C\int_{M}|g_{i,0}-g_{\infty}|^{p}\,d\mathrm{vol}_{h}+CL_{p}t \\
\end{split}
\end{equation}
for some $C>0$ independent of $t$ and $i$. It then follows that $(\Psi_{i^{-1}}^{-1})^{*}g_{i,0}$ converges to Ricci flat metric $g(T)$ in $L^{p}$ sense.
\end{proof}
\begin{rem}
By working on the diffeomorphic Ricci flow directly with the regularity estimates established in this work, we believe that one can use the decomposition theorem in \cite{LeeNaberNeumayer2020} to obtain a similar result in terms of the $d_p$ topology, which is more intrinsic in nature.
\end{rem}
\section{Application on singular metric with scalar lower bound}\label{Sec:singular}
In this section, we would like to use the Ricci-Deturck flow to study the following conjecture of Schoen concerning singular metrics with $\mathcal{R}\geq 0$ on manifolds with non-positive Yamabe invariant:
\begin{conj}[Conjecture 1.5 in \cite{LiMantoulidis2019}] \label{conj-1}
Let $M^n$ be a compact manifold with $\sigma(M)\leq 0$. Suppose $g$ is an $L^\infty$ metric on $M$ such that $\Lambda^{-1}h\leq g\leq \Lambda h$ for some smooth metric $h$ and $\Lambda>1$, and $g$ is smooth away from a closed, embedded submanifold $\Sigma$ of co-dimension $\geq 3$ with $\mathcal{R}(g)\geq 0$ outside $\Sigma$. Then $\text{\rm Ric}(g)=0$ and $g$ can be extended smoothly to $M$.
\end{conj}
Using the regularization result from Theorem~\ref{thm-existence} together with the maximum principle method in \cite{LeeTam2021}, we have the following partial result.
\begin{thm}\label{rigidity-FollowingLEETAM}
Let $M^n$ be a compact manifold with $\sigma(M)\leq 0$ and let $\Sigma$ be a compact set of co-dimension $\geq 3$ in $M$. Suppose $g_0$ is an $L^\infty\cap W^{1,n}$ Riemannian metric on $M$ such that $g_0\in C^\infty_{\mathrm{loc}}(M\setminus \Sigma)$ and $\mathcal{R}(g_0)\geq 0$ outside $\Sigma$. Then $\text{\rm Ric}(g_0)=0$ outside $\Sigma$. If $\Sigma$ consists of only isolated points, then $g_0$ is a smooth metric with respect to a possibly different smooth structure on $M$.
\end{thm}
\begin{proof}[Sketch of the proof]
It suffices to find a smooth approximation $g_i$ such that $g_i\to g_0$ in $C^\infty_{\mathrm{loc}}(M\setminus \Sigma)$ and $\mathcal{R}(g_i)\geq 0$ on $M$. In \cite{LeeTam2021}, this was carried out in the case when $g_0$ is $C^0(M)$ using the result of \cite{Simon2002}. In the setting of $L^\infty\cap W^{1,n}$, we use the new existence theory instead. Since the argument is almost identical, we only sketch the proof here.
By Theorem~\ref{shorttime-compact}, we may find a Ricci-Deturck $h$-flow $g(t)$ starting from $g_0$ in $W^{1,n}$ sense such that $g(t)$ stays uniformly equivalent to $h$ and satisfies
\begin{equation}
|\tilde \nabla g(t)|^2+ |\tilde \nabla^2 g(t)|+|\text{\rm Rm}(g(t))|\leq \frac{\delta}{t}
\end{equation}
where $\delta$ can be made as small as we wish by shrinking the time interval. The curvature estimates follow from the boundedness of $|\text{\rm Rm}(h)|$ and the $C^2$ bound of $g(t)$ with respect to $h$. By the proof of \cite[Theorem 1.1]{LeeTam2021}, we have $\mathcal{R}(g(t))\geq 0$ for $t>0$, where the smallness of $\delta$ and the uniform bi-Lipschitz bound of $g(t)$ are essentially used. Hence, $\text{\rm Ric}(g(t))\equiv 0$ on $M\times (0,T]$ by the classical rigidity. Since $g(t)$ is smooth up to $t=0$ outside $\Sigma$, we obtain the conclusion by letting $t\to 0$ on $g(t)$ outside $\Sigma$.
\end{proof}
\begin{rem}
We remark here that one can obtain an appropriate extension of $g_0$ across $\Sigma$ using the additional structure $W^{1,n}$ following the method in \cite{LeeTam2021}, see also \cite{LammSimon2021}. Moreover, the co-dimension of $\Sigma$ can be relaxed further using the concept of upper Minkowski dimension as in \cite{LeeTam2021}. Since we are primarily interested in the conjecture of Schoen on $L^\infty$ metrics, we do not pursue here.
\end{rem}
As one can expect from the works in \cite{LeeTam2021,LiMantoulidis2019,ShiTam2018}, we can use the regularization method in the proof of Theorem~\ref{rigidity-FollowingLEETAM} to prove the following positive mass theorem. We refer readers to \cite{LeeTam2021,ShiTam2018} for the background setting. For related works, see also \cite{Miao,JiangShengZhang2020,Lee2013,LeeLeFloch2015}.
\begin{thm}\label{rigidity-FollowingLEETAM-PMT}
Let $(M^n, g_0)$ be an asymptotically flat manifold with $n\geq 3$, where $g_0$ is an $L^\infty\cap W^{1,n}_{\mathrm{loc}}$ metric on $M$ which is smooth away from some compact set $\Sigma\subset M$ of co-dimension at least $3$. Suppose $\mathcal{R}(g_0)\geq 0$ outside $\Sigma$. Then the ADM mass of each end is non-negative. Moreover, if the ADM mass of one of the ends is zero, then $M$ is diffeomorphic to $\mathbb{R}^n$ and $g_0$ is flat outside $\Sigma$.
\end{thm}
\begin{proof}[Sketch of proof]
The proof follows verbatim from that of \cite[Theorem 1.2]{LeeTam2021}. We only point out the modifications. By Theorem~\ref{shorttime-compact} and the asymptotic flatness assumption, there is an asymptotically flat Ricci-Deturck $h$-flow on $M$ starting from $g_0$ in the $W^{1,n}_{\mathrm{loc}}$ sense. We may assume $h$ to be $g_{\mathrm{euc}}$ at each end. Moreover, for each end $E$ and $t>0$,
$$m_{ADM}(E,g(t))\leq m_{ADM}(E,g_0).$$
See the proof of \cite[Proposition 5.1]{LeeTam2021}, which is based on \cite{McFeronSze2012}. The proof of \cite[Theorem 1.2]{LeeTam2021} shows that $\mathcal{R}(g(t))\geq 0$ for $t>0$, and hence the positive mass theorem for smooth metrics \cite{SY,SchoenYau2017} implies that $m_{ADM}(E,g(t))\geq 0$ and hence $m_{ADM}(E,g_0)\geq 0$. Moreover, if $m_{ADM}(E,g_0)=0$ for some end $E$, then the ADM mass of $g(t)$ along the same end is zero and hence $(M,g(t))$ is isometric to the flat Euclidean space. The flatness of $g_0$ outside $\Sigma$ follows from the smooth convergence outside $\Sigma$.
\end{proof}
\section{Application on scalar curvature persistence}\label{Sec:scalarPres}
In this section, we would like to understand how the scalar curvature lower bound behaves when the convergence is weak while the limiting metric is a priori smooth. Under the uniform bi-Lipschitz assumption, we can show that the scalar curvature lower bound persists under $W^{1,n}_{\mathrm{loc}}$ convergence. We focus on the compact case; the analogous non-compact case can be proved by a similar argument.
\begin{thm}\label{thm:stability-R-lowerbound}
Let $(M,h)$ be a compact manifold. Suppose $g_{i,0}$ is a sequence of smooth Riemannian metrics on $M$ such that
\begin{enumerate}
\item[(i)] $\Lambda_0^{-1}h\leq g_{i,0}\leq \Lambda_0 h$ on $M$ for some $\Lambda_0>1$;
\item[(ii)] $\limsup_{i\to +\infty}\|g_{i,0}-g_0\|_{W^{1,n}(M,h)}=0$ for some smooth metric $g_0$ on $M$;
\item[(iii)] $\mathcal{R}(g_{i,0})\geq \kappa$ on $M$ for some $\kappa\in \mathbb{R}$.
\end{enumerate}
Then we have $\mathcal{R}(g_0)\geq \kappa$ on $M$.
\end{thm}
\begin{proof}
We follow the strategy in \cite{Bamler2016}. By Theorem~\ref{thm-existence} and Theorem~\ref{shorttime-compact}, for $i$ sufficiently large there exists a solution to the Ricci-Deturck $h$-flow $g_i(t)$ on $M\times [0,T]$ starting from $g_{i,0}$ such that $g_i(t)$ is uniformly smooth away from $t=0$ and stays uniformly equivalent to $h$ for $t\in (0,T]$. As in the proof of Theorem~\ref{thm:stability-sigma}, we obtain a smooth Ricci-Deturck $h$-flow $g(t)$ on $M\times (0,T]$ such that $\mathcal{R}(g(t))\geq \kappa$ for $t\in (0,T]$. Moreover, Proposition~\ref{timezero-regularity-W1n} and our assumptions imply that $g(t)$ attains $g_0$ as the initial data in the $L^2$ sense as $t\to 0$. By Theorem~\ref{thm-unique}, $g(t)$ coincides with the classical smooth Ricci-Deturck $h$-flow solution $\hat g(t)$ with initial data $g_0$. This shows that $\mathcal{R}(\hat g(t))\geq \kappa$ for all $t\in (0,T]$. The conclusion follows by letting $t\to 0$ on $\hat g(t)$.
\end{proof}
\begin{rem}
In \cite{Bamler2016}, the persistence of scalar curvature is made local so that $\kappa$ is allowed to be a function, see also \cite{HuangLee2021} for some generalizations along this line. We expect that similar localization under the current setting is also true using the technique in \cite{HuangLee2021}.
\end{rem}
\section{Introduction}
\quad $\,\,$ The Riemann hypothesis states that the complex zeros of the Riemann zeta function $\zeta(s)$ must all lie on
the critical line $\textrm{Re}(s)=1/2$
\cite{riem,titchmarsh,borwein,rev}. This conjecture is an unsolved problem in mathematics and
all efforts to prove it have remained unsuccessful.
Hilbert and P\'olya suggested that there might be a spectral interpretation of the
nontrivial zeros of the Riemann zeta function, which could be the
eigenvalues of a self-adjoint operator in an appropriate Hilbert space.
The work of Bohigas and Giannoni \cite{bohigas} and also Berry \cite{berry} showed that the quantum energy spectra of classically chaotic systems exhibit universal spectral correlations which are described by random matrix theory \cite{mehta}.
Numerical evidences indicate that the distribution of spacing between nontrivial zeros of the Riemann zeta
function is statistically identical to the distribution of eigenvalue spacings in a Gaussian unitary ensemble
of random matrices. Such a conjecture prompted many authors to consider the
Riemann hypothesis in the light of random matrix theory and quantum mechanics of classically
chaotic systems ~\cite{berry3,berry4,bu2,bourgade,sierra2,sierra1}.
These facts have also led some authors to consider a
Fermi gas with a fully chaotic classical dynamics~\cite{le,le2}. These authors considered the imaginary part
of the complex zeros of the Riemann zeta function as the single-particle levels of a fermionic many-body system,
known in the literature as the Riemannium.
A crucial question is whether it makes sense to suppose that this elusive operator, conjectured by Hilbert and P\'olya, occurs in a quantum mechanical system with a countably infinite number of degrees of freedom.
Here we are interested in
investigating the consequences of the existence of such
operator in a quantum field theory framework. The assumptions are:
the operator must be defined in a bounded region of space, since the spectrum is discrete; it acts on scalar functions defined in flat space-time; and, finally, it is self-adjoint in some Hilbert space.
In the approach of quantum field theory using functional methods \cite{fu1,fu2},
Gaussian path integrals yield expressions which depend on the determinant of a differential operator. Although these determinants diverge,
finite regularized values can be obtained using the spectral zeta function regularization~\cite{se,ray,dowker,ha,vo,vor,quine,elizalde,dune}.
Using the fact that the prime zeta function can be analytically extended only to the strip
$0<\sigma\leq 1$ \cite{lan,carl}, Menezes and Svaiter \cite{ga} concluded that the sequence of prime numbers cannot be associated with some
hypothetical linear operator of a physical system with infinitely many degrees of freedom. This result was generalized by Andrade to other sequences of numbers
motivated by number theory ~\cite{andrade}.
Later, Menezes, Svaiter, and Svaiter considered the same situation with numerical sequences whose asymptotic distributions are not
``far away" from the asymptotic distribution of prime numbers \cite{gn2}. Next, using the construction of the so-called
super-zetas or secondary zeta functions built over the Riemann zeros, i.e.,
the nontrivial zeros of the Riemann zeta function~\cite{gui,delsarte,cha1,cha2,ivic,voros1,superzeta}, and the regularity properties of one of these secondary zeta
functions at the origin, these authors have
shown that the sequence of the nontrivial zeros of the Riemann zeta function can in
principle be interpreted as being the spectrum of a self-adjoint operator acting on scalar fields of some hypothetical system.
In the functional approach to Euclidean scalar field theory there are three fundamental objects to be considered. The generating functional of all
Schwinger functions $Z[J]$, the generating functional of connected Schwinger functions $W[J]$, and finally the generating functional of proper vertices $\Gamma[\varphi]$.
Since in perturbation theory the proper vertices are given by the sum of one-particle irreducible diagrams, $\Gamma[\varphi]$ is the generating functional of
the sum of one-particle irreducible Green's functions. Performing a loop expansion we can define the one-loop effective action \cite{ea}.
Since there is a relationship between the Casimir energy and the one-loop effective action \cite{casimir,milton,plunien,mostepanenko,Bordag,blau}, it is natural to enquire whether it is possible to calculate the renormalized vacuum energy associated with a hypothetical system where the imaginary part of
the complex zeros of the zeta function appear in the spectrum of the vacuum modes.
The aim of this paper is to discuss the renormalization of the vacuum expectation value of the energy operator, i.e., the renormalized zero-point energy
associated to a massive scalar field defined in a $(d+1)$-dimensional flat space-time,
assuming that one of the coordinates lies in a finite interval $[0,a]$.
We call such a configuration a slab-bag \cite{nn}. We consider a
linear operator $-\Delta_{d-1}-{M}$, where $\Delta_{d-1}$ is the usual Laplacian defined in a $(d-1)$-dimensional space and ${M}$
is an unknown operator whose eigenvalues are the imaginary parts of the complex zeros of the Riemann zeta function.
We are using the same idea of the papers \cite{actor,caruso,matt}, where some confining potential acts as a pair of effective plates.
One way to implement our model is to assume that the scalar field is coupled to a confining background
field $\sigma(\textbf{x}_\perp,z)$ with an
interaction Lagrangian ${\cal{L}}_{int}=\sigma(\textbf{x}_\perp,z)\,\varphi^{2}(t,\textbf{x}_\perp,z)$ such
that the system is confined in the interval $[0,a]$ in one dimension and
is unrestricted in the other spatial directions. The idea of substituting the hard boundary conditions on some surface by
potentials was used by many authors.
For instance, the zero-point energy of a quantum field, obtained as a limit of a quantum field theory coupled to a background, was studied in Refs. \cite{jaffe1,jaffe2,jaffe3}.
The organization of this paper is the following.
In section II we discuss arithmetic gases, where the sequence of prime numbers is used in a quantum field theory framework.
In Section III, we study a massive scalar field in a $(d+1)$-dimensional space-time, for which the
coordinate $x_d$ lies in a finite interval $[0,a]$,
assuming that in the spectrum of the vacuum modes appear the nontrivial zeros of the
zeta-function. In section IV the renormalized zero-point energy of this system is presented using a combination between
dimensional and analytic regularization procedures.
Conclusions are given in Section V. In Appendix A we present the analytic extension for one of the super-zeta functions
following the classical paper of Delsarte. In Appendix B we show that the renormalized vacuum energy is identified with the
constant term in the asymptotic expansion of the regularized vacuum energy. In the paper we use $k_{B}=c=\hbar=1$.
\section{Arithmetic Riemann gas in quantum field theory}
In this section we discuss how the sequence of prime numbers has been used in the quantum field theory framework.
As a tool to show connections between number theory and physics using statistical-mechanics methods,
some authors introduced the sequence of prime numbers in quantum field theory~\cite{stn,sss,bakas,julia,sp,spector}.
Let us consider a non-interacting bosonic field theory with
Hamiltonian
\begin{equation}
H_B=\omega\sum_{k=1}^{\infty}\ln(p_{k})b^{\dagger}_{k}b_{k},
\label{25}
\end{equation}
where $b_k^{\dagger}$ and $b_{k}$ are respectively the creation and annihilation operators of quanta associated
to the bosonic field and $p_{k}$ are the sequence of prime numbers. We have $p_{1}=2,\,p_{2}=3,...$ and so on.
The energy of each mode is given by $\nu_{k}=\omega\ln\,p_{k}$.
The arithmetic gas partition function is exactly the Riemann zeta function, i.e.,
$Z=\zeta(\beta\omega)$. Since the Riemann zeta function has a simple pole at $s=1$, there is a Hagedorn temperature
above which the system cannot be heated up \cite{hagedorn}. Another property of such a system is that the
asymptotic density of states increases exponentially with the energy. The Riemann gas is also called an arithmetic gas.
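The identification $Z=\zeta(\beta\omega)$ follows from the Euler product and is easy to check numerically. The following minimal sketch (assuming the Python library \texttt{mpmath} is available; the helper \texttt{primes} is ours and purely illustrative) compares the truncated bosonic partition function with the Riemann zeta function:
\begin{verbatim}
# Z_B = prod_k (1 - e^{-beta*omega*ln p_k})^{-1}
#     = prod_k (1 - p_k^{-s})^{-1} = zeta(s),  s = beta*omega > 1
import mpmath

def primes(n):
    # first n primes by trial division (illustrative only)
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

s = 2.5
Z = mpmath.mpf(1)
for p in primes(2000):        # truncated Euler product
    Z /= (1 - mpmath.mpf(p)**(-s))
print(Z, mpmath.zeta(s))      # both close to 1.34148...
\end{verbatim}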
For a free arithmetic fermionic gas the Hamiltonian is given by
\begin{equation}
H_F=\omega\sum_{k=1}^{\infty}\ln(p_{k})c^{\dagger}_{k}c_{k},
\label{26}
\end{equation}
where $c_k^{\dagger}$ and $c_{k}$ are respectively the creation and annihilation operators of quanta associated
to the fermionic field and the $p_{k}$ are again the sequence of prime numbers. Making use of the arithmetical M\"obius function \cite{hardy}
defined by $\zeta^{-1}(s)=\sum_{n=1}^{\infty}\mu(n)/n^{s}$
it is possible to show that the partition function of the fermionic system is given by $\zeta(\beta\omega)/\zeta(2\beta\omega)$.
The M\"obius coefficients are $\mu(1)=1$ and $\mu(n)=0,1$ or $-1$, depending whether, $n$ is divisible by a square of a prime number,
or is a product of an even number of primes, all
different, or of an odd number of primes, all different respectively.
As discussed in the literature, from the thermal partition function associated to the Hamiltonians $H_{B}$ and $H_{F}$ we get
\begin{equation}
Z_{F}(\beta)\,Z_{B}(2\beta)=Z_{B}(\beta).
\end{equation}
The noninteracting mixture of a fermionic and a bosonic system, at temperatures $\beta^{-1}$ and $(2\beta)^{-1}$ respectively, is equivalent to a single bosonic system at
temperature $\beta^{-1}$.
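This identity can also be verified numerically. A minimal sketch (reusing \texttt{mpmath} and the \texttt{primes} helper from the previous example): since a fermionic mode is either empty or singly occupied, the fermionic Euler product reads $\prod_{k}(1+p_{k}^{-s})$, which equals $\zeta(s)/\zeta(2s)$:
\begin{verbatim}
s = 2.5
ZF = mpmath.mpf(1)
for p in primes(2000):
    ZF *= (1 + mpmath.mpf(p)**(-s))
print(ZF, mpmath.zeta(s) / mpmath.zeta(2*s))  # these agree
\end{verbatim}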
Let us show that an alternative bosonic Hamiltonian similar to $H_B$ cannot be constructed with the sequence of prime numbers, or a sequence of powers of them, in the spectrum. Only the $\omega\ln p_{n}$ spectrum is allowed.
To proceed, let us use the representation $\ln x=\lim_{s\rightarrow 0}\frac{x^{s}-1}{s}$, which is reminiscent of the replica method used to study disordered systems \cite{glass,do}.
The Hamiltonian of this non-interacting bosonic field theory can be written as
\begin{equation}
H=\lim_{s\rightarrow 0}\omega\biggl(\sum_{k=1}^{\infty}\frac{p_{k}^{s}}{s}b^{\dagger}_{k}b_{k}-\sum_{k=1}^{\infty}\frac{b^{\dagger}_{k}b_{k}}{s}\biggr).
\label{cas}
\end{equation}
Although the Hamiltonian of the Riemann gas is obtained only when ${s\rightarrow 0}$, we can ask if there
is a Hamiltonian for generic value of $s$. The case with $s=1$, is exactly a system with the spectrum proportional to the prime numbers.
In the following we use the results obtained in \cite{ga} to prove that only the Riemann gas is well defined. Using the
definition of the prime zeta function $P(s)$, $s=\sigma+\tau i$, for
$\sigma,\tau \in \mathbb{R}$, given by
\begin{equation}
P(s)=\sum_{\left\{p\right\}}\,p^{-s}, \,\,\textrm{Re}(s)>1,
\label{21}
\end{equation}
where the summation is performed over all primes \cite{carl,lan}, it was shown that the free energy of a system with the
sequence of prime numbers as the spectrum does not exist.
This is related to the fact that there is no analytic extension for $P(s)$ to the half-plane $\sigma\leq 0$.
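The obstruction can be made concrete through the classical representation $P(s)=\sum_{n\geq 1}\frac{\mu(n)}{n}\ln\zeta(ns)$: the poles and zeros of $\zeta(ns)$ produce logarithmic singularities that accumulate on the line $\sigma=0$, which therefore is a natural boundary. For $\textrm{Re}(s)>1$ the function of Eq.~(\ref{21}) is perfectly regular and can be evaluated, e.g., with the \texttt{mpmath} library (a minimal sketch; \texttt{primezeta} is the library's implementation of $P(s)$):
\begin{verbatim}
import mpmath
print(mpmath.primezeta(2))    # 0.4522474200...
print(mpmath.primezeta(1.1))  # grows as s -> 1^+, where zeta(s)
                              # has its pole
\end{verbatim}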
The same argument can be used to prove that there is no fermionic system with infinitely many degrees of freedom
whose spectrum is composed of the sequence of prime numbers. Let us consider Dirac fermions interacting with an external
field $A_{\mu}(x)$ in Euclidean space.
The fermionic path integral is not well defined since the determinant
of the Dirac operator diverges, because it is an unbounded product of increasing eigenvalues $\lambda_{n}$.
It is possible to define a regularized determinant using the spectral zeta function associated with the Dirac operator and
the principle of analytic continuation.
The analytic continuation of the spectral zeta function must be regular at the origin, i.e., $s=0$. The
eigenvalues of the Dirac operator cannot be the sequence of prime numbers, for the same reason we discussed in the bosonic case.
Although there is a reciprocity between the set of prime numbers and nontrivial zeros of the zeta function,
these two sequences of numbers have totally distinct behavior with respect to being the spectrum of a
linear operator associated to a system with countable infinite number of degrees of freedom.
In the next section we assume that there is a spectral interpretation of the
nontrivial zeros of the Riemann zeta function. The nontrivial zeros are the
eigenvalues of a self-adjoint operator in an appropriate Hilbert space. We are interested in studying the consequences of the existence of such
operator in a quantum field theory framework.
\section{The zeta zeros and the vacuum energy}
Here we are interested in studying finite size systems \cite{b1,b2,b22,b3}, where the
translational invariance is broken and the imaginary part of the complex zeros of the Riemann zeta function appear as the
spectrum of ${M}$. By a finite size system we mean any system that has finite size in at least one space dimension.
We use Cartesian coordinates $x^{\mu}=(t,\textbf{x}_\perp,z)$.
Let us consider the complex zeros of the zeta function
$\rho=\beta+i\gamma$ for
$\beta,\gamma \in \mathbb{R}$ and $0<\beta<1$.
The Riemann hypothesis is the conjecture that $\beta=1/2$.
We will assume throughout the paper the validity of this conjecture and also take only the positive non-trivial zeta zeros, i.e., $\gamma>0$.
For the prime numbers, using the
prime number theorem, we get the asymptotic regime $p_{n}\sim\,n\ln n$.
Let the zeros of $\zeta(s)$ on the line $Re(s)=1/2$ be $\rho_k=1/2+i\gamma_k$ $(k=\pm1,\pm2, ...)$ with $\gamma_{-k}
=-\gamma_{k}$ and $\gamma_{0}=0$. Let us assume that these zeros are simple.
If the zeros with $\gamma>0$ are arranged in a sequence $\rho_{k}=1/2+i\gamma_{k}$ so that
$\gamma_{k+1}>\gamma_{k}$, then $|\rho_{k}|\sim\gamma_{k}\sim 2\pi k/\ln k$ as $k\rightarrow \infty$.
Therefore, up to the constant factor $2\pi$, the ordinates of the zeros of the zeta function grow as $\gamma_{k}\sim\,k/\ln k$.
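This slow, logarithmically modified growth can be inspected directly. A minimal sketch (assuming \texttt{mpmath}, whose function \texttt{zetazero(k)} returns the $k$-th nontrivial zero $1/2+i\gamma_{k}$); note that the convergence towards the leading asymptotics is only logarithmic, so at small $k$ the agreement is in order of magnitude only:
\begin{verbatim}
import mpmath
for k in (10, 100, 1000):
    g = mpmath.zetazero(k).imag       # ordinate gamma_k
    print(k, g, 2*mpmath.pi*k/mpmath.log(k))
\end{verbatim}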
The Weyl theorem and its generalization by Pleijel relate the asymptotic distribution of eigenvalues of an elliptic differential
operator with geometric parameters of the
surface where the fields satisfy some boundary condition \cite{weyl,pleijel,kac,cou,can,protter}. The asymptotic series for the density of eigenvalues
of the Laplacian operator in a four dimensional space-time is given by $N(\omega)= V\omega^{2}/2\pi^{2}\mp\ S\omega/8\pi+ S{q}/6\pi^2+O(\omega^{-2})$,
where $V$ is the volume of the three-space,
$S$ is the area of the boundary, and ${q}$ is the mean curvature of the boundary averaged over the surface, for Dirichlet (Neumann) boundary conditions.
Since the asymptotic behavior of the non-trivial zeros has a regime quite different from the asymptotic behavior of the spectrum of the Laplacian,
we consider a linear operator $-\Delta_{d-1} - {M}$, where
$\Delta_{d-1}$ is the (d-1)-dimensional Laplacian, and impose that the eigenvalues associated with the linear operator ${M}$ are the
imaginary part of the complex zeros of the zeta function.
Consider a $(d+1)$-dimensional Minkowski space-time ${\cal M}$ and a quantum field theory of a single scalar field $\varphi:{\cal M}\rightarrow \mathbb{R}$.
The action functional of the theory is
\begin{equation}
S(\varphi)=\int\,d^{d+1}x \,{\cal{L}}(\varphi),
\end{equation}
where ${\cal{L}}(\varphi)$ is the Lagrangian density of the system.
We should assume that the scalar field is coupled to a background field $\sigma(\textbf{x}_\perp,z)$ such
that the system is confined in the interval $[0,a]$ in one dimension and
is unrestricted in the other spatial directions. The Lagrangian density of the system is given by
\begin{eqnarray}
{\cal{L}} =\, &&\varphi(t,\textbf{x}_\perp,z)\bigl(\partial^{2}_t-\Delta_{d-1}-{ M}\bigr)\varphi(t,\textbf{x}_\perp,z)\nonumber\\
&&
-m^{2}\varphi^{2}(t,\textbf{x}_\perp,z)
-\delta m^{2}\varphi^{2}(t,\textbf{x}_\perp,z),
\end{eqnarray}
where $m_B^2=m^2+\delta m^2$, and $m^2_B$ is the squared bare mass and $\delta m^2$ is the mass counterterm.
The counterterms have to be included in odd-dimensional space-time as will be shown below.
The eigenfrequencies of the vacuum modes can be found from the equation
\begin{equation}
\bigl(-\Delta_{d-1}-{M}\bigr)\varphi(\textbf{x}_\perp,z)=\omega^2\varphi(\textbf{x}_\perp,z).
\end{equation}
Since we are assuming that the linear operator ${M}$ has a differential and a background contribution we can write
\begin{equation}
-{M}=\bigl(-{\cal O}_{z}+
\sigma(\textbf{x}_\perp,z)\bigr),
\end{equation}
where ${\cal O}_{z}$ is an unknown differential operator.
It is clear that the linear operator ${-M}$ satisfies
\begin{equation}
\bigl(-{\cal O}_{z}+\sigma(\textbf{x}_\perp,z)\bigr)u_{n}(\textbf{x}_\perp,z)=\frac{\gamma_n}{a^2}u_{n}(\textbf{x}_\perp,z),
\end{equation}
where the $u_n(\textbf{x}_\perp,z)$ form a countably infinite set of eigenfunctions. The $u_n(z)$ are normalized eigenfunctions
satisfying the completeness and orthonormality relations, i.e,
\begin{equation}
\sum_n u_n(z)u_n^{\ast}(z')= \delta(z-z')
\end{equation}
and
\begin{equation}
\int_0^a\, dz\, u_n(z)u_{n'}^{\ast}(z)= \delta_{n,n'}.
\end{equation}
The zero-point energy of a massive scalar field defined in a $d$-dimensional box of volume $\Omega$ in a $(d+1)$-dimensional flat space-time is given by
\begin{equation}
\langle 0|H|0 \rangle = \frac{1}{2}\sum_{\textbf{k}}^{\infty}\omega_{\textbf{k}}.
\end{equation}
In the bag configuration, the eigenfrequencies of the vacuum modes are given by
\begin{equation}
\omega_{\textbf{k}} = \sqrt{k_1^2 + k_2^2 + \cdots + k_{d-1}^2 + k_d^2+m^{2}},
\end{equation}
where, as we have discussed,
\begin{equation}
k_d(n,a) = \frac{\sqrt{\gamma_n}}{a},\,\,\,\, n = 1, 2, \cdots,
\end{equation}
the quantities $\gamma_n$ being the imaginary part of the nontrivial zeros of the Riemann zeta function.
Since the eigenfunctions associated to the $M$ operator are not known, we cannot find the local
renormalized energy density. In other words, local
methods cannot be implemented here.
The zero-point energy
of the scalar field in the slab-bag configuration, taking into account the above equations and using the fact that the integrations over
the momenta correspond to $(d-1)$ integrations and one summation, is thus given by
\begin{eqnarray}
\langle 0|H|0 \rangle =&& \frac{1}{2}\prod_{i = 1}^{d-1}\left(\frac{L_i}{2\pi}\right)\int_{0}^{\infty}d^{d-1}k\times\nonumber\\
&&\sum_{n=1}^{\infty}\left[k_1^2 + \cdots + k_d^2(n,a)+m^{2}\right]^{1/2},
\end{eqnarray}
where $L_i \gg a$. Defining the total zero-point energy per unit area of the hyperplates:
\begin{equation}
\varepsilon_{d+1}(a) = \frac{\langle 0|H|0 \rangle}{\prod_{i}L_i},
\end{equation}
one arrives at
\begin{eqnarray}
\varepsilon_{d+1}(a,m) =&& \frac{1}{2}\sum_{n=1}^{\infty}\int_{0}^{\infty}\,\frac{d^{d-1}k}{(2\pi)^{d-1}}\times\nonumber\\
&&
\left[k_1^2 + \cdots + k_d^2(n,a)+m^{2}\right]^{1/2}.
\label{13}
\end{eqnarray}
In the following we use dimensional \cite{dim1,dim2,dim3} and analytic regularization combined.
Using the formula
\begin{equation}
\int_{0}^{\infty}\frac{d^{d}u}{(u^{2}+a^{2})^{s}}=\frac{\pi^{d/2}}{\Gamma(s)}
\Gamma\biggl(s-\frac{d}{2}\biggr)
\frac{1}{(a^{2})^{s-d/2}},
\end{equation}
and defining the function $f(d)$ as
\begin{equation}
f(d)= \frac{1}{2(2\sqrt{\pi})^{d}}
\end{equation}
the vacuum-energy per unit area $\varepsilon_{d+1}(a)$ can be written as
\begin{equation}
\varepsilon_{d+1}(a,m)=\frac{f(d)}{a^{d}}
\Gamma(-d/2)\sum_{n=1}^{\infty}\bigl(\gamma_{n}+a^{2}m^{2}\bigr)^{\frac{d}{2}}.
\label{fun}
\end{equation}
Due to the unboundedness of the eigenvalues of ${M}$, the vacuum-energy per unit area is divergent in a space-time of any dimension.
In the next section we will discuss the renormalization procedure to obtain a finite vacuum energy associated to the system.
\section{The renormalized vacuum expectation value of the energy operator}
In order to obtain the renormalized vacuum energy, let us use the analytic extension technique.
Before continuing, we would like to point out that a very common procedure to study the divergent contribution of the vacuum energy given by Eq. (\ref{fun}) is to use the
heat-kernel expansion in some asymptotic limit. This is a good procedure for usual operators, such as the Laplacian, where a logarithmically divergent mass term
$\ln(4\mu^{2}/m^{2})$ appears. Here $\mu$ is a parameter with mass dimension that we have to introduce in order to perform analytic regularizations.
Since in our problem
we are using an unknown operator, this procedure cannot be implemented. In Appendix B we show that the renormalized vacuum energy is
identified with the constant term in the asymptotic expansion of the regularized vacuum energy. To proceed,
let us assume that $a^{2}m^{2}<\gamma_{1}$. In other words, the square of the ratio of the length of the slab-bag in the
$z$ direction to the Compton length $l_{c}=m^{-1}$ is smaller than
the imaginary part of the first non-trivial zero.
It is important to stress that in general it is not necessary to impose such a condition. Here we need this assumption to make
use of the generalization of the binomial expansion.
For any complex coefficient $\alpha$ and $|x|<1$ we may write
\begin{equation}
(1+x)^{\alpha}= \sum_{n=0}^{N}C_{\alpha}^{n}\, x^n,
\end{equation}
where $C_{\alpha}^{n}$ are the generalizations of the binomial coefficients given by $C_{\alpha}^{n}=
\frac{\Gamma(\alpha+1)}{\Gamma(n+1)\Gamma(\alpha-n +1)}$. For odd dimensional
space-times we get that the sum has a finite number of terms, i.e., $N=d/2$, and for even dimensional space-times we have $N\rightarrow\infty$.
Let us define the function $g(k,d)$ as
\begin{equation}
g(d,k)=\frac{f(d)\Gamma(\frac{d}{2}+1)}{\Gamma(k+1)\Gamma\bigr(\frac{d}{2}-k+1\bigl)}.
\label{g}
\end{equation}
The zero-point energy per unit area can be written as
\begin{eqnarray}
\varepsilon_{d+1}(a,m)=&&\frac{1}{a^{d}}\sum_{k=0}^{N(d)}g(d,k)\Gamma(-d/2)(m a)^{2k}
\times\nonumber\\
&&\sum_{n =1}^{\infty}(\gamma_{n})^{\frac{d}{2}-k}.
\end{eqnarray}
Let us define the super-zeta or secondary zeta function built over the Riemann zeros, i.e.,
the nontrivial zeros of the Riemann zeta function. Let $s$ be a complex variable, i.e., $s=\sigma+i\tau$ with $\sigma,\tau \in \mathbb{R}$.
The super-zeta $G_{\gamma}(s)$ is defined as
\begin{equation}
G_{\gamma}(s)=\sum_{n =1}^{\infty}\frac{1}{\gamma_{n}^{s}}, \,\,\,\textrm{Re}(s)>1,
\label{zetazero}
\end{equation}
where we are assuming that $\gamma_{n}>0$.
The analytic continuation of
the super-zetas has been discussed in the literature ~\cite{gui,delsarte,cha1,cha2,ivic,voros1}.
In the Appendix A, we present the analytic extension for $G_{\gamma}(s)$ assuming the Riemann hypothesis and following the Ref.~\cite{delsarte}.
A more detailed study for the zeta function for the Riemann zeros
can be found in Ref.~\cite{superzeta}.
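For $\textrm{Re}(s)>1$ the series of Eq.~(\ref{zetazero}) converges and can be explored numerically. A minimal sketch (again assuming \texttt{mpmath}; the truncation after 200 zeros is arbitrary, and since $\gamma_{n}$ grows only slightly faster than linearly the omitted tail is not negligible):
\begin{verbatim}
import mpmath
s = 2
G = sum(mpmath.zetazero(n).imag**(-s) for n in range(1, 201))
print(G)   # partial sum of G_gamma(2) over the first 200 zeros
\end{verbatim}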
Let us study first the even dimensional space-time case. By using the definition of $g(d,k)$ of Eq. (\ref{g}) we define the function
$h(d,k)=g(d,k)\Gamma(-d/2)$.
The zero-point energy per unit area can be written as
\begin{eqnarray}
\varepsilon_{d+1}(a,m)=\frac{1}{a^d}\sum_{k=0}^{\infty}h(d,k)(ma)^{2k}G_{\gamma}\bigr(k-d/2\bigl).
\label{even-energy}
\end{eqnarray}
By the use of the analytic extension of the super-zeta $G_{\gamma}(s)$ we obtain that the regularized
vacuum energy of the system is finite in any even dimensional space-time.
The zero-point energy per unit area $\varepsilon_{d+1}(a)$ in the odd dimensional space-time case can be written as
\begin{eqnarray}
\varepsilon_{d+1}(a,m)=&&\frac{1}{a^d}\sum_{k=0}^{d/2}g(d,k)(ma)^{2k}\times\nonumber\\
&&
\Gamma(-d/2)\,G_{\gamma}\bigr(k-d/2\bigl).
\end{eqnarray}
The analytic continuation of $G_{\gamma}(s)$ has simple poles for $(4n+3)$ dimensional space-time, for $n=0,1,2,..$. Since the gamma function has poles
for negative integers, we conclude that to find a finite vacuum energy in odd-dimensional space-time, we have to introduce
mass counterterms.
The gamma function $\Gamma(z)$ is
a meromorphic function of a complex variable $z$ with simple poles
at the points $z=0,-1,-2..\,$. In the neighborhood of any of its poles $z=-n$ for $n=0,1,2..$ the gamma function has a representation given by
\begin{equation}
\Gamma(z)=\frac{(-1)^{n}}{n!(z+n)}+\Omega(z+n)
\end{equation}
where $\Omega(z+n)$ stands for the regular part of the analytic extension. The same construction can be used for the super-zeta function.
In the neighborhood of any of its poles $z=-(2m+1)$ for $m=0,1,2...$ the super-zeta function has a representation given by
\begin{equation}
G_{\gamma}(z)=\frac{\alpha_{1}}{z+(2m+1)}+\Phi\bigl(z+(2m+1)\bigr),
\end{equation}
where $\Phi(z+(2m+1))$ is the regular part of the analytic extension and $\alpha_1$ is a constant.
The regularized vacuum energy has
second order and first order poles for $(4n+3)$-dimensional space-times with $n=0,1,2..\,$, only first order poles for the remaining odd-dimensional space-times,
and a computable finite part, which is identified with the renormalized vacuum energy. The term $k=0$ does not depend on the mass.
Therefore in order to make the energy density per unit area finite in odd dimensional space-time we introduce first mass counterterms (associated to this $k=0$ term)
proportional to second order and first order poles. The other terms $k> 0$ which depend on the mass give us first order poles.
To render the density energy per unit area finite we introduce also mass counterterms proportional to these first order poles.
This problem has been discussed by Kay \cite{kay}. This author
shows that the analytic regularization procedure does not yield a finite result automatically in the case of massive fields.
Let us discuss this renormalization procedure in a three-dimensional space-time. In this case we have
\begin{eqnarray}
\varepsilon_{3}(a,m)&=&\frac{1}{a^2}\sum_{k=0}^{1}g(2,k)(ma)^{2k}\Gamma(-1+\epsilon)\times\nonumber\\
&& G_{\gamma}\bigr(k-1+\epsilon\,\bigl).
\end{eqnarray}
The mass counterterm $\delta m^2$ in a generic odd-dimensional space-time has dimension $[\delta m^{2}]=\frac{1}{a^{d+1}}\mu^{1-d}$,
where we introduced a mass parameter $\mu$. Therefore we get
\begin{eqnarray}
\delta m^2=&&-\frac{1}{8\pi\,a^3\mu}\biggr(\frac{1}{\epsilon^2}+\frac{1}{\epsilon}(\Omega_1 +\Phi_1)\biggl)+\nonumber\\
&&-\frac{1}{8\pi\,a\mu}\frac{1}{\epsilon}m^2 G_{\gamma}(0),
\end{eqnarray}
where $\Omega_{1}$ is the regular part of the analytic extension for the gamma function around the pole in $z=-1$ and $\Phi_1$ is
the regular part of the analytic extension for the super-zeta
function around the pole in $z=-1$.
The mass correction $\Delta m^2$ is given by
\begin{eqnarray}
\Delta m^2(a,\mu)&=&\frac{1}{8\pi}\biggl(\frac{1}{a^3\mu}\Omega_1\Phi_1+
\frac{1}{a\mu}m^2\Omega_1G_{\gamma}(0)\biggr),
\end{eqnarray}
where $\mu$ is a mass parameter that has to be introduced in order to perform the analytic regularization.
We can define a renormalized square mass, the Riemann mass, given by
\begin{eqnarray}
m_{R}^{2}=m^{2}+\Delta m^2(a,\mu).
\end{eqnarray}
It is interesting to stress that the mass generation occurs only
in odd dimensional space-time, and that the effect vanishes when the translational invariance is recovered. This mechanism is quite similar to the topological
generation of mass \cite{yo,ford}, although in that case the translational invariance is maintained.
\section{Conclusions}
"Do you know a physical reason that the Riemann hypothesis should be true?"(E. Landau). Our knowledge of number theory and related areas is not powerful enough to
prove it, until now. The aim of this paper is to investigate the consequences
in the quantum field theory framework of accepting the truth of this mathematical proposition.
From Hadamard's theory it is possible to define analytic functions by their zeros and singularities. In the case of the Riemann zeta function, it is possible
to represent it as an Euler product and as a Hadamard product. This shows a reciprocity between the set of prime numbers and the nontrivial zeros of the zeta function.
Nevertheless these two sequences of numbers have totally distinct behavior with respect to being the spectrum of a
linear operator associated to a system with countable infinite number of degrees of freedom.
In two recent papers it was investigated whether number-theoretical sequences can be associated with the spectrum of some
hypothetical linear operator of physical systems with an infinite number of degrees of freedom.
In the first one it
was shown that the sequence of prime numbers is not zeta
regularizable~\cite{ga}, therefore this sequence of numbers
cannot be the spectrum of a linear operator described by functional integrals. In the second one \cite{gn2} it was shown that it is possible to extend the
Hilbert-P\'olya conjecture to systems with a countably infinite number of degrees of freedom described by functional integrals.
In this paper, the renormalized zero-point energy of a massive scalar field with the Riemann zeros in the spectrum of the vacuum modes is presented. Using analytic and dimensional regularization,
for even dimensional space-time, we show that the series that defines the regularized energy density is
finite. For odd dimensional space-time the analytic regularization procedure does not produce finite results because the coefficients of the series are divergent.
We concluded that in order to renormalize the vacuum energy one is forced to introduce mass counterterms in the
interaction Lagrangian for any odd dimensional space-time.
Finally two comments are in order. First, the renormalized vacuum energy is identified with the constant term in the asymptotic expansion of the regularized vacuum energy.
For the massless case it is possible to show, using an exponential ultraviolet cut-off~\cite{kay,ruggiero,ss,nami1,nami2,bene}, that the finite part of the regularized vacuum energy
agrees with the renormalized energy density obtained by the analytic regularization procedure.
Second, the Casimir energy for the case of fermionic fields was investigated in this configuration,
based on the MIT bag model \cite{mit1,mit2}. The generalization for massive fermions was presented by Elizalde et al \cite{el}.
See also Ref.~\cite{oi}. The study of fermionic fields where the vacuum modes contain in the spectrum the nontrivial zeros of the Riemann zeta function
is under investigation by the authors.
\section{Acknowledgements}
We would like to thank Martin Makler, Benar Svaiter, Jorge Stephany Ruiz and Gabriel Menezes for useful discussions.
This paper was supported by Conselho Nacional de Desenvolvimento
Cientifico e Tecnol{\'o}gico do Brazil (CNPq).
\begin{appendix}
\makeatletter \@addtoreset{equation}{section} \makeatother
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\section{The analytic extension of the super-zeta function $G_{\gamma}(s)$}
In this appendix we present the analytic extension of one of the super-zeta functions.
We are following the Ref.~\cite{delsarte}. Using the definition given by Eq.~(\ref{zetazero}) we get
\begin{equation}
\Gamma\biggl(\frac{s}{2}\biggr)\pi^{-\frac{s}{2}}G_{\gamma}(s)=\int_{0}^{\infty}\,dx\,
x^{\frac{s}{2}-1}\sum_{\gamma>0}\,e^{-\pi\gamma^{2}x}.
\label{zetazero2}
\end{equation}
Let us split the integral that appears in Eq.~(\ref{zetazero2}) in the intervals $[0,1]$ and $[1,\infty)$, and define the functions
\begin{equation}
A(s)=\int_{0}^{1}\,dx\, x^{\frac{s}{2}-1}\sum_{\gamma>0}\,e^{-\pi\gamma^{2}x}
\label{zetazero3}
\end{equation}
and
\begin{equation}
B(s)=\int_{1}^{\infty}\,dx\, x^{\frac{s}{2}-1}\sum_{\gamma>0}\,e^{-\pi\gamma^{2}x}.
\label{zetazero4}
\end{equation}
Note that $B(s)$ is an entire function. To proceed let us use that
\begin{eqnarray}
\sum_{\gamma>0}\,e^{-\pi\gamma^{2}x} = &&-\frac{1}{2\pi\sqrt{x}}\sum_{n=2}^{\infty}
\frac{\Lambda(n)}{\sqrt{n}}\,e^{-\frac{(\ln n)^{2}}{4\pi x}}+\,e^{\frac{\pi x}{4}}\nonumber\\
&&-\frac{1}{2\pi}\int_{0}^{\infty}dt\,e^{-\pi x t^{2}}\Psi(t),
\label{zetazero55}
\end{eqnarray}
where the function $\Psi(t)$ is given by
\begin{equation}
\Psi(t)=\frac{\zeta'(\frac{1}{2}+i t)}{\zeta(\frac{1}{2}+i t)}+\frac{\zeta'(\frac{1}{2}-i t)}{\zeta(\frac{1}{2}-i t)}.
\label{zetazero5}
\end{equation}
Substituting Eq.~(\ref{zetazero55}) in~(\ref{zetazero3}) we get that the $A$-function can be written as
\begin{equation}
A(s)=A_{1}(s)+A_{2}(s)+A_{3}(s),
\label{zetazero6}
\end{equation}
where
\begin{equation}
A_{1}(s)=-\frac{1}{2\pi}\int_{0}^{1}\,dx\,x^{\frac{s}{2}-\frac{3}{2}}\biggl(\sum_{n=2}^{\infty}
\frac{\Lambda(n)}{\sqrt{n}}\,e^{-\frac{(\ln n)^{2}}{4\pi x}}\biggr),
\label{zetazero7}
\end{equation}
\begin{equation}
A_{2}(s)=\int_{0}^{1}\,dx\,x^{\frac{s}{2}-1}\,e^{\frac{\pi x}{4}}
\label{zetazero8}
\end{equation}
and finally
\begin{equation}
A_{3}(s)=-\frac{1}{2\pi}\int_{0}^{1}\,dx\,x^{\frac{s}{2}-1}\,\biggl(\int_{0}^{\infty}\,dt\,e^{-\pi x t^{2}}\Psi(t)\biggr).
\label{zetazero9}
\end{equation}
Changing variables in $A_{1}(s)$, i.e., $x\rightarrow 1/x$, we get
\begin{equation}
A_{1}(s)=-\frac{1}{2\pi}\int_{1}^{\infty}\,dx\,x^{-\frac{s}{2}-\frac{1}{2}}
\biggl(\sum_{n=2}^{\infty}\frac{\Lambda(n)}{\sqrt{n}}\,e^{-\frac{x(\ln n)^{2}}{4\pi}}\biggr).
\label{zetazero10}
\end{equation}
It is clear that $A_{1}(s)$ is an entire function of $s$. Let us define $\Phi(s)$ as
\begin{equation}
\Phi(s)=A_{1}(s)+B(s).
\label{zetazero11}
\end{equation}
Using Eqs.~(\ref{zetazero4}),~(\ref{zetazero6}),~(\ref{zetazero8}),~(\ref{zetazero9}) and~(\ref{zetazero11}) we can write expression~(\ref{zetazero2}) as
\begin{equation}
\Gamma\biggl(\frac{s}{2}\biggr)\pi^{-\frac{s}{2}}G_{\gamma}(s)=\Phi(s)+A_{2}(s)+A_{3}(s).
\label{zetazero12}
\end{equation}
Since $\Phi(s)$ is an entire function and we have the integrals that define $A_{2}(s)$ and $A_{3}(s)$, the above formula is
the analytic extension of the secondary zeta function.
The function
$G_{\gamma}(s)$ is a meromorphic function of $s$ in the whole complex plane, with a double pole at
$s=1$ and simple poles at $s=-1,-3,..,-(2n+1),..\,$. Therefore $(s-1)^{2}G_{\gamma}(s)(\Gamma(s))^{-1}$ is an entire function.
\section{Asymptotic expansion of the regularized vacuum energy}
Since the renormalized vacuum energy is identified with the constant term in the asymptotic expansion of the regularized vacuum energy,
the aim of this appendix is to discuss the divergent and the finite contributions to the vacuum energy using the cut-off method. For simplicity we use $m=0$.
To proceed, the angular part of the $(d-1)$-dimensional integral of Eq.~(\ref{13}) can be easily calculated with the aid of the result
\begin{equation}
\int d^{d-1}k=\frac{2\pi^\frac{d-1}{2}}{\Gamma(\frac{d-1}{2})}\int_{0}^{\infty}\,dr\,r^{d-2},
\end{equation}
where $r = \sqrt{k_1^2 + \cdots + k_{d-1}^2}$. Defining the function $F(d)$ as
\begin{equation}
F(d)= \frac{1}{(2\sqrt{\pi})^{d-1}}\,\frac{1}{\Gamma(\frac{d-1}{2})},
\end{equation}
one can rewrite the vacuum energy per unit area as
\begin{equation}
\varepsilon_{d+1}(a) = F(d)\sum_{n=1}^{\infty}\int_{0}^{\infty}\,dr\,r^{d-2}\left[r^2 + k_d^2(n,a)\right]^{1/2}.
\end{equation}
Using $k_d(n,a) = \sqrt{\gamma_n}/a$ and simple algebraic manipulations, we can write the vacuum energy per unit area $\varepsilon_{d+1}(a)$ as
\begin{equation}
\varepsilon_{d+1}(a) = \frac{F(d)}{2a^{d}}\int_{0}^{\infty}dv\,v^{(d-3)/2}\sum_{n=1}^{\infty}\left(v+\gamma_{n}\right)^{1/2}.
\end{equation}
As usual, let us introduce an ultraviolet cut-off given by
\begin{equation}
e^{-\lambda\left(v+\gamma_{n}\right)^{1/2}},\,\,\,\textrm{Re}(\lambda) > 0.
\end{equation}
The regularized vacuum energy per unit area $\varepsilon_{d+1}(a,\lambda)$ is then given by
\begin{eqnarray}
\varepsilon_{d+1}(a,\lambda) = &&\frac{F(d)}{2a^{d}}\int_{0}^{\infty}dv\,v^{(d-3)/2}\sum_{n=1}^{\infty}
\left(v+\gamma_{n}\right)^{1/2}\,\nonumber\\
&& \times e^{-\lambda\left(v+\gamma_{n}\right)^{1/2}}.
\end{eqnarray}
After some algebra one gets
\begin{eqnarray}
\varepsilon_{d+1}(a,\lambda) = && \frac{F(d)}{a^{d}}
\sum_{n=1}^{\infty}
\int_{\sqrt{\gamma_n}}^{\infty} du\, u^{d-1}\left(1-\frac{\gamma_n}{u^2}\right)^{(d-3)/2}\nonumber\\
&& \times e^{-\lambda u}.
\end{eqnarray}
Here we use again the generalization of the binomial expansion for any real coefficient $\alpha$ and $|x|<1$.
For even dimensional
space-times we get that the sum has a finite number of terms, i.e., $N=\frac{d-3}{2}$, and for odd dimensional space-times the sum has infinitely many terms.
From section III, we know that only for even dimensional space-time, the renormalized vacuum energy is finite, without
the necessity of introducing counterterms.
Let us discuss the even dimensional case. After some algebra we can write the asymptotic expansion in the form
\begin{eqnarray}
\varepsilon_{d+1}(a,\lambda) = && \frac{1}{a^d}\sum_{k=0}^{\frac{d-3}{2}}g_1(d,k)
\sum_{n=1}^{\infty}(-1)^{k}\gamma_{n}^{k}\times\nonumber\\
&&\int_{\sqrt{\gamma_n}}^{\infty} du\, u^{(d-2k)-1}\,e^{-\lambda u},
\end{eqnarray}
where $g_1(d,k)=\frac{4\sqrt\pi\,F(d)}{\Gamma(k+1)\Gamma\left(\frac{d-1-2k}{2}\right)}$.
In order to proceed, let us use the definition of the incomplete gamma function $\Gamma(a,z)$ \cite{abram}
\begin{equation}
\Gamma(a,z)=\int_{z}^{\infty}t^{a-1}\,e^{-t}\,dt,
\end{equation}
which has the following series expansion \cite{grads,ederlyi}
\begin{eqnarray}
\Gamma(a,z)\,&&=\,\Gamma(a)\left[1-z^a\,e^{-z}\sum_{l=0}^{\infty}\frac{z^l}{\Gamma(a+l+1)}\right], \nonumber\\
&& a\neq 0,-1,-2...
\end{eqnarray}
Defining $h_1(d,k)=g_1(d,k)\Gamma(d-2k)$, the energy density may therefore be written as
\begin{eqnarray}
&&\varepsilon_{d+1}(a,\lambda)=\frac{1}{a^d}\sum_{k=0}^{\frac{d-3}{2}}h_1(d,k)\sum_{n=1}^{\infty}
\frac{(-1)^{k}\gamma_n^{k}}{\lambda^{d-2k}}\times\nonumber\\
&&\biggl[1-(\lambda\sqrt{\gamma_n})^{d-2k}
\sum_{l=0}^{\infty}\frac{\,e^{-\lambda\sqrt{\gamma_n}}(\lambda\sqrt{\gamma_n})^l}{\Gamma(d-2k+l+1)}\biggr].
\end{eqnarray}
Let us study the polar part of the regularized energy density. We get
\begin{eqnarray}
&&\varepsilon_{d+1}(a,\lambda)=\frac{1}{a^d}\sum_{k=0}^{\frac{d-3}{2}}(-1)^{k}h_1(d,k) \nonumber\\
&&\times \sum_{n=1}^{\infty}\biggl[
\frac{\gamma_{n}^{k}}{\lambda^{d-2k}} - \frac{\gamma_n^{\frac{d}{2}}-\lambda\gamma_{n}^{\frac{d+1}{2}}}{\Gamma(d-2k+1)}
+...\biggr]
\end{eqnarray}
where a term of order $\lambda^2$ was neglected. Note that the cut-off regularization is not enough to obtain a finite result for the vacuum energy.
We also need to apply an analytic regularization procedure.
To see this let us investigate the particular case of $d=3$, which implies $k=0$. We have that $\varepsilon_{4}(a,\lambda)$ can be written as
\begin{equation}
\varepsilon_{4}(a,\lambda)=-\frac{1}{\pi a^3}\left[\frac{2}{\lambda^3}G_{\gamma}(0) -\frac{1}{3}G_{\gamma}(-3/2) +
\frac{5\lambda}{12}\,G_{\gamma}(-2)\right].
\end{equation}
The first term on the right hand side of the above equation is divergent when the cut-off is removed. The second term is rendered finite by the analytic regularization procedure
and coincides with the energy density obtained for the system in Section IV. Since
$G_{\gamma}(-2)$ is regular, the third term goes to zero when the cut-off is removed.
The finite part gives us the correct renormalized energy density. It expresses the fact that the
renormalized vacuum energy is identified with the constant term in the asymptotic expansion of the regularized vacuum energy \cite{ful}.
\end{appendix}
\section{Introduction}
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{groJ1008-57_gamma_uncertainties}
\caption{Distribution of the relative uncertainties (90\% confidence level)
of the power-law photon indices, $\Gamma$, in our study of all
\textsl{RXTE}-observations of GRO~J1008$-$57 \citep{kuehnel2013a}. Fitting
all 43 observations separately results in the green histogram. As soon as we
perform a simultaneous fit with $\mathcal{P}=1$ global continuum parameter
the uncertainties decrease significantly as shown by the red histogram.
Finally, using $\mathcal{P}=2$ global parameters (blue histogram) results in
a median of ${\sim}6\%$ in the uncertainties (compare the arrows on top).}
\label{fig:1008gamma}
\end{figure}
Nowadays, the still increasing computation speed and available memory allow
us to analyze large datasets at the same time. Using X-ray spectra of
accreting neutron stars as an example, we have shown in a previous paper
\citep[][hereafter paper I]{kuehnel2015a} that loading and fitting the spectra
simultaneously has several advantages compared to the ``classical'' way of
X-ray data analysis, which is treating every observation individually. In
particular, instead of fixing parameters to a mean value one can determine
them by a joint fit to all datasets under consideration. Due to the
reduced number of degrees of freedom the remaining parameters can be better
constrained (see Fig.~\ref{fig:1008gamma} as an example). Furthermore,
parameters no longer need to be independent, but can be combined into
functions. For instance, the slope of the spectra might be described as a
function of flux with the coefficients of this function as fit-parameters.
The disadvantages of fitting many datasets simultaneously are, however, an
increased runtime and a complex handling because of the large number of
parameters. In paper I, we have introduced functions to facilitate this
handling, which have been implemented into the \texttt{Interactive Spectral
Interpretation System} (ISIS) \citep{houck2000a}. While these functions are
already available as part of the
\texttt{ISISscripts}\footnote{http://www.sternwarte.uni-erlangen.de/isis} they
are continuously updated and new features are implemented. One important
question, which we raised in paper I, is about the goodness of a
simultaneous fit as it is, e.g., calculated after the commonly used
$\chi^2$-statistics, particularly the case where some datasets are
not described well by the chosen model. Due to the potential large total
number of datasets, the information about failed fits can be
hidden in the applied fit-statistics. After we have given a reminder
about the terminology of simultaneous fits in Section~\ref{sec:reminder},
we describe the problem of detecting failed fits in more detail in
Section~\ref{sec:solution} and provide possible solutions. We will
conclude this paper by applying these solutions to examples in
Section~\ref{sec:examples}.
\subsection{Simultaneous Fits}
\label{sec:reminder}
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{terminology}
\caption{Terminology of simultaneous fits in ISIS, according to paper I. A
data-group consists of simultaneously taken datasets, here data-group A has $n$
and B has $m$ datasets. A model with $p+\mathcal{P}$ parameters is fitted
such that each data-group has its own set of $p_i$ parameters, called group
parameters. The common $\mathcal{P}$ parameters between the groups are the
so-called global parameters.}
\label{fig:terminology}
\end{figure}
As we have described in paper I, a \textit{data-group} contains all datasets
which have been taken simultaneously in time or, in general, represent the
same state of the observed object. In the example illustrated in
Fig.~\ref{fig:terminology} two data-groups, A and B, have been added to the
simultaneous fit, containing $n$ and $m$ datasets, respectively. Thus, a
dataset is labeled by the data-group it belongs to, e.g., B3 is the third
dataset in the second data-group. After a model with $p$ parameters has been
defined each of these data-groups is fitted by an individual set of parameters,
called \textit{group parameters}. Consequently, all datasets belonging to a
specific data-group are described by the same parameter values. A specific group
parameter can now be marked as a so-called \textit{global parameter}. The
value of the corresponding group parameters will now be tied to this global
parameter, i.e., this parameter has a common value among all data-groups.
Instead of tying group parameters together to a global value, a parameter
function may be defined, to which the group parameters are set instead. This
function takes, e.g., other parameters as input to calculate the value for
each group parameter. In this case, correlations between model parameters,
e.g., as predicted by theory, can be implemented and fitted directly.
\section{Goodness of a simultaneous fit}
\label{sec:solution}
As an indicator for the goodness of a fit, the analysis software used,
e.g., ISIS or XSPEC \citep{arnaud1996a}, usually displays the fit-statistics
after the model has been fitted to the data. Here, we chose
$\chi^2$-statistics, since the functions developed for simultaneous fits were
first applied to accreting neutron stars. The high count rates satisfy
the Gaussian approximation of the uncertainties, which are actually Poisson
distributed. In principle, however, the discussed issues and their solutions
can be generalized for any kind of fit-statistics.
For each datapoint $k$ the difference between the $\mathrm{data}$ and the
$\mathrm{model}$ is calculated and normalized by the measurement uncertainty,
$\mathrm{error}$, of the data. The sum over all $n$ datapoints is called the
$\chi$-square,
\begin{equation}\label{eq:chisqr}
\chi^2 = \sum_{k=1}^n \frac{(\mathrm{data}_k -
\mathrm{model}_k)^2}{\mathrm{error}_k^2}
\end{equation}
and is displayed after a fit. Additionally, the sum is normalized to the
total number of degrees of freedom, $n-p$ with the number of free
fit-parameters $p$, since the $\chi^2$ increases with $n$. This normalized sum,
called the reduced $\chi$-square,
\begin{equation}\label{eq:redchisqr}
\chi^2_\mathrm{red} = \frac{\chi^2}{n-p}
\end{equation}
is also displayed. For Gaussian distributed data the expected value
is $\chi^2_\mathrm{red}=1$ for a perfect fit of the chosen model. However,
once the probability distribution is changed, e.g., when a
spectrum has been rebinned, the expected value changes as well. Consequently,
a reliable measure for the goodness of the fit has to be defined with some
forethought.
The $\chi^2_\mathrm{red}$ threshold, for which a simultaneous fit is
acceptable, strongly depends on the considered case. In particular, a few
data-groups might not be described well by the chosen model, which would
result in an unacceptable $\chi^2_\mathrm{red}$ when fitted individually.
However, in case of a simultaneous fit, this information might be hidden in
the classical definition of the $\chi^2_\mathrm{red}$ (Eq.~\ref{eq:redchisqr}).
Let us consider $N$ data-groups and a model with $p$
group parameters and $\mathcal{P}$ global parameters. Then, the total
$\chi^2_\mathrm{red}$ is
\begin{equation}\label{eq:redchisqr:simfit}
\chi^2_\mathrm{red} = \frac{\sum_{i=1}^N \chi^2_i}{\sum_{i=1}^N (n_i -
p_i) - \mathcal{P}}
\end{equation}
with the number of degrees of freedom, $n_i - p_i$, and the $\chi^2_i$ for
each data-group $i$ after Eq.~\ref{eq:chisqr}. Now, we assume a failed fit
with $\chi^2_{\mathrm{red},i} \sim 2$ for a particular data-group $i$ to be present, while for the
remaining data-groups $\chi^2_{\mathrm{red},i} \sim 1$. For $N \gtrsim 10$ the
$\chi^2_\mathrm{red}$ after Eq.~\ref{eq:redchisqr:simfit} is still near unity
and, thus, suggests a successful simultaneous fit. In the
following, we present three possibilities to investigate the
goodness of a simultaneous fit more carefully.
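The dilution effect can be illustrated by a small toy example with synthetic numbers; the following minimal Python sketch is ours and not the actual ISIS implementation, and it evaluates the total $\chi^2_\mathrm{red}$ of Eq.~\ref{eq:redchisqr:simfit}:
\begin{verbatim}
# N-1 well fitted data-groups and one failed fit: the failed
# group is hidden in the total reduced chi^2.
N, dof, P = 20, 50, 2             # groups, dof per group, globals
chi2 = [1.0 * dof] * (N - 1) + [2.0 * dof]
print(sum(chi2) / (N * dof - P))  # ~1.05, looks acceptable
\end{verbatim}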
\subsection{Histogram of the goodness}
\label{sec:histogram}
A trivial but effective solution is to check the goodness of the fit for
each data-group individually. Here, in the chosen case of the
$\chi^2$-statistics, the $\chi^2_\mathrm{red,i}$ is calculated for each
data-group, $i$, after
\begin{equation}\label{eq:groupchi}
\chi^2_\mathrm{red,i} = \frac{\chi^2_i}{n_i - p_i}
\end{equation}
where $n_i$ are the number of datapoints in the data-group, and $p_i$ is the
number of free group parameters. Because the global parameters are
not taken into account here, the $\chi^2_\mathrm{red,i}$ is, however,
different from the value obtained by a single fit of the data-group alone.
In the case of a large number of data-groups, it is more convenient to sort the
$\chi^2_\mathrm{red,i}$ into a histogram, which helps in investigating the goodness
of the fit to all data-groups. We have added such a histogram to the
simultaneous fit functions as part of the \texttt{ISISscripts}. After a fit
has been performed using the fit-functions \texttt{fit\_groups} or
\texttt{fit\_global} (see paper I) this histogram is added to the default
output of the fit-statistics. In this way, failed fits of specific data-groups
can be identified by the user at first glance.
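In a script-based analysis the same diagnostic can be produced in a few lines (a minimal sketch continuing the toy example above; \texttt{numpy} is assumed, whereas in ISIS the histogram is generated by the \texttt{ISISscripts} fit-functions themselves):
\begin{verbatim}
import numpy as np
chi2_red = np.array(chi2) / dof   # per-group values, Eq. (4)
counts, edges = np.histogram(chi2_red,
                             bins=np.arange(0.5, 2.75, 0.25))
print(counts)   # the single outlier near 2 is immediately visible
\end{verbatim}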
\subsection{A combined goodness of the fit}
\label{sec:newstatistic}
Instead of a few failed fits to certain data-groups, one might ask if the
chosen model fails in the global context of a simultaneous fit. To
answer this question, a special goodness of the simultaneous fit is needed to
take its logical structure into account. As explained in
Section~\ref{sec:reminder}, a data-group represents a certain state of the
observed object, e.g., the datasets where taken at the same time. Thus, the
data-groups are statistically independent of each other. Calculating the
goodness of the fit in a traditional way, which is the $\chi^2_\mathrm{red}$
after Eq.~\ref{eq:redchisqr:simfit} in our case, does not, however, take
this aspect into account. As a solution we propose to define a
\textit{combined goodness of the fit} calculating the weighted mean of the
individual goodness of each data-group. In the case of $\chi^2$-statistics, a
combined reduced $\chi^2$ is calculated by
\begin{equation}\label{eq:combredchisqr}
\chi^2_\mathrm{red,comb.} = \frac{1}{N} \sum_{i=1}^N \frac{\chi_i^2}{n_i
- p_i - \mu_i \mathcal{P}}
\end{equation}
with $\chi_i^2$ computed after Eq.~\ref{eq:chisqr} for each data-group, $i$,
and a weighting factor, $\mu_i$, for the number of global parameters,
$\mathcal{P}$:
\begin{equation}\label{eq:mu}
\mu_i \approx \frac{n_i - p_i}{\sum_{j = 1}^N (n_j - p_j)}
\end{equation}
Thus, $\mu_i$ normalizes the effect of data-group $i$ on the determination of
the global parameters, $\mathcal{P}$, by its number of degrees of freedom
relative to the total number of degrees of freedom of the simultaneous fit.
Equation~\ref{eq:mu} is, however, an approximation only. A data-group might not
be sensitive to a certain global parameter, e.g., if the spectra in this
data-group do not cover the energy range necessary to determine the parameter.
A failed fit to a specific data-group, for example with a high individual
$\chi^2_\mathrm{red,i}$, has a higher impact on the $\chi^2_\mathrm{red,comb.}$
(Eq.~\ref{eq:combredchisqr}) than on the traditional
$\chi^2_\mathrm{red}$ (Eq.~\ref{eq:redchisqr}). In general we expect
$\chi^2_\mathrm{red,comb.} \geq \chi^2_\mathrm{red}$, even if all data-groups
are fitted well. In the case of a good simultaneous fit (better than a certain
threshold), a weak feature in the data might still be unnoticed, if it is not
detected in any individual data-group. Such a feature can be investigated by
stacking the residuals, as outlined in the following section.
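The difference to Eq.~\ref{eq:redchisqr:simfit} becomes explicit for unequal numbers of degrees of freedom. A minimal sketch with synthetic numbers (one small, badly fitted group among well fitted large ones):
\begin{verbatim}
dofs = [200]*19 + [20]            # 19 large groups, 1 small group
chi2r = [1.0]*19 + [3.0]          # the small group is fitted badly
P = 2
chi2 = [c*d for c, d in zip(chi2r, dofs)]
classical = sum(chi2) / (sum(dofs) - P)   # ~1.01, looks fine
mu = [d / sum(dofs) for d in dofs]        # dof-weighted shares
comb = sum(c / (d - m*P)
           for c, d, m in zip(chi2, dofs, mu)) / len(dofs)
print(classical, comb)                    # comb ~ 1.10
\end{verbatim}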
We note, however, that Eq.~\ref{eq:combredchisqr} is the result of an
empirical study. A more sophisticated goodness of a simultaneous fit
should be based on a different type of fit-statistics suitable for a
simultaneous analysis of many datasets, such as a Bayesian approach or a
joint likelihood formalism similar to \citet{anderson2015a}.
\subsection{Stacked residuals}
\label{sec:summedresiduals}
Once datasets can be technically stacked to achieve a higher
signal-to-noise ratio, e.g., when spectra have the same energy grid and
channel binning, further weak features might become visible. This is a common
technique in astrophysics \citep[see, e.g.,][]{ricci2001a,bulbul2014a}.
However, when stacked datasets are analyzed, differences in the individual
datasets, like source intrinsic variability, can no longer be revealed.
In case of a simultaneous fit, the residuals of all data-groups can be
stacked instead. The stacking dramatically increases the total exposure in
each channel bin. Thus, the stacked residuals of all data-groups,
$R(k)$, as a function of the energy bin, $k$, can be investigated to further
verify the goodness of the simultaneous fit
\begin{equation}\label{eq:sumresiduals}
R(k) = \sum_{i=1}^N \mathrm{data}_{i,k} - \mathrm{model}_{i,k}
\end{equation}
This task can be achieved using, e.g., the \texttt{plot\_data}
function\footnote{\url{http://space.mit.edu/home/mnowak/isis\_vs\_xspec/plots.html},
which is available through the \texttt{ISISscripts} as well.} written by
M.~A.~Nowak.
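The stacking itself reduces to a sum over the group index on a common channel grid, as in Eq.~\ref{eq:sumresiduals}. A minimal sketch with synthetic Poisson data (\texttt{numpy} assumed; all array names are ours):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)
model = np.full((20, 64), 100.0)   # (N groups, n channels)
data = rng.poisson(model).astype(float)
R = (data - model).sum(axis=0)     # stacked residuals per channel
\end{verbatim}
A weak feature that is too faint to be seen in any individual group can become significant in $R(k)$, because the effective exposure grows with the number of stacked groups.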
We can show that the combined reduced $\chi^2$ is effectively equal
to the goodness of a fit of the stacked data in the first place. Assuming the
same number of degrees of freedom, $n-p$, for each data-group,
Eq.~\ref{eq:combredchisqr} gives
\begin{equation}\label{eq:chisqrequal}
\chi^2_\mathrm{red,comb.} = \frac{1}{f} \sum_{i=1}^N
\sum_{k=1}^{n} \frac{(\mathrm{data}_{i,k} -
\mathrm{model}_{i,k})^2}{\mathrm{error}_{i,k}^2}
\end{equation}
with $f = N (n-p-\mu\mathcal{P})$ and having used Eq.~\ref{eq:chisqr}. Now,
the prefactor no longer depends on $i$ or $k$ explicitly. Thus, the order of the
sums in Eq.~\ref{eq:chisqrequal} may be switched. If we finally interpret $k$
as a spectral energy bin we end up with the goodness as a function of $k$:
\begin{align}
\begin{split}
\chi^2_\mathrm{red,comb.}(k) &\propto \sum_{i=1}^N
\frac{(\mathrm{data}_{i,k} -
\mathrm{model}_{i,k})^2}{\mathrm{error}_{i,k}^2} \\
&\propto \sum_{i=1}^N \mathrm{data}_{i,k}^2
\end{split}
\end{align}
This means that all datasets of the simultaneous fit are first summed up for
each energy bin in the combined reduced $\chi^2$. In contrast to
stacking the data in the first place, however, source variability can still
be taken into account during a simultaneous fit.
Note that once all data-groups have the same number of degrees of
freedom, the $\chi^2_\mathrm{red,comb.}$ (Eq.~\ref{eq:chisqrequal}) is equal
to the classical $\chi^2_\mathrm{red}$ (Eq.~\ref{eq:redchisqr}). To further
investigate the goodness of the simultaneous fit in such a case, the histogram
of the goodness of all data-groups (see Sec.~\ref{sec:histogram}) and, if
possible, the stacked residuals should be investigated.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{groJ1008-57_combined_xte_residuals}
\caption{The stacked spectra (a) and stacked residuals of all
individual data-groups (b) containing 43 \textsl{RXTE}-spectra of
GRO~J1008$-$57 (blue: PCA; red: HEXTE). Residual features are left in PCA,
which are caused by calibration uncertainties. These features are not
detected in that detail in the residuals of the single spectrum with the
highest signal (c).} \label{fig:1008}
\end{figure}
\section{Examples}
\label{sec:examples}
\subsection{GRO J1008$-$57}
\label{sec:example:gro1008}
The Be X-ray binary GRO J1008$-$57 was regularly monitored by \textsl{RXTE}
during outbursts in 2007 December, 2005 February, and 2011 April with a few
additional pointings by \textsl{Suzaku} and \textsl{Swift}. A detailed
analysis of the spectra has been published in \citet{kuehnel2013a} and in
paper I we have demonstrated, as an example, the advantages of a simultaneous
fit based on these data (see also Fig.~\ref{fig:1008gamma}). The
$\chi^2_\mathrm{red}$ of 1.10 with 3651 degrees of freedom \citep[see Table~4
of][]{kuehnel2013a} calculated after Eq.~\ref{eq:redchisqr:simfit} indicates a
good fit of the underlying model to the data. Using the combined reduced
$\chi^2$ defined in Eq.~\ref{eq:combredchisqr} we find, however,
$\chi^2_\mathrm{red,comb.} = 1.68$. The reason for this significant worsening
of the goodness are calibration uncertainties in \textsl{RXTE}-PCA, which
are visible in the stacked residuals of all 43 data-groups as shown in
Fig.~\ref{fig:1008}: the strong residuals below 7\,keV are probably caused by
insufficient modeling of the Xe L-edges, the absorption feature at 10\,keV by
the Be/Cu collimator, and the sharp features around 30\,keV by the Xe K-edge
\citep[for a description of the PCA see][]{jahoda2006a}. These calibration
issues have been detected in a combined analysis of the Crab pulsar by
\citet{garcia2014a} as well. However, the calibration issues, which
are responsible for the high $\chi^2_\mathrm{red,comb.}$, do not affect the
continuum model of GRO~J1008$-$57 because of their low significance in the
individual data-groups. These calibration features might have an influence,
however, on data with a much higher signal-to-noise ratio than the datasets
used here, or once narrow features, such as emission lines, are studied.
\subsection{Vela X-1}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{velaX-1_XMM_iron_complex}
\caption{The iron line region of Vela X-1 can be nicely studied in this
stacked residuals of all 88 \textsl{XMM-Newton}-spectra (a). The model
includes the continuum shape only and, thus, does not take any fluorescent
line emission into account. The residuals of the single spectrum with the
highest signal show the K$\alpha$ emission line only (b). Note that
the residual flux in this line is $\sim$15 times lower compared to the
stacked residuals.}
\label{fig:vela}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{velaX-1_XMM_redchisqr}
\caption{Example for a histogram of the goodness of the fits. Here, the
distribution of the $\chi^2_\mathrm{red}$ of all individual data-groups
of the 88 \textsl{XMM-Newton}-spectra of Vela~X-1 are shown.}
\label{fig:velachi}
\end{figure}
Another excellent example for a simultaneous fit was performed by
\citet{martineznunez2014a}. These authors have analyzed 88 spectra recorded by
\textsl{XMM-Newton} during a giant flare of Vela~X-1. Although the continuum
parameters were changing dramatically within the ${\sim}100$\,ks observation a
single model consisting of three absorbed power-laws is able to describe the
data with a $\chi^2_\mathrm{red} = 1.43$ with 9765 degrees of freedom
\citep{martineznunez2014a}. Due to a global photon index for all power-laws
and data-groups the absorption column densities and iron line fluxes could be
constrained well.
Because every data-group is a single spectrum taken by the \textsl{XMM}-EPIC-PN
camera and a common energy grid was used, the $\chi^2_\mathrm{red,comb.}$
equals the $\chi^2_\mathrm{red}$ here. Thus, it is preferred to calculate the
stacked residuals of all data-groups according to Eq.~\ref{eq:sumresiduals}.
To demonstrate the advantage of this tool we have used the continuum model
only, i.e. without any fluorescence line taken into account, and evaluated
this model without any channel grouping to achieve the highest possible energy
resolution. The resulting stacked residuals in the iron line region
(5--9\,keV) are shown in Fig.~\ref{fig:vela}. The iron K$\alpha$ line at
${\sim}6.4$\,keV and the K$\beta$ line at ${\sim}7.1$\,keV are nicely
resolved. The tail following the K$\beta$ line is probably caused by a slight
mismatch of the continuum model with the data and requires a more detailed
analysis. Note that the flux of this mismatch is a few
$10^{-3}$\,photons\,s$^{-1}$\,cm$^{-2}$, which is detectable only in these
stacked residuals featuring 100\,ks exposure time and after the strong
continuum variability on ks-timescales has been subtracted.
As a further demonstrative example, the histogram of the goodness of the
fits of the 88 data-groups calculated after Eq.~\ref{eq:groupchi} is shown
in Fig.~\ref{fig:velachi}. The median $\chi^2_\mathrm{red}$ is around 1.3,
indicating that the model still could be improved slightly. Investigating
the three outliers with $\chi^2_\mathrm{red} > 2$ indeed proves that the
residuals around in the iron line region are left, which are responsible
for the high $\chi^2_\mathrm{red}$ and very similar to those shown in
Fig.~\ref{fig:vela}. There are no, however, extended residuals visible.
Thus, the continuum parameters presented in \citet{martineznunez2014a} are
still valid.
\subsection{XTE J1859+083}
\begin{figure}
\centering
\includegraphics[width=.985\linewidth]{xteJ1859p083_combined_xrt_residuals}
\caption{Seven stacked spectra of \textsl{Swift}-observations of
XTE~J1859+083 (a) and the residuals to the model (b). A weak
iron K$\alpha$ emission line at 6.4\,keV is visible, which is not detected
in the residuals of any individual spectrum (c).}
\label{fig:xte}
\end{figure}
The last example shown in this work is the outburst of the transient pulsar
XTE~J1859+083 in 2015 April. This source had been in quiescence since its bright
outburst in 1996/1997 \citep{corbet2009a}. During the recent outburst several
short observations by \textsl{Swift} were performed. A first analysis of these
data in combination with \textsl{INTEGRAL} spectra reports an absorbed
power-law shape of the source's X-ray continuum \citep{malyshev2015a}. We have
extracted and analyzed seven \textsl{Swift}-XRT spectra and can confirm these
findings. However, after examining the stacked residuals of all spectra an
iron K$\alpha$ emission line at 6.4\,keV shows up that has not been detected
before (see Fig.~\ref{fig:xte}). We define the equivalent width of this line
as a global parameter and find a value of $60 \pm 40$\,eV (uncertainty is at
the 90\% confidence level, $\chi^2_\mathrm{red,comb.} = 0.98$ with 585 degrees
of freedom).
\section{Summary}
We have continued developing functions to handle simultaneous fits in ISIS,
which we have introduced in paper~I. In particular, we have concentrated on
tools for checking the goodness of the fits to discover failed fits
of individual data-groups or global discrepancies of the model. We
propose to
\begin{itemize}
\item investigate the distribution of the goodness of fits to all
individual data-groups
\item calculate a combined goodness, here the
$\bf \chi^2_\mathrm{\bf red,comb.}$, which takes the individual nature of
each data-group into account
\item look at the stacked residuals of all data-groups to reveal weak
features
\end{itemize}
during a simultaneous fit in order to find the global best-fit. We
have demonstrated the tremendous benefit of analyzing the stacked
residuals by observations of three accreting neutron stars, in which we
could identify weak features that had not been detected before.
\begin{acknowledgements}
M.~K\"uhnel was supported by the Bundesministerium f\"ur Wirtschaft und
Technologie under Deutsches Zentrum f\"ur Luft- und Raumfahrt grants 50OR1113
and 50OR1207. The \texttt{SLxfig} module, developed by John E. Davis, was used
to produce all figures shown in this paper. We are thankful for the
constructive and critical comments by the reviewers, which were helpful to
significantly improve the quality of the paper.
\end{acknowledgements}
\renewcommand{\bibsep}{0pt}
\subsection{The apparent spin-synchronism of CoRoT-Exo-4b and $\tau$-Boo}
We now turn to consider the planet CoRot-Exo-4b (\citealt{Aigrain2008},
\citealt{Moutou2008}), which is a transiting $m_{2} = 0.72 M_{J}$
planet in a $P = 9.2$ d circular orbit around a late F-type star
of mass $m_{1} = 1.16
M_{\odot}$. The star is rotating with
a period of $P_{\star} = 8.8$ d, and similarly to $\tau$-Boo
\citep{Donati2008}, appears to be synchronised
at a mid-latitude, i.e., the stellar rotation is consistent with being
synchronised plus differential rotation in the convection zone. The
age of system is estimated to be in the range $\tau_{\star} =
(0.7-2.0)$ Gyr.
Since this is a late F-type star with similar properties to XO-3,
a similar tidal $Q^{\prime}\geq 10^{10}$ is
expected. Simple estimates of tidal evolution timescales (assuming $Q^{\prime}_{1} =
10^{10}$) give $\tau_{\Omega} \sim 5 \times 10^{15}$ yrs for the whole
star, which makes the tidal spin-synchronism hypothesis
implausible. Indeed, even with optimistic assumptions about the
moment of inertia of the outer convective envelope with
$\epsilon_{\star}\sim10^{-3}$, and with core-envelope decoupling assumed,
tides are not likely to be strong enough
to counteract the magnetic braking spin--down, and
$\tau_{\Omega}$ far exceeds the age of the universe, as we have confirmed
with numerical integrations.
On the other hand, as \cite{Aigrain2008} discuss,
a rotation period of $9$ d is close to the peak of
the rotation period distribution for F-type stars, both during the
T-Tauri phase, and in the field. The current
magnetic braking timescale based on simple
exponential decay is $\tau_{\mathrm{mb}} \sim 10$ Gyr (assuming weak
braking for an F--star with $\gamma=0.1$), so if the system
formed in this state, then its spin will likely not have changed
significantly since the star started on the main sequence. Depending on the initial
spin period after the dissipation of the protoplanetary disc and the
evolution of the star onto the main sequence, magnetic braking alone
may explain its current spin period. Hence, the synchronous state of
this system is likely just a coincidence.
It would be interesting to see observations of the RM effect
for this system, since this would further test the outer envelope
synchronism hypothesis, as well as potentially constrain the tidal $Q^{\prime}$ of the
star. In particular, if tides have indeed synchronised the outer envelope
with the orbit, then we would also expect the orbit to have
aligned with the spin of the star through tidal friction.
There are several other similar F--stars with close-in massive planets:
Wasp-14 \citep{Joshi2008}, $\tau$-Boo \citep{Donati2008},
XO-4 \citep{McCullough2008}, Wasp-3 \citep{Wasp32008}, HAT-P-7
\citep{HATP72008} and HD 147506 \citep{HATP22007}.
Like CoRoT-Exo-4 b, $\tau$-Boo appears to
have $P\simeq P_{\star}$. $\tau$-Boo has a massive $m_{2} = 3.9
M_{J}$ planet on a $P = 3.3$ d orbit
which is synchronised with the stellar rotation at intermediate
latitudes. For this system, simple estimates of the tidal
spin-synchronisation timescale suggest that tides are
unlikely to have synchronised the whole star with the orbit \citep{Donati2008},
especially if the stellar $Q^{\prime}$ is as large as $10^{10}$. On
the other hand, if we assume only the outer envelope is spun up, and
$\epsilon_{\star}$ is as small as $\sim10^{-3}$, then even with
$Q^{\prime} \sim 10^{10}$, $\tau_{\Omega, env}\leq \tau_{\star}$.
It is thus possible to synchronise the outer
envelope within $\tau_{\star}$ in this simple model. But if this is
the case, since this star is very similar to XO-3,
we would return to the problem of explaining why the orbit of XO-3 b has not
also aligned. This is because we would have to reduce
$\tau_{i}$ by a factor of $\epsilon_{\star}$, since if we assume core-envelope
decoupling can occur for such stars, the outer envelope could align with the orbit
separately from the rest of the star. For XO-3, we can effectively rule
this out, unless we invoke a much higher $Q^{\prime}$ than even
$10^{10}$.
Clearly much more work needs to be done to study the
possibility of core-envelope
decoupling in such stars, since this has implications for the survival
and remnant inclinations of close-in planets, as well as for the
efficiency of tidal dissipation.
An example of
such a star is OGLE-TR-L9 \citep{Snellen2008}, which is a rapidly rotating
F3 star hosting a close-in massive planetary companion. No limits on
the orbital eccentricity or spin-orbit alignment have been reported
for this system, but they would be especially interesting. This is
because, if tides are as weak as we predict, and no other
mechanism is more efficient at damping tides than turbulent viscosity
in the outer convection zone, then tidal dissipation in the
star will not have changed the orbit or spin properties of the
system (even though the planet will be undergoing tidal inspiral if $i
> 38^{\circ}$, since then $\Omega \cos i < n$, i.e.\ $P_{\star} > P \cos i$). Excluding dissipation in the planet, this would mean that if
the planet has been scattered or Kozai migrated into an eccentric,
inclined orbit, then we would be able to observe the direct outcome of these
processes, enabling us to constrain these theories. On the other
hand, if the planet is found to be on a circular, aligned orbit, which
favours the gas-disc migration hypothesis, then tides are unlikely to
have been able to halt the inward migration of the planet. This might
indicate that other processes could have prevented the migration, such
as star-planet magnetic interactions \citep{Laine2008},
or that the gas in the disc dissipated before the planet
had migrated to the star, or that type II migration rates should be
reduced. We would very much like to encourage RM effect
and further radial velocity observations of this system.
In addition, the semi-major axis distribution
of the close--in planets does not appear to support the hypothesis that
these planets do indeed have very short inspiral times, and are just the next to
fall into their stars as many have before, since there is not a
similar distribution of planets further out.
\section{Introduction}
Since the discovery of the first extrasolar planet around a solar--type
star \citep{MQ1995}, observers have now detected more than 300
planets\footnote{see http://exoplanet.eu/ for the latest updates}
around stars outside the solar system. Many of these planets have
roughly Jovian masses and orbit their host stars with semi-major
axes less than $0.1$ AU, the so--called ``Hot Jupiters'' (HJs). In both
of the giant planet formation scenarios (see
\citealt{Papaloizou2006} for a comprehensive review),
core accretion and gravitational instability, it is difficult to produce
HJs in situ. Close--in
planets are likely to form in colder regions of the protoplanetary disc, much further out ($a \sim $
several AU), before a migratory process brings the planet in
towards the star and to its present location \citep{LBR1996}.
The formation of systems of giant planets can be thought of as
occurring in two oversimplified stages
\citep{JT2008}. During stage 1 the cores of the giant planets are formed, they accrete
gas and undergo migration, driven by the dynamical interaction between the
planets and the gaseous protoplanetary disc (see \citealt{PapProt2007} for a
recent review). This stage lasts a few Myr until the gas dissipates, by which
time a population of gas giants may exist. If these form sufficiently
closely packed then stage 2 follows (some evidence in
favour of such packing in multiple--planet systems is given by \citealt{BarnesGreenberg2007}).
This stage lasts from when the disc has dissipated until the
present, and primarily involves gravitational interactions and
collisions between the planets. Recent studies of stage 2
(\citealt{JT2008}; \citealt{Chatterjee2008}; \citealt{FordRasio2008}) have shown that this is a chaotic era, in
which planet--planet scatterings force the
ejection of all but a few ($\sim 2-3$) planets from the system, in a
period of large--scale dynamical instability lasting $\la
10^{8}$yr. This mechanism can excite the eccentricities of the planets
to levels required to explain observations.
Planet--planet scatterings tend also to excite the inclinations of the
planets with respect to the initial symmetry plane of the system
\citep{JT2008}, though this has been found to be less efficient than the excitation of eccentricity. This
potentially leads to observable
consequences via the Rossiter--McLaughlin (RM) effect
(\citealt{Rossiter1924}; \citealt{Mclaughlin1924};
\citealt{Winn2005}).
The RM effect is a spectral distortion of the radial velocity
data that results from the planet occulting part of
the rotating surface of the star as it transits the stellar
disc. High--precision radial velocity data during a
transit allow a determination of the angle ($\lambda$) between the
sky--projected angular momentum vectors of the planetary orbit ($\mathbf{h}$)
and stellar spin ($\mathbf{\Omega}$), through the RM effect. This measured value
$\lambda$ is not necessarily the same as the inclination (or stellar
obliquity) $i$, which is the angle between the equatorial plane of the star and
orbital plane of the planet (defined by
$\cos i = \hat{\mathbf{\Omega}} \cdot \hat{\mathbf{h}}$), since $\lambda$ is
just a sky--projection of this angle. Nevertheless, $\lambda$ gives a
lower bound on the angle between these two vectors, and they are related by
$\cos i = \cos I_{\star} \cos I_{p} + \sin I_{\star} \sin I_{p} \cos
\lambda$, where $I_{p}$ and $I_{\star}$ are the angles of inclination of the planetary
orbital plane and stellar equatorial plane, to the plane of the sky. For a transit,
the orbit must be close to edge--on, so
$I_{p}\sim 90^{\circ}$, giving $\cos i \simeq \sin I_{\star}\cos \lambda$.
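As an aside, this relation is easy to evaluate numerically; the following
Python sketch (with assumed example angles) recovers $i$ from $\lambda$,
$I_{\star}$ and $I_{p}$:
\begin{verbatim}
# Sketch: true spin-orbit angle i from the projected angle lambda and
# the sky-plane inclinations I_star and I_p (degrees; illustrative).
import numpy as np

def true_obliquity(lam_deg, I_star_deg, I_p_deg=90.0):
    lam, Is, Ip = np.radians([lam_deg, I_star_deg, I_p_deg])
    cos_i = np.cos(Is)*np.cos(Ip) + np.sin(Is)*np.sin(Ip)*np.cos(lam)
    return np.degrees(np.arccos(cos_i))

# transiting planet (I_p ~ 90 deg) seen equator-on (I_star = 90 deg):
print(true_obliquity(70.0, 90.0))   # i = lambda = 70 deg
\end{verbatim}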
The RM effect has now been used to measure the degree of
spin--orbit alignment in 11 systems \citep{Winn2008}, though this number is
expected to grow rapidly over the next few years. These systems are
currently all consistent with $\lambda$ being
zero, with the exception of XO-3 b \citep{Hebrard2008}, which is
discussed in \S \ref{XO-3} below.
It is possible for HJs orbiting a host star which has a distant and inclined
stellar companion, or massive inclined outer planetary companion, to undergo another
type of migration. This is Kozai migration \citep{WuMurray2003}.
The presence of such an outer companion to an exoplanet host star could
cause Kozai oscillations, which produce periods of extreme
eccentricity in the planetary orbit, if various conditions are satisfied
(e.g. see section 1.2 of \citealt{Fabrycky2007}). The subsequent tidal
dissipation that occurs during the periods of small pericentre distance leads to
gradual inward migration of the planet. It has
even been proposed that a combination of planet--planet scattering, tidal
circularisation and the Kozai mechanism using outer planets, can produce
HJs around single stars \citep{Nagasawa2008}. HJs produced from these processes
generally have their orbital angular momentum vector misaligned with respect
to the stellar spin axis by large angles -- occasionally larger than $90^{\circ}$
(\citealt{Fabrycky2007}; \citealt{Nagasawa2008}).
Misaligned orbits are not predicted from stage 1
alone, so if $\lambda$ is measured
to be appreciably nonzero in enough systems, then it could be seen as
evidence for planet--planet
scattering or Kozai migration. This is because
gas--disc migration does not seem able to excite orbital inclination
(\citealt{LubowOgilvie2001}; \citealt{Cresswell2007}). Alternatively,
if observed planets are all found with
$\lambda$ consistent with zero, this could rule out planet--planet scattering
or Kozai migration as being
of any importance.
One important consideration is that at such close
proximity to their parent stars, strong tidal interactions between the
star and planet are expected to cause significant long--term spin--orbit
evolution, including changes to the value of
$\lambda$ (actually the true spin--orbit
misalignment angle $i$) over time. If tides can change
$\lambda$ since the time of formation, then we may have difficulty in
distinguishing migration caused by planet--planet scattering and Kozai
oscillations, from gas--disc migration. The tidal evolution of such
inclined orbits must therefore be considered an important goal in planetary evolution
studies. In this paper we approach the problem of studying the effects of
tidal friction on such inclined orbits.
\section{Previous work on tidal evolution relevant to close--in planets}
There has been much recent interest into the subject of tidal evolution, and our
work can be considered to follow on from some of the more recent
developments in the field. \cite{WS2002} studied the
effects of stellar spin--down on
tidal evolution. They developed a sophisticated model including the
effects of stellar evolution, resonance
locking and magnetic braking. They studied
dynamical tides in
solar--type stars, and found that the spin-down due to magnetic braking
causes resonance locking to become more intense, which leads to faster
orbital decay of a planet.
\cite{DobbsLin2004} also studied
the effects of stellar spin--down, though they considered equilibrium
tides, and we adopt the same model of tidal friction.
They studied tidal eccentricity evolution, and
proposed an explanation for the
coexistence of both circular and eccentric orbits for the planets in the
period range 7-21 days, the so--called borderline planets, as
the result of the variation in spin--down rates of young
stars. Planets with orbital periods less than 6 days are sufficiently
close to their host stars for tidal dissipation in the planet to be mostly
able to account for their negligible
eccentricities. Those with periods longer than 21 days are
negligibly affected by tides and have been observed with
a range of eccentricities, so the borderline planets
are referred to as such since they lie in the crossover
between these two regimes. Both of these works highlight the
importance of studying the effects of stellar
spin--down on tidal evolution.
Recently \cite{Pont2008} discussed empirical evidence for tidal
spin--up of exoplanet host stars, and found indications
of such a process occurring in the present sample of transiting
planets. He also
proposed that the mass--period relation
of close--in planets could be
accounted for by tidal transfer of angular momentum from the orbit to
the spin of the star. We must note, however, that
angular momentum losses from the system due to magnetic braking are
likely to change this picture considerably. In particular, the
attainment of a spin--orbit synchronous state becomes questionable when
angular momentum is being removed from the system -- this is
discussed in \S \ref{phaseplaneanalysis} below. We note that
his conclusions also depend on assumptions regarding the stellar $Q^{\prime}$ -- see \S
\ref{introtide}.
\cite{JacksonI2008} found, by considering the coupled evolution of the
eccentricity and semi-major axis, that simple circularisation
timescales may not accurately represent the
true evolution. In addition, they found that it is inaccurate to neglect the
combined effects of the stellar and planetary tides in computing the
evolution.
Here we aim to study the accuracy of the simple
timescale estimates for tidal evolution, when coupled evolution of the
orbital and rotational elements is considered, in a more general model
of the long--term effects of tidal friction than \cite{JacksonI2008}
consider. We include stellar spin--down, and study its effects in a simplified
system. In particular, we investigate the tidal evolution of inclination,
since this has not been done in previous studies.
\section{Model of Tidal Friction adopted}
\subsection{General introduction to tidal friction}
\label{introtide}
The tidal interaction between two orbiting bodies acts to continually
change the orbital and rotational system parameters, and continually
dissipates energy. Ultimately -- in
the absence of angular momentum loss from the system --
either an equilibrium state is asymptotically approached, or the two
bodies spiral towards each other at an accelerating rate, and
eventually collide. The equilibrium state is characterised by
coplanarity (the equatorial planes of the bodies coincide with the
orbital plane), circularity and corotation (rotational frequencies of each
body match the orbital frequency) \citep{Hut1980}.
The efficiency of tidal dissipation in a body is often parametrised by a
dimensionless quality factor $Q$, which reflects the fact that the body
undergoes a forced oscillation and dissipates a small fraction of the
associated energy during each oscillation period. This is analogous to
the quality factor in a forced, damped harmonic oscillator
\citep{MurrayDermott1999}, and is defined by
\begin{eqnarray*}Q = 2\pi E_{0} \left(\oint -\dot E \,\mathrm{d}t\right)^{-1}, \end{eqnarray*}
where $E_{0}$ is the maximum energy stored in an oscillation and the
integral represents the energy dissipated over one cycle. This is
related to the time lag $\tau$ in the response of the body to tidal forcing
of frequency $\hat \omega$ by $Q^{-1}=\hat \omega \tau$, when $Q \gg 1$. We find it
convenient to define $Q^{\prime} = \frac{3Q}{2k}$,
where $k$ is the second--order potential Love number of the body,
since $Q$ and $k$ always
appear together in this combination in the evolutionary equations. $Q^{\prime}$ reduces to
$Q$ for a homogeneous fluid body, where $k=\frac{3}{2}$.
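As a trivial numerical illustration of these definitions (the example
values are assumed, not derived here):
\begin{verbatim}
# Sketch: Q' = 3Q/(2k) and the constant-Q lag time tau = 1/(Q*omega_hat).
def Q_prime(Q, k):
    return 3.0 * Q / (2.0 * k)

def lag_time(Q, omega_hat):
    return 1.0 / (Q * omega_hat)   # valid for Q >> 1

print(Q_prime(1e6, 1.5))    # homogeneous fluid body: Q' = Q
print(Q_prime(1e6, 0.38))   # Jupiter-like k ~ 0.38: Q' ~ 3.9e6
\end{verbatim}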
The problem of determining the efficiency of tidal dissipation, and therefore
quantifying the evolution of the system, amounts to calculating $Q^{\prime}$
factors for each body. $Q^{\prime}$ is a function of the
tidal frequency (and possibly the
amplitude of the tidal disturbance), being the
result of complex dissipative processes in each body
\citep{Zahn2008}. For rotating fluid bodies,
such as giant planets and stars, calculations of the excitation
and dissipation of internal waves have
indicated that $Q^{\prime}$ varies in a complicated way with the tidal frequency (see
\citealt{Savonije1995}; \citealt{Savonije1997};
\citealt{Papaloizou1997}; \citealt{Gio2004}, hereafter OL04;
\citealt{Gio2007}, hereafter OL07). These
calculations rely on a variety of approximations and involve uncertainties,
particularly regarding the interaction of the waves with convection.
Typically assumed values of $Q^{\prime} \sim 10^{6}$ for stars
are roughly consistent with observational data regarding the circularisation
periods of binary stars (OL07). In addition, the magnitude of $Q$
for HJs is often assumed to be similar to that for Jupiter, which has
been inferred to be in the range $6\times 10^{4} - 2\times 10^{6}$
\citep{YoderPeale1981}. This estimate is based on a model
of the tidal origin of the Laplace resonance among the Galilean satellites;
however, it has been argued that even if the origin of the resonance is
primordial, the average $Q$ cannot be far from these bounds \citep{Peale2002},
giving $Q^{\prime} \sim 10^{6}$ (since $k \simeq 0.38$ for Jupiter).
This estimate also appears consistent with the work of
\cite{JacksonI2008}, who found that one can
reproduce the outer planet ($a>0.2\,\mathrm{AU}$) eccentricity
distribution from integrating the tidal evolution equations backwards
in time for the observed close--in planets ($a < 0.2$ AU) quite well
if $Q \sim 10^{5.5}$ for stars and $Q \sim 10^{6.5}$ for HJs. However,
this stellar $Q^{\prime}$ is difficult to reconcile with the existence of the planets
on the tightest orbits, such as WASP-12 b \citep{Hebb2008} and OGLE-TR-56
b \citep{Sasselov2003}, since it would imply that the inspiral time for these
planets would be much less than the age of the system. That several planets
have been found with similarly short periods makes this seem unlikely on
probabilistic grounds. Additionally, \cite{JacksonI2008} assume
that $Q^{\prime}$ is the same for all exoplanet host stars; this may
be an oversimplification, as is discussed in \S \ref{FstarsQ} below.
It would seem more plausible that the relevant
stellar $Q^{\prime}$ is in fact higher than this, and this is supported by the
theoretical work of OL07. They also propose that the efficiency of tidal dissipation in solar--type stars may be
different when the orbiting companion is a $\sim M_{\odot}$ star
in a close binary from when it is a close--in gas giant
planet. This could be a result of differences in the tidal and spin frequencies in the
two situations, which may allow the excitation of inertial waves in the
convective envelope of the star in the former case but usually not in the
latter. It could also be a result of the
excitation of
internal inertia--gravity modes at the boundary between the convective and radiative
zones, and their resulting dissipation in the radiative core of
solar--type stars. If these waves are of small amplitude, then they are
unlikely to achieve sufficient nonlinearity to prevent coherent
reflection from the centre of the star, hence they form global modes,
and only weakly contribute to the dissipation \citep{GoodmanDickson1998}.
If these waves achieve sufficient nonlinearity, then they could overturn
the local entropy stratification. This may result in wave breaking,
which could dissipate energy in the wave, or in incoherent reflection of the waves
from the centre of the star, preventing the formation
of global modes, and enhancing the dissipation. Simple estimates of
when these waves become nonlinear
indicate that the waves excited by HJs marginally achieve
sufficient nonlinearity to
disrupt the reflection of the waves. This is in contrast to the close binary
circularisation problem, in which these waves are likely always to
achieve sufficient nonlinearity. This may result in much higher $Q^{\prime}$
values relevant to the survival of the very close--in planets,
potentially explaining
the discrepancy between the circularisation of binary stars and the
survival of the very close--in planets.
\subsection{Model adopted -- see Appendix \ref{Appendix} for details}
In light
of the uncertainties involved in calculating $Q^{\prime}$, and the difficulty
of calculating the evolution when $Q^{\prime}$ is a complicated function of $\hat \omega$, we adopt a
simplified model, based on a frequency--independent lag time --
though see discussion in Appendix \ref{Appendix}. We adopt the model of
\cite{Eggleton1998}, which is based on the equilibrium tide model of
\cite{Hut1981}. In this formulation, we calculate the evolution of the
specific angular momentum of the planetary orbit \begin{eqnarray*}\mathbf{h} = \mathbf{r} \times \dot{\mathbf{r}} = n a^{2}
\sqrt{1-e^{2}} \; \hat{\mathbf{h}},\end{eqnarray*} together
with its eccentricity vector $\mathbf{e}$, and the stellar and planetary
spin vectors $\mathbf{\Omega}_{1}$ and $\mathbf{\Omega}_{2}$. The eccentricity vector
has the magnitude of the eccentricity, and points in the direction of
periastron, and is defined by
\begin{eqnarray*} \mathbf{e} = \frac{\dot{\mathbf{r}} \times \mathbf{h}}{Gm_{12}} - \hat{\mathbf{
r}},
\end{eqnarray*} where $m_{12}=m_{1}+m_{2}$ is the sum of the stellar
and planetary masses.
Both $\mathbf{h}$ and $\mathbf{e}$ are conserved for
an unperturbed Keplerian orbit; therefore under weak external
perturbations their components vary slowly compared with the orbital
period. This allows averaging of the effects of the tidal perturbation over
a Keplerian orbit, resulting in a set of secular evolution equations
for the rotational and orbital elements.
This formulation is
beneficial because it can treat arbitrary orbital eccentricities
and stellar and planetary obliquities, unlike other models which are
only valid to a given order in the eccentricity, or for small (or
zero) orbital inclinations (\citealt{GoldSot1966};
\citealt{JacksonI2008}; \citealt{Hut1981}). Using the secular
evolution equations allows us to perform integrations quickly that
represent dynamical evolution over billions of years. The full set of equations
is presented in Appendix \ref{Appendix} in a form which is straightforward to
numerically integrate. These
equations have been written in such a way as to
eliminate references to the basis vectors chosen in their
representation, since the eccentricity basis vector is undefined for a
circular orbit. In this
form, the equations are regular at $e=0$, unlike those in
\cite{Eggleton2001} and \cite{ML2002}.
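As a self-contained numerical illustration of these definitions (a sketch
only; the test orbit and SI units are assumed for the example):
\begin{verbatim}
# Sketch: specific angular momentum h and eccentricity vector e from an
# instantaneous position r and velocity v.
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def orbital_vectors(r, v, m12):
    h = np.cross(r, v)                               # h = r x rdot
    e = np.cross(v, h) / (G * m12) - r / np.linalg.norm(r)
    return h, e

# test: a circular orbit should give |e| ~ 0
m12 = 2.0e30                                         # ~ stellar mass, kg
r = np.array([1.5e10, 0.0, 0.0])                     # ~ 0.1 AU, m
v = np.array([0.0, np.sqrt(G * m12 / 1.5e10), 0.0])  # circular speed
h, e = orbital_vectors(r, v, m12)
print(np.linalg.norm(e))                             # ~ 0
\end{verbatim}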
\section{Magnetic braking}
Observations of solar--type stars have shown that the mean
stellar rotational velocity decreases with time
\citep{Skumanich1972},
following the relation $\Omega \propto t^{-1/2}$, where $t$ is
the main--sequence age.
This is the empirical Skumanich relation, and can be
interpreted as telling us that solar--type stars have been undergoing continuous
spin--down since they first started on the main sequence.
Magnetic braking by a magnetised outflowing wind has long been
recognised as an important mechanism for the removal of angular
momentum from rotating stars \citep{WD1967}, and such a mechanism
seems able to explain most of the
observed stellar spin--down \citep{Barnes2003}.
Although the Skumanich law is well established for stars with
rotational velocities in the range $1-30 \; \mathrm{km}\,\mathrm{s}^{-1}$, it overestimates
the spin--down rates of stars up to $t \sim 10^{8}$ yrs, and thus
cannot explain the presence of fast rotators in the Pleiades \citep{IvanovaTaam2003}.
As a resolution to this problem it has been suggested that the angular
momentum loss rate for high rotation rates could be reduced, as a
result of the saturation of the stellar dynamo
\citep{MacBren1991}, or alternatively due to a reduction in the
number of open field lines in a complex magnetic field topology
\citep{TaamSpruit1989}. These and similar approaches lead
to modified models of the magnetic braking torque for fast rotators,
and several such models have been proposed (e.g.\ \citealt{IvanovaTaam2003};
\citealt{Holzwarth2005}). Nevertheless, \cite{Barnes2003} finds that the
Skumanich relation is remarkably accurate at modelling the spin--down
of Sun--like stars that are not rapid rotators, so to a
first approximation, a magnetic braking torque based on the empirical
Skumanich law is best for our purposes.
Here we include the effects of magnetic braking in
the tidal evolution equations, through the inclusion of
the \cite{VerbuntZwaan1981} braking torque, with the particular
coefficients of \cite{DobbsLin2004}, as follows (see Appendix~\ref{Appendix})
\begin{eqnarray}
\label{MBtorque}
\dot {\bmath\omega}_{\mathrm{mb}} = - \alpha_{\mathrm{mb}} \; \Omega_{1}^{2} \;
\mathbf{\Omega}_{1},
\end{eqnarray}
where $\alpha_{\mathrm{mb}} = 1.5 \times 10^{-14}\gamma$ yrs, and $\gamma$ is a correction
factor that takes the value 0.1 for an F dwarf and
unity for a G or K dwarf. We can also define a magnetic braking
timescale $ \tau_{\mathrm{mb}} \equiv \frac{\Omega_{1}}{\dot \omega_{\mathrm{mb}}} =
\frac{1}{\alpha_{\mathrm{mb}}}\frac{1}{\Omega_{1}^{2}}$,
which is $\sim 10^{10}$ yrs for the Sun.
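As a numerical check on this value (a sketch only; we take $\Omega_{1}$ in
$\mathrm{rad}\,\mathrm{yr}^{-1}$, for which the quoted coefficient
reproduces the solar value):
\begin{verbatim}
# Sketch: braking timescale tau_mb = 1/(alpha_mb * Omega_1^2), with
# Omega_1 in rad/yr and alpha_mb = 1.5e-14*gamma yr.
import numpy as np

def tau_mb_yr(P_spin_days, gamma=1.0):
    Omega = 2.0 * np.pi / (P_spin_days / 365.25)   # rad per yr
    return 1.0 / (1.5e-14 * gamma * Omega**2)

print(tau_mb_yr(26.0))   # Sun (P ~ 26 d): ~ 9e9 yr, i.e. ~ 1e10 yr
\end{verbatim}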
\section{Analysis of the effects of magnetic braking on tidal evolution for a
simplified system}
\label{phaseplaneanalysis}
\subsection{Circular, coplanar orbit with magnetic braking}
We first study the effects of magnetic braking on a simplified system of a
circular, coplanar orbit under the influence of only the
tide that is raised on the star by the planet,
and magnetic braking. We have neglected the tide in the planet
here since the
moment of inertia of the planet is much smaller than that of star and
the orbit (i.e. $I_{2} \ll I_{1} \sim m_{2}a^{2}$), so to a first
aproximation we can neglect the effects of planetary spin; in any case the
planetary spin is expected to synchronize rapidly with the orbit. The following set of
dimensionless equations can be derived from the full set of equations
in Appendix \ref{Appendix}:
\begin{eqnarray}
\label{eqn:ODE1}
\frac{d\tilde \Omega}{d\tilde t} &=&
\tilde n^{4}\left(1-\frac{\tilde \Omega}{\tilde n}\right) -
A \: \tilde \Omega^{3}, \\
\frac{d\tilde n}{d\tilde t} &=& 3 \: \tilde n^{\frac{16}{3}}\left(1-\frac{\tilde \Omega}{\tilde n}\right),
\label{eqn:ODE2}
\end{eqnarray}
where we have normalised the stellar spin frequency $\Omega_{1}$ and
orbital mean motion $n$ to the orbital frequency at the
stellar surface, together with a factor $C^{3/4}$. $C$ is the ratio of
the orbital angular momentum of a mass $m_{2}$ in an orbit with semi-major
axis equal to the stellar radius $R_{1}$, to
the spin angular momentum of an equally rapidly rotating star of radius
$R_{1}$, mass $m_{1}$ and dimensionless radius of gyration
$r_{g1}$. The reduced mass is
$\mu=\frac{m_{1}m_{2}}{m_{1}+m_{2}}$. $C$ is important for
classifying the stability of the equilibrium curve $\tilde \Omega = \tilde
n$ in the absence of magnetic braking, and it can be shown from energy
and angular momentum considerations that this
equilibrium is stable if $\tilde n \leq 3^{-\frac{3}{4}}$ -- equivalent to the
statement that no more than a quarter of the total angular
momentum can be in the form of spin angular momentum for stability \citep{Hut1980}. We have thus defined the
following dimensionless quantities:
\begin{eqnarray*}
\tilde \Omega &=& \Omega_{1}\left(\frac{R_{1}^{3}}{Gm_{12}}
\right)^{\frac{1}{2}}C^{-\frac{3}{4}}, \\
\tilde n &=& n\left(\frac{R_{1}^{3}}{Gm_{12}}
\right)^{\frac{1}{2}}C^{-\frac{3}{4}}, \\
C &=& \frac{\mu R_{1}^{2}}{I_{1}} = \frac{\mu}{r_{g 1}^{2} \: m_{1}}, \\
\tilde t &=& \left(\frac{Gm_{12}}{R_{1}^{3}}\right)^{\frac{1}{2}} \:
\left(\frac{9}{2Q^{\prime}_{1}}\right)\left(\frac{m_{2}}{m_{1}}\right)
\: C^{\frac{13}{4}} \; t, \\
A &=& \alpha_{\mathrm{mb}} \:
\: \left(\frac{Gm_{12}}{R_{1}^{3}}\right)^{\frac{1}{2}} \:
\left(\frac{2 Q^{\prime}_{1}}{9}\right)\left(\frac{m_{1}}{m_{2}}\right)
C^{-\frac{7}{4}}.
\end{eqnarray*}
There is only one parameter ($A$) that completely characterises
the solution in the $(\tilde n,\tilde \Omega)$--plane, and its value
may be estimated as
\begin{eqnarray*}
A \simeq 100 \: \gamma
\left(\frac{Q_{1}^{\prime}}{10^{6}}\right),
\end{eqnarray*} for a Jupiter--mass planet orbiting a Sun--like star undergoing
magnetic braking (with standard $\alpha_{\mathrm{mb}}$ and with $Q^{\prime} = 10^{6}$).
The size of this term shows that in general magnetic braking dominates
the stellar spin evolution. Note that in
the absence of magnetic braking ($A=0$), Eqs.~\ref{eqn:ODE1} and
\ref{eqn:ODE2} do not contain reference to the masses of the star and
planet or to the tidal $Q^{\prime}$ of the star. The parameter $A$, together with the initial conditions ($\tilde
n_{0},\tilde \Omega_{0}$), completely determines the evolution.
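Trajectories such as those shown in Fig.~\ref{fig:withMBplots} can be
reproduced with a few lines of Python; the following sketch (with
illustrative initial conditions and tolerances) integrates
Eqs.~\ref{eqn:ODE1} and \ref{eqn:ODE2}, stopping once the planet reaches
the stellar surface:
\begin{verbatim}
# Sketch: a trajectory in the (n, Omega) plane from the dimensionless
# equations above, with braking parameter A.
import numpy as np
from scipy.integrate import solve_ivp

A = 100.0   # canonical braking for a G/K star with Q' = 1e6

def rhs(t, y):
    Om, n = y
    tide = 1.0 - Om / n
    return [n**4 * tide - A * Om**3,        # d(Omega)/dt
            3.0 * n**(16.0 / 3.0) * tide]   # d(n)/dt

def hit_surface(t, y):        # stop once n ~ 10 (orbit at the star)
    return y[1] - 10.0
hit_surface.terminal = True

sol = solve_ivp(rhs, [0.0, 50.0], [2.0, 0.5], events=hit_surface,
                rtol=1e-8, atol=1e-10)
print(sol.t[-1], sol.y[:, -1])  # dimensionless inspiral time, final state
\end{verbatim}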
\begin{figure}
\centering
\subfigure{\label{Figmb1}\includegraphics
[width=0.495\textwidth]{1}} \\
\subfigure{\label{Figmb2}\includegraphics[width =
0.495\textwidth]{2}}
\caption{($\tilde n,\tilde \Omega$)--plane with $A = 100$
for a HJ orbiting a Sun--like star. The diagonal dashed line in each
plot corresponds to corotation ($\tilde \Omega =\tilde n$).
Top: magnetic braking spins the star down so that the planet finds
itself inside corotation, where the sign of the tidal torque
changes, and the planet is subject to tidally induced orbital decay. For an initially high
$\tilde n$ outside corotation tidal friction efficiently transfers angular momentum from spin to orbit,
which pushes the planet outwards.
Bottom: Solutions with the same initial
conditions are plotted with and without magnetic braking for a HJ around a solar--type star, with
dot-dashed lines having $A=0$ and solid lines have $A = 100$. The dot--dashed lines
are also curves of constant total angular momentum. This shows that the
inclusion of magnetic braking is extremely important in
determining the secular evolution of the system, and its absence
results in a very different evolutionary history unless $\tilde
\Omega \ll \tilde n$ in the initial state.}
\label{fig:withMBplots}
\end{figure}
We plot some solutions on the $(\tilde n,\tilde
\Omega)$--plane in Fig.~\ref{fig:withMBplots} by integrating
Eqs.~\ref{eqn:ODE1} and \ref{eqn:ODE2} for
various initial conditions. Fig.~\ref{fig:withMBplots} shows two phase
portrait plots, which show the general qualitative behaviour of the solutions
to Eqs.~\ref{eqn:ODE1} and \ref{eqn:ODE2}, for a given value of the
parameter $A$. The arrows on each curve show the direction of time
evolution from the initial state. For prograde orbits we
restrict ourselves to studying the region
$0 \leq \tilde n \leq 10$, $0 \leq \tilde \Omega \leq
10$ in the $(\tilde n,\tilde \Omega)$--plane. This is because
$\Omega_{1} = \sqrt{Gm_{12}/R_{1}^{3}}$ corresponds to stellar breakup
velocity and $n \geq \sqrt{Gm_{12}/R_{1}^{3}}$ means that the planet would be orbiting at,
or beneath, the stellar surface. For a HJ with a mass of $M_{J}$ orbiting a star
of mass $M_{\odot}$, $C\sim 0.01$, so $\tilde n \simeq 10$
corresponds to an orbital semi-major axis of $a \simeq 0.01$
AU, and $\tilde n \simeq 0.1$ corresponds to $a \simeq 0.2$
AU, so these plots represent the full range of orbits of the HJs.
The top plot in Fig.~\ref{fig:withMBplots} is for $A = 100$, which
corresponds to canonical magnetic braking for a G/K star ($\gamma=1$)
and $Q_{1}^{\prime}$ of $10^{6}$.
In the absence of magnetic braking ($A=0$) we recover the standard
tidal evolution equations for a coplanar, circular orbit. These have
been well studied in the literature (e.g.\ \citealt{Counselman1973};
\citealt{Hut1981}). These equations have an equilibrium of
coplanarity and corotation i.e.\ $i = 0$ and $\tilde \Omega=\tilde n$,
where the orbital inclination (or stellar obliquity) $i$ is
defined by $\cos i = \hat{\mathbf{\Omega}}_{1} \cdot \hat{\mathbf{h}}$
and $i \geq 0$. The system
will approach this equilibrium if both the spin angular momentum
is less than a quarter of the total angular
momentum, and the total angular momentum
exceeds some critical value (\citealt{Hut1980}; \citealt{Greenberg1974}).
With no braking, orbits initially outside
corotation ($\tilde \Omega>\tilde n$) are not subject to tidally induced orbital decay, and
asymptotically approach a stable equilibrium $\tilde \Omega = \tilde
n$ for $\tilde n \leq 3^{-\frac{3}{4}}$. Orbits initially inside
corotation can evolve in two different ways,
depending on the stability of the equilibrium state on the solution's closest
approach to $\tilde \Omega = \tilde n$. If $\frac{d\tilde \Omega}{d\tilde t} >
\frac{d\tilde n}{d\tilde t} > 0$ near corotation, then $\tilde n \leq 3^{-\frac{3}{4}}$, and the equilibrium
state is locally stable (though no such curves are plotted in
Fig.~\ref{fig:withMBplots}, since they occur only in the far bottom
left of the plot, near the origin). This is when the corotation radius moves inwards faster than the orbit shrinks
due to tidal friction, which can result in a final stable equilibrium
state for the system if the corotation radius ``catches up'' with the
planet. On the other hand, orbits inside corotation for which this condition
is not satisfied are subject to tidally induced orbital decay, since tidally
induced angular momentum exchange enhances the difference between
$\tilde \Omega$ and $\tilde n$, which leads to further orbital evolution, and the
spiralling in of the planet. This evolution can be seen from the
dot--dashed lines in the bottom plot in Fig.~\ref{fig:withMBplots}.
Including magnetic braking ($A \neq 0$) means that $\tilde \Omega = \tilde
n$ is no longer an equilibrium state, and the total angular
momentum of the system is not conserved. For an orbit initially not subject
to spiralling into the star via tidal transfer of angular momentum
from orbit to spin ($\tilde \Omega \geq \tilde n$) we see from the top
of Fig.~\ref{fig:withMBplots}, that magnetic
braking will spin the star down so that the planet finds itself inside
the corotation radius of the star. Passing through corotation changes the sign of the tidal
torque and causes the planet to spiral into the star. Note
that, if we ignore the age of the system, any bound orbit will eventually decay
in a finite time since the system has no stable equilibrium. The
effect of magnetic braking is to increase the minimum semi-major axis at which the orbit is
not subject to tidally induced orbital decay over the nuclear lifetime of the star.
This means that an initially rapidly rotating G--type star hosting a close--in Jupiter
mass companion will lose significant spin angular momentum through magnetic braking (over a
time $\sim \tau_{\mathrm{mb}}$). During this stage of spin-down the spin frequency of the star
may temporarily equal the orbital frequency of its close--in planet, but the rate of angular momentum
loss through magnetic braking will exceed the tidal rate of transfer of
angular momentum from orbit to spin. The stellar spin
continues to drop well below synchronism until the efficiency of
transfer of tidal angular momentum from orbit to spin can compensate
or overcompensate for the braking. If $\frac{d\tilde \Omega}{d\tilde t} >
\frac{d\tilde n}{d\tilde t} > 0$ inside corotation, then tides will
act to spin up the star, though the timescale for this to cause significant spin--up may be
much longer than the stellar lifetime, and this only occurs if the orbit has
sufficient angular momentum to noticeably spin up the star. Otherwise,
the planet continues to spiral inwards once it moves inside
corotation, and $\tilde \Omega
\simeq$ const.
\begin{figure}
\begin{center}
\subfigure{\includegraphics[width =
0.495\textwidth]{{5}}}
\end{center}
\caption{($\tilde n,\tilde \Omega$)--plane showing the effects of reducing the stellar mass fraction
participating in angular momentum exchange with the orbit while the
braking rate is unchanged, as has
been proposed for an F star like $\tau$ Boo. Here $A = 100$ and $\epsilon_{\star} = 10^{-2}$.}
\label{FigmbFstar}
\end{figure}
So far we have considered the whole star to participate in tidal angular momentum
exchange with the orbit. For an F-dwarf (like
$\tau$ Boo), it has been proposed that only the outer convective
envelope (of mass fraction $\sim \epsilon_{\star}$) participates
in angular momentum exchange with the orbit
(\citealt{Marcy1997}; \citealt{DobbsLin2004}; \citealt{Donati2008}). If the core and envelope
of such a star can decouple, then tides would only have to spin up the outer
layers of the star, which would reduce the spin--up time by a factor of $\sim \epsilon_{\star}$. In this case
the system could remain in a state with $\tilde \Omega \simeq \tilde
n$ just inside corotation, with the resulting torque on the orbit
small. This may explain the spin--orbit synchronism of stars such as
$\tau$ Boo, as noted by \cite{DobbsLin2004}. Fig.~\ref{FigmbFstar} shows the phase
plane for a simplified system in which the moment of inertia of the
star acted on by tides is reduced by a factor $\epsilon_{\star} \sim 10^{-2}$, but the
braking rate is unchanged i.e. we multiply the first term on the
right-hand side of
Eq.~\ref{eqn:ODE1} by $\epsilon^{-1}_{\star}$. Note that this may be too
simple a model to describe such core--envelope decoupling, and we have
ignored associated changes to the braking rate.
\subsection{Extending the analysis to inclined circular orbits}
We can extend the simplified system of equations analysed in the previous
section to arbitrary inclination ($i$) of the orbital plane
with respect to the equatorial plane of the star:
\begin{eqnarray}
\label{eqn:ODEinc1}
\frac{d\tilde \Omega}{d\tilde t} &=&
\tilde n^{4}\left[\cos i
-\frac{\tilde \Omega}{2\tilde n}\left(1+\cos^{2} i \right)\right] -
A \: \tilde \Omega^{3}, \\
\label{eqn:ODEinc2}
\frac{d\tilde n}{d\tilde t} &=& 3 \: \tilde
n^{\frac{16}{3}}\left[1-\frac{\tilde \Omega}{\tilde n}\cos i
\right], \\
\frac{d i}{d\tilde t} &=& -\tilde n^{4}\tilde \Omega^{-1}\sin i \left[1-
\frac{\tilde \Omega}{2\tilde n}\left(\cos i - \tilde n^{\frac{1}{3}}\tilde \Omega
\right)\right].
\label{eqn:ODEinc3}
\end{eqnarray}
For small inclination, Eq.~\ref{eqn:ODEinc3} reproduces
Eq.~13 from \cite{Hut1981}, with the exception that we have used a constant
$Q_{1}^{\prime}$ rather than a constant time--lag in the equations
(i.e. replace time lag $\tau$ by $\frac{1}{2}\frac{3}{2k_{1}nQ^{\prime}_{1}}$, and
note that $k_{1}$ is twice the apsidal motion constant of the star).
From Eq.~\ref{eqn:ODEinc2} the orbit begins to decay if \begin{eqnarray} \tilde
\Omega \cos i < \tilde n, \end{eqnarray} which is always satisfied for a retrograde orbit
($i\geq 90^{\circ}$). This is just a generalisation of the corotation
condition $\tilde \Omega =\tilde n$ to a non--coplanar orbit. The
inclination grows
if \begin{eqnarray} \tilde \Omega > \tilde \Omega_{crit} =
2\tilde n \left(\cos i -\tilde n^{\frac{1}{3}}\tilde \Omega
\right)^{-1}, \label{incexc}
\end{eqnarray} where we have assumed the quantity in brackets is
positive i.e. $\cos i >\tilde n^{\frac{1}{3}}\tilde \Omega$.
This agrees with the condition from \cite{Hut1981}
when $i\sim 0$.
For sufficiently close--in orbits that tidal friction
is important, magnetic braking will rapidly spin down the star such
that this condition is not satisfied, so we can safely conclude that the
inclination is not likely to grow appreciably by tidal friction. When
this condition is not satisfied, the inclination decays to zero, on a
timescale $\tau_{i}$ (see next section). Note that $i=180^{\circ}$ is an
unstable equilibrium value of the inclination.
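These criteria are straightforward to evaluate; a short Python sketch in
the dimensionless variables used above (example values are illustrative) is:
\begin{verbatim}
# Sketch: decay and inclination-growth criteria for an inclined
# circular orbit.
import numpy as np

def orbit_decays(Om, n, i_deg):
    # the orbit decays once Omega cos(i) < n
    return Om * np.cos(np.radians(i_deg)) < n

def inclination_grows(Om, n, i_deg):
    ci = np.cos(np.radians(i_deg))
    denom = ci - n**(1.0 / 3.0) * Om
    if denom <= 0.0:          # bracket non-positive: no growth
        return False
    return Om > 2.0 * n / denom

print(orbit_decays(1.0, 0.5, 120.0))      # retrograde orbit: True
print(inclination_grows(0.2, 0.1, 10.0))  # slow rotator: False
\end{verbatim}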
In the absence of magnetic braking we recover the evolution considered
by \cite{Greenberg1974}, so we will now concentrate on the inclusion
of magnetic braking. The top panel of Fig.~\ref{Figmbi} shows the ($\tilde n,\tilde \Omega$)--plane
for $A = 10$ and an initial inclination of $i=90^{\circ}$. A smaller value
of $A$ is chosen than in the previous section in order to show the
effects of tides more clearly, since reducing $A$ is equivalent to
reducing $Q^{\prime}$. The evolution of $i$ is not plotted
(and will in general be different
for each curve), but is found to
decay once the stellar spin decays sufficiently that Eq.~\ref{incexc}
is not satisfied. The bottom panel of Fig.~\ref{Figmbi} shows the effect of
increasing the inclination in steps to illustrate the behaviour as $i$
is increased, on various curves with otherwise the same
initial conditions, with $A = 10$. The orbit begins to decay
for smaller $\tilde n$, and decays at a faster rate as $i$ is
increased. This peaks for a perfectly retrograde
orbit ($i=180^{\circ}$), with anti--parallel spin and orbit, where the rates of change of
spin and orbital angular frequencies are maximum. Fig.~\ref{Figmbi}
shows that the orbit generally begins to decay outside corotation,
once $\tilde \Omega \cos i < \tilde n$. This has implications for the
tidal evolution of close--in planets on inclined orbits, in that if
this condition is satisfied, the planet will be undergoing tidally
induced orbital decay -- though the inspiral time may be longer than
the expected stellar lifetime.
\begin{figure}
\centering
\subfigure{\label{Figmbi1}\includegraphics[width=0.495\textwidth]{{3}}}\\
\subfigure{\label{Figmbi2}\includegraphics[width=0.495\textwidth]{{4}}}
\caption{Top: ($\tilde n,\tilde \Omega$)--plane for an orbit with an
initial $i=90^{\circ}$, with $A = 10$, where the dashed line corresponds to
$\tilde \Omega=\tilde n$. This is similar to Fig.~\ref{Figmb1} for a circular orbit,
except that the orbit decays once $\tilde \Omega
\cos i < \tilde n$, which can occur above the dashed line in contrast with
the circular case. The evolution of $i$ is not plotted, and is in general different
for each curve, but is found to decay once the stellar spin drops below that given by
Eq.~\ref{incexc}.
Bottom: various initial inclinations, with $A = 10$, where the dashed line corresponds to
$\tilde \Omega=\tilde n$. The bottom trajectory has $i=0$, and
the inclination is increased in steps towards the top curve, which
has $i=180^{\circ}$, to
illustrate the behaviour.
Note that orbits with larger initial $i$ decay for smaller
$\tilde n$, once $\tilde \Omega \cos i < \tilde n$ is satisfied.}
\label{Figmbi}
\end{figure}
In this section we have seen that magnetic braking can only be reasonably
neglected for a coplanar orbit when $\Omega \ll n$. For an inclined
orbit, this condition must be generalised to $\Omega \cos i \ll n$, as
is evident from Eq.~\ref{eqn:ODEinc2}. Neglecting $\dot {\bmath\omega}_{\mathrm{mb}}$ for stars
for which this condition is not satisfied can result in a
qualitatively different evolution, as already seen in
Fig.~\ref{Figmb2} for a coplanar orbit.
\section{Tidal evolution timescales}
\label{timescales}
It is common practice to interpret the effects of tidal evolution in terms of simple
timescale estimates. The idea behind these is that if the rate of change of a
quantity $X$ is exponential, then $\dot X/X$ will be a constant, so we
can define a timescale $\tau_{X} = X/\dot X$. If $\dot X/X \ne$
const, then these may not accurately represent the
evolution. Here we reproduce the timescales that can be derived from the equations in Appendix~\ref{Appendix}.
A tidal inspiral time can be calculated from the equation for $\dot
a$, by considering only the effects of the tide raised on the star by
the planet (not
unreasonable since $I_{2} \ll I_{1} \sim m_{2}a^{2}$). Here $\dot a/a
\sim a^{-13/2} \ne $ const, so a more accurate
estimate of the inspiral time for a circular coplanar orbit is
\small{
\begin{eqnarray*} \hspace{-10.0pt} \tau_{a} \hspace{-9.0pt} &\equiv& \hspace{-10.0pt} -\frac{2}{13}\frac{a}{\dot a} \\
&\simeq& \hspace{-10.0pt} 12.0 \;\mbox{Myr}
\left(\frac{Q_{1}^{\prime}}{10^{6}}\right)\left(\frac{m_{1}}{M_{\odot}}\right)^{\frac{8}{3}}
\left(\frac{M_{J}}{m_{2}}\right)
\left(\frac{R_{\odot}}{R_{1}}\right)^{5}\left(\frac{P}{1\mathrm{d}}\right)^{\frac{13}{3}}\left(1-\frac{P}{P_{\star}}\right)^{-1}
\end{eqnarray*} } \normalsize
Here $P$ and $P_{\star}$ are the orbital and stellar spin periods, respectively.
We have already seen in \S \ref{phaseplaneanalysis} that it is unreasonable to assume that
$\Omega$ is fixed unless $\Omega \ll n$, due to magnetic braking.
If the orbit is inside corotation, angular momentum will be
transferred from the orbit to the spin of the star, giving a tidal spin--up time of
\begin{eqnarray*} \tau_{\Omega_{1}} &\equiv& -\frac{\Omega_{1}}{\dot
\Omega_{1}} \simeq \frac{13\tau_{a}}{2\;\alpha},
\end{eqnarray*}
where $\alpha=\frac{\mu h}{I \Omega}$ is the ratio of orbital to spin angular momentum. For the
HJ problem, $\tau_{\Omega_{1}}\geq \tau_{a}$ since $\alpha\sim O(1)$; this estimate neglects the spin--down effects of magnetic braking. The
planetary spin $\Omega_{2}$ will tend to synchronise much faster,
since the moment of inertia of the planet is much less than that of
the orbit (by $\sim 10^{5}$), and will not be considered further
i.e.\ we assume $\Omega_{2} = n$.
A circularisation time can be derived from the equation for
$\mathbf{\dot e}$, and is given for a coplanar orbit by
\small
\begin{eqnarray*}
\tau_{e} &\equiv& -\frac{e}{\dot e} \\
&\simeq& 16.8 \;\mbox{Myr} \left(\frac{Q_{1}^{\prime}}{10^{6}}\right)\left(
\frac{m_{1}}{M_{\odot}}\right)^{\frac{8}{3}}\left(\frac{M_{J}}{m_{2}}\right)
\left(\frac{R_{\odot}}{R_{1}}\right)^{5}\left(\frac{P}{1\mathrm{d}}\right)^{
\frac{13}{3}} \\ && \hspace{-10.0pt}
\times\left[\left(f_{1}(e^{2})-\frac{11}{18}
\frac{P}{P_{\star}}f_{2}(e^{2})\right)+
\beta \left(f_{1}(e^{2})-\frac{11}{18}f_{2}(e^{2})\right)\right]^{-1}
\end{eqnarray*}
\normalsize
where we have included both the stellar and planetary tides, as these
have been shown to both contribute to the tidal evolution of $e$
\citep{JacksonI2008}.
Note that $\dot e/e \sim $ const only if $P\sim$ const and $e \ll 1$, where $f_{1,2}(e^{2})\simeq
1$. The factor
\begin{eqnarray*} \beta =
\frac{Q_{2}^{\prime}}{Q_{1}^{\prime}}\left(\frac{m_{1}}{m_{2}}\right)^{2}\left(\frac{R_{2}}{R_{1}}\right)^{5}
\sim 10 \frac{Q_{2}^{\prime}}{Q_{1}^{\prime}}
\end{eqnarray*} for the HJ problem.
If the orbital and stellar equatorial planes are misaligned, then
dissipation of the tide raised on the star by the planet would align them on a timescale
\small
\begin{eqnarray*}
\tau_{i} &\equiv& -\frac{i}{\frac{di}{dt}} \\
&\simeq& 70 \; \mbox{Myr} \:
\left(\frac{Q_{1}^{\prime}}{10^{6}}\right)
\left(\frac{m_{1}}{M_{\odot}}\right)
\left(\frac{M_{J}}{m_{2}}\right)^{2}
\left(\frac{R_{\odot}}{R_{1}}\right)^{3}\left(\frac{P}{1\mathrm{d}}\right)^{4} \\
&& \times\left(\frac{\Omega_{1}}{\Omega_{0}}\right)
\left[1-
\frac{P}{2P_{\star}}\left(1 - \frac{1}{\alpha}
\right)\right]^{-1}
\end{eqnarray*} \normalsize
where we have assumed that the orbit is circular and made the
small $i$ approximation. We
take $\Omega_{0} = 5.8 \times 10^{-6}\mbox{s}^{-1}$, which corresponds
to a spin period of $\sim 12.5$ d.
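For convenience, the following Python sketch evaluates $\tau_{a}$ and
$\tau_{i}$ in the normalisations used above (the example system is purely
illustrative):
\begin{verbatim}
# Sketch: timescale estimates; m1 in M_sun, m2 in M_J, R1 in R_sun,
# periods in days, results in Myr.
def tau_a_Myr(P, P_star, Qp1=1e6, m1=1.0, m2=1.0, R1=1.0):
    return (12.0 * (Qp1 / 1e6) * m1**(8.0 / 3.0) / m2
            / R1**5 * P**(13.0 / 3.0) / (1.0 - P / P_star))

def tau_i_Myr(P, P_star, alpha, Om1_over_Om0,
              Qp1=1e6, m1=1.0, m2=1.0, R1=1.0):
    bracket = 1.0 - (P / (2.0 * P_star)) * (1.0 - 1.0 / alpha)
    return (70.0 * (Qp1 / 1e6) * m1 / m2**2 / R1**3
            * P**4 * Om1_over_Om0 / bracket)

# a 3 d hot Jupiter around a slowly rotating Sun-like star:
print(tau_a_Myr(3.0, 30.0))                               # ~ 1.6e3 Myr
print(tau_i_Myr(3.0, 30.0, alpha=5.0, Om1_over_Om0=0.5))  # ~ 3.0e3 Myr
\end{verbatim}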
The validity of these timescales to accurately represent the tidal
evolution of the orbital and rotational elements is an important
subject of study, since these timescales are commonly applied to
observed systems. In a recent paper, \cite{JacksonI2008} found that it
is essential to consider the coupled evolution of $e$ and $a$ in order
to accurately model the tidal evolution, and that both the stellar and
planetary tides must be considered. They showed that the actual change
of $e$ over time can be quite different from simple
circularisation timescale considerations, due to the coupled evolution
of $a$. In the following we will consider the validity of the
spin--orbit alignment timescale to accurately model tidal evolution of $i$.
\section{Numerical integrations of the full equations for an inclined orbit}
We perform direct numerical integrations of the equations in Appendix \ref{Appendix} with a 4th/5th order
Runge-Kutta scheme with adaptive stepsize control, using a scheme with
Cash-Karp coefficients similar to that described in
\cite{Press1992}. Our principal aim is to study inclination evolution and to
determine the accuracy of the spin--orbit alignment timescale $\tau_{i}$ for
close--in planets.
We choose a ``standard'' system of a HJ in orbit around an FGK star. We
have $m_{1} = M_{\odot}, m_{2} = M_{J}, R_{1} = R_{\odot}, R_{2} =
R_{J}$, and we modify\footnote{We take $\mathbf{\Omega}_{1}
\cdot \mathbf{e} = 0$, though this need not be assumed. We find
negligible difference between integrations for which $\mathbf{\Omega}_{1}
\cdot \mathbf{e} = 0$ and $\mathbf{\Omega}_{1}
\cdot \mathbf{e} \ne 0$} $a$, $i$, $e$. We choose the initial ratio
$\Omega_{1}/n=10$, and include magnetic braking in all simulations,
setting $\gamma = 1$ unless stated otherwise. We choose
$Q_{1}^{\prime}=Q_{2}^{\prime}=10^{6}$, and $\Omega_{2}/n=1$ unless stated otherwise.
The dimensionless radii of gyration are chosen to be $r^{2}_{g1} = 0.076$ and
$r^{2}_{g2} = 0.261$, which are values appropriate for polytropic
stellar and planetary models with
respective indices 3 and 1.
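An equivalent integration can be set up with standard library tools; the
following Python sketch uses SciPy's adaptive RK45 scheme (Dormand--Prince
rather than Cash--Karp coefficients, but the same class of embedded
method), with a placeholder right-hand side standing in for the full
secular equations:
\begin{verbatim}
# Sketch: adaptive embedded Runge-Kutta integration of a secular system.
import numpy as np
from scipy.integrate import solve_ivp

def secular_rhs(t, y):
    # placeholder standing in for the equations of the Appendix;
    # here, simple decay on a ~1 Gyr timescale
    return -y / 1.0e9

sol = solve_ivp(secular_rhs, [0.0, 5.0e9], [1.0, 0.5, 0.2],
                method="RK45", rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])   # ~ initial values times exp(-5)
\end{verbatim}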
\subsection{Inclusion of magnetic braking and the importance of
coupled evolution of $a$ and $i$}
\begin{figure*}
\begin{center}
\subfigure[$a$ evolution for $a=0.05$ AU]{\label{Coupleda0.05-a}
\includegraphics[width=0.485\textwidth]{semia005e0vari}}
\subfigure[$\cos i$ evolution for $a=0.05$ AU]{\label{Coupleda0.05-i}
\includegraphics[width=0.485\textwidth]{{cosia005e0vari}}}
\subfigure[$a$ evolution for $a=0.08$ AU]{\label{Coupleda0.08-a}
\includegraphics[width=0.485\textwidth]{{semia008e0vari}}}
\subfigure[$\cos i$ evolution for $a=0.08$ AU]{\label{Coupleda0.08-i}
\includegraphics[width=0.485\textwidth]{{cosia008e0vari}}}
\end{center}
\caption{Tidal evolution for a circular, inclined orbit at $a=0.05$ AU and
$a=0.08$ AU, with various initial inclinations:
$i=6^{\circ},26^{\circ},45^{\circ},90^{\circ},180^{\circ}$.
(a) and (c) show their respective semi-major axis
evolutions (with the highest inclination orbit decaying first --
the bottom curve -- and the lowest inclination orbit decaying last
-- the top curve),
and (b) and (d) show the respective inclination evolution for these
systems. The outer orbit $a$ changes only slightly over $10$ Gyr, and $i$ evolves as
expected from simple estimates of $\tau_{i}$. The inner orbit, on the
other hand, is tidally shrunk, and this reduction in $a$ means
that the true evolution is much faster than the simple timescale
estimates predict. This highlights the importance of
considering coupled evolution of $a$ and $i$.}
\label{Fig:Coupleda-i}
\end{figure*}
For a prograde orbit ($i < 90^{\circ}$) initially outside corotation,
$\dot {\bmath{\omega}}_{\mathrm{mb}}$ rapidly spins the star down
sufficiently to ensure that inclination is not excited through tidal
friction (so that Eq.~\ref{incexc} is not satisfied), and the
inclination begins to
decay. Subsequent spin--down moves the
orbit--projected corotation radius beyond the orbit of the planet (so
that $\Omega \cos i < n$), and the resulting tidal
inspiral accelerates as the difference between $\Omega \cos i$ and $n$ is
enhanced. The associated reduction in $a$ increases the rate of
stellar spin--orbit alignment. Thus the inclusion of $\dot
{\bmath{\omega}}_{\mathrm{mb}}$ increases the
rate of alignment, and reduces $\tau_{i}$ from the simple
estimate, which ignores magnetic braking and coupled $a$ and $i$ evolution.
The effect of $\dot {\bmath{\omega}}_{\mathrm{mb}}$ on a retrograde orbit ($i
\geq 90^{\circ}$) is qualitatively different. A retrograde orbit is always
subject to tidally induced inspiral since $\Omega \cos i < n$ for all $i
\geq 90^{\circ}$. $\dot {\bmath{\omega}}_{\mathrm{mb}}$ acts to reduce $\Omega$, thereby reducing the difference
$|\Omega \cos i - n|$, making the tidal torque smaller. This
acts to \textit{increase} the timescale for alignment of the stellar
spin and orbit, though the effect
is found to be small.
The most important effect of including $\dot {\bmath{\omega}}_{\mathrm{mb}}$
is simply that of reducing the stellar spin sufficiently so that
$\Omega \cos i < n$, where the semi-major axis can then decay through tidal
friction. As $a$ subsequently decreases, the tidal torque increases, resulting in a
faster inclination decay rate. $\dot {\bmath{\omega}}_{\mathrm{mb}}$ can only be
neglected if $|\Omega \cos i| \ll n$. If the orbit is already highly inclined,
such as $i\geq 90^{\circ}$, then the inclusion of magnetic braking
is not so important, since $\Omega \cos i < n$ regardless of the spin rate.
For an orbit initially at $a = 0.05$ AU, the simple estimate of the
stellar spin-orbit alignment timescale gives $\tau_{i} \simeq 2.0
\times 10^{10}$
yrs. Fig.~\ref{Fig:Coupleda-i} shows the evolution of $i$ and $a$ for various initial inclinations for
an orbit at $a = 0.05$ AU and $a=0.08$ AU respectively. The outer
orbit $a$ changes only slightly over $10$ Gyr, and $i$ evolves as
expected from the simple estimate of $\tau_{i}\simeq 3.0 \times
10^{11}$ yrs. The inner orbit, on the
other hand, is subject to tidally induced orbital decay, with an
inspiral time of $1-3$ Gyr, and this reduction in $a$ increases the
rate of inclination evolution. This highlights the importance of
considering coupled evolution of $a$ and $i$, especially for large
initial inclinations, where inspiral occurs for higher stellar spin rates. The
difference between the simple estimate of $\tau_{i}$ and the true timescale can be
up to an order of magnitude for orbits whose $a$ changes appreciably.
\subsection{Inclined and eccentric orbits}
\begin{figure*}
\begin{center}
\subfigure{\label{inceccorba}
\includegraphics[width=0.485\textwidth]{{semia008i45vare}}}
\subfigure{\label{inceccorbi}
\includegraphics[width=0.485\textwidth]{{cosia008i45vare}}}
\subfigure{\label{inceccorbe}
\includegraphics[width=0.485\textwidth]{{ecca008i45vare}}}
\end{center}
\caption{Top left: Semi-major axis evolution for an inclined orbit with
initial $i=45^{\circ}$ at
$a=0.08$ AU for various initial $e$, with
$e=0,0.2,0.4,0.5,0.6,0.7,0.8$.
Top right and bottom: Inclination and eccentricity evolution for the same
systems. Solutions with the smallest
initial $e$ have the smallest change in $a$ and $i$. Solutions
with the largest initial $e$ undergo much more rapid tidal
evolution (note that for $e=0.8$, $e$ decays to less than $0.7$
within $10^{5}$ yrs) -- curves can be distinguished by noting that
the curves corresponding to the fastest evolution have the largest
initial $e$. Increasing the
eccentricity can be seen to reduce the inspiral time by up to several orders
of magnitude over the circular case. In contrast, increasing the
inclination in Fig.~\ref{Fig:Coupleda-i} only reduces the inspiral
time by a factor $\sim O(1)$ over a coplanar orbit. Also note that
$\tau_{e} < \tau_{i}$ for all integrations.}
\label{Fig:incleccorb}
\end{figure*}
\cite{JacksonI2008} already highlighted the importance of coupled $a$
and $e$ evolution for a coplanar orbit. We will now consider how $e$
might affect $i$ evolution for a non--coplanar orbit, which is the
subject of this section.
A nonzero eccentricity reduces the pericentre distance $r_{p} =
a(1-e)$, which increases the tidal torque over a circular orbit, since
tidal torque $\sim r^{-6}$. Although the planet would spend less time near
pericentre, the torque there is much greater, so dominates
the orbit--averaged torque. We therefore expect
the stellar spin--orbit alignment time to be reduced as we increase $e$. In
addition, we expect that an orbit at large $e$ would more strongly
affect the rate of alignment over a circular orbit, than
one at large $i$ would over a coplanar orbit, because $e$ reduces $r_{p}$, whereas $i$ only changes the
difference $(\Omega\cos i -n)$. The tidal torque $\sim
r^{-6}(\Omega\cos i -n)$, which depends more strongly
on $r$ than $\Omega$.
This behaviour can be seen in Figs.~\ref{Coupleda0.08-i} and
\ref{inceccorbi}, which show that the ratio of stellar spin--orbit
alignment times for orbits with small $i$ and large $i$ is $\sim O(1)$,
whereas the ratio of stellar spin--orbit alignment times for orbits with
small $e$ and large $e$ can be up to several orders of magnitude.
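A back-of-the-envelope numerical check of this $r^{-6}$ scaling (an
illustrative Python sketch only; the full model uses the complete
eccentricity functions of Appendix \ref{Appendix}) is to evaluate the
time average of $(a/r)^{6}$ over a Keplerian orbit:
{\small\begin{verbatim}
import numpy as np

def mean_a_over_r6(e, n_steps=200000):
    # Time average of (a/r)^6 over one orbit, using
    # r = a(1 - e cos E) and dt proportional to (1 - e cos E) dE,
    # so <(a/r)^6> = (1/2 pi) Integral (1 - e cos E)^(-5) dE.
    E = np.linspace(0.0, 2.0*np.pi, n_steps, endpoint=False)
    return np.mean((1.0 - e*np.cos(E))**-5)

for e in (0.0, 0.4, 0.8):
    print(e, mean_a_over_r6(e))
# The enhancement over the circular case grows from 1 at e = 0
# to a few hundred at e = 0.8.
\end{verbatim}
}
This simple average already captures why a large $e$ shortens the tidal
timescales by orders of magnitude, whereas a large $i$ changes them only
by a factor of order unity.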
From Fig.~\ref{Fig:incleccorb} we can compare the simple estimate of
$\tau_{i} \simeq 3.0 \times10^{11}$ yrs for an orbit at $a=0.08$ AU, with the
coupled evolution of the orbital and rotational elements from
integration of the full equations. We see that the simple estimate
gives a misleadingly long stellar spin--orbit alignment time compared
with that obtained from integrating the full equations in cases where
$e$ is initially non--negligible -- as a result of
the strong functions of the eccentricity in this model of tidal
friction (see Appendix \ref{Appendix}). This confirms the
conclusion of \cite{JacksonI2008}, in that it is essential to consider
coupled evolution of $a$ and $e$ in order to determine an accurate
system history. We also find that the associated changes in semi-major
axis strongly affect stellar spin--orbit alignment.
A marginally better estimate for $\tau_{i}$ can be made by replacing
the orbital period with the orbital period around periastron
(equivalent to replacing $a$ by $r_{p}$), though this is still inadequate since it
neglects evolution of $a$.
\subsection{Discussion}
For typical HJs, we find that the stellar spin--orbit alignment
time is comparable to the inspiral time i.e.\ $\tau_{i} \sim
\tau_{a}$. This means that if we observe a planet, then its
survival implies that tides are unlikely to have aligned its
orbit. For planets on an accelerating inspiral
into the star, the rate of inclination evolution will have been much
lower in the past. Therefore if we observe a planet well inside
corotation ($\Omega \cos i < n$), with a
roughly coplanar orbit, we can assume that it must have started off
similarly coplanar -- unless we are lucky enough to be observing
a planet on its final rapid inspiral into the star after it has
undergone most of the evolution, where it is now in a very
short--period orbit, close to being consumed.
We expect $\tau_{i} \sim \tau_{a}$ when $\alpha \ll 1$ -- which is true
for close--in terrestrial planets -- since $\mathbf{\Omega}$
can be considered fixed, with the inclination changing only due to changes in
$\mathbf{h}$. For typical values of $\alpha \sim O(1)$ for HJs, the inclination
changes due to rotations of both $\mathbf{\Omega}$ and $\mathbf{h}$,
so the timescales are not exactly the same, but would be expected to
be of the same order of magnitude.
\cite{Hut1981} showed, by considering only the tide in the
star, that the
stellar spin--orbit alignment timescale is longer
than the circularisation timescale
($\tau_{i} > \tau_{e}$) unless $\alpha > 6$. This was based on
exponential decay estimates for small $e$ and $i$, but nevertheless
holds for the systems integrated in this work, as can be seen from
Fig.~\ref{Fig:incleccorb}. This makes intuitive sense, since
$\alpha \sim O(1)$ means that spin angular momentum is important, and circularisation
involves only a property of the orbit, whereas alignment involves both the spin and
the orbit. For typical HJs $\alpha < 6$; we therefore expect
$\tau_{e} < \tau_{i}$, especially when the eccentricity damping effect
of the tide in the planet
is taken into account, which further enhances
this inequality. The tide in the planet is completely negligible in
changing the stellar obliquity since $I_{2} \ll I_{1} \sim
m_{2}a^{2}$.
This means that if an orbit is initially inclined and eccentric as a
result of planet--planet scattering or Kozai migration into a short--period orbit, we would
expect the orbit to become circular before
it aligns with the spin of the star. We should therefore observe fewer eccentric orbits than inclined orbits,
\textit{if those systems start with a uniform distribution in
$e,i$--space}. This also means that if we observe a
close--in planet on a circular orbit with non-zero $i$, we cannot rule out a
non-negligible eccentricity in the past.
\section{Application to observed systems}
\label{Obs}
\subsection{An explanation of the misaligned spin and orbit of XO-3 b}
\label{XO-3}
\begin{figure}
\begin{center}
\subfigure{\label{FigXO-3-1}
\includegraphics[width=0.47\textwidth]{{XO3semi}}}
\subfigure{\label{FigXO-3-2}
\includegraphics[width=0.47\textwidth]{{XO3inc}}}
\subfigure{\label{FigXO-3-3}
\includegraphics[width=0.47\textwidth]{{XO3ecc}}}
\end{center}
\caption{Tidal evolution of XO-3 b taking current values for the
orbital properties of the system, except that $i = 90^{\circ}$
(not unreasonable since this roughly corresponds to the upper
limit on $\lambda$, which in any case gives a lower bound
on $i$). Magnetic braking is included with
$\gamma=0.1$, and $\Omega_{1}/n = 2$ initially (results do not depend
strongly on this choice). From the top and middle plots we
require $Q_{1}^{\prime} \geq 10^{10}$ for the planet to survive for
several Gyr, and maintain its high inclination. From the bottom
plot we see that
if $Q_{1}^{\prime} \geq 10^{10}$, we require $Q_{2}^{\prime} \geq 10^{8}$ to
maintain the current eccentricity for a few Gyr. Tidal dissipation
in both the planet and star must therefore be weak to explain the current
configuration of the system.}
\label{FigXO-3}
\end{figure}
The only system currently observed with a spin--orbit
misalignment\footnote{This spin--orbit misalignment has been confirmed
since the submission of the present paper by \cite{Winn2009}, who find that the sky--projected
spin--orbit misalignment angle $\lambda = 37.3^{\circ} \pm 3.7^{\circ}$, which is
significantly smaller than that found
by \cite{Hebrard2008}. Nevertheless, our conclusions below should still be valid.} is
XO-3 (\citealt{Hebrard2008};
\citealt{Johns-Krull2008}), which has a sky--projected spin--orbit misalignment angle of
$\lambda \simeq 70^{\circ} \pm 15^{\circ}$. This system has a
very massive $m_{2} = 12.5 \, M_{J}$ planet on a moderately
eccentric $e = 0.29$, $P = 3.2$ d orbit around an F-type star of
mass $m_{1} = 1.3 \, M_{\odot}$. Its age is
estimated to be $\tau_{\star} \simeq (2.4-3.1)$ Gyr. Note that
even if the star is rotating near breakup velocity ($P_{\star}\sim 1$ d), the planet is
still subject to tidal inspiral, since $P_{\star} > P \cos i$ (where we
henceforth assume $i = \lambda$, which may slightly
\textit{underestimate} $i$). If we assume that the angle of
inclination of the stellar equator to the plane of the sky is $\sim
90^{\circ}$, then $P_{\star} = 3.3$ d $\sim P$ i.e. $\Omega \sim n$.
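As a minimal numerical check of this corotation condition (an
illustrative sketch using the values quoted above, not part of our
integrations):
{\small\begin{verbatim}
import numpy as np

# Corotation check for XO-3 b, with P_star = 3.3 d,
# P_orb = 3.2 d and i = lambda = 70 deg (quoted above).
P_star, P_orb, i_deg = 3.3, 3.2, 70.0
Omega = 2.0*np.pi/P_star  # stellar spin frequency [rad/d]
n = 2.0*np.pi/P_orb       # orbital mean motion [rad/d]
print(Omega*np.cos(np.radians(i_deg))/n)
# ~0.33 < 1: the planet orbits inside the projected
# corotation radius and is subject to tidal inspiral.
\end{verbatim}
}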
\cite{Hebrard2008} quote a stellar spin--orbit alignment timescale of $\sim 10^{12}$
yr for this system, but we find that this is in error by a factor of
$\sim 10^{5}$. We believe that the reason for this discrepancy is that
their estimate was based on
assuming that the spin--orbit alignment time for XO-3 b is the same as for
HD17156 b (\citealt{Narita2008}; \citealt{Cochran2008}), which is a less massive planet on a much
wider orbit. We find $\tau_{i}\sim 30$ Myr (using the expression in \S
\ref{timescales}) assuming $Q^{\prime}_{1}=10^{6}$ to
align the whole star with the orbit. The circularisation time is
$\tau_{e} \sim 10$ Myr, and the inspiral time is estimated to be
$\tau_{a} \sim 16$ Myr, from similar simple estimates.
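Since these simple equilibrium-tide estimates scale linearly with the
quality factor, they can be rescaled to other values of
$Q^{\prime}_{1}$; the sketch below does this (illustrative only -- the
full integrations of Fig.~\ref{FigXO-3} evolve $a$, $e$ and $i$ together
and give somewhat shorter times, and $\tau_{e}$ also depends on the
planetary $Q^{\prime}_{2}$):
{\small\begin{verbatim}
# Simple-estimate timescales quoted above at Q1' = 1e6,
# in years, rescaled linearly in Q1' (stellar tide only).
tau_1e6 = {"tau_i": 30e6, "tau_e": 10e6, "tau_a": 16e6}
age = 3.0e9  # ~3 Gyr stellar age

for q1 in (1e6, 1e8, 1e10):
    scaled = {k: v*(q1/1e6) for k, v in tau_1e6.items()}
    print(q1, scaled, "tau_a > age:", scaled["tau_a"] > age)
# Only for Q1' ~ 1e10 do the timescales comfortably exceed
# the stellar age, consistent with the integrations.
\end{verbatim}
}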
Integrations for this system are given in Fig.~\ref{FigXO-3} for a variety of
stellar and planetary $Q^{\prime}$ values. These integrations again highlight the
importance of considering coupled evolution
of the orbital and rotational elements, since timescales for tidal
evolution can be quite different from the simple estimates given
above. Indeed, the actual spin--orbit alignment time from integrating the
full set of equations is about an order of magnitude smaller than that
from the simple decay estimate, due to the semi-major axis evolution.
\begin{figure}
\begin{center}
\subfigure{\includegraphics[width=0.35\textwidth]{XO3meq0}}
\subfigure{\includegraphics[width=0.35\textwidth]{XO3meq1}}
\subfigure{\includegraphics[width=0.35\textwidth]{XO3meq2}}
\end{center}
\caption{Tidal $Q^{\prime}$-factor as a function of the ratio of tidal
frequency to spin frequency $\hat \omega/\Omega$, from dissipation
of the $l = 2$, $m = 0,\, 1,\, 2$ components respectively, of the
equilibrium tide and dissipation of inertial modes in the OCZ
of an F-type star (see OL07 for
details of this calculation). This used a stellar model appropriate
for XO-3 to model the convection zone. The dashed lines represent
the effect of omitting the Coriolis force, and therefore inhibiting
inertial waves. The prominent features in
each figure (which occur for $\hat \omega/\Omega = -1$ and $-1/6$ for $m=1$ and $\hat
\omega/\Omega = -1/3$ for $m=2$) are
Rossby waves, which are probably excited for tidal frequencies not
relevant for the XO-3 system. For most tidal frequencies $Q^{\prime} \geq
10^{10}$, which could explain the survival and remnant orbital
inclination of XO-3 b.}
\label{GioQFstar}
\end{figure}
For the cases considered, the system can only survive and remain with its
current inclination for $\sim 3$ Gyr if
$Q^{\prime}_{1}\geq 10^{10}$. An explanation for the survival and remnant orbital inclination of
XO-3 b could therefore be the inefficiency of tidal dissipation in
the host star. The host star is an F--star of mass $m_{1} =
1.3 \pm 0.2 M_{\odot}$, so it will contain a small
convective core and a very thin outer convection zone (OCZ)
separated by a radiative zone. Dissipation in the convective core
will only weakly affect the tide, and
dissipation in the radiation zone will also be weak \citep{Zahn2008}. This is because
internal inertia--gravity waves excited at the interface between convective and radiative
regions cannot reach the photosphere, where they can damp
efficiently, as supposed for high--mass stars.
In addition, nonlinear effects due to geometrical concentration
of the waves in the centre of the star cannot occur because
the waves will reflect from the outer boundary of the inner
convection zone well before they become nonlinear
(\citealt{GoodmanDickson1998}; OL07).
We expect that most dissipation occurs in the OCZ of the star. A
calculation of $Q^{\prime}$ for the dissipation of the
equilibrium tide and inertial modes, in the thin OCZ,
using a stellar model\footnote{for which we use EZ Web at
http://shayol.bartol.udel.edu/$\sim$rhdt/ezweb/}
appropriate for this star (note that the metallicity of the star is
subsolar, with $Z\sim 0.01$), was performed (see
OL07; OL04 for details of the numerical method).
It must be noted that these calculations involve uncertainties
regarding the effective viscosity of turbulent convection, though
the general trends in the results below (and in the next section)
are likely to be quite robust. The
results for the $m = 0,\, 1,\, 2$ components of the tide are plotted in
Fig.~\ref{GioQFstar}. In Appendix \ref{TidalPotential}, we
show that a combination of the $m=0, \, 1, \, 2$ components of the tide
are relevant for spin--orbit alignment and inspiral, and so we must calculate
$Q^{\prime}$ for all components of the $l=2$ (quadrupolar) tide. The
relevant tidal frequencies, assuming $\Omega \sim n$ currently, would be those
with integer $\hat \omega / \Omega$. However, since the angle of
inclination of the stellar equatorial plane to the plane of the sky
has not been determined, the relevant
tidal frequencies cannot be calculated with any certainty.
Nevertheless, $Q^{\prime} \geq 10^{10}$ for most tidal frequencies for the host star XO-3.
This can explain the survival and remnant inclination of XO-3 b,
since both $\tau_{a}$ and $\tau_{i}$ are now much longer than the
age of the system.
In addition, the remnant eccentricity could be maintained due to weak
damping of the tide in the star for the same reasons.
However, we must also
explain the inefficient damping of the tide in the planet, if indeed the
reason for the eccentricity is that $\tau_{e} >
\tau_{\star}$.
The planet in this system is massive, and may be a low--mass brown dwarf. If it
formed without a core, then the dissipation of inertial modes may be
reduced if they are able to form global modes, as found in OL04.
An alternative explanation for the survival of XO-3 b involving a larger initial semi-major axis for
the planet is also possible, but this would require significant tidal
migration to bring the planet to its current location. This would
require a much lower stellar $Q^{\prime}$ than our calculations
predict, which as discussed in \S \ref{introtide} would imply a very
short inspiral time for the planets on the tightest orbits, if
their host stars have similar $Q^{\prime}$ values. We discuss this
further with regards to the host stars WASP-12 and OGLE-TR-56 in \S \ref{wasp12} below.
\subsection{Tidal dissipation in F--stars}
\label{FstarsQ}
\begin{figure}
\begin{center}
\subfigure{\label{FigQFstar1}
\includegraphics[width=0.375\textwidth]{{M15Msunmeq2}}}
\subfigure{\label{FigQFstar2}
\includegraphics[width=0.375\textwidth]{{M12Msunmeq2}}}
\end{center}
\caption{Tidal $Q^{\prime}$-factor as a function of the ratio of tidal
frequency to spin frequency $\hat \omega/\Omega$, from dissipation
of the $l=m=2$ component (we find the behaviour of the $m=0$ and 1
components is similar in magnitude) of the
equilibrium tide, and dissipation of inertial modes, in the OCZ
of two F-type star models. The dashed lines represent
the effect of omitting the Coriolis force, and therefore inhibiting
inertial waves. The top figure shows a
high--mass $1.5 \, M_{\odot}$ F--star, with solar metallicity and an
age of $0.7$ Gyr, assumed to be spinning with a period of $3$ d.
This has a very thin OCZ, and tidal
dissipation is extremely weak. The bottom figure shows a low--mass
$1.2 \, M_{\odot}$ F--star, with solar metallicity and an age of
$1.0$ Gyr, assumed to be spinning with a period of $3$ d. Tidal
dissipation is much stronger in this star, though still weaker than
that in Fig.~6 from OL07 for a $1.0 \, M_{\odot}$ G--star. Together with
Fig.~\ref{GioQFstar} this shows that
$Q^{\prime}$ can vary considerably between different stars, even
within the range of F--stars.}
\label{FigQFstar}
\end{figure}
Our calculation of $Q^{\prime}$ for XO-3 indicates that it
would be worthwhile to study the range of $Q^{\prime}$ expected for
F--stars, and how these may differ from solar--type stars. We
performed calculations of tidal dissipation
in the OCZs of a variety of F--stars between the
masses of $1.2-1.6 \, M_{\odot}$,
using the numerical method of OL07, and the stellar
models of EZ-Evolution. These stars contain convective cores
surrounded by a radiative zone and an OCZ, and for
the reasons mentioned in the previous section, we expect that tidal
dissipation in the OCZ will dominate the dissipation.
We consider the range of OCZ properties in F--stars, as the mass
and metallicity are
varied. Below we present a selection of illustrative examples, which
represent the range of properties expected for F--stars.
Fig.~\ref{FigQFstar} shows $Q^{\prime}$ as a function of tidal
frequency for two F--stars, with different OCZ
properties. Fig.~\ref{FigQFstar1}
shows that tidal dissipation in more massive F--stars, with very thin OCZs,
is found to be extremely weak, with $Q^{\prime}\geq
10^{12}$ for most tidal frequencies. This implies that tidal
dissipation in such stars
is probably negligible in contributing
to the spin--orbit evolution of HJs.
Fig.~\ref{FigQFstar2} shows that tidal dissipation in lower mass
F--stars with thicker convective
envelopes is similar to, but slightly weaker than,
that for solar--type stars (OL07), since $Q^{\prime} \sim 10^{8}$
for most tidal frequencies in the range $|\hat \omega| < 2 |\Omega|$.
A model of a $1.3 \, M_{\odot}$ star similar to XO-3, except that we
choose supersolar metallicity ($Z=0.03$), is found to have a similar
OCZ to the $1.2 \, M_{\odot}$ model in
Fig.~\ref{FigQFstar2}, and has a very
similar $Q^{\prime}$. The metallicity of the star affects the
thickness of the OCZ, and therefore the efficiency of tidal
dissipation.
From these examples, it is clear that assuming a single
$Q^{\prime}$ applies for
all stars is probably incorrect.
Even within the mass range of F--stars there is considerable variation in
$Q^{\prime}\sim 10^{8}-10^{12}$ in our calculations, primarily as a result of the
variation in the mass
fraction contained in the OCZ.
There are also differences between G and F
stars, due to differences in internal structure -- most notably the
presence of a radiative core in a G--type star may lead to enhanced
dissipation by nonlinear effects. Lower mass
stars, and those with
higher metallicity, tend to have thicker OCZs than higher
mass, low--metallicity stars. In addition, higher mass stars are more
centrally condensed, so the mass fraction in their outer regions will be
lower. This results in low--mass, high--metallicity stars having lower
$Q^{\prime}$ than high--mass, low--metallicity stars. In addition, OL07
found that the spin period of the star also affects $Q^{\prime}$.
\subsection{The survival of WASP-12 b and OGLE-TR-56 b}
\label{wasp12}
The results of the last two sections also allow us to propose
an explanation for the survival of the planets on the tightest orbits,
such as WASP-12 b \citep{Hebb2008} and OGLE-TR-56 b
\citep{Sasselov2003}. Taking the current values for the stellar
properties of the host stars in both systems, we find that they
are both likely to have a similar internal
structure to those discussed above, with a convective core present,
which will prevent internal inertia--gravity waves from reaching
the centre of each star. In addition, these stars
are slowly rotating, and so the relevant tidal frequencies are
likely to be outside the range of inertial waves
($|\hat \omega| > 2 |\Omega|$), which would imply that such waves
are not excited by tidal forcing.
This means that $Q^{\prime}$ is likely to be much
larger than that in the models discussed above
(in Fig.~\ref{FigQFstar}), particularly since reducing the spin frequency is found to increase
$Q^{\prime}$ for a given ratio $\hat \omega/\Omega$ (OL07). Therefore
it is
likely that the relevant $Q^{\prime}\ga 10^{10}$, which would imply that tidally induced inspiral
will not occur within the age of the system.
This means that weak dissipation in
the star could potentially explain the survival of both of these planets.
\section{Conclusions}
In this paper we have investigated the long--term tidal evolution of
close--in extrasolar planets. We studied the effects of magnetic braking on tidal
evolution in a simplified system, and then performed numerical
integrations for a variety of HJ systems, with particular emphasis on
inclination evolution. We now summarise the main results of this work.
Magnetic braking moves the corotation radius of
the star outwards such that any close--in planets will eventually orbit inside
corotation ($\Omega \cos i < n$), and be subject to orbital decay due
to tides. It makes no
sense to refer to any close--in planet as being \textit{tidally evolved},
since magnetic braking removes the only equilibrium state
accessible through tidal friction during the expected stellar lifetime.
Magnetic braking is found to be important for the tidal
evolution of HJs unless $\Omega \cos i \ll n$, in which case the
orbit is already well inside the orbit--projected corotation
radius. Highly inclined (especially retrograde) orbits tend to be affected less by magnetic
braking, since this condition is often satisfied regardless of the
spin rate. Magnetic braking is particularly important when it comes to interpreting the
tidal evolution of observed systems from formation to the present
day, since the star may have been rotating much more rapidly in the
past. Nevertheless, it is probably not a bad approximation to
neglect magnetic braking for calculating the \textit{future} tidal
evolution of most observed HJs, if the star has already spun down so that $\Omega \ll
n$.
Combining our results with those of \cite{JacksonI2008}, we find
that coupled evolution of the orbital and rotational elements is
essential to accurately model tidal evolution, and can result in
much faster evolution than simple timescale estimates predict. This is
especially true for highly eccentric orbits, for which the associated
semi-major axis evolution increases the rate of stellar spin--orbit
alignment by up to several
orders of magnitude.
We find that the true timescale for stellar spin--orbit alignment
is comparable to the inspiral time for HJs; therefore, the orbits of most close--in
planets have probably not aligned with the spin of the star. Observed inclinations are likely to be a relic
of the migration process. This means that RM observations of
transiting planets can potentially distinguish between migration
caused by planet--planet scattering or Kozai
oscillations combined with tidal dissipation in the star, and that
produced by tidal interaction with the gas disc.
If the majority of candidates are found with
$\lambda\sim 0$, this strongly disfavours planet--planet scattering or
Kozai migration, since they are expected to produce significantly
inclined orbits, and we have found that tides are unlikely to have aligned orbits
without causing inspiral. Alternatively, if systems are found with
significantly nonzero $\lambda$,
then some planet--planet scattering or Kozai migration
could have occurred to produce
these orbital inclinations. We strongly encourage
future observations of the RM effect for transiting planets.
For most HJs, tides tend to circularise the planet's orbit
before spin--orbit alignment or inspiral occurs. If an orbit is initially
inclined and eccentric, as a
result of planet--planet scattering or Kozai migration into a
short--period orbit, then we would
expect the orbit to become circular before
it aligns. Therefore, if we observe a
planet on an inclined, circular orbit, we cannot rule out a
non-negligible eccentricity in the past. This
means that we should observe fewer eccentric orbits than
inclined orbits
\textit{if those systems start with a uniform distribution of points
in ($e,i$)--space}, due to tidal friction. This should be considered when
comparing the observed $(e,i)$ distribution with those predicted from
theoretical work on Kozai migration or planet--planet
scattering, before we can further constrain these theories (\citealt{Fabrycky2007}; \citealt{JT2008}).
The misaligned spin and orbit of the XO-3 system could
potentially be explained in terms of inefficient tidal dissipation
inside the host star. The stellar $Q_{1}^{\prime} \ga 10^{10}$ required
for the survival and remnant misalignment is predicted from
theoretical calculations of tidal dissipation in the OCZ
of an F--star. In addition, the remnant
eccentricity constrains the planetary quality factor to
$Q_{2}^{\prime} \ga 10^{8}$, in the absence of perturbing forces
that could excite the eccentricity.
The stellar $Q^{\prime}$ has been shown to vary widely between
different stars, and even within the range of F--stars, from
$10^{8}-10^{12}$. There are also
differences between $Q^{\prime}$ for the F--star models discussed here, and the
solar--type star models discussed in OL07. This implies that
assuming a single value of $Q^{\prime}$ applies to all exoplanet host
stars is probably incorrect. $Q^{\prime}$ has been found to vary with stellar
mass, metallicity, spin period, as well as tidal frequency.
The presence of a convective core is likely to be important, in that
it acts to prevent internal inertia--gravity waves, which are excited at the interface between the
convective and radiative zones, from reaching the centre of the star,
where nonlinear effects could enhance their dissipation. This may
explain the survival of some of the planets on the tightest orbits, such as
WASP-12 b and OGLE-TR-56 b, whose host stars are likely to have convective cores.
It is clear that much more work is required to study the mechanisms
of tidal dissipation in rotating stars. It would
also be useful to perform a detailed study into the possibility of
core--envelope decoupling in such stars, since this may have
implications for the survival
and remnant inclinations of close--in planets. Future observations of
transiting planets will constrain these theories, hopefully leading to a better understanding
of the mechanisms at work in the formation and evolution of planetary systems.
\section*{Acknowledgments}
We would like to thank the referee, Richard Greenberg, for helpful comments
that have improved the manuscript. In addition, A.J.B would like to thank STFC for a research
studentship.
\section{Introduction}
Please follow the steps outlined below when submitting your manuscript to
the IEEE Computer Society Press. This style guide now has several
important modifications (for example, you are no longer warned against the
use of sticky tape to attach your artwork to the paper), so all authors
should read this new version.
\subsection{Language}
All manuscripts must be in English.
\subsection{Dual submission}
Please refer to the author guidelines on the CVPR 2018 web page for a
discussion of the policy on dual submissions.
\subsection{Paper length}
Papers, excluding the references section,
must be no longer than eight pages in length. The references section
will not be included in the page count, and there is no limit on the
length of the references section. For example, a paper of eight pages
with two pages of references would have a total length of 10 pages.
{\bf There will be no extra page charges for
CVPR 2018.}
Overlength papers will simply not be reviewed. This includes papers
where the margins and formatting are deemed to have been significantly
altered from those laid down by this style guide. Note that this
\LaTeX\ guide already sets figure captions and references in a smaller font.
The reason such papers will not be reviewed is that there is no provision for
supervised revisions of manuscripts. The reviewing process cannot determine
the suitability of the paper for presentation in eight pages if it is
reviewed in eleven.
\subsection{The ruler}
The \LaTeX\ style defines a printed ruler which should be present in the
version submitted for review. The ruler is provided in order that
reviewers may comment on particular lines in the paper without
circumlocution. If you are preparing a document using a non-\LaTeX\
document preparation system, please arrange for an equivalent ruler to
appear on the final output pages. The presence or absence of the ruler
should not change the appearance of any other content on the page. The
camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment
the \verb'\cvprfinalcopy' command in the document preamble.) Reviewers:
note that the ruler measurements do not align well with lines in the paper
--- this turns out to be very difficult to do well when the paper contains
many figures and equations, and, when done, looks ugly. Just use fractional
references (e.g.\ this line is $095.5$), although in most cases one would
expect that the approximate location will be adequate.
\subsection{Mathematics}
Please number all of your sections and displayed equations. It is
important for readers to be able to refer to any particular equation. Just
because you didn't refer to it in the text doesn't mean some future reader
might not need to refer to it. It is cumbersome to have to use
circumlocutions like ``the equation second from the top of page 3 column
1''. (Note that the ruler will not be present in the final copy, so is not
an alternative to equation numbers). All authors will benefit from reading
Mermin's description of how to write mathematics:
\url{http://www.pamitc.org/documents/mermin.pdf}.
\subsection{Blind review}
Many authors misunderstand the concept of anonymizing for blind
review. Blind review does not mean that one must remove
citations to one's own work---in fact it is often impossible to
review a paper unless the previous citations are known and
available.
Blind review means that you do not use the words ``my'' or ``our''
when citing previous work. That is all. (But see below for
techreports.)
Saying ``this builds on the work of Lucy Smith [1]'' does not say
that you are Lucy Smith; it says that you are building on her
work. If you are Smith and Jones, do not say ``as we show in
[7]'', say ``as Smith and Jones show in [7]'' and at the end of the
paper, include reference 7 as you would any other cited work.
An example of a bad paper just asking to be rejected:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of our
previous paper [1], and show it to be inferior to all
previously known methods. Why the previous paper was
accepted without this analysis is beyond me.
[1] Removed for blind review
\end{quote}
An example of an acceptable paper:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of the
paper of Smith \etal [1], and show it to be inferior to
all previously known methods. Why the previous paper
was accepted without this analysis is beyond me.
[1] Smith, L and Jones, C. ``The frobnicatable foo
filter, a fundamental contribution to human knowledge''.
Nature 381(12), 1-213.
\end{quote}
If you are making a submission to another conference at the same time,
which covers similar or overlapping material, you may need to refer to that
submission in order to explain the differences, just as you would if you
had previously published related work. In such cases, include the
anonymized parallel submission~\cite{Authors14} as additional material and
cite it as
\begin{quote}
[1] Authors. ``The frobnicatable foo filter'', F\&G 2014 Submission ID 324,
Supplied as additional material {\tt fg324.pdf}.
\end{quote}
Finally, you may feel you need to tell the reader that more details can be
found elsewhere, and refer them to a technical report. For conference
submissions, the paper must stand on its own, and not {\em require} the
reviewer to go to a techreport for further details. Thus, you may say in
the body of the paper ``further details may be found
in~\cite{Authors14b}''. Then submit the techreport as additional material.
Again, you may not assume the reviewers will read this material.
Sometimes your paper is about a problem which you tested using a tool which
is widely known to be restricted to a single institution. For example,
let's say it's 1969, you have solved a key problem on the Apollo lander,
and you believe that the CVPR70 audience would like to hear about your
solution. The work is a development of your celebrated 1968 paper entitled
``Zero-g frobnication: How being the only people in the world with access to
the Apollo lander source code makes us a wow at parties'', by Zeus \etal.
You can handle this paper like any other. Don't write ``We show how to
improve our previous work [Anonymous, 1968]. This time we tested the
algorithm on a lunar lander [name of lander removed for blind review]''.
That would be silly, and would immediately identify the authors. Instead
write the following:
\begin{quotation}
\noindent
We describe a system for zero-g frobnication. This
system is new because it handles the following cases:
A, B. Previous systems [Zeus et al. 1968] didn't
handle case B properly. Ours handles it by including
a foo term in the bar integral.
...
The proposed system was integrated with the Apollo
lunar lander, and went all the way to the moon, don't
you know. It displayed the following behaviours
which show how well we solved cases A and B: ...
\end{quotation}
As you can see, the above text follows standard scientific convention,
reads better than the first version, and does not explicitly name you as
the authors. A reviewer might think it likely that the new paper was
written by Zeus \etal, but cannot make any decision based on that guess.
He or she would have to be sure that no other authors could have been
contracted to solve problem B.
FAQ: Are acknowledgements OK? No. Leave them for the final copy.
\begin{figure}[t]
\begin{center}
\fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}}
\end{center}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsection{Miscellaneous}
\noindent
Compare the following:\\
\begin{tabular}{ll}
\verb'$conf_a$' & $conf_a$ \\
\verb'$\mathit{conf}_a$' & $\mathit{conf}_a$
\end{tabular}\\
See The \TeX book, p165.
The space after \eg, meaning ``for example'', should not be a
sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided
\verb'\eg' macro takes care of this.
When citing a multi-author paper, you may save space by using ``et alia'',
shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.)
However, use it only when there are three or more authors. Thus, the
following is correct: ``
Frobnication has been trendy lately.
It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by
Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.''
This is incorrect: ``... subsequently developed by Alpher \etal~\cite{Alpher03} ...''
because reference~\cite{Alpher03} has just two authors. If you use the
\verb'\etal' macro provided, then you need not worry about double periods
when used at the end of a sentence as in Alpher \etal.
For this citation style, keep multiple citations in numerical (not
chronological) order, so prefer \cite{Alpher03,Alpher02,Authors14} to
\cite{Alpher02,Alpher03,Authors14}.
\begin{figure*}
\begin{center}
\fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}}
\end{center}
\caption{Example of a short caption, which should be centered.}
\label{fig:short}
\end{figure*}
\section{Formatting your paper}
All text must be in a two-column format. The total allowable width of the
text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54
cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a
$\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the
first page) should begin 1.0 inch (2.54 cm) from the top edge of the
page. The second and following pages should begin 1.0 inch (2.54 cm) from
the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86
cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4
paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the
page.
\subsection{Margins and page numbering}
All printed material, including text, illustrations, and charts, must be kept
within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm)
high.
Page numbers should be included in the footer, centered and .75 inches
from the bottom of the page, and should start at your assigned page
number rather than the 4321 in the example. To do this, find the line
(around line 23)
\begin{verbatim}
\setcounter{page}{4321}
\end{verbatim}
where the number 4321 is your assigned starting page.
Make sure the first page is numbered by commenting out the first page being
empty on line 46
\begin{verbatim}
%\thispagestyle{empty}
\end{verbatim}
\subsection{Type-style and fonts}
Wherever Times is specified, Times Roman may also be used. If neither is
available on your word processor, please use the font closest in
appearance to Times to which you have access.
MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of
the first page. The title should be in Times 14-point, boldface type.
Capitalize the first letter of nouns, pronouns, verbs, adjectives, and
adverbs; do not capitalize articles, coordinate conjunctions, or
prepositions (unless the title begins with such a word). Leave two blank
lines after the title.
AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title
and printed in Times 12-point, non-boldface type. This information is to
be followed by two blank lines.
The ABSTRACT and MAIN TEXT are to be in a two-column format.
MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use
double-spacing. All paragraphs should be indented 1 pica (approx. 1/6
inch or 0.422 cm). Make sure your text is fully justified---that is,
flush left and flush right. Please do not place any additional blank
lines between paragraphs.
Figure and table captions should be 9-point Roman type as in
Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centred.
\noindent Callouts should be 9-point Helvetica, non-boldface type.
Initially capitalize only the first word of section titles and first-,
second-, and third-order headings.
FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction})
should be Times 12-point boldface, initially capitalized, flush left,
with one blank line before, and one blank line after.
SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements})
should be Times 11-point boldface, initially capitalized, flush left,
with one blank line before, and one after. If you require a third-order
heading (we discourage it), use 10-point Times, boldface, initially
capitalized, flush left, preceded by one blank line, followed by a period
and your text on the same line.
\subsection{Footnotes}
Please use footnotes\footnote {This is what a footnote looks like. It
often distracts the reader from the main flow of the argument.} sparingly.
Indeed, try to avoid footnotes altogether and include necessary peripheral
observations in
the text (within parentheses, if you prefer, as in this sentence). If you
wish to use a footnote, place it at the bottom of the column on the page on
which it is referenced. Use Times 8-point type, single-spaced.
\subsection{References}
List and number all bibliographical references in 9-point Times,
single-spaced, at the end of your paper. When referenced in the text,
enclose the citation number in square brackets, for
example~\cite{Authors14}. Where appropriate, include the name(s) of
editors of referenced books.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Frobnability \\
\hline\hline
Theirs & Frumpy \\
Yours & Frobbly \\
Ours & Makes one's heart Frob\\
\hline
\end{tabular}
\end{center}
\caption{Results. Ours is better.}
\end{table}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to
make is resolvable in a printed copy of the paper. Resize fonts in figures
to match the font in the body text, and choose line widths which render
effectively in print. Many readers (and reviewers), even of an electronic
copy, will choose to print your paper in order to read it. You cannot
insist that they do otherwise, and therefore must not assume that they can
zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use
\verb+\includegraphics+, and to specify the figure width as a multiple of
the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
\subsection{Color}
Please refer to the author guidelines on the CVPR 2018 web page for a discussion
of the use of color in your document.
\section{Final copy}
You must include your signed IEEE copyright release form when you submit
your finished paper. We MUST have this form before your paper can be
published in the proceedings.
{\small
\bibliographystyle{ieee}
}
\section{Introduction}
Semantic segmentation and instance segmentation are two important and correlated vision problems. Their underlying connections recently motivated panoptic segmentation as a unification of both tasks~\cite{kirillov2019panoptic}. In panoptic segmentation, image contents are divided
into two types: things and stuff. Things are countable instances (\eg, person, car, and bicycle) and each instance has a unique id to distinguish it from the other instances. Stuff refers to the amorphous and uncountable regions (\eg, sky, grassland, and snow) and has no instance id~\cite{kirillov2019panoptic}.
The differences between things and stuff also lead to different ways to handle their predictions. A number of works simply decompose panoptic segmentation into an instance segmentation task for things and a semantic segmentation task for stuff~\cite{kirillov2019panoptic,kirillov2019panopticfpn, ren2021refine,xiong2019upsnet,qiao2021detectors}. However, such a separated strategy tends to increase model complexity and introduce undesired artifacts. Several works further consider bottom-up (proposal-free) instance segmentation approaches but still maintain similar separate strategies~\cite{yang2019deeperlab,gao2019ssap,bonde2020towards,cheng2020panoptic,wang2020axial}. Some
recent methods try to simplify the panoptic segmentation pipeline by processing things and stuff with a unified framework. For example, several works~\cite{tian2020conditional,wang2020solov2,li2021fully,zhang2021k} achieve this with fully convolutional frameworks. These frameworks share a similar ``top-down meets bottom-up'' two-branch design where a kernel branch encodes object/region information, and is dynamically convolved with an image-level feature branch to generate the object/region masks.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{media/comp.pdf}
\hspace{-51mm}\resizebox{0.6\columnwidth}{!}{\tablestyle{2pt}{1}
\input{table/comp.tex}
}
\caption{\textbf{Comparison to the prior arts in panoptic segmentation methods on the COCO val2017 split.}
With comparable numbers of parameters, Panoptic SegFormer models outperform the other counterparts. Panoptic SegFormer (PVTv2-B5) achieves a new state-of-the-art 54.1\% PQ, outperforming the previous best method MaskFormer by 1.4\% PQ with significantly fewer parameters.
}
\label{fig:params}
\end{figure}
Recently, Vision Transformers have been widely introduced to instance localization and recognition tasks~\cite{carion2020end,zhu2020deformable,wang2021pyramid,liu2021swin}. Vision Transformers generally divide an input image into crops and encode them as tokens. For object detection problems, both DETR~\cite{carion2020end} and Deformable DETR~\cite{zhu2020deformable} represent the object proposals with a set of learnable queries which are used to predict bounding boxes and are dynamically matched with object ground truths via a bipartite graph matching loss. The role of query features is similar to RoI features in conventional detection architectures, thus inspiring several methods~\cite{carion2020end,cheng2021per,wang2021max} with two-branch designs similar to Panoptic FCN~\cite{li2021fully}.
In this work, we propose Panoptic SegFormer, a concise and effective framework for end-to-end panoptic segmentation with Vision Transformers. Specifically, Panoptic SegFormer contains three key designs:
\begin{itemize}[leftmargin=*]
\item A query set to represent things and stuff uniformly, where the stuff classes are considered as special type of things with single instance ids;
\item A location decoder which focuses on leveraging the location information of things and stuff to improve the segmentation quality;
\item A mask-wise post-processing strategy to equally merge the segmentation results of things and stuff.
\end{itemize}
Benefiting from these three designs, Panoptic SegFormer achieves state-of-the-art panoptic segmentation performance with high efficiency.
To verify our framework, we conduct extensive experiments on the COCO dataset~\cite{lin2014microsoft}.
As shown in Figure~\ref{fig:params}, our smallest model, Panoptic SegFormer~(PVTv2-B0), achieves 49.0\% PQ on the COCO \texttt{val2017} split with only 22.2M parameters, surpassing prior arts such as MaskFormer~\cite{cheng2021per} and Max-Deeplab~\cite{wang2021max}, whose parameter sizes are two and three times larger, respectively. Panoptic SegFormer~(PVTv2-B5) further achieves a state-of-the-art PQ of 54.1\%,
which is 3\% PQ higher than Max-Deeplab (51.1\% PQ) and 1.4\% PQ higher than MaskFormer (52.7\% PQ), respectively, while our method still enjoys significantly fewer parameters.
It is worth mentioning that Panoptic SegFormer achieves 54.4\% PQ on COCO \texttt{test-dev} with single scale input, outperforming
competition-level methods including Innovation~\cite{chen56joint},
which uses numerous tricks such as model ensembles and multi-scale testing. Currently, Panoptic SegFormer~(PVTv2-B5) is the 1st place on the COCO Panoptic Segmentation leaderboard\footnote{https://competitions.codalab.org/competitions/19507\#results}.
\begin{figure*}[h]
\begin{center}
\includegraphics[width=0.98\linewidth]{media/model_arch5.pdf}
\end{center}
\caption{
\textbf{Overview of Panoptic SegFormer.} Panoptic SegFormer is composed of backbone, encoder, and decoder.
The backbone outputs multi-scale features, and the encoder refines them. Inputs of the decoder are $N$ queries and the multi-scale features. The decoder consists of two sub-decoders: a location decoder and a mask decoder, where the location decoder aims to learn the reference points of queries, and the mask decoder predicts the final categories and masks. Details of the decoder will be introduced below. We use a mask-wise merging method instead of the commonly used pixel-wise argmax method to perform inference.
\label{fig:model}
\end{figure*}
\section{Related Work}
\paragraph{Panoptic Segmentation.}
The panoptic segmentation literature mainly treats this problem as a joint task of instance segmentation and semantic segmentation where things and stuff are handled separately. Kirillov \etal~\cite{kirillov2019panoptic} proposed the concept and benchmark of panoptic segmentation together with a baseline which directly combines the outputs of individual instance segmentation and semantic segmentation models. Since then, models such as Panoptic FPN~\cite{kirillov2019panopticfpn}, UPSNet~\cite{xiong2019upsnet} and AUNet~\cite{li2019attention} have improved the accuracy and reduced the computational overhead by combining instance segmentation and semantic segmentation into a single model. However, these methods still approximate the target task by solving the surrogate sub-tasks, thereby introducing undesired model complexity and sub-optimal performance.
Recently, efforts have been made towards unified frameworks for panoptic segmentation.
Li \etal~\cite{li2021fully} proposed Panoptic FCN where the panoptic segmentation pipeline is simplified with a ``top-down meets bottom-up'' two-branch design similar to CondInst~\cite{tian2020conditional}. In their work, things and stuff are jointly modeled by an object/region-level kernel branch and an image-level feature branch. Several recent works represent things and stuff as queries and perform end-to-end panoptic segmentation via transformers.
DETR~\cite{carion2020end} predicts the bounding boxes of things and stuff and combines the attention maps of the transformer decoder and the feature maps of ResNet~\cite{he2016deep} to perform panoptic segmentation.
Max-Deeplab~\cite{wang2021max} directly predicts object categories and masks through a dual-path transformer regardless of the category being things or stuff.
On top of DETR, MaskFormer~\cite{cheng2021per} uses an additional pixel decoder to refine high spatial resolution features and generates the masks by multiplying queries and features from the pixel decoder.
Due to the computational complexity of multi-head attention~\cite{vaswani2017attention},
both DETR and MaskFormer use feature maps with limited spatial resolutions for panoptic segmentation, which hurts the performance and requires combining additional high-resolution feature maps in final mask prediction.
These methods have provided unified frameworks for predicting things and stuff in panoptic segmentation. However, there is still a noticeable gap between these methods and the top leaderboard methods with separated prediction strategies in terms of performance~\cite{chen56joint,wang2019joint}.
\paragraph{End-to-end Object Detection.} The recent popularity of end-to-end object detection frameworks has inspired many other related works. DETR~\cite{carion2020end} is arguably the most representative end-to-end object detector among these methods. DETR models the object detection task as a dictionary lookup problem with learnable queries and employs an encoder-decoder transformer to predict bounding boxes without extra post-processing. DETR greatly simplifies the conventional detection framework and removes many hand-crafted components such as NMS~\cite{ren2015faster,lin2017focal} and anchors~\cite{lin2017focal}. Zhu \etal~\cite{zhu2020deformable} proposed Deformable DETR which further reduces the memory and computational cost of DETR through deformable attention layers. Although having these advantages, the attention maps of the deformable attention layers are sparse and cannot be directly used for dense prediction in panoptic segmentation.
\paragraph{Instance Segmentation.}
Mask R-CNN~\cite{he2017mask} has been one of the most representative two-stage instance segmentation methods by first extracting ROIs and then predicting the final results conditioned on these ROIs.
One-stage methods such as CondInst~\cite{tian2020conditional} and SOLOv2~\cite{wang2020solov2} further simplify this pipeline by employing dynamic filters (conditional convolution)~\cite{Yang2019CondConvCP} with a kernel branch.
Recently, SOLQ~\cite{dong2021solq} and QueryInst~\cite{fang2021queryinst} perform instance segmentation in an end-to-end paradigm without involving NMS. QueryInst is based on the end-to-end object detector Sparse-RCNN~\cite{sun2021sparse} and predicts masks through corresponding bounding boxes and queries. By encoding masks to vectors, SOLQ predicts mask vectors in a regressive manner and outputs the final masks by decoding the vectors. The proposed Panoptic SegFormer can also handle end-to-end instance segmentation by only predicting thing classes.
\section{Methods}
\subsection{Overall architecture}
As illustrated in Figure \ref{fig:model}, Panoptic SegFormer consists of three key modules: transformer encoder, location decoder, and mask decoder, where (1) the transformer encoder is applied to refine the multi-scale feature maps given by the backbone, (2) the location decoder is designed to capture objects' location clues, and (3) the mask decoder is for the final classification and segmentation.
During the forward phase, we first feed the input image $X\!\in\!\mathbb{R}^{H \times W\times 3}$ to the backbone network, and obtain the feature maps $C_3$, $C_4$, and $C_5$ from the last three stages, whose resolutions are
$1/8$, $1/16$ and $1/32$ compared to
the input image,
respectively.
We then project the three feature maps to 256 channels with a fully-connected (FC) layer,
and flatten them into feature tokens $C'_3$, $C'_4$, and $C'_5$.
Here, we define $L_i$ as $\frac{H}{2^{i+2}}\!\times\!\frac{W}{2^{i+2}}$, and the shapes of $C'_3$, $C'_4$, and $C'_5$ are $L_1\!\times\!256$, $L_2\!\times\!256$, and $L_3\!\times\!256$, respectively.
Next, using the concatenated feature tokens as input,
the transformer encoder outputs the refined features of size $(L_1\!+\!L_2\!+\!L_3)\!\times\!256$.
After that, we use $N$ randomly initialized queries to uniformly describe things and stuff.
We then embed the location clues (\ie, center location
and scale, where scale denotes the size of the mask) into these queries through the location decoder, and predict the final categories and masks through the mask decoder.
Finally, we adopt a mask-wise strategy
to merge the predicted masks into the panoptic segmentation result, which will be introduced in detail in Section~\ref{mask-wise}.
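To make the token construction above concrete, we provide a minimal
PyTorch-style sketch (the naming is ours rather than the released
implementation; the backbone channel widths are ResNet-style
placeholders, and a $1\!\times\!1$ convolution plays the role of the
per-position FC projection):
{\small\begin{verbatim}
import torch
import torch.nn as nn

# 1x1 convs project C3-C5 to 256 channels (per-position FC).
projs = nn.ModuleList(
    nn.Conv2d(c, 256, kernel_size=1) for c in (512, 1024, 2048))

def build_tokens(c3, c4, c5):
    # Project, flatten to (B, L_i, 256), then concatenate.
    tokens = []
    for proj, feat in zip(projs, (c3, c4, c5)):
        x = proj(feat)                 # (B, 256, H/2^i, W/2^i)
        tokens.append(x.flatten(2).transpose(1, 2))
    return torch.cat(tokens, dim=1)    # (B, L1+L2+L3, 256)

# e.g. a 512x512 input with ResNet-style channel widths:
c3 = torch.randn(1, 512, 64, 64)
c4 = torch.randn(1, 1024, 32, 32)
c5 = torch.randn(1, 2048, 16, 16)
print(build_tokens(c3, c4, c5).shape)  # (1, 5376, 256)
\end{verbatim}
}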
\subsection{Transformer Encoder}
High-resolution and multi-scale feature maps are
important for the segmentation task~\cite{kirillov2019panopticfpn,wang2020solov2,li2021fully}.
Due to the high computational cost of the multi-head attention layer,
previous transformer-based methods~\cite{carion2020end,cheng2021per} can only process low-resolution feature maps
(\eg, $C_5$ of ResNet)
in their encoders, which limits the segmentation performance.
Different from these methods,
we employ the deformable attention layer~\cite{zhu2020deformable} to implement our transformer encoder.
Due to the low computational complexity of the deformable attention layer,
our encoder can refine the high-resolution and multi-scale feature maps $F$ and incorporate positional encodings~\cite{vaswani2017attention} into them.
\subsection{Location Decoder}
Location information plays an important role in distinguishing things with different instance ids in the panoptic segmentation task~\cite{wang2020solo,tian2020conditional,wang2020solov2}.
Inspired by this, we design a location decoder to introduce the location information (\ie, center location and scale) of things and stuff into the learnable queries.
Specifically, given $N$ randomly initialized queries and the refined feature tokens generated by transformer encoder, the decoder will output $N$ location-aware queries.
In the training phase, we apply an auxiliary MLP head on top of the location-aware queries to predict the center locations and scales of the target objects,
and supervise the prediction with a location loss $\mathcal{L}_{\rm loc}$.
Note that the MLP head is an auxiliary branch, which can be discarded during the inference phase.
Since the location decoder does not need to predict the segmentation mask, we implement it with the computationally and memory efficient deformable attention~\cite{zhu2020deformable}.
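A minimal sketch of this auxiliary head is given below (the hidden
width and the use of a sigmoid to keep the outputs normalised are our
assumptions, not necessarily the released design):
{\small\begin{verbatim}
import torch
import torch.nn as nn

class LocationHead(nn.Module):
    # Predicts a normalised mask center (cx, cy) and a scale s
    # from each location-aware query; discarded at inference.
    def __init__(self, dim=256, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, queries):            # (B, N, 256)
        out = self.mlp(queries).sigmoid()  # values in [0, 1]
        return out[..., :2], out[..., 2]   # center, scale

center, scale = LocationHead()(torch.randn(2, 300, 256))
print(center.shape, scale.shape)  # (2, 300, 2) (2, 300)
\end{verbatim}
}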
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\linewidth]{media/mask_decoder.pdf}
\end{center}
\caption{
\textbf{Architecture of mask decoder.}
The attention maps $A$ are the product of query $Q$ and key $K^T$. We split and reshape the multi-scale attention maps into $A_{\{3,4,5\}}$, then we upsample and concatenate these features into $A_{\rm fuse}$. The mask is generated from the attention maps with one $1\!\times\!1$ conv layer.
The category label is predicted from the refined query $Q_{\rm refine}$ with one linear projection layer.}
\label{fig:output_layer}
\end{figure}
\subsection{Mask Decoder}
As shown in Figure~\ref{fig:output_layer}, the mask decoder is proposed to predict the object category and mask according to the given queries.
The queries $Q$ of the mask decoder are the location-aware queries from the location decoder, and the keys $K$ and values $V$ of the mask decoder are the refined feature tokens $F$ from the transformer encoder.
We first pass the queries through 4 decoder layers, and then fetch the attention map $A\in \mathbb{R}^{N \times h \times (L_1+L_2+L_3)}$ and the refined query $Q_{\rm refine} \in \mathbb{R}^{N\times 256}$ from the last decoder layer,
\iffalse
, which can be formulated as:
\begin{equation}\label{At}
A = \frac{QK^T}{\sqrt{d_k}}\in \mathbb{R}^{N\!\times\!h\!\times\! (L_1+L_2+L_3)},
\end{equation}
\fi
where $N$ is the query number, $h$ is the head number of the multi-head attention layer, and $L_1\!+\!L_2\!+\!L_3$ is the length of feature tokens $F$.
Similar to previous methods~\cite{wang2021max,carion2020end}, we directly perform classification through an FC layer on top of the refined query $Q_{\rm refine}$ from the last decoder layer.
At the same time, to predict the object mask, we first split and reshape the attention maps $A$ into
attention maps $A_3$, $A_4$, and $A_5$, which have the same spatial resolution as $C_3$, $C_4$, and $C_5$.
This process can be formulated as:
\begin{equation}\label{split}
(A_3, A_4, A_5) = {\rm Split}(A),\ \ \ A_{i}\in\mathbb{R}^{ \frac{H}{2^{i}}\!\times\!\frac{W}{2^{i}}\!\times h },
\end{equation}
where ${\rm Split}(\cdot)$ denotes the split and reshaping operation. After that, we upsample these attention maps to the resolution of $H/8\!\times\! W/8$ and concatenate them along the channel dimension, as illustrated in Eqn.~\ref{fuse}.
\begin{equation}\label{fuse}
A_{\rm fuse} = {\rm Concat}(A_3, {\rm Up}_{\times 2}(A_4),{\rm Up}_{\rm \times 4}(A_5)).
\end{equation}
Here, ${\rm Up}_{\times 2}(\cdot)$ and ${\rm Up}_{\times 4}(\cdot)$ mean the 2 times and 4 times bilinear interpolation operations, respectively. ${\rm Concat}(\cdot)$ is the concatenation operation.
Finally, based on the fused attention maps $A_{\rm fuse}$, we predict the binary mask through a $1\!\times\!1$ convolution.
Note that, because a complete attention map is required to predict segmentation masks, we implement the mask decoder with the common multi-head attention~\cite{vaswani2017attention}, instead of a sparse attention layer such as deformable attention~\cite{zhu2020deformable} or Longformer~\cite{Beltagy2020Longformer}.
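To make the mask prediction path concrete, a minimal PyTorch-style sketch of Eqns.~\ref{split} and \ref{fuse} is given below. It is an illustration rather than our exact implementation; in particular, the tensor layout and the \texttt{conv1x1} head (e.g., \texttt{nn.Conv2d(3*h, 1, 1)}) are assumptions of the sketch.
\begin{verbatim}
import torch
import torch.nn.functional as F

def predict_masks(A, sizes, conv1x1):
    # A: (N, h, L1+L2+L3) attention maps from the last decoder layer
    # sizes: [(H/8, W/8), (H/16, W/16), (H/32, W/32)] for A_3, A_4, A_5
    N, h, _ = A.shape
    parts = torch.split(A, [Hi * Wi for (Hi, Wi) in sizes], dim=-1)
    maps = [p.reshape(N, h, Hi, Wi)
            for p, (Hi, Wi) in zip(parts, sizes)]      # Split(A), Eqn. (split)
    target = sizes[0]                                  # H/8 x W/8
    ups = [m if m.shape[-2:] == target else
           F.interpolate(m, size=target, mode='bilinear',
                         align_corners=False)          # Up_x2, Up_x4
           for m in maps]
    A_fuse = torch.cat(ups, dim=1)                     # Eqn. (fuse): (N, 3h, H/8, W/8)
    return conv1x1(A_fuse).squeeze(1)                  # per-query mask logits
\end{verbatim}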
\subsection{Loss Function}
During training,
we follow common practices~\cite{carion2020end,stewart2016end} to search for the best
bipartite matching between the prediction set $\hat{Y}\!=\!\{\hat{y}_i\}_{i=1}^{N}$ and the ground truth set $Y\!=\!\{y_i\}_{i=1}^{M}$, where $N\!\ge\!M$ is always guaranteed,
and the ground truth set $Y$ is padded with $\varnothing$ so that its element number matches that of the prediction set $\hat{Y}$.
Specifically, we utilize the Hungarian algorithm~\cite{kuhn1955hungarian} to search for the permutation $\sigma$ with the minimum matching cost, which is the sum of the classification loss $\mathcal{L}_{\rm cls}$ and the segmentation loss $\mathcal{L}_{\rm seg}$.
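As a rough illustration, this matching step can be realized with an off-the-shelf Hungarian solver. The sketch below assumes that the pairwise cost matrices have been precomputed; the weights play the role of $\lambda_{\rm cls}$ and $\lambda_{\rm seg}$.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_match(cost_cls, cost_seg, w_cls=1.0, w_seg=1.0):
    # cost_cls, cost_seg: (N, M) costs between the N predictions
    # and the M (unpadded) ground-truth objects
    C = w_cls * cost_cls + w_seg * cost_seg
    pred_idx, gt_idx = linear_sum_assignment(C)  # Hungarian algorithm
    return pred_idx, gt_idx  # prediction pred_idx[k] matches gt gt_idx[k]
\end{verbatim}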
The overall loss function of Panoptic SegFormer can be written as:
\begin{equation}\label{loss}
\mathcal{L} =
\lambda_{\rm cls}\mathcal{L}_{\rm cls} + \lambda_{\rm seg}\mathcal{L}_{\rm seg}
+ \lambda_{\rm loc}\mathcal{L}_{\rm loc},
\end{equation}
where $\lambda_{\rm cls}$, $\lambda_{\rm seg}$, and $\lambda_{\rm loc}$ are the weights to balance three losses. $\mathcal{L}_{\rm cls}$ is the classification loss that is implemented by Focal loss~\cite{lin2017focal}, and $\mathcal{L}_{\rm seg}$ is the segmentation loss implemented by Dice loss~\cite{milletari2016v}.
$\mathcal{L}_{\rm loc}$ is the location loss as formulated in Eqn.~\ref{E}:
\begin{equation}\label{E}
\mathcal{L}_{\rm loc} \!=\!\sum_{i}^{N}\mathbb{1}_{\{y_i\neq \varnothing\}}\,(\mathcal{L}_{\rm 1}({ f_{c}}(m_i),\hat{u}_{{\sigma}(i)})+\mathcal{L}_{\rm 1}({ f_{s}}(m_i),\hat{v}_{{\sigma}(i)})),
\end{equation}
where $\mathcal{L}_{\rm 1}$ is the L1 loss. $\hat{u}_{\sigma(i)}$ and $\hat{v}_{\sigma(i)}$ are the predicted
center points and scales from the location decoder.
$\sigma(i)$ denotes the $i^{\rm th}$ index in the permutation $\sigma$.
${f_{c}}(m_i)$ and ${f_{s}}(m_i)$ indicate the center location and scale (size of mask that normalized by the size of the image) of the target mask $m_i$, respectively.
$\mathbb{1}_{\{y_i\neq \varnothing\}}$ indicates that only pairs that include a real ground truth object are taken into account.
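A minimal sketch of Eqn.~\ref{E}, assuming the matched (non-$\varnothing$) pairs have already been gathered into aligned tensors, reads as follows.
\begin{verbatim}
import torch.nn.functional as F

def location_loss(pred_centers, pred_scales, gt_centers, gt_scales):
    # all tensors have shape (M, 2); gt_centers and gt_scales correspond
    # to f_c(m_i) and f_s(m_i) of the matched target masks
    return (F.l1_loss(pred_centers, gt_centers, reduction='sum')
            + F.l1_loss(pred_scales, gt_scales, reduction='sum'))
\end{verbatim}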
\subsection{Mask-Wise Inference}\label{mask-wise}
Panoptic segmentation requires each pixel to be assigned a category label (or void) and an instance id (the id is ignored for stuff)~\cite{kirillov2019panoptic}. One commonly used post-processing method is the heuristic procedure~\cite{kirillov2019panoptic}, which adopts an NMS-like procedure~\cite{kirillov2019panoptic} to generate the non-overlapping instance segments for things; we refer to it as the mask-wise strategy here. The heuristic procedure also uses the pixel-wise argmax strategy for stuff and resolves overlaps between things and stuff in favor of the thing classes.
Recent methods~\cite{cheng2021per,carion2020end,wang2021max} use the pixel-wise strategy directly to uniformly merge the results of things and stuff. Although the pixel-wise argmax strategy is conceptually simple, we observe that it consistently produces noisy results due to abnormally extreme pixel values. To this end, we adopt the mask-wise strategy to generate non-overlapping results for stuff as well, based on the heuristic procedure, instead of taking the pixel-wise strategy. However, we treat things and stuff equally and resolve the overlaps among all masks by their confidence scores instead of favoring things over stuff as in the heuristic procedure, which marks a difference between our approach and \cite{kirillov2019panoptic}.
As illustrated in Algorithm~\ref{algo:inference},
the mask-wise merging strategy takes $c$, $s$, and $m$ as input, which denote the predicted categories, confidence scores, and segmentation masks, respectively, and outputs a semantic mask $\tt SemMsk$ and an instance id mask $\tt IdMsk$, assigning a category label and an instance id to each pixel.
\begin{algorithm}[t]
\SetAlgoLined
\PyCode{\small{def}} \PyCode{\small{MaskWiseMerging}}\small{(c,s,m):}\\
\Indp
\PyComment{\small{category ${\rm c}\in\mathbb{R}^N$}} \\
\PyComment{\small{confidence score ${\rm s}\in\mathbb{R}^N$}} \\
\PyComment{\small{mask ${\rm m}\in\mathbb{R}^{N\!\times\!H\!\times\!W}$ }} \\
\small{SemMsk = np.zeros((H,W))}\\
\small{IdMsk = np.zeros((H,W))}\\
\small{order = np.argsort(-s)}\\
\small{id = 0}\\
\PyCode{\small{for}} \small{i} \PyCode{\small{in}} \small{order:} \\
\Indp
\PyComment{\small{drop low quality results}}\\
\PyCode{\small{if}} \small{s[i]< thr$_{\rm cls}$:}\\
\Indp
\PyCode{\small{continue}}\\
\Indm
\PyComment{\small{drop overlaps}}\\
\small{m$_i$ = m[i] \& (SemMsk==0)}\\
\small{SemMsk[m$_i$] = c[i]}\\
\PyCode{\small{if}} \PyCode{\small{isThing}}\small{(c[i]):}\\
\Indp
\small{IdMsk[m$_i$] = id}\\
\small{id += 1}\\
\Indm
\Indm
\PyCode{\small{return}} \small{SemMsk,IdMsk}\\
\Indm
\caption{\textbf{Mask-Wise Merging}}
\label{algo:inference}
\end{algorithm}
Specifically, $\tt SemMsk$ and $\tt IdMsk$ are first initialized with zeros.
Then, we sort the prediction results in descending order of confidence score and fill the sorted predicted masks into $\tt SemMsk$ and $\tt IdMsk$.
Note that results with confidence scores below ${\rm thr}_{\rm cls}$ are discarded, and overlapping regions with lower confidence scores are removed to generate non-overlapping panoptic results.
In the end, a category label and an instance id (things only) are assigned to each pixel.
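For reference, a runnable NumPy version of Algorithm~\ref{algo:inference} is sketched below. Category labels are assumed to start from 1 so that 0 can mark unassigned pixels, and \texttt{is\_thing} is a user-supplied predicate; both are assumptions of the sketch.
\begin{verbatim}
import numpy as np

def mask_wise_merging(c, s, m, is_thing, thr_cls=0.3):
    # c: (N,) labels (>= 1), s: (N,) scores, m: (N, H, W) boolean masks
    H, W = m.shape[1:]
    sem_msk = np.zeros((H, W), dtype=np.int64)  # category label per pixel
    id_msk = np.zeros((H, W), dtype=np.int64)   # instance id per pixel
    inst_id = 1
    for i in np.argsort(-s):          # high-confidence masks first
        if s[i] < thr_cls:            # drop low-quality results
            continue
        free = m[i] & (sem_msk == 0)  # drop overlaps with earlier masks
        sem_msk[free] = c[i]
        if is_thing(c[i]):
            id_msk[free] = inst_id
            inst_id += 1
    return sem_msk, id_msk
\end{verbatim}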
\begin{table*}[t]
\begin{center}
\input{table/panoptic_seg_val}
\end{center}
\caption{
\textbf{Experiments on the COCO val set.} Panoptic SegFormer achieves 50.0\% PQ on COCO \texttt{val} with ResNet-50 as backbone, surpassing previous methods such as DETR~\cite{carion2020end} and Panoptic FCN~\cite{li2021fully} by over 6.6\% PQ and 6.4\% PQ, respectively. When trained for 12 epochs, Panoptic SegFormer achieves 46.4\% PQ, which is comparable with the 46.5\% PQ of MaskFormer~\cite{cheng2021per} trained for 300 epochs.
$^{\text{\textdagger}}$ denotes that backbones are pre-trained on ImageNet-22K.}
\label{val}
\end{table*}
\begin{table*}[t]
\begin{center}
\input{table/panoptic_seg_test_dev}
\end{center}
\caption{
\textbf{Experiments on the COCO test-dev set.} With PVTv2-B5~\cite{wang2021pvtv2} as backbone, Panoptic SegFormer achieves 54.4\% PQ on COCO \texttt{test-dev}, surpassing the previous SOTA method Max-DeepLab-L~\cite{wang2021max} and the competition-level method Innovation~\cite{chen56joint} by 3.1\% PQ and 0.9\% PQ, respectively, with fewer parameters and lower computation cost.
}
\label{test_dev}
\end{table*}
\begin{table*}[t]
\begin{center}
\input{table/instance_seg}
\end{center}
\caption{ \textbf{Instance segmentation experiments on the COCO \texttt{test-dev} set.} When trained with things only, Panoptic SegFormer can perform instance segmentation. With ResNet-50 as backbone, Panoptic SegFormer achieves 41.7 mask AP on COCO \texttt{test-dev}, which is 1.6 AP higher than HTC~\cite{chen2019hybrid}.}
\label{t2}
\end{table*}
\begin{table*}[t]
\begin{center}
\input{table/complexity}
\end{center}
\caption{ Deformable-DETR$^*$ is implemented in MMDet~\cite{mmdetection} and we use the same encoder as them. All data are measured on the same platform. FLOPs are computed on input images with a size of $1200\!\times\! 800$; frames-per-second (Fps) is measured on a Tesla V100 GPU with a batch size of 1 by taking the average runtime on the entire val set. We obtain the memory consumption data during the training phase with a batch size of 1.}
\label{complexity}
\end{table*}
\section{Experiments}
\renewcommand\tabcolsep{1.0pt}
\begin{figure*}[t]
\centering
\input{table/visual}
\caption{\textbf{Comparing visualization results of Panoptic SegFormer with other methods on the COCO \texttt{val} set.} For a fair comparison, all results are generated with ResNet-101~\cite{he2016deep} as the backbone. The results in the second and fourth rows show that our method still performs well in highly crowded or occluded scenes. Benefiting from our mask-wise inference strategy, our results have few artifacts, which often appear in the results of DETR~\cite{carion2020end} ({\eg, the dining table in the third row}).}
\label{visual}
\end{figure*}
We evaluate Panoptic SegFormer on COCO~\cite{lin2014microsoft}, comparing it with several state-of-the-art methods. We provide the main results of panoptic segmentation and some visualization results. We also report the results of instance segmentation.
\subsection{Datasets}
We perform experiments on the COCO 2017 dataset~\cite{lin2014microsoft} without external data. The COCO dataset contains 118K training images and 5K validation images, covering 80 thing classes and 53 stuff classes.
\subsection{Implementation Details}
Our settings mainly follow DETR and Deformable DETR for simplicity.
Specifically, we use Channel Mapper~\cite{mmdetection} to map the dimensions of the backbone's outputs to 256. The location decoder contains 6 deformable attention layers, and the mask decoder contains 4 vanilla cross-attention layers.
The hyper-parameters in deformable attention are the same as Deformable DETR~\cite{zhu2020deformable}.
We train our models for 50 epochs, with a batch size of 1 per GPU and a learning rate of $1.8 \!\times\! 10^{-4}$ (decayed at the 40th epoch by a factor of 0.1; the learning rate multiplier of the backbone is 0.1). We use a multi-scale training strategy with the maximum image side not exceeding 1333 and the minimum image side varying from 480 to 800.
The number of queries $N$ is set to 400. $\lambda_{\rm cls}$, $\lambda_{\rm seg}$, and $\lambda_{\rm loc}$ in Equation~\ref{loss} are set to 1, 1, and 5, respectively. We employ a threshold of 0.5 to obtain binary masks from soft masks. The threshold ${\rm thr}_{\rm cls}$ used to filter low-quality results is 0.3. PVTv2~\cite{wang2021pvtv2} is pre-trained on the ImageNet-1K~\cite{deng2009imagenet} set.
All experiments are trained on one NVIDIA DGX node with 8 Tesla V100 GPUs. For our largest model Panoptic SegFormer~(PVTv2-B5), we use 4 DGX nodes to shorten training time.
\paragraph{Panoptic segmentation.} We conduct experiments on the COCO \texttt{val} and \texttt{test-dev} sets. In Tables~\ref{val} and \ref{test_dev}, we report our main results, comparing with other state-of-the-art methods. Panoptic SegFormer attains 50.0\% PQ on COCO \texttt{val} with ResNet-50 as the backbone and single-scale input, and it surpasses the previous methods Panoptic-FCN~\cite{li2021fully} and DETR~\cite{carion2020end} by over 6.4\% PQ and 6.6\% PQ, respectively. Besides its remarkable accuracy, the training of Panoptic SegFormer is efficient. Under the $1\!\times$ training strategy (12 epochs) with ResNet-50 as the backbone, Panoptic SegFormer achieves 46.4\% PQ, on par with the 46.5\% PQ of MaskFormer~\cite{cheng2021per} trained for 300 epochs.
Enhanced by the powerful vision transformer backbone PVTv2-B5~\cite{wang2021pvtv2}, Panoptic SegFormer attains a new record of 54.4\% PQ on COCO \texttt{test-dev} without TTA, surpassing Max-DeepLab~\cite{wang2021max} by over 3.1\% PQ.
Our method even surpasses the previous competition-level method Innovation~\cite{chen56joint} by over 0.8\% PQ\footnote{We only compare methods and results that do not use external data.}. Figure~\ref{visual} shows some visualization results on the COCO val set. These images contain highly crowded or occluded scenes, and Panoptic SegFormer still predicts convincing results.
\paragraph{Instance segmentation.} In Table~\ref{t2}, we report our instance segmentation results on the COCO \texttt{test-dev} set. For a fair comparison, we use 300 queries for instance segmentation, and only the things data are used. With ResNet-50 as the backbone and single-scale input, Panoptic SegFormer achieves 41.7 mask AP, surpassing the previous state-of-the-art methods HTC~\cite{chen2019hybrid} and QueryInst~\cite{fang2021queryinst} by 1.6 AP and 1.1 AP, respectively.
\begin{figure*}[h]
\begin{center}
\includegraphics[width=0.95\linewidth]{media/mask.pdf}
\end{center}
\caption{
\textbf{Visualization of multi-head attention maps and corresponding outputs from mask decoder.} Different heads have different preferences. Head 4 and Head 1 pay attention to foreground regions, and Head 8 prefers regions that occlude foreground. Head 5 always pays attention to the background that is around the foreground. Through the collaboration of these heads, Panoptic SegFormer can predict accurate masks. The 3rd row shows an impressive result of a horse that is highly obscured by the other horse.
}
\label{fig:mask}
\end{figure*}
\paragraph{Visualization of attention maps.}
Different from previous methods, our results are generated through multi-scale multi-head attention maps. Figure~\ref{fig:mask} shows some samples of multi-head attention maps. Through the multi-head attention mechanism, different heads of one query learn their own attention preferences. We observe that some heads pay attention to foreground regions, some heads prefer boundaries, and others prefer background regions. This shows that each mask is generated by aggregating diverse, complementary information from the image.
\subsection{Complexity of Panoptic SegFormer}
We show model complexity and inference efficiency in Table~\ref{complexity}, and we can see that Panoptic SegFormer can achieve state-of-the-art performance on panoptic segmentation with acceptable inference speed.
\section{Conclusion}
We propose a concise model named Panoptic SegFormer that unifies the processing workflow of things and stuff. Panoptic SegFormer surpasses previous methods by a large margin, demonstrating the superiority of treating things and stuff with the same recipe.
\section{Introduction}
Lovelock's theory suggests that Einstein gravity can be modified with higher derivative terms while keeping the equations of motion second order \cite{ll1,ll2}. One example of such a theory is the well-known Einstein-Gauss-Bonnet (EGB) gravity. Increasing interest has been paid to this sort of gravity in $4$-dimensional spacetime. Recent research \cite{lcs} gives a method to realise it by rescaling the Gauss-Bonnet coupling constant, $\tilde{\alpha}\to\alpha/(D-4)$, and taking $D\to4$ to obtain spherically symmetric 4D black hole solutions with a non-vanishing Gauss-Bonnet term.\\
The strategy is quite straightforward. Considering an action with the contribution from the Gauss-Bonnet term after rescaling the Gauss-Bonnet constant $\tilde{\alpha}$, one obtains
\begin{eqnarray}
S=\int d^Dx\sqrt{-g}\left(R-2\Lambda+\frac{\alpha}{D-4}(R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^2)\right).
\end{eqnarray}
The action then yields the equations of motion:
\begin{equation}
R_{\mu\nu}-\frac 12 R g_{\mu\nu}+\Lambda g_{\mu\nu}+\frac{\alpha}{D-4}H_{\mu\nu}=0,
\end{equation}
where $H_{\mu\nu}$ is the Gauss-Bonnet tensor:
\begin{eqnarray}
H_{\mu\nu}&=&2\left[RR_{\mu\nu}-2R_{\mu\rho\nu\sigma}R^{\rho\sigma}+R_{\mu\alpha\beta\gamma}R_{\nu}^{\ \alpha\beta\gamma}-2R_{\mu\alpha}R^{\alpha}_{\ \nu}\right.\nonumber\\
&&\left.-\frac 14g_{\mu\nu}(R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^2)\right].
\end{eqnarray}
As we know, $H_{\mu\nu}$ vanishes identically in $4$ dimensions, and the theory reduces to Einstein's gravity. After the rescaling, however, the $1/(D-4)$ factor seems to leave us with a non-vanishing term. So it was suggested that taking the $D\to4$ limit will give Gauss-Bonnet gravity in $4$ dimensions:
\begin{equation}
\lim\limits_{D\to 4}\left[R_{\mu\nu}-\frac 12 R g_{\mu\nu}+\Lambda g_{\mu\nu}+\frac{\alpha}{D-4}H_{\mu\nu}\right]=0.
\end{equation}
This work sheds light on the investigation of higher derivative gravity in four-dimensional spacetime. However, objections have also been raised, claiming that the method performed in \cite{lcs} cannot actually provide Gauss-Bonnet gravity in four dimensions \cite{ar1,ar2,ar3,ar4}, i.e. it is not a ``novel Einstein-Gauss-Bonnet gravity''.\\
Since the strategy proposed above is unable to give topologically non-trivial solutions, another method has been proposed based on a ``Kaluza-Klein-like'' procedure \cite{2003,galieon}, compactifying $D$-dimensional EGB gravity on a maximally symmetric $(D-4)$-dimensional space. With a similar rescaling and the $D\to4$ limit, one obtains a purely 4-dimensional EGB theory \cite{2003,2004}. The resulting theory can also be viewed as a special Horndeski gravity or as generalised Galileons \cite{2003,hondeski,gali1,gali2}.\\
Meanwhile, as black holes have a good feature of thermodynamical properties \cite{hawking}, great attention has been paid in this area. During past decades, the AdS/CFT dictionary provides a powerful tool to investigate strongly coupled gauge theories, which are dual to black holes in AdS space. For such systems, there exists a universal bound, known as Kovtun-Starinets-Son (KSS) bound for shear viscosity to entropy density ratio \cite{kss1,kss2,kss3,kss4,kss5}:
\begin{equation}\label{kss}
\frac{\eta}{s}\geq\frac{1}{4\pi}.
\end{equation}
KSS bound has been found to be violated when small corrections are added to Einstein's gravity. Coupled with Gauss-Bonnet term, a modified version of this bound reads \cite{GBV}
\begin{equation}\label{gbbound}
\frac{\eta}{s}\geq\frac{1}{4\pi}(1-4{\alpha}_{GB}),
\end{equation}
where $\alpha_{GB}=-\Lambda\alpha/3$, with $\Lambda$ the cosmological constant.\\
It has been shown that, when a non-vanishing electric charge is considered, the bound (\ref{gbbound}) is violated for $4$D Gauss-Bonnet gravity, and a constraint on the Gauss-Bonnet coupling can be obtained by analysing the causal structure in the bulk \cite{4d1}. It is of interest to check whether the violation still happens, and whether there is a new constraint on the coupling constant, if we include two scalar fields linear in the spatial directions.\\
In this paper, a $4$D black hole solution with both electric and magnetic charges as well as axions, coupled to higher derivative terms, will be given. The ``Kaluza-Klein-like'' procedure introduced in \cite{2003} will be used to find the solution. Then, the shear viscosity to entropy density ratio for a neutral black hole with scalar fields is studied, and it is found to violate the KSS bound.\\
This paper is organised as follows:\\
In section $2$, the Einstein-Maxwell-Gauss-Bonnet-axion gravity is obtained through the ``Kaluza-Klein-like'' method, and a dyonic black hole solution coupled with scalar fields and higher derivative terms is derived. The thermodynamics of this black hole is briefly discussed as well.\\
Removing the electric and magnetic charges, in section $3$ we calculate the shear viscosity to entropy density ratio for neutral black holes in $4$D Gauss-Bonnet gravity with axions. It will be shown that the KSS bound is violated because of the existence of the scalar fields.\\
Section $4$ focuses on the difficulties one meets when trying to constrain the Gauss-Bonnet constant through causality analysis when the black hole is uncharged.\\
Finally, conclusions and outlook are presented in section $5$.
\section{Dyonic black holes in four dimensions with Gauss-Bonnet coupling}
\subsection{Reduced action}
Working in four dimensions, we first introduce two scalar fields. The general action in $D$-dimensional spacetime for Einstein-Maxwell-Gauss-Bonnet-axion (EMGBA) gravity reads
\begin{equation}\label{ori_action}
S=\int d^Dx\sqrt{-g}\left(R-2\Lambda+\tilde{\alpha}\mathcal{L}_{GB}-\frac 12\sum_{i=1}^{i=2}(\partial \varphi_i)^2-\frac 14 F_{\mu\nu}F^{\mu\nu}\right),
\end{equation}
where $\varphi_i$ are the scalar fields, and Gauss-Bonnet term takes the form
\begin{equation}
\mathcal{L}_{GB}=R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^2.
\end{equation}
Now we parameterise a $D$-dimensional metric:
\begin{equation}
ds_D^2=ds_p^2+e^{2\phi}d\Sigma_{D-p,\lambda}^2,
\end{equation}
where $\phi$ is called the ``breathing scalar'', depending only on the external $p$-dimensional coordinates. The line element $d\Sigma_{D-p,\lambda}^2$ describes the internal maximally symmetric space, and $\lambda$ gives the curvature of the internal spacetime. This is a diagonal reduction along $\Sigma_{D-p,\lambda}$. When $\lambda=0$, the ``internal'' space is flat. Here we only consider an Abelian isometry group for the ``internal'' space, so the massive modes can be truncated.\\
Now let us rescale Gauss-Bonnet coupling constant, and
\begin{equation}
\alpha=\epsilon \tilde{\alpha},
\end{equation}
where $\epsilon=(D-p)$. The next step is to consider the cases where $p<5$, and take the limit $D\to p$. The resulting reduced $p$-dimensional action then reads \cite{galieon,2003}
\begin{eqnarray}\label{action}
S_{\text{reduced},\lambda}&=&\int d^px\sqrt{-g}\left[R-2\Lambda+\alpha\left(\phi\mathcal{L}_{GB}+4G^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-4(\partial\phi)^2\square\phi+2((\partial\phi)^2)^2\right)\right.\nonumber\\
&&\left.-\frac 12\sum_{i=1}^2(\partial\varphi_i)^2-\frac 14F_{\mu\nu}F^{\mu\nu}-2\lambda R e^{-2\phi}
-12\lambda(\partial\phi)^2e^{-2\phi}-6\lambda^2e^{-4\phi}\right],
\end{eqnarray}
where $G_{\mu\nu}$ is the Einstein tensor. This action is claimed to be the EMGBA theory in $4$ dimensions, and it also works for $D<4$. Constructed in a mathematically more rigorous way, (\ref{action}) does not suffer from the defects of the na\"ive $D\to4$ limit, such as being ill-defined \cite{ar1,ar2,2004}.
\subsection{Dyonic black hole solution}
The next step is to find the solution in a spherically symmetric spacetime. First of all, one can obtain the equations of motion from the action (\ref{action}); in four dimensions ($p=4$), the following combination can be written as
\begin{equation}\label{eom}
g^{\mu\nu}\mathcal{E}_{\mu\nu}+\frac{\alpha}{2}\mathcal{E}_{\phi}=
4\Lambda-R-\frac{\alpha}2\mathcal{L}_{GB}+\frac 12\sum_{i=1}^{2}(\partial\varphi_i)^2=0,
\end{equation}
where $\mathcal{E}_{\mu\nu}$ is the equation of motion from the metric variation, and $\mathcal{E}_{\phi}$ is that for the ``breathing scalar''. In this way, the dependence on $\phi$ can be removed.\\
We assume that $\phi=\phi(r)$ and apply the planar symmetric ansatz
\begin{equation}\label{metric}
ds_4^2=-e^{-2\chi(r)}f(r)dt^2+\frac 1{f(r)}dr^2+r^2(dx^2+dy^2).
\end{equation}
Also, we choose the scalar fields to be linearly dependent on spatial coordinates, such that
\begin{equation}
\varphi_i=\beta x_i,
\end{equation}
where $\beta$ is a constant and $x_i=\{x,y\}$. Electric and magnetic charges are added to the black hole as follows:
\begin{equation}
A=(A_t(r),0,0,A_y(x))=(-\frac qr,0,0,hx),
\end{equation}
where $q$ and $h$ are electric and magnetic charges respectively, and
\begin{equation}
F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}.
\end{equation}
More details on the equations of motion can be found in the appendix. \\
Now combining (\ref{eom}) together with (\ref{metric}), one obtains
\begin{equation}\label{eom1}
-\frac{2 \alpha \left(f'(r)^2+f(r) f''(r)\right)}{r^2}+\frac{4 f'(r)}{r}+f''(r)+\frac{2 f(r)}{r^2}+4 \Lambda +\frac{\beta ^2}{r^2}=0.
\end{equation}
This equation alone is, however, not enough to find the solution for $f(r)$. Substituting the metric (\ref{metric}) into the action (\ref{action}), and removing total derivative terms, one finds the effective Lagrangian to be \cite{2003}
\begin{eqnarray}\label{lag}
\mathcal{L}&=&\frac{e^{\chi(r)}}{6r^2}\left[-3 \left(4 r^3 f'(r)+h^2-q^2 e^{2 \chi (r)}+2 \beta ^2 r^2+4 \Lambda r^4\right)\right.\nonumber\\
&&-4 r^2 f(r) \left(2 \alpha f'(r) \phi '(r) \left(r^2 \phi '(r)^2-3 r \phi '(r)+3\right)+3\right)\nonumber\\
&&+4 \alpha r^2 f(r)^2 \phi '(r) (4 \chi '(r) \left(r^2 \phi '(r)^2-3 r \phi '(r)+3\right)\nonumber\\
&&\left.\left.+\phi '(r) \left(3 r^2 \phi '(r)^2-8 r \phi '(r)+6\right)\right)\right],
\end{eqnarray}
where a flat ``internal'' space is considered, which means $\lambda =0$, and the theory is invariant under a constant shift of $\phi$.\\
Varying (\ref{lag}) with respect to $f(r)$, and then setting $\chi(r)$ to zero, one finds
\begin{equation}
4 \alpha f(r) \left(r \phi '(r)-1\right)^2 \left(\phi '(r)^2+\phi ''(r)\right)=0,
\end{equation}
which implies
\begin{equation}\label{phi}
\phi '(r)= \frac{1}{r},
\end{equation}
and inserting (\ref{phi}) into the $\delta \chi$ equation yields
\begin{equation}\label{eom2}
4 r \left(r^2-2 \alpha f(r)\right) f'(r)+4 f(r) \left(\alpha f(r)+r^2\right)+h^2+q^2+2 \beta ^2 r^2+4 \Lambda r^4=0.
\end{equation}
Take $r_H$ to be the black brane horizon, that is, $f(r_H)=0$. One is now able to give the exact solution from (\ref{eom1}) and (\ref{eom2}):
\begin{equation}
f(r)=\frac{r^2}{2\alpha }\left(1-\sqrt{1+\frac{4 \alpha \Lambda }{3}-\frac{4 \alpha \Lambda r_H^3}{3 r^3}-\frac{\alpha h^2}{r^4}+\frac{\alpha h^2}{r^3 r_H}-\frac{\alpha q^2}{r^4}+\frac{\alpha q^2}{r^3 r_H}+\frac{2 \alpha \beta ^2}{r^2}-\frac{2 \alpha \beta ^2 r_H}{r^3}}\right)
\end{equation}
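This solution can be checked against (\ref{eom2}) numerically. A minimal script of the following kind, with purely illustrative parameter values, verifies that the residual of (\ref{eom2}) vanishes up to finite-difference error:
\begin{verbatim}
import numpy as np

alpha, Lam, beta, q, h, rH = 0.1, -3.0, 0.5, 0.3, 0.2, 1.0  # illustrative

def f(r):
    X = (1 + 4*alpha*Lam/3 - 4*alpha*Lam*rH**3/(3*r**3)
         - alpha*h**2/r**4 + alpha*h**2/(r**3*rH)
         - alpha*q**2/r**4 + alpha*q**2/(r**3*rH)
         + 2*alpha*beta**2/r**2 - 2*alpha*beta**2*rH/r**3)
    return r**2/(2*alpha)*(1 - np.sqrt(X))

def df(r, eps=1e-6):  # central finite difference
    return (f(r + eps) - f(r - eps))/(2*eps)

for r in (1.5, 2.0, 5.0):
    res = (4*r*(r**2 - 2*alpha*f(r))*df(r) + 4*f(r)*(alpha*f(r) + r**2)
           + h**2 + q**2 + 2*beta**2*r**2 + 4*Lam*r**4)
    print(r, res)  # residual of (eom2): ~0 up to O(eps**2)
\end{verbatim}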
\subsection{Planar black brane in AdS space}
Now rescale the parameters, defining
\begin{eqnarray}
\alpha_{GB}&=&-\frac{\Lambda}{3}\alpha,\\
\hat{\beta}^2&=&-\frac 3{2\Lambda}\frac {\beta^2}{r_H^2},\\
Q^2&=&-\frac 3{4\Lambda}\frac{q^2}{r_H^4},\\
H^2&=&-\frac 3{4\Lambda}\frac{h^2}{r_H^4}.
\end{eqnarray}
The line element for planar black brane in AdS space can be written as
\begin{equation}
ds^2=-F(r)N^2dt^2+F(r)^{-1}dr^2+\frac{r^2}{l^2}(dx^2+dy^2),
\end{equation}
with $l$ the AdS radius, and
\begin{equation}
F(r)=\frac{r^2}{2\alpha_{GB}l^2}\left(1-\sqrt{1-4\alpha_{GB}\left(1-\frac{r_H^3}{r^3}(1+H^2+Q^2)+\frac{r_H^4}{r^4}(H^2+Q^2)+\hat{\beta}^2(\frac{r_H^3}{r^3}-\frac{r_H^2}{r^2})\right)}\right),
\end{equation}
where we use the fact that when $D=4$,
\begin{equation}
\Lambda=-\frac{(D-1)(D-2)}{2l^2}=-\frac{3}{l^2}.
\end{equation}
To find the value of the constant $N^2$, one notes that the geometry should reduce conformally to the flat Minkowski metric at the boundary. With $r \to \infty$, one has
\begin{equation}
F(r)N^2\to \frac{r^2}{l^2}.
\end{equation}
As a result,
\begin{equation}
N^2=\frac 12(1+\sqrt{1-4\alpha_{GB}}).
\end{equation}
This solution is a dyonic black hole in $4$D EMGB gravity with linear axions. Through the ``Kaluza-Klein-like'' method, this sort of black hole now contains the contribution from higher derivative terms.
\subsection{Thermodynamics}
Since the Gauss-Bonnet term only contains curvature terms, its black hole thermodynamics is the same as that of Schwarzschild-AdS. This property is preserved by our reduction scheme.
The Hawking temperature at the event horizon reads
\begin{equation}\label{temp}
T=\frac{1}{2\pi\sqrt{g_{rr}}}\frac{d \sqrt{g_{tt}}}{d r}|_{r=r_H}=\frac{N}{4\pi}F'(r_H)
=\frac{Nr_H}{4\pi l^2}(3-\hat{\beta}^2-H^2-Q^2).
\end{equation}
The black brane becomes extremal as $T\to0$, that is, when
\begin{equation}
\hat{\beta}^2+H^2+Q^2=3.
\end{equation}
The entropy density of the horizon is given by \cite{v2,v3}
\begin{equation}
s=-\frac{1}{V_2}\frac {\partial}{\partial T}\mathcal{F}(T)=\frac{r_H^2}{4l^2},
\end{equation}
where $V_2=\int dx dy$ represents the spatial $2$-volume (or area), and the free energy $\mathcal{F}$ reads
\begin{equation}
\mathcal{F}(T)=-\frac 43 V_2(\pi l T)^2\frac{T}{N^2(3-\hat{\beta}^2-H^2-Q^2)^2}.
\end{equation}
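As a small numerical aid, the temperature (\ref{temp}) and the entropy density can be evaluated as follows; the values of $N$ and $l$ are illustrative defaults, not fixed choices.
\begin{verbatim}
import numpy as np

def hawking_T(rH, beta_hat2, H2, Q2, N=1.0, l=1.0):
    # Eq. (temp); vanishes in the extremal limit beta_hat^2+H^2+Q^2 = 3
    return N*rH/(4*np.pi*l**2)*(3 - beta_hat2 - H2 - Q2)

def entropy_density(rH, l=1.0):
    return rH**2/(4*l**2)

print(hawking_T(1.0, 1.0, 1.0, 1.0))  # extremal case: T = 0
\end{verbatim}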
\section{Shear viscosity for neutral black holes}
\subsection{Weaker horizon formula}
Before the shear viscosity is investigated, let us change coordinates to $u=r_H/r$, so that
\begin{equation}
f(u)=\frac{1}{2\alpha_{GB}}\left(1-\sqrt{1-4\alpha_{GB}\left(1-u^3(1+H^2+Q^2)+u^4(H^2+Q^2)+\hat{\beta}^2(u^3-u^2)\right)}\right).
\end{equation}
From now on, for simplicity, only black holes without electric or magnetic charge are considered, i.e. $H=Q=0$, and
\begin{equation}
f(u)=\frac{1}{2\alpha_{GB}}\left(1-\sqrt{1-4\alpha_{GB}\left(1-u^3+\hat{\beta}^2(u^3-u^2)\right)}\right).
\end{equation}
A tensor type perturbation is considered, where
\begin{equation}\label{pert}
(\delta g)^x_y=h(t,u)=h(u)e^{-i\omega t},
\end{equation}
and
\begin{equation}\label{metricpert}
ds^2=\frac{r_H^2}{u^2 l^2}\left(-f(u)N^2dt^2+d\vec{x}^2+2h(t,u)dxdy\right)+\frac {l^2}{u^2f(u)}du^2.
\end{equation}
Usually, the Kubo formula gives the shear viscosity in general cases:
\begin{equation}
\eta=\lim\limits_{\omega\to 0}\frac{1}{2\omega}\int dtd\mathbf{x}e^{i\omega t}\langle[T_{xy}(t,\mathbf{x}),T_{xy}(t,\mathbf{0})]\rangle,
\end{equation}
where $T_{xy}$ is the momentum-stress tensor. If translation invariance is not broken by any sources, then the momentum $T_{ty}$ is conserved, and the corresponding current is $T_{xy}$. The KSS bound holds in the momentum-conserving situation. It is weakened and becomes (\ref{gbbound}) when higher derivative terms are non-vanishing.\\
Now, with scalar fields, the momentum is no longer conserved, and the translation invariance is broken. As a consequence, the shear viscosity $\eta$ no longer has a hydrodynamic interpretation, but $\eta/s$ is still closely associated with entropy production. In this situation, one cannot simply apply the Kubo formula to compute the shear viscosity $\eta$, and it does not work if one tries to obtain it directly from horizon data alone. Instead, a ``weaker horizon formula'' has been suggested to study the shear viscosity to entropy density ratio $\eta/s$ in such cases \cite{weak}. When the perturbation is massive, one has
\begin{equation}\label{weaker}
\frac{\eta}{s}=\frac{1}{4\pi}h(1)^2.
\end{equation}
Substituting (\ref{metricpert}) into (\ref{eom}) and taking $\omega\to0$, one obtains
\begin{eqnarray}\label{eompert}
&&-4 h(u) [f(u) \left(h'(u) (\alpha_{GB} u^2 f''(u)-6 \alpha_{GB} u f'(u)-3)+u h''(u) (\alpha_{GB} u f'(u)+1)\right)\nonumber\\
&&+u f'(u) h'(u) \left(\alpha_{GB} u f'(u)+1\right)+f(u)^2 (6 \alpha_{GB} h'(u)-2 \alpha_{GB} u h''(u))]\nonumber\\
&&+u h'(u) (f(u) \left(2 \alpha_{GB} u^2 f'(u) h''(u)+h'(u) (\alpha_{GB} u^2 f''(u)-8 \alpha_{GB} u f'(u)-3)\right)\nonumber\\
&&+\alpha_{GB} u^2 f'(u)^2 h'(u)+2 \alpha_{GB} f(u)^2 \left(5 h'(u)-2 u h''(u)\right))+4 \hat{\beta}^2 u h(u)^2=0,
\end{eqnarray}
with boundary conditions
\begin{eqnarray}
h(0)&=&\sqrt{1-4\alpha_{GB}},\label{bc}\\
h(1)&=&\text{regular}.
\end{eqnarray}
We rewrite the Hawking temperature (\ref{temp}) as
\begin{equation}\label{temp2}
T=\frac{N r_H}{4\pi l^2}3\hat{\beta}(\frac{1}{\hat{\beta}}-\frac{\hat{\beta}}{3})=\frac{3}{4\sqrt{2}}\frac{N\beta}{ l}\frac{1}{\pi}(\frac{1}{\hat{\beta}}-\frac{\hat{\beta}}{3})=\xi\frac{1}{\pi}(\frac{1}{\hat{\beta}}-\frac{\hat{\beta}}{3}),
\end{equation}
where
\begin{equation}
\xi=\frac{3}{4\sqrt{2}}\frac{N\beta}{ l}.
\end{equation}
As (\ref{eompert}) is too complicated to be solved in closed form for general $\hat{\beta}$, it needs to be solved at high and low temperatures separately.
\subsection{High temperature expansion}
At high temperature, $\hat{\beta}^2\to0$. Therefore, (\ref{eompert}) can be expanded perturbatively around $\hat{\beta}^2\sim 0$. Accordingly, $h(u)$ is expanded as
\begin{equation}
h(u)=\sum_{i=0}^{n}\hat{\beta}^{2i}h_{2i}(u).
\end{equation}
At $0$th order, $h_0(u)$ turns out to be a constant function. With boundary condition (\ref{bc}), one has
\begin{equation}
h_0(u)=\sqrt{1-4\alpha_{GB}}.
\end{equation}
At second order, one finds
\begin{eqnarray}\label{eomh}
&&\frac{1}{\alpha_{GB} \left(4 \alpha_{GB} \left(u^3-1\right)+1\right)^{3/2}}[2 \sqrt{1-4 \alpha_{GB} } [3 (2 \alpha_{GB} ^2 \left(4 u^3 \left(2 \sqrt{4 \alpha_{GB} \left(u^3-1\right)+1}-3\right)\right.\nonumber\\
&&\left.-8 \sqrt{4 \alpha_{GB} \left(u^3-1\right)+1}+u^6+8\right)+\alpha_{GB} \left(u^3 \left(6-4 \sqrt{4 \alpha_{GB} \left(u^3-1\right)+1}\right)\right.\nonumber\\
&&\left.+8 \left(\sqrt{4 \alpha_{GB} \left(u^3-1\right)+1}-1\right)\right)
-\sqrt{4 \alpha_{GB} \left(u^3-1\right)+1}+1) {h_2}'(u)
+u (4 \alpha_{GB} ^2 \left(u^6-5 u^3+4\right)\nonumber\\
&&+\alpha_{GB} \left(5 u^3-8\right)+1) \left(\sqrt{4 \alpha_{GB} \left(u^3-1\right)+1}-1\right) {h_2}''(u)]]+4 (1-4 \alpha_{GB} ) u=0.
\end{eqnarray}
At this order, one has
\begin{equation}
\frac{\eta}{s}=\frac{1}{4\pi}\left[h_0(1)^2+2h_0(1)h_2(1)\hat{\beta}^2+\mathcal{O}(\hat{\beta}^4)\right].
\end{equation}
Also,
\begin{equation}
\frac{T}{\xi}\simeq\frac{1}{\pi}\frac{1}{\hat{\beta}},
\end{equation}
so
\begin{equation}
\hat{\beta}\simeq\pi\frac{\xi}{T}.
\end{equation}
The numerical solution of (\ref{eomh}) is found, and the ratio $\eta/s$ as a function of $\xi/T$ is shown in Fig.(\ref{hightemp}) as a log-log plot. One finds that at high temperature, the bound (\ref{gbbound}) becomes violated as the temperature decreases.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.7\textwidth]{high.pdf}
\hfill
\caption{\label{hightemp} Log-log plot of the numerical solution for $\eta/s$ at high temperature as a function of $\xi/T$. From bottom to top, the green, red and blue lines represent the solutions for $\alpha_{GB}=0.2, 0.1, -0.1$, respectively.}
\end{figure}
\subsection{Low temperature expansion}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.7\textwidth]{low.pdf}
\hfill
\caption{\label{lowtemp} Log-log plot of $\eta/s$ at low temperature as a function of $\xi/T$. From top to bottom, the blue dashed line stands for $\alpha_{GB}=0.1$, the green dotted line for $\alpha_{GB}=0.05$, and the red thick line for $\alpha_{GB}=0.5$.}
\end{figure}
With (\ref{temp2}), one has $\hat{\beta}^2\to3$ at low temperature. Similarly,
\begin{equation}
h(u)=\sum_{i=0}^{\infty}(\hat{\beta}^2-3)^ih_i(u).
\end{equation}
As the equations of motion are rather complicated, and only the value of $h(1)$ is needed, we do not give their explicit form in this paper. However, one finds that at $0$th order,
\begin{eqnarray}
h_0'(0)&=&0,\\
h_0(1)&=&0,
\end{eqnarray}
and at the first order,
\begin{eqnarray}
h_1'(0)&=&0,\\
h_1(1)&=&\frac 16 h_0'(1).
\end{eqnarray}
According to formula (\ref{weaker}), to first order, the shear viscosity to entropy density ratio at low temperatures reads
\begin{eqnarray}
4\pi\frac{\eta}{s}&=&h_0(1)^2+2 (\hat{\beta}^2 -3) h_0(1) h_1(1)+(\hat{\beta}^2 -3)^2 h_1(1)^2+\mathcal{O}((\hat{\beta}^2 -3)^3)\nonumber\\
&\simeq&(\hat{\beta}^2 -3)^2 h_1(1)^2=\frac{1}{36}(\hat{\beta}^2 -3)^2h_0'(1)^2.
\end{eqnarray}
Therefore, the problem reduces to finding the value of $h_0'(1)$. Though the equation at $0$th order is too cumbersome to be solved directly, one can follow steps similar to those performed in previous work, solving $h_0(u)$ near $u=0$ and $u=1$ separately and matching the solutions to get $h_0'(1)$ \cite{v5}. The strategy is
\begin{itemize}
\item [1)]
Solve $h_0(u)$ near $u=0$ and $u=1$; these two solutions are labeled $h_{00}(u)$ and $h_{01}(u)$, respectively. At this stage, the integration constant in $h_{01}(u)$ is still not fixed.
\item[2)]
Assume $h_{00}(u)$ and $h_{01}(u)$ match at $u=u_m$, where
\begin{equation}\label{condition1}
h_{00}(u_m)=h_{01}(u_m),
\end{equation}
and
\begin{equation}\label{condition2}
h_{00}'(u_m)=h_{01}'(u_m).
\end{equation}
\item[3)]
Using the conditions above, one can fix $h_{01}(u)$, and thus find $h_{01}'(1)$.
\end{itemize}
Following the steps, one finds that at $u=0$,
\begin{equation}
h_{00}(u)=\sqrt{1-4\alpha_{GB}}.
\end{equation}
Similarly, at the horizon, where $u=1$,
\begin{equation}
h_{01}''(1)= \frac{(26 \alpha_{GB} +5) h_{01}'(1)}{6 (8 \alpha_{GB} -3)},
\end{equation}
so
\begin{eqnarray}
h_{01}(u)&=&h_{01}(1)+(u-1) h_{01}'(1)+\frac{1}{2} (u-1)^2 h_{01}''(1)+O\left((u-1)^3\right)\nonumber\\
&\simeq&\frac{(u-1) (70 \alpha_{GB} +(26 \alpha_{GB} +5) u-41)}{96 \alpha_{GB} -36} h_{01}'(1).
\end{eqnarray}
With (\ref{condition1}) and (\ref{condition2}), one obtains
\begin{equation}
h_{01}'(1)=\frac{\sqrt{1-4 \alpha_{GB} } (26 \alpha_{GB} +5)}{9-24 \alpha_{GB} }.
\end{equation}
The next step is exactly the same as in the high-temperature case: write $\eta/s$ as a function of $\xi/T$ according to (\ref{temp2}). As $\hat{\beta}^2\to3$,
\begin{equation}
\frac{T}{\xi}=-\frac{1}{3\pi\hat{\beta}}(\hat{\beta}^2-3)\simeq-\frac{1}{3\sqrt{3}\pi}(\hat{\beta}^2-3).
\end{equation}
Therefore, at low temperature,
\begin{equation}\label{lowsol}
4\pi\frac{\eta}{s}=\frac{1}{36}\left(3\sqrt{3}\pi\frac{T}{\xi}\right)^2\left(\frac{\sqrt{1-4 \alpha_{GB} } (26 \alpha_{GB} +5)}{9-24 \alpha_{GB} }\right)^2.
\end{equation}
One finds from Fig.(\ref{lowtemp}) that the KSS bound is violated as well. \\
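The closed-form result (\ref{lowsol}) is straightforward to evaluate; a short script of the following kind reproduces the low-temperature scaling (the couplings are restricted to $\alpha_{GB}<1/4$ here so that the square root stays real):
\begin{verbatim}
import numpy as np

def four_pi_eta_over_s(T_over_xi, aGB):
    # Eq. (lowsol): 4*pi*eta/s at low temperature
    slope = np.sqrt(1 - 4*aGB)*(26*aGB + 5)/(9 - 24*aGB)
    return (3*np.sqrt(3)*np.pi*T_over_xi)**2*slope**2/36

for aGB in (-0.1, 0.05, 0.1):
    print(aGB, four_pi_eta_over_s(1e-2, aGB))  # scales as (T/xi)**2
\end{verbatim}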
In spite of the fact that the ``Kaluza-Klein-like'' procedure gives equations of motion quite different from those in $5$-dimensional spacetime, the behaviour of $\eta/s$ is very similar to what has been found in five dimensions \cite{v4}. In our case, $\eta/s\sim(T/\xi)^2$ when $T/\xi \to 0$, which satisfies the conjecture \cite{v1} that $\eta/s\sim(T/\Delta)^2$ as $T/\Delta \to 0$, with $\Delta$ some scale to be chosen. Here we take $\Delta$ to be $\xi$.
\subsection{Discussion}
It is obvious that at high temperature, the bound $\eta/s\geq(1-4\alpha_{GB})/4\pi$ in higher derivative gravity is hardly violated. This is because high temperatures correspond to very small $\hat{\beta}^2$, where the contribution from the axions is nearly negligible. The low-temperature behaviour of the ratio, by contrast, violates the bound dramatically, since in this situation $\hat{\beta}^2$ is large compared with the high-temperature case. Thus, for a fixed Gauss-Bonnet constant, the larger the mass of the graviton, the stronger the violation, confirming one's intuitive expectation. \\
One may find that (\ref{lowsol}) looks rather different from its counterpart in $5$ dimensions \cite{v5}, which contains a confluent hypergeometric limit function related to Bessel functions. This results from the fact that our theory in four-dimensional spacetime is rather different from those in higher dimensions, since it also includes the ``breathing scalar'' $\phi$, which leads to completely different Einstein equations. All the calculations are based on these equations of motion, so it is natural that one obtains a quite different form of the ratio here. \\
Nevertheless, the critical characteristics of $\eta/s$ are almost the same in $4$- and $5$-dimensional spacetime. Both violate the KSS bound markedly. Furthermore, in both cases $\eta/s\sim(T/\Delta)^2$ with $\Delta\sim\beta$, differing only by a coefficient that depends on the dimensionality. Although governed by different actions and therefore different equations, the main features characterising the shear viscosity to entropy density ratio in four and higher dimensions are actually very similar.
\section{Causality}
\subsection{Inability of ``Kaluza-Klein-like'' process in causality analysis}
When Gauss-Bonnet terms are introduced, one finds that causality can be violated, and the charge as well as the scalar fields affect such violation \cite{v1,v2,v3,v4,v5}. Through an analysis of the causal structure, one is able to find restrictions on $\alpha_{GB}$. For example, in $5$ dimensions, causality is violated if $\alpha_{GB}>0.09$ \cite{v1,v2}. \\
We now study the causal structure of the bulk, continuing here with neutral black holes. According to the AdS/CFT correspondence, the $4$D AdS gravity we study here is dual to a $3$D quantum field theory on the boundary.\\
Usually, the procedure to study the causality in dimension $D$ is:
\begin{itemize}
\item [1)]
Start with a $D$-dimensional metric
\begin{equation}\label{metric2}
ds^2=-N^2f(r)dt^2+\frac{1}{f(r)}dr^2+\frac{r^2}{l^2}\left(\sum_{a=1}^{D-2}dx_a^2+2h(x_i,r)dx_mdx_n\right).
\end{equation}
Write the perturbation (which is the wave function of the transverse graviton) as
\begin{equation}\label{wf}
h(x_i,r)=e^{-i\omega t+ik_r r+ikx_i}.
\end{equation}
\item [2)]
Then take the large momentum limit, where $k^{\mu}\to\infty$. The $x_m x_n$-component of the equation of motion reduces to
\begin{equation}\label{kt}
k^{\mu}k^{\nu}g_{\mu\nu}^{\text{eff}}\simeq 0,
\end{equation}
where
\begin{equation}
ds^2_{\text{eff}}=g_{\mu\nu}dx^{\mu}dx^{\nu}=N^2f(r)\left(-dt^2+\frac{1}{c_g^2}dx_i^2\right)+\frac{1}{f(r)}dr^2
\end{equation}
is the effective metric.
\item [3)]
Find $c_g$ and the constraint by requiring $c_g^2-1\le 0$.
\end{itemize}
It is important to mention that $c_g$ can be interpreted as the local speed of graviton on a constant $r$-hypersurface. Its dependence on dimensionality is given by \cite{v4}
\begin{equation}\label{cg}
c_g^2(\mathfrak{r})=\frac{N^2 f}{\mathfrak{r}^2}\frac{1-\frac{2\mathfrak{a}}{(D-3)(D-4)}\left(f''+(D-5)(D-6)\mathfrak{r}^{-2}f+2(D-5)\mathfrak{r}^{-1}f'\right)}
{1-\frac{2\mathfrak{a}}{D-3}\left(\mathfrak{r}^{-1}f'+\mathfrak{r}^{-2}(D-5)f\right)},
\end{equation}
where $\mathfrak{r}=r/r_H$ and $\mathfrak{a}=(D-3)(D-4)\tilde{\alpha}/l^2$. The local speed of light is defined through $c_b^2=N^2f(\mathfrak{r})/\mathfrak{r}^2$. At the boundary, where $\mathfrak{r}\to \infty$, $c_b$ is $1$. It seems that if we simply rescale the Gauss-Bonnet constant $\tilde{\alpha}$ to $\alpha/(D-4)$, we can get rid of the $(D-4)$ factor and find the constraint on $\mathfrak{a}$ by taking $\mathfrak{r}\to \infty$ and $D\to 4$.\\
Before doing so, let us come back to the ``Kaluza-Klein-like'' procedure \cite{2003,2004} performed in this paper. One would expect to be able to make a perturbation similar to (\ref{wf}) and directly obtain the momentum term from (\ref{eom}), or just from $\mathcal{E}_{\mu\nu}$.\\
Since we only have $4$ dimensions, one should take $x_i$ in (\ref{wf}) to be either $x$ or $y$. As a consequence, the momentum $k$ will be in the $x$ (or $y$) direction. More precisely, for instance, one has
\begin{equation}\label{px}
h(x,u)=e^{-i\omega t+ik_u u+ikx}.
\end{equation}
However, if one substitutes (\ref{px}) into (\ref{eom}), no momentum term can be found, neither in (\ref{eom}) nor in $\mathcal{E}_{\mu\nu}$. That is, it is impossible to analyse the causal structure through the ``Kaluza-Klein-like'' procedure, as far as we can currently see.
\subsection{Vanishing momentum terms in four dimensions}
Turning back to (\ref{cg}), on the other hand, one might at first sight be able to study causality with this formula. But one has to be careful when performing such a limit.\\
There is a rather simple way to express (\ref{pert}) more generally, which also shows why things get more complicated in $4$ dimensions. \\
It will be useful to rewrite the metric (\ref{metric2}) as
\begin{equation}\label{metric3}
ds^2=-N^2f(r)dt^2+\frac{1}{f(r)}dr^2+\frac{r^2}{l^2}\left(dx^2+dy^2+2h(z_a,r)dxdy\right)+\mathfrak{z}\frac{r^2}{l^2}
\sum_{i=1}^{D-4}dz_i^2,
\end{equation}
where $\mathfrak{z}=1$ when $D\geq 5$, and $\mathfrak{z}=0$ for $D\leq 4$. With (\ref{metric3}), one obtains
\begin{equation}\label{cg2}
c_{g,\mathfrak{z}}^2(\mathfrak{r})=\frac{1}{\mathfrak{z}}\frac{N^2 f}{\mathfrak{r}^2}\frac{1-\frac{2\mathfrak{a}}{(D-3)(D-4)}\left(f''+(D-5)(D-6)\mathfrak{r}^{-2}f+2(D-5)\mathfrak{r}^{-1}f'\right)}
{1-\frac{2\mathfrak{a}}{D-3}\left(\mathfrak{r}^{-1}f'+\mathfrak{r}^{-2}(D-5)f\right)}.
\end{equation}
From (\ref{cg2}), one finds that if one works in five or more dimensions, where $\mathfrak{z}=1$, one recovers (\ref{cg}). But if we take $D=4$, we should also set $\mathfrak{z}=0$ at the same time, where (\ref{cg}) no longer applies. What one would get is an infinite graviton velocity. In short,\\
\begin{equation}
c_g^2\lvert_{D=4}\neq\lim\limits_{D\to 4}c_g^2.
\end{equation}
Moreover, the momentum term in the $D$-dimensional equation of motion reads $c_{g,\mathfrak{z}}^2k^2/(N^2f(\mathfrak{r}))$. Obviously, $\mathfrak{z}$ appears in the denominator. When $D$ is $4$, $k$ has to vanish in order that the momentum term does not diverge.\\
One can go back to the very beginning to see why this happens, or what makes the $4$D case so special. The answer is quite simple: the momentum term containing $k$ vanishes if $x_i=x_m$ or $x_i=x_n$, regardless of how many dimensions one has.\\
As a result, at least three spatial coordinates, i.e. five dimensions in total, are required to construct the perturbation in (\ref{metric2}) that leads to a non-zero momentum term. For example, in $5$-dimensional spacetime, one often chooses $x_mx_n$ to be $xy$, and thus $x_i$ is $z$.\\
However, this is just the case for neutral black holes with axions. One may have another story when the magnetic and electric charges are added back, where $H\neq0$ and $Q\neq0$. For charged black holes in $4$D EGB gravity, there is a constraint such that $\alpha_{GB}<0$ \cite{4d1,miao}. \\
Causality is not the only way to obtain this result \cite{4d1}; an investigation of the completeness of the spacetime also yields a similar conclusion \cite{miao}. Thus, it is reasonable to expect that, based on the ``Kaluza-Klein-like'' method, one could obtain a similar constraint on $\alpha_{GB}$.
\section{Conclusion}
In this paper, we obtained the dyonic black hole solution with linear axions in $4$-dimensional higher derivative gravity through the ``Kaluza-Klein-like'' procedure. The shear viscosity to entropy density ratio $\eta/s$ was investigated after the electric and magnetic charges were removed. It turns out that the violation of the KSS bound still happens when $D=4$. \\
The behaviour of $\eta/s$ is rather similar to that in $5$ dimensions, such that $\eta/s\sim(T/\xi)^2$ when $T/\xi$ is very small. One important outcome is that the main features of the ratio are almost the same as those found in $5$ dimensions \cite{v5}. The only difference comes from the different equations of motion brought by the ``breathing scalar'', which is inevitable if one applies the ``Kaluza-Klein-like'' procedure to obtain the four-dimensional theory. \\
When the bulk causal structure of uncharged black holes is studied, it is found that the momentum term vanishes in the equations of motion, while the velocity formula in $D$ dimensions is only valid when $D>4$. Therefore, neither the ``Kaluza-Klein-like'' procedure nor the na\"ive $D\to4$ limit can help in the causality analysis, since the construction itself is only well-defined in dimensions no lower than $5$. \\
As mentioned, it has been shown that a non-vanishing electric or magnetic charge may lead to a different result. Since we only consider neutral black holes with tensor type perturbations in this paper, our next task may focus on charged black holes and on different types of perturbations as well. Previous work suggests the possibility that the momentum term will exist in these cases, which means that causality could be studied. We expect a constraint similar to $\alpha_{GB}<0$ for charged black holes obtained from the ``Kaluza-Klein-like'' procedure.\\
The dyonic black hole solution derived here may be used as a tool to study the transport properties of the normal state of high-temperature superconductors. It is of further interest to explore more of its properties, such as transport behaviour like the electric and thermal conductivities. Interesting results are expected, since this is the first time that a $4$D dyonic black hole contains contributions from higher-derivative terms, which may bring insight into the study of high-temperature superconductivity.\\
\section{Introduction}
In a series of papers, \cite{Par74,Par76,Par78} introduced the idea
of hydraulic flux concentrations at the solar surface.
Here the hydraulic device is formed by magnetic flux tubes of varying
size and pumping is accomplished by turbulence.
In these papers, he envisaged turbulent pumping (analogous to a water
jet vacuum pump) as the relevant driver, but other alternatives such as
the negative effective magnetic pressure instability (NEMPI),
first studied by \cite{KRR89,KMR96}, are possible
and have also been discussed \citep{BGJKR14}.
The flux concentrations of Parker are thought to be just around $100\,{\rm km}$
in diameter.
Such tubes can be concentrated further through what is known as
convective collapse \citep{Par78,Spr79}.
Although these tubes are only about $100\,{\rm km}$ thick, they might be relevant
for sunspots, which can be more than a hundred times thicker.
Indeed, in the cluster model of sunspots an assembly of many such smaller
tubes are thought to constitute a full sunspot.
Even today, it is still unclear whether sunspots are monolithic or
clustered \citep[see review by][]{RS11}.
Nevertheless, the possibility of downward flows inside sunspots
(as seen in Parker's models of hydraulic magnetic flux concentrations) may be a
more universal feature which has also been identified as the driving mechanism
in producing magnetic flux concentrations by NEMPI \citep{BGJKR14} and has
recently also been seen at the late stages of flux emergence \citep{RC14}.
In most of the work that invokes NEMPI,
an isothermal equation of state is used.
This allows these effects to be studied in isolation from the downdrafts
that occur in convection.
However, it is important to assess the effects of thermodynamics and
radiation, which might either support or hinder tube formation and
amplification.
The goal of the present paper is to investigate how downward flows
produce flux concentrations in a partially ionized atmosphere with
full radiative transfer.
We model the effects of an additional negative pressure by imposing
an irrotational forcing function corresponding
to a localized gradient force of the form $-\mbox{\boldmath $\nabla$} {}\phi$
on the right-hand side of the momentum equation, where $\phi$ is a
localized Gaussian profile function that emulates the effects of
negative effective magnetic pressure in a controllable way.
By imposing a vertical magnetic field, we force the resulting flow
to be preferentially along magnetic field lines.
If $\phi$ is chosen to be negative, it corresponds to a negative extra
pressure.
Horizontal pressure balance then leads to a localized gas pressure and
density increase and consequently
to a downflow owing to the weight of this density enhancement.
The return flow closes in the upper parts of this structure.
The resulting flow convergence drives magnetic field lines together
and thus forms the magnetic flux concentration envisaged by
\cite{Par74,Par76,Par78}.
These flux concentrations are also similar to those seen
in studies of NEMPI with a vertical magnetic field \citep{BKR13,BGJKR14}.
We construct hydrostatic equilibrium solutions using a method
similar to that of \cite{BB14}, hereafter BB14.
They fixed the temperature at the bottom boundary, which then also
fixes the source function for the radiation field.
For the opacity, we assume here a generalized Kramers opacity
with exponents that result in a nearly adiabatic stratification
in the deep fully ionized layers.
Alternatively, we use an ${\rm H}^{-}$ opacity that is estimated from the
number density of ${\rm H}^{-}$ ions using the Saha equation with a
corresponding ionization potential \citep{KW90}.
For the purpose of our investigation, it is sufficient to restrict
ourselves to the ionization of hydrogen.
This approach was also used by \cite{HNSS07} in simulations of
the fine structure of sunspots.
A general problem in all approaches to time-dependent models of stellar
atmospheres is the large gap between acoustic and thermal timescales.
Their ratio is of the order of the ratio of the energy flux to
$\rho c_{\rm s}^3$, where $\rho$ is the density and $c_{\rm s}$ is the sound speed.
For the Sun, this ratio is less than $10^{-10}$ in the deeper parts
of the convection zone \citep{BCNS05}.
This problem has been identified long ago \citep{CS86,CS89} and can be
addressed using models that are initially in equilibrium \citep{NSA09}.
Another possibility is to consider modified models with a larger flux
such that it becomes possible to simulate for a full Kelvin--Helmholtz
timescale \citep{KMCWB13}.
This is also the approach taken here and it allows us to construct models
whose initial state is very far away from the final one, as is the case
with an initially isothermal model.
\section{The model}
\label{RT}
\subsection{Governing equations}
We adopt the hydromagnetic equations for logarithmic density $\ln\rho$,
velocity $\bm{u}$, specific entropy $s$, and magnetic vector potential $\mbox{\boldmath $A$} {}$,
in the form
\begin{eqnarray}
{{\rm D} {} \ln \rho \over {\rm D} {} t}&=&-\mbox{\boldmath $\nabla$} {}\cdot\bm{u}, \\
\rho{{\rm D} {} \bm{u}\over {\rm D} {} t}&=&-\mbox{\boldmath $\nabla$} {}(p+\phi) +\rho\mbox{\boldmath $g$} {} + \mbox{\boldmath $J$} {}\times\mbox{\boldmath $B$} {}
+\mbox{\boldmath $\nabla$} {}\cdot(2\rho\nu\mbox{\boldmath ${\sf S}$} {}), \\
\label{dss}
\rho T{{\rm D} {} s \over {\rm D} {} t}&=&-\mbox{\boldmath $\nabla$} {}\cdot\mbox{\boldmath $F$} {}_{\rm rad}+2\rho\nu \mbox{\boldmath ${\sf S}$} {}^2
+\eta\mu_{\rm M}\mbox{\boldmath $J$} {}^2,\\
{\partial \mbox{\boldmath $A$} {} \over \partial t}&=&\bm{u}\times\mbox{\boldmath $B$} {}+\eta\nabla^2\mbox{\boldmath $A$} {},
\label{sRT}
\end{eqnarray}
where ${\rm D} {} / {\rm D} {} t= \partial /{\partial t} + \bm{u}\cdot\mbox{\boldmath $\nabla$} {}$,
$p$ is the gas pressure, $\mbox{\boldmath $g$} {}=(0,0,-g)$ is
the gravitational acceleration, $\nu$ is the viscosity,
${\sf S}_{ij}={\textstyle{1\over2}}(u_{i,j}+u_{j,i})-{\textstyle{1\over3}}\delta_{ij}\mbox{\boldmath $\nabla$} {}\cdot\bm{u}$
is the traceless rate-of-strain tensor and $\mbox{\boldmath ${\sf S}$} {}^2={\sf S}_{ij}{\sf S}_{ji}$
contributes to the (positive definite) viscous heating rate,
$\mbox{\boldmath $B$} {}=\mbox{\boldmath $B$} {}_0+\mbox{\boldmath $\nabla$} {}\times\mbox{\boldmath $A$} {}$ is the magnetic field
with $\mbox{\boldmath $B$} {}_0=\hat{\mbox{\boldmath $z$}} {} B_0$ representing an imposed vertical magnetic field,
$\mbox{\boldmath $J$} {}=\mbox{\boldmath $\nabla$} {}\times\mbox{\boldmath $B$} {}/\mu_{\rm M}$ is the current density,
$\mu_{\rm M}$ is the magnetic vacuum permeability
(not to be confused with the mean molecular weight $\mu$, defined below),
$\eta$ is the magnetic diffusivity,
and $\mbox{\boldmath $F$} {}_{\rm rad}$ is the radiative flux.
For the equation of state, we assume a perfect gas with
$p=({\cal R}/\mu)T\rho$, where ${\cal R}=k_{\rm B}/m_{\rm u}$ is the universal
gas constant in terms of the Boltzmann constant $k_{\rm B}$ and the atomic mass
unit $m_{\rm u}$, $T$ is the temperature,
and the dimensionless mean molecular weight is given by
\begin{equation}
\mu(\rho,T)=(1+4x_{\rm He})/(1+y_{\rm H}+x_{\rm He}),
\label{mu}
\end{equation}
where $y_{\rm H}(\rho,T)$ is the ionization fraction of hydrogen and $x_{\rm He}$
is the fractional number of neutral helium, which is related to the
mass fraction of neutral helium $Y$ through $4x_{\rm He}=Y/(1-Y)$.
In the following, we use the abbreviation $\mu_0=1+4x_{\rm He}=(1-Y)^{-1}=X^{-1}$,
where $X$ is the mass fraction of hydrogen (ignoring metals).
In relating various thermodynamic quantities to each other, we introduce
$\alpha=(\partial\ln\rho/\partial\ln p)_T$, which is a known function of $y_{\rm H}$,
as well as $\delta=(\partial\ln\rho/\partial\ln T)_p$ and the ratio
$\gamma=c_p/c_v$ of the specific heats at constant volume and pressure,
$c_v=(\partial e/\partial T)_v$ and $c_p=(\partial h/\partial T)_p$
(where $h=e+p/\rho$ is the specific enthalpy), respectively,
which are known functions of both $y_{\rm H}$ and $T$;
see \cite{KW90}, \cite{Sti02}, and \App{Thermo}.
When $y_{\rm H}$ is either $0$ or $1$, we have $\alpha=\delta=1$ and
$c_v=(3/2)\,{\cal R}/\mu$ with $e=c_v T$.
In general, however, we have
$e=(3/2)\,{\cal R}T/\mu+e_{\rm H}$,
where $e_{\rm H}=y_{\rm H}{\cal R}T_{\rm H}/\mu_0$ is the specific energy
that is used (released) for ionization (recombination)
and $T_{\rm H}=\chi_{\rm H}/k_{\rm B}$ is the ionization temperature.
Using $\chi_{\rm H}=13.6\,{\rm eV}$ for the ionization energy of hydrogen,
we have $T_{\rm H}\approx1.58\times10^5\,{\rm K}$.
Instead of solving \Eq{dss} for $s$, it is convenient to solve directly
for $T$ using the relation \citep{KW90}
\begin{equation}
\rho T{{\rm D} {} s\over{\rm D} {} t}=\rho{{\rm D} {} e\over{\rm D} {} t}+p\mbox{\boldmath $\nabla$} {}\cdot\bm{u}=\rho c_v T
\left({{\rm D} {}\ln T\over{\rm D} {} t}+{\gamma-1\over\delta}\mbox{\boldmath $\nabla$} {}\cdot\bm{u}\right).\quad
\label{EntroTemp}
\end{equation}
The pressure gradient is computed as
\begin{equation}
{1\over\rho}\mbox{\boldmath $\nabla$} {} p={c_{\rm s}^2\over\gamma}(\mbox{\boldmath $\nabla$} {}\ln\rho+\delta\mbox{\boldmath $\nabla$} {}\ln T),
\end{equation}
where $c_{\rm s}$ is the adiabatic sound speed with $c_{\rm s}^2=\gamma p/\rho\alpha$.
This approach allows us to find the ionization fraction
of hydrogen from the Saha equation as
\begin{equation}
{y_{\rm H}^2\over1-y_{\rm H}}={\rho_{\rm e}\over\rho}
\left({T_{\rm H}\over T}\right)^{-3/2}
\exp\left(-{T_{\rm H}\over T}\right),
\label{Saha}
\end{equation}
where $\rho_{\rm e}=\mu_0 m_{\rm u}(m_{\rm e}\chi_{\rm H}/2\pi\hbar^2)^{3/2}$
is the electron density.
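As an aside, \Eq{Saha} is a quadratic in $y_{\rm H}$ that can be solved in
closed form at any given $\rho$ and $T$.
The following minimal Python sketch (an illustration only, not part of our
simulation code; the constants are standard CGS values and the helium
abundance $x_{\rm He}$ is an assumed parameter) evaluates $y_{\rm H}$ and,
via \Eq{mu}, the mean molecular weight:
\begin{verbatim}
import numpy as np

# standard CGS constants
k_B   = 1.3807e-16     # Boltzmann constant [erg/K]
m_u   = 1.6605e-24     # atomic mass unit [g]
m_e   = 9.1094e-28     # electron mass [g]
hbar  = 1.0546e-27     # reduced Planck constant [erg s]
chi_H = 13.6*1.6022e-12   # H ionization energy [erg]
T_H   = chi_H/k_B         # ionization temperature, ~1.58e5 K

def y_H(rho, T, x_He=0.1):
    """Positive root of the Saha quadratic y^2 + K*y - K = 0."""
    mu0   = 1.0 + 4.0*x_He
    rho_e = mu0*m_u*(m_e*chi_H/(2*np.pi*hbar**2))**1.5
    K = (rho_e/rho)*(T/T_H)**1.5*np.exp(-T_H/T)
    return 0.5*(-K + np.sqrt(K*K + 4.0*K))

def mu(rho, T, x_He=0.1):
    """Mean molecular weight as a function of rho and T."""
    return (1.0 + 4.0*x_He)/(1.0 + y_H(rho, T, x_He) + x_He)
\end{verbatim}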
To compute $\mbox{\boldmath $\nabla$} {}\cdot\mbox{\boldmath $F$} {}_{\rm rad}$, we adopt the gray approximation,
ignore scattering, and assume that the source function $S$ is given
by the frequency-integrated Planck function, so $S=(\sigma_{\rm SB}/\pi)T^4$,
where $\sigma_{\rm SB}$ is the Stefan--Boltzmann constant.
The negative divergence of the radiative flux is then given by
\begin{equation}
-\mbox{\boldmath $\nabla$} {}\cdot\mbox{\boldmath $F$} {}_{\rm rad}=\kappa\rho \oint_{4\pi}(I-S)\,{\rm d} {}\Omega,
\label{fff}
\end{equation}
where $\kappa$ is the opacity per unit mass
(assumed independent of frequency) and $I(\bm{x},t,\hat{\mbox{\boldmath $n$}} {})$
is the frequency-integrated specific intensity in the direction $\hat{\mbox{\boldmath $n$}} {}$.
We obtain $I$ by solving the radiative transfer equation,
\begin{equation}
\hat{\mbox{\boldmath $n$}} {}\cdot\mbox{\boldmath $\nabla$} {} I=-\kappa\rho\, (I-S),
\label{RT-eq}
\end{equation}
along a set of rays in different directions $\hat{\mbox{\boldmath $n$}} {}$ using the
method of long characteristics.
For the opacity, we assume either a Kramers-like opacity
$\kappa=\kappa_0\rho^a T^b$ with adjustable coefficients $\kappa_0$,
$a$, and $b$, or a rescaled ${\rm H}^{-}$ opacity.
In the former case, following BB14, it is convenient to express
$\kappa$ in the form $\kappa=\tilde\kappa_0(\rho/\rho_0)^a (T/T_0)^b$,
where $\tilde{\kappa_0}$ is a rescaled opacity and is related to $\kappa_0$ by
$\tilde\kappa_0=\kappa_0\rho_0^a T_0^b$.
With this choice, the units of $\tilde\kappa_0$ are independent of
$a$ and $b$, and always $\,{\rm Mm}^{-1}\,{\rm cm}^3\,{\rm g}^{-1}$ (=$10^{-8}\,{\rm cm}^2\,{\rm g}^{-1}$).
In the latter case we use for the ${\rm H}^{-}$ opacity the expression
\citep{KW90}
\begin{equation}
\kappa=\kappa_0y_{\rm H}(1-y_{\rm H}){\rho\over\rho_{e^-}}\!
\left({T_{{\rm H}^{-}}\over T}\right)^{3/2}\!\!
\exp\left({T_{{\rm H}^{-}}\over T}\right),
\label{Saha2}
\end{equation}
where $\kappa_0=\sigma_{{\rm H}^{-}}/4\mu_0 m_{\rm u}$ is a coefficient,
$\sigma_{{\rm H}^{-}}=4\times10^{-17}\,{\rm cm}^2$ is the cross section
of ${\rm H}^{-}$ \citep{Mih78},
$x_Z=10^{-4}$ is the fraction of metals,
$T_{{\rm H}^{-}}=\chi_{{\rm H}^{-}}/k_{\rm B}$ and $\chi_{{\rm H}^{-}}=0.754\,{\rm eV}$ are
the ionization temperature and energy of ${\rm H}^{-}$,
and $\rho_{e^-}=\mu_0 m_{\rm u}(m_{\rm e}\chi_{{\rm H}^{-}}/2\pi\hbar^2)^{3/2}$
is the relevant electron density.
An important quantity in a radiative equilibrium model is the radiative
conductivity $K=16\sigma_{\rm SB} T^3/3\kappa\rho$.
According to the results of BB14, $K$ is nearly constant in the optically
thick part.
This implies that $\rho\propto T^n$, i.e., the stratification is effectively
polytropic with index $n=(3-b)/(1+a)$, provided $n>-1$; indeed, constancy of
$K\propto T^3/\kappa\rho\propto T^{3-b}\rho^{-(1+a)}$ directly yields
$\rho^{1+a}\propto T^{3-b}$.
For large values of $T$, the exponential terms in \Eqs{Saha}{Saha2}
approach unity, and only the terms $1-y_{\rm H}\propto\rho/T^{3/2}$ from \Eq{Saha}
and an explicit $\rho/T^{3/2}$ term in \Eq{Saha2} remain.
Therefore, $\kappa\propto\rho^2 T^{-3}$, i.e., $a=2$ and $b=-3$,
resulting in a stable stratification with polytropic index $n=(3+3)/(1+2)=2$.
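As a quick cross-check of the polytropic indices quoted here and in
\Tab{T1summary}, the relation $n=(3-b)/(1+a)$ can be evaluated directly;
a trivial Python snippet reproducing the quoted values is:
\begin{verbatim}
def polytropic_index(a, b):
    return (3.0 - b)/(1.0 + a)

for a, b in [(1, -3.5), (1, 0), (1, 1), (2, -3)]:
    print((a, b), polytropic_index(a, b))
# -> 3.25, 1.5, 1.0, and 2.0, respectively
\end{verbatim}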
To identify the location of the radiating surface in the model,
we compute the optical depth as
\begin{equation}
\tau(x,z,t)=\int_z^{L_z}(\kappa\rho)(x,z',t)\;{\rm d} {} z'.
\end{equation}
The $\tau=1$ contour corresponds then to the surface from where most
of the radiation escapes all the way to infinity.
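On a discrete mesh, this integral reduces to a cumulative sum from the top
of the domain downward; a minimal numpy sketch (assuming a uniform $z$ grid
with $z$ increasing upward) reads:
\begin{verbatim}
import numpy as np

def optical_depth(kappa_rho, z):
    """tau(z) = int_z^{Lz} kappa*rho dz' (trapezoidal rule).

    kappa_rho : 1D array of kappa*rho along z
    z         : 1D array of grid heights (uniform spacing)
    Returns tau at each grid point; tau = 0 at the top.
    """
    dz = z[1] - z[0]
    # contribution of each cell between z[i] and z[i+1]
    contrib = 0.5*(kappa_rho[1:] + kappa_rho[:-1])*dz
    tau = np.zeros_like(kappa_rho)
    # tau at z[i] is the sum of all contributions above z[i]
    tau[:-1] = np.cumsum(contrib[::-1])[::-1]
    return tau

# the photosphere is where tau crosses unity:
# z_surf = z[np.argmin(np.abs(optical_depth(kr, z) - 1.0))]
\end{verbatim}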
For the forcing function, we assume
\begin{equation}
\phi=\phi_0 e^{-[x^2+(z-z_0)^2]/2R^2},
\end{equation}
where $\phi_0<0$ is the amplitude and $R$ is the radius of the blob-like
structure.
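For reference, with the parameter values used below
($\phi_0=-3\times10^{-3}\,{\rm g}\,{\rm cm}^{-3}\,{\rm km}^2\,{\rm s}^{-2}$,
$z_0=3\,{\rm Mm}$, and $R=1\,{\rm Mm}$ for Run~K3a), the forcing function can
be evaluated on a 2D grid as follows (a sketch in code units; the grid extents
are illustrative, not those of our runs):
\begin{verbatim}
import numpy as np

def phi(x, z, phi0=-3e-3, z0=3.0, R=1.0):
    """Gaussian blob forcing; phi0 < 0 acts as suction."""
    return phi0*np.exp(-(x**2 + (z - z0)**2)/(2.0*R**2))

# illustrative grid with the 2D mesh resolution of this paper
x = np.linspace(-8.0, 8.0, 1152)
z = np.linspace(0.0, 8.0, 576)
X, Z = np.meshgrid(x, z)
F = phi(X, Z)
\end{verbatim}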
\subsection{Boundary conditions}
We consider a two-dimensional (2D) Cartesian slab of size $L_x\times L_z$
with $-L_x/2<x<L_x/2$, $0\leq z\leq L_z$. We assume the
domain to be periodic in the $x$ direction and bounded by
stress-free conditions in the $z$ direction, so the velocity obeys
\begin{equation}
\partial u_x/\partial z=\partial u_y/\partial z=u_z=0
\quad\mbox{on $\;z=0$, $L_z$}.
\end{equation}
For the magnetic field we adopt the vertical field condition,
\begin{equation}
\partial A_x/\partial z=\partial A_y/\partial z=A_z=0
\quad\mbox{on $\;z=0$, $L_z$}.
\end{equation}
We assume zero incoming intensity at the top, and compute the incoming
intensity at the bottom from a quadratic Taylor expansion of the source
function, which implies that the diffusion approximation is obeyed;
see Appendix~A of \cite{HDNB06} for details.
To ensure steady conditions, we fix temperature at the bottom,
\begin{equation}
T=T_0\quad\mbox{on $z=0$},
\end{equation}
while the temperature at the top is allowed to evolve freely.
There is no boundary condition on the density, but since no mass
is flowing in or out, the volume-averaged density is
automatically constant (see Appendix~C of BB14).
Since most of the mass resides near the bottom, the density there
will not change drastically and will be close to its initial value
at the bottom.
\begin{figure*}[t!]\begin{center}
\includegraphics[width=\textwidth]{profs1_alln}
\includegraphics[width=\textwidth]{profs2_alln}
\end{center}\caption[]{
Comparison of $\rho$, $T$, $s$, $y_{\rm H}$, $\gamma$ and $c_p$ along each row,
top to bottom, from models with (solid) and without (dotted, blue)
hydrogen ionization for Runs~A, B and C along each column, left to right,
and $\tilde\kappa_0=10^5\,{\rm Mm}^{-1}\,{\rm cm}^3\,{\rm g}^{-1}$.
In the plots of $T(z)$, the closed circles (red) indicate
$\tau=1$.
}\label{profs1_alln}\end{figure*}
We use for all simulations the {\sc Pencil Code}\footnote{
\url{http://pencil-code.googlecode.com/}}, which solves
the hydrodynamic differential equations with a high-order
finite-difference scheme.
The radiation and ionization modules were implemented by \cite{HDNB06}.
All our calculations are carried out either on a one-dimensional (1D)
mesh with $576$ points in the $z$ direction or on a 2D
mesh with $1152\times576$ points in the $x$ and $z$ directions.
\subsection{Parameters and initial conditions}
\label{Parameters}
To avoid very large or very small numbers,
we measure length in ${\rm Mm}$, speed in ${\rm km}\,{\rm s}^{-1}$,
density in ${\rm g}\,{\rm cm}^{-3}$, and temperature in ${\rm K}$.
Time is then measured in kiloseconds (${\rm ks}$).
We adopt the solar surface value for the gravitational acceleration,
which is then $g=274\,{\rm km}^2\,{\rm s}^{-2}\,{\rm Mm}^{-1}=2.74\times10^4\,{\rm cm}\,{\rm s}^{-2}$.
In most models we use $(a,b)=(1,0)$ and
$\tilde\kappa_0=10^5\,{\rm Mm}^{-1}\,{\rm cm}^3\,{\rm g}^{-1}$, which yields
a top temperature of about $10,000\,{\rm K}$ (BB14).
We also present results using the ${\rm H}^{-}$ opacity.
In both cases, the opacities are lowered by 5 to 6 orders of magnitude
relative to their realistic values to
allow thermal relaxation to occur within a few thousand sound travel times.
As discussed in BB14, this also leads to a larger flux and therefore
a larger effective temperature.
For the ${\rm H}^{-}$ opacity, we have applied a scaling factor
of $10^{-6}$ in \Eq{Saha2}.
In all the models we use $\nu=\eta=10^{-3}\,{\rm Mm}\,{\rm km}\,{\rm s}^{-1}$,
corresponding to $10^{10}\,{\rm cm}^2\,{\rm s}^{-1}$.
For the radius of the blob we take $R=1\,{\rm Mm}$ and for the magnetic field
we take $B_0=1\,{\rm kG}$.
\begin{table}[t!]\caption{
Summary of 1D runs leading to equilibrium solutions.
For the opacity we either give the values $(a,b)$
for Kramers opacity or we indicate ${\rm H}^{-}$. All runs
are carried out on 576 mesh points. $n$ is the polytropic
index, $T_0$ and $\rho_0$ are the initial bottom temperature and
density given in $\,{\rm K}$ and $\,{\rm g}\,{\rm cm}^{-3}$, respectively,
and $z_{\tau=1}$ is the height where $\tau=1$.
Runs~A, B and C were carried out with and without ionization.
}\vspace{12pt}\centerline{\begin{tabular}{lccccc}
Run & opacity & $n$ & $T_0$ & $\rho_0$ & $z_{\tau=1}$ \\
\hline
A & $(1,-7/2)$ & 3.25 & $6\times10^{4}$ & $5\times10^{-4}$ & 7.6 \\
B & (1,0) & 1.5 & $1\times10^{5}$ & $1\times10^{-3}$ & 8.6 \\
C & (1,1) & 1 & $1\times10^{5}$ & $2\times10^{-3}$ & 6.8 \\
D & ${\rm H}^{-}$ & --- & $6\times10^{4}$ & $5\times10^{-4}$ & 7.5 \\
E & ${\rm H}^{-}$ & --- & $1\times10^{5}$ & $2\times10^{-3}$ & 13.8 \\
\label{T1summary}\end{tabular}}\end{table}
\begin{table}[t!]\caption{
Summary of 2D models discussed in this paper.
For the opacity we either give the values $(a,b)$
for Kramers opacity or we indicate ${\rm H}^{-}$.
$\phi_0$ and $p(z_0)$ are given in units of ${\rm g}\,{\rm cm}^{-3}\,{\rm km}^2\,{\rm s}^{-2}$,
while $\max|u_z|$ is given in ${\rm km}\,{\rm s}^{-1}$.
}\vspace{12pt}\centerline{\begin{tabular}{lccccc}
Run & opacity & $z_0$ & $-\phi_0$ & $p(z_0)$ & $\max|u_z|$ \\
\hline
F3 & (1,0) & 3 & $3\times10^{-3}$ &1.00& 1.1\\
K3a & (1,0) & 3 & $3\times10^{-3}$ &0.94& 45 \\
K3b & (1,0) & 3 & $3\times10^{-4}$ &0.98& 40 \\
H3 &${\rm H}^{-}$& 3 & $7\times10^{-4}$&0.06 & 22 \\
H10 &${\rm H}^{-}$&10 & $3\times10^{-2}$&0.07 & 15 \\
\label{T2summary}\end{tabular}}\end{table}
\section{Results}
\begin{figure*}[t]\begin{center}
\includegraphics[width=\textwidth]{pcomp_rho}
\end{center}\caption[]{
Comparison of models with $\rho_0=0.002\,{\rm g}\,{\rm cm}^{-3}$ (solid),
$0.005\,{\rm g}\,{\rm cm}^{-3}$ (dashed), and $0.01\,{\rm g}\,{\rm cm}^{-3}$ (dotted lines).
Here $(a,b)$ is $(1,0)$, corresponding to $n=1.5$.
}\label{pcomp_rho}\end{figure*}
First we run 1D simulations with $\phi_0=0$ and isothermal
initial conditions using Kramers opacity and ${\rm H}^{-}$ opacity.
A summary of these runs is listed in \Tab{T1summary}.
We use the resulting equilibrium solutions from the 1D runs
as initial conditions for the 2D runs with $\phi_0\neq0$.
The summary of 2D runs is listed in \Tab{T2summary}.
\subsection{Kramers opacity}
As listed in \Tab{T1summary}, for Kramers opacity we use
three pairs of $(a,b)$, $(1,-3.5)$, $(1,0)$, and $(1,1)$
in Runs~A, B, and C, respectively.
In the absence of ionization, the resulting equilibrium solutions
have an optically thick part that is nearly polytropically stratified,
i.e., $\rho\propto T^n$, where $n=(3-b)/(1+a)=3.25$, 1.5, and 1 are the
respective polytropic indices (BB14) for Runs~A, B, and C.
In the outer, optically thin part, the temperature in all cases is nearly
constant and approximately equal to the effective temperature.
For $\gamma=5/3$, a polytropic index of $3/2$ corresponds to
an adiabatic stratification.
The pressure scale height, $H_{\rm p}={\cal R}T/\mu g$, in the case of
$n=1.5$ is about $3\,{\rm Mm}$ in the upper parts of the model
and increases to about $7\,{\rm Mm}$ at the bottom.
In the 2D runs, for the Kramers opacity, we have used only
$(a,b)$ of $(1,0)$, corresponding to $n=1.5$.
In \Tab{T1summary} we also list the height $z_{\tau=1}$ of the
photosphere, where $\tau=1$.
For our models with Kramers opacity, the value of $z_{\tau=1}$
is around $8\,{\rm Mm}$, but comparing the models with $T_0=10^5\,{\rm K}$
and $\rho_0=0.002\,{\rm g}\,{\rm cm}^{-3}$
using either Kramers or ${\rm H}^{-}$ opacity (Runs~C or E, respectively), we find
that $z_{\tau=1}$ doubles from about $7\,{\rm Mm}$ to $14\,{\rm Mm}$,
which is the reason we will choose
a shallower domain for our 2D experiment.
\subsubsection{Vertical equilibrium profiles}
In \Fig{profs1_alln} we compare vertical profiles of various
thermodynamic parameters in 1D models with (in solid black)
and without (in dotted blue) partial ionization with $\phi_0=0$.
Both models have in common that the temperature decreases approximately
linearly with increasing $z$ and then reaches a constant at a height where
$\tau=1$ (in the one with ionization);
this height is nearly the same in both cases.
By requiring thermostatic equilibrium,
\Eq{dss} yields $\mbox{\boldmath $\nabla$} {}\cdot\mbox{\boldmath $F$} {}_{\rm rad}=0$,
and in the absence of ionization,
it is seen that the solutions for the temperature profiles
are linearly decreasing for $\tau \gg 1$ and nearly constant for $\tau \ll 1$ (BB14).
The inclusion of ionization does not seem to affect the solutions for
temperature profiles much.
It can be seen that the polytropic density-temperature relation, $\rho\sim T^n$,
is nearly obeyed in the optically thick part ($\tau > 1$) across all
atmospheres with different polytropic indices.
This is because in the optically thick part, the degree of ionization
$y_{\rm H}$ remains nearly constant.
In the optically thin part, the models with ionization have lower densities
compared to the models without ionization, thus increasing the density contrast.
The specific entropy in the optically thick part is stratified according to
the respective polytropic indices (stable when $n=3.25$,
marginal when $n=1.5$, and unstable when $n=1$; cf.\ BB14).
Interestingly, with ionization, all the entropy profiles in
Runs~A, B, and C behave in a similar fashion
near and above the height where $\tau=1$.
Near $\tau=1$, there is a narrow layer where the vertical entropy gradient
is negative, corresponding to Schwarzschild-unstable
stratification and the possibility of convection.
(We confirmed this and will comment on it in the discussion.)
Comparing the specific entropy profiles with the $y_{\rm H}$ profiles in
\Fig{profs1_alln}, it can be seen that the extrema in the entropy profiles
coincide with those in the corresponding $y_{\rm H}$ profiles.
This correspondence between the extrema of the two quantities,
specific entropy and degree of ionization, can be shown mathematically.
We show in detail in \App{ionssprof} that,
using the equation of state, the first law of thermodynamics, and
the Saha ionization equation, for the case of $\tau\gg1$,
\begin{eqnarray}
{\rm d} {} s={\rm d} {}y_{\rm H}{{\cal R}\over\mu_0}\left[{1\over A_v}{(n-3/2)\over(n+3/2)}
- B_v\right],
\label{dsdh1}
\end{eqnarray}
where $A_v$ and $B_v$ are coefficients that are defined
in \Eq{AB} of \App{Thermo}.
In the case of $\tau \ll 1$, we have
\begin{equation}
{\rm d} {} s={\rm d} {}y_{\rm H}{{\cal R}\over\mu_0}\left[{1\over A_v}- B_v\right].
\label{dsdh2}
\end{equation}
From \Eqs{dsdh1}{dsdh2}, we find that the change in specific
entropy is directly proportional to the change in degree of
ionization and when ${\rm d} {} y_{\rm H}=0$, then ${\rm d} {} s=0$. Thus, the extrema
in $s$ directly correspond to extrema in $y_{\rm H}$.
The hydrogen ionization fraction $y_{\rm H}$ reaches a minimum of about 0.2
(in Runs~A and B) and about 0.4 (Run~C) near
the surface, but then increases again.
This is because of the low density, which decreases exponentially in the
upper isothermal layer, leading to larger values of $y_{\rm H}$ even when
$T$ is small.
In the Sun, the surface temperatures are of course smaller still,
and therefore $y_{\rm H}\approx0$ can then be reached.
While the specific heats increase outward by a factor of about 5 to 10,
their ratio, $\gamma$, decreases below the critical value of 5/3.
In \Fig{pcomp_rho} we compare models with three values of $\rho_0$.
We recall that $\rho_0$ is the bottom value of the
density of the initially isothermal model.
Since temperature $T_0$ is fixed at the bottom, the pressure scale
height remains unchanged, but since the stratification evolves to
a nearly adiabatic one, the density scale height becomes larger than
the pressure scale height, so density drops more slowly and the
bottom density becomes smaller by about 2/3; a corresponding expression
for this is given by Eq.~(C.5) in BB14.
Note that models with larger values of $\rho_0$ result in lower surface
temperatures and lower degrees of ionization near the surface.
However, for a given number of mesh points, the height of the computational
domain has to be reduced for larger values of $\rho_0$, because the
density then drops much faster to small values.
This is just a numerical constraint that can be alleviated by using
more mesh points.
\begin{figure}[t!]\begin{center}
\includegraphics[width=\columnwidth]{pplane_yHfix}
\end{center}\caption[]{
Snapshots of Run~F3 (fixed ionization, $y_{\rm H}=1$)
showing temperature (color coded), magnetic field
lines, and velocity vectors at two times before and after the flux
concentration develops.
The solid yellow line at $z\approx9\,{\rm Mm}$ indicates the $\tau=1$ surface while
the dashed blue line indicates the height $z_0$ where suction operates.
}\label{pplane_yHfix}\end{figure}
\subsubsection{Two-dimensional models}
Next, we consider 2D models with $\phi_0\neq0$.
The 1D vertical equilibrium solutions serve as the initial condition
here, applied along $z$ for all $x$.
We consider first the case $\phi_0=-3\times10^{-3}\,{\rm g}\,{\rm cm}^{-3}\,{\rm km}^2\,{\rm s}^{-2}$
using $z_0=3\,{\rm Mm}$ for the height of the blob.
In \Fig{pplane_yHfix} we show the result for Run~F3 (fixed ionization,
$y_{\rm H}=1$) at $t=5\,{\rm ks}$ and $10\,{\rm ks}$, while in \Fig{pplane} we show the result
for Run~K3a with partial ionization effects included at $t=1.6\,{\rm ks}$ and $2\,{\rm ks}$.
\begin{figure}[t!]\begin{center}
\includegraphics[width=\columnwidth]{pplane}
\end{center}\caption[]{
Same as \Fig{pplane_yHfix}, but for variable partial ionization for
Run~K3a at two times just before and after the flux concentration develops.
}\label{pplane}\end{figure}
In both cases (Runs~F3 and K3a in \Figs{pplane_yHfix}{pplane}),
we see the effects of downward suction.
We also see how the magnetic field lines are being pushed together
at a place above the blob where the return flow tries to replenish
the gas in the evacuated upper parts.
In the case of partial ionization (Run~K3a in \Fig{pplane}),
the upper parts have a strongly
negative specific entropy gradient, leading to an effect that is
most pronounced at a height considerably above that of the blob.
Thus, as compared to the case without partial ionization (Run~F3),
the inclusion of partial ionization (Run~K3a) causes the
flux concentrations to form at the $\tau=1$ surface.
In Run~K3a at the later time, however, when the magnetic structure
has collapsed almost entirely, the converging inflow has stopped and
there are now indications of an outflow.
\begin{figure}[h!]\begin{center}
\includegraphics[width=\columnwidth]{pprofs_TT}
\end{center}\caption[]{
Temperature versus optical depth and height at different times through
$x=0$ (solid lines and a dotted line for the last time) and $x=L_x/2$
(dashed lines) for Run~K3a at times $1.6$--$1.9\,{\rm ks}$.
}\label{pprofs_TT}\end{figure}
\begin{figure}[h!]\begin{center}
\includegraphics[width=\columnwidth]{pxprofs_TT_Bz}
\end{center}\caption[]{
Temperature and vertical magnetic field strength versus $x$
at different times ($1.4$--$2\,{\rm ks}$) through $z=8\,{\rm Mm}$ for Run~K3a.
}\label{pxprofs_TT_Bz}\end{figure}
\begin{figure}[h!]\begin{center}
\includegraphics[width=\columnwidth]{pprofs_uz}
\end{center}\caption[]{
Vertical velocity versus height at different times ($1.6$--$1.9\,{\rm ks}$)
for Run K3a.
}\label{pprofs_uz}\end{figure}
\begin{figure}[h!]\begin{center}
\includegraphics[width=\columnwidth]{pprofs_uz_3em4}
\end{center}\caption[]{
Similar to \Fig{pprofs_uz}, but for Run~K3b with 10 times weaker suction.
The red dash-dotted line shows the profile of the sound speed.
}\label{pprofs_uz_3em4}\end{figure}
\begin{figure}[h!]\begin{center}
\includegraphics[width=\columnwidth]{pprofs_yH}
\end{center}\caption[]{
Degree of ionization and ratio of specific heats versus height, from Run~K3a.
}\label{pprofs_yH}\end{figure}
It is remarkable that at all times, the $\tau=1$ surface is approximately
flat, so there is no Wilson depression in our models.
To examine whether this is an artifact of the rather small values of
opacity in our models, which results in comparatively
larger radiative flux and radiative diffusivity,
and therefore horizontal temperature equilibration, we ran a similar
model, using however only vertical rays in the solution of \Eq{RT-eq}.
However, the results were virtually identical, suggesting that the
absence of Wilson depression is not connected with the enhanced luminosity
of our models that is used to reduce the Kelvin--Helmholtz timescale.
In \Fig{pprofs_TT} we show for Run~K3a vertical temperature profiles
through $x=0$ (i.e., through the structure) and $x=L_x/2$ (away from it)
as functions of $\tau$ and $z$.
At $x=0$, we clearly see that for $\tau\gg1$, the temperature drops
progressively below the value at $x=L_x/2$.
At $z=z_0$, the temperature is below $50,000\,{\rm K}$ at $x=0$,
while at $x=L_x/2$ we have $80,000\,{\rm K}$.
Note also that for $\tau<1$, the temperature is slightly enhanced
at $x=0$ compared to $x=L_x/2$.
This is expected, because here the vertical gradient of specific
entropy is positive, corresponding to stable stratification,
so any downward motion would lead to enhanced entropy and temperature
at that position.
In \Fig{pxprofs_TT_Bz} we show the corresponding temperature and magnetic
field profiles through a horizontal cut at $z=8\,{\rm Mm}$, which is just
beneath the surface.
Note that the temperature is reduced at the location of the structure,
but there is also an overall increase in the broader surroundings of the
structure, which we associate with the return flow from deeper down.
The magnetic field enhancement reaches values of the order of about
$50\,{\rm kG}$ (an amplification by a factor of 50) in a narrow spike.
These structures are confined by the strong converging return flow.
The downward speed can become comparable with the local sound speed;
see \Figs{pprofs_uz}{pprofs_uz_3em4}, where we compare two cases
with different forcing amplitudes.
Nevertheless, in both cases the speeds are similar.
This implies that the vertical motion is essentially in
free fall.
To verify this, we note that the speed of a body freely falling over
a distance $\Delta z$ is $v_{\rm ff}=\sqrt{2g\Delta z}$.
Using $\Delta z=5\,{\rm Mm}$, we find $v_{\rm ff}\approx50\,{\rm km}\,{\rm s}^{-1}$,
which is comparable with the speeds seen in \Figs{pprofs_uz}{pprofs_uz_3em4}.
As expected from earlier polytropic convection models with ionization
\citep{RT93}, the downflow advects less ionized material of lower
$\gamma$ and larger $c_{\rm p}$ downward; see \Figs{pprofs_yH}{pprofs_ss}.
Again, from the time evolution plots of $s$ and $y_{\rm H}$ shown in
\Figs{pprofs_yH}{pprofs_ss}, we find a correspondence between the profiles of
specific entropy and $y_{\rm H}$, as expected according to \Eqs{dsdh1}{dsdh2}.
Not surprisingly, the suction-induced downflow leads to values of $s$ that,
at larger depths inside the structure, agree with the photospheric values
higher up.
However, temporal changes in $\gamma$ are not as dramatic as the changes
with height.
Inside the structure, the specific entropy has photospheric values also
deeper down, and $s$ is nearly constant (about $0.14\,{\rm km}^2\,{\rm s}^{-2}\,{\rm K}^{-1}$)
in the range $3\,{\rm Mm}\leq z\leq9\,{\rm Mm}$ at $t=1.9\,{\rm ks}$.
\begin{figure}[t!]\begin{center}
\includegraphics[width=\columnwidth]{pprofs_ss}
\end{center}\caption[]{
Specific entropy and specific heat at constant pressure versus height,
from Run K3a.
}\label{pprofs_ss}\end{figure}
\subsection{${\rm H}^{-}$ opacity}
Finally, we compare with models using the ${\rm H}^{-}$ opacity.
Again, we use here the implementation of \cite{HDNB06,HNSS07}, which
was found to yield reasonable agreement with realistic opacities.
\subsubsection{One-dimensional equilibrium models}
In \Fig{hmprofall}, we give 1D equilibrium solutions as functions of
depth focusing on the top $5\,{\rm Mm}$ (Run~D has a height of $9\,{\rm Mm}$,
where we have chosen $T_0=6\times10^4\,{\rm K}$).
The zero on the abscissa coincides with the $\tau=1$ surface
and depth $d=z(\tau=1)-z$.
We find a stably stratified lower part with an unstable part
just beneath the $\tau=1$ surface.
The temperature decreases linearly from the bottom,
where $K$ is seen to be constant,
indicating the regime where the diffusion approximation applies,
similar to the other runs with Kramers opacity.
However, close to the $\tau=1$ surface
there is a sharp jump (decrease) in the temperature
by a factor of $\sim 2$,
unlike the runs with Kramers opacity, where
the temperature profile simply turns
from linearly decreasing to a constant value.
The temperature profile eventually settles to a constant
for $z>z(\tau=1)$ or $d<0$.
This jump in the temperature profile resembles
the profiles in Figs.~1 and 14 of \cite{SN98},
where the jump is again by a factor of $\sim 2$ in temperature.
It is attributed
to the extreme temperature sensitivity of the ${\rm H}^{-}$ opacity.
\begin{figure*}[t!]\begin{center}
\includegraphics[width=\textwidth]{hmprofall}
\end{center}\caption[]{
Profiles of $T$, $s$, $\rho$, $y_{\rm H}$, $\kappa$, and $K$ for
Run~D, with ${\rm H}^{-}$ opacity.
The abscissa gives the depth $d=z(\tau=1)-z$ in $\,{\rm Mm}$.
}\label{hmprofall}\end{figure*}
For comparison, we include Run~E, for which we have chosen $T_0=10^5\,{\rm K}$
and a height of $20\,{\rm Mm}$.
The value of $z_{\tau=1}$ is then nearly $14\,{\rm Mm}$.
Now, however, there is an extended deeper layer which is stably stratified.
\subsubsection{Two-dimensional models}
In the 2D model with ${\rm H}^{-}$ opacity, we chose
$\phi_0=-3\times10^{-3}\,{\rm g}\,{\rm cm}^{-3}\,{\rm km}^2\,{\rm s}^{-2}$
with $z_0=3\,{\rm Mm}$ for the height of the blob.
In \Fig{pplane_Hm}, we see that the flux concentrations
form well above the blob location,
close to the $\tau=1$ surface. This is again mainly
due to the negative gradient in entropy just below the $\tau=1$ surface,
as seen in \Fig{hmprofall}.
Furthermore, there is a very narrow dip
in the $\tau=1$ surface in the lower panel of \Fig{pplane_Hm} at $t=1.1\,{\rm ks}$,
which is flanked by two peaks caused by the return flows.
\begin{figure}[h!]\begin{center}
\includegraphics[width=\columnwidth]{pplane_Hm}
\end{center}\caption[]{
Same as \Fig{pplane}, but for Run~H3 using the ${\rm H}^{-}$ opacity
with a scaling coefficient of $10^{-6}$.
}\label{pplane_Hm}\end{figure}
Owing to the stable stratification of the lower part,
the resulting speeds are much lower than those in Runs~K3a and K3b.
As a consequence, the cooling of the temperature profile
caused by the downflow of low entropy material,
shown in \Fig{hmtempevol}, is reduced.
Compared to the case of Kramers opacity in \Fig{pprofs_TT},
the cooling here extends over a much smaller range of depths.
It is further limited because
the stratification soon becomes unstable towards larger values of $z$.
\begin{figure}[t!]\begin{center}
\includegraphics[width=\columnwidth]{hmtempevol}
\end{center}\caption[]{
Same as \Fig{pprofs_TT}, but for Run~H3 using the ${\rm H}^{-}$ opacity
with a scaling coefficient of $10^{-6}$.
Note the increase in temperature at $z\approx8\,{\rm Mm}$.
}\label{hmtempevol}\end{figure}
Comparing with the deeper model, where $T_0=10^5\,{\rm K}$ (Run~H10,
whose equilibrium model was Run~E), significant downflows can
only be obtained when we place the blob higher up ($z_0=10\,{\rm Mm}$) and
increase the forcing ($\phi_0=-3\times10^{-2}\,{\rm g}\,{\rm cm}^{-3}\,{\rm km}^2\,{\rm s}^{-2}$).
This is because of the more extended stably stratified deeper layer.
The maximum downflow speed is only $15\,{\rm km}\,{\rm s}^{-1}$.
\section{Conclusions}
The inclusion of partial ionization along with radiative transfer
forms an important step towards bridging the gap between idealized
models of magnetic flux concentrations and more realistic ones.
In this work, we have studied the effects of partial ionization, first in
1D hydrostatic models of the atmosphere in thermal equilibrium and
then in 2D hydraulic models of flux concentrations. In the radiative transfer
module, we have used either Kramers opacity or ${\rm H}^{-}$ opacity.
Comparison of the final 1D equilibrium atmospheres
with and without partial ionization shows that, while the solutions
do not differ much in the optically thick part, they are significantly
different in the range $1<\tau<100$, especially with respect
to the specific entropy and density profiles.
An interesting feature is the narrow layer with a negative gradient
in specific entropy close to the $\tau=1$ surface, which is persistent
across different atmospheres
with either Kramers opacity (for any polytropic index; shown for $n=3.25$,
1.5, and 1) or the ${\rm H}^{-}$ opacity.
This minimum in the $s$ profile is directly
connected to the minimum in the $y_{\rm H}$ profile.
In fact from \Eqs{dsdh1}{dsdh2}, it is clear that the extrema in $s$
correspond to the extrema in $y_{\rm H}$. This unstable layer
near $\tau=1$ is important since, in the 2D models, it causes the
flux concentrations to form right at the surface.
In 1D models with ${\rm H}^{-}$ opacity, the $\tau<1$ part is
stably stratified, as expected, and here, too, a narrow unstable layer is seen
close to the surface.
Due to the extreme sensitivity of the ${\rm H}^{-}$ opacity
to temperature, there is a distinctive jump (by a factor $\sim2$)
in the temperature profile after a prolonged decrease.
In order to study the effect of partial ionization on hydraulic
flux concentrations, our model employed an artificially imposed
source of negative pressure in the momentum equation.
This work has demonstrated that such a forcing function can lead to a
dramatic downflow that is channeled along vertical magnetic field lines.
A corresponding return flow is produced that converges in the upper
parts and draws vertical magnetic field lines together, which leads to
significant magnetic field amplification.
This strong amplification is connected with the high-speed descent of gas.
The descent is much faster than what is expected based on the artificially
applied pumping, and it is in fact virtually independent of it.
Weaker forcing only leads to a delay in what always tends to
develop into nearly free fall.
We do not expect such rapid descent speeds to occur in the Sun,
because there the gas is turbulent and will behave effectively in a much
more viscous and also more irregular fashion, where downdrafts break up
and change direction before they can reach significant speeds.
In the case of ${\rm H}^{-}$ opacity, the flux concentrations are weaker
because the deeper parts are stably stratified.
Here again, the turbulence would have mixed the gas even before
triggering downflows, so the background stratification would be more
nearly adiabatic to begin with.
This can be seen clearly from realistic solar simulations of \cite{SN98};
see their Fig.~13.
In models without partial ionization, flux concentrations form
just above the height where the forcing function is placed,
whereas in models including partial ionization, such flux concentrations
form at the surface (where $\tau=1$).
Here the specific entropy is unstably stratified and tends to drop by a
significant amount. Under the influence of downward suction,
this could still lead to significant descent speeds with a corresponding
return flow as a result of mass conservation.
The return flow, instead of closing
near the height where the forcing function is placed, closes at
the surface, from where the gas had earlier been pulled down.
It is surprising that the temperature reduction inside the downdrafts
is rather modest and to some extent compensated for by the supply of
hotter material from the converging return flow.
Thus, the magnetic structure is in our case largely confined by dynamic
rather than gas pressure.
Therefore the changes in the thermodynamic properties across the flux tube
are only moderate. As a consequence, the $\tau=1$ surface remains nearly flat.
In view of applications to sunspots, it would be important to consider the
effects of turbulent convection and its suppression by the magnetic field.
Such effects have been used in the models of \cite{KM00} that could explain
the self-amplification of magnetic flux by a mechanism somewhat reminiscent
of the negative effective magnetic pressure instability.
In our model, convection would of course develop automatically if we only
let the simulation run long enough, because the stratification is already
Schwarzschild unstable.
The degree to which the resulting convection contributes to the vertical
energy transport should increase with increasing opacity, but with the
rescaled opacities in our models it will be less than in the Sun.
Our findings also relate to the question of what drives convection in the
outer layers of the Sun.
Solving just the radiative equilibrium equations for the solar envelope would
result in a stable stratification, because the standard Kramers opacity
with $a=1$ and $b=-7/2$ corresponds to a stable polytrope with $n=3.25$.
Yet, those layers are unstable
mainly because of the continuous rain of low entropy material from the top.
Clearly, a more detailed investigation of this within the framework of the
present model would be needed, but this is well outside the scope
of the present paper.
Based on the results obtained in the present work, we can say that
the effects of partial ionization and resulting stratification are of crucial
importance for the production of strong magnetic flux amplifications just
near the visible surface.
\begin{acknowledgements}
PB thanks Nordita for support and warm hospitality
while work on this paper was being carried out.
She also acknowledges support from CSIR in India and the use
of the high performance computing facility at IUCAA, India.
This work was supported in part by the European Research Council under the
AstroDyn Research Project No.\ 227952, and
by the Swedish Research Council under the project grants
621-2011-5076 and 2012-5797.
We acknowledge the allocation of computing resources provided by the
Swedish National Allocations Committee at the Center for
Parallel Computers at the Royal Institute of Technology in
Stockholm and the National Supercomputer Centers in Link\"oping, the High
Performance Computing Center North in Ume\aa,
and the Nordic High Performance
Computing Center in Reykjavik.
\end{acknowledgements}
\section{Algorithm for approximate PMS}
\label{sec:alg}
We first find a suitable representation for rides in a high-dimensional ambient space and define a similarity measure between two rides that approximately captures the matching utility between them. We then use the asymmetric transformations of~\cite{SL15a} (from MIPS to MCSS) combined with cross-polytope LSH construction of~\cite{AILRS15} (for MCSS) to obtain our algorithm for approximate PMS.
\subsection{Spatial ride match}
\label{subsec:spatial}
Consider two rides $r$ and $r'$ as shown in Figure~\ref{fig:rides} and suppose it is feasible to match them together. For brevity, we denote the pickup $r_s$ and dropoff $r_t$ for ride $r$ simply as $s$ and $t$ respectively. Similarly, the pickup and dropoff of ride $r'$ are denoted by $s'$ and $t'$ respectively. Let $<s, a, b, c, d, t>$ and $<s', a', b', c', t'>$ denote the routes of rides $r$ and $r'$ respectively.
\begin{figure}
\includegraphics[width=\linewidth]{rides.jpg}
\caption{Two rides $r$ and $r'$. The route for $r$ is $<s, a, b, c, d, t>$ and for $r'$ is $<s', a', b', c', t'>$.}
\label{fig:rides}
\end{figure}
Intuitively, since points $a, b$ and $c$ are close to points $a', b'$ and $c'$ respectively, the matching utility, which is the cost saved by matching $r$ and $r'$ together, is approximately the cost $C(<a, b>)$ of the route segment $(a, b)$ plus the cost $C(<b, c>)$ of the route segment $(b, c)$. In order to capture this notion of spatial proximity of points, we discretize space and represent each point of the route by the space discretization it falls in. We can use geohashes or S2 cells for this purpose. Let the space discretized node for points $a$ and $a'$ be $A$, and so on, as shown in Figure~\ref{fig:rides-discretized}. The space discretized routes for rides $r$ and $r'$ are $<S, A, B, C, D, T>$ and $<S', A, B, C, T'>$ respectively.
\begin{figure}
\includegraphics[width=\linewidth]{rides_discretized.jpg}
\caption{Two rides $r$ and $r'$. The space discretized route for $r$ is $<S, A, B, C, D, T>$ and for $r'$ is $<S', A, B, C, T'>$. The space discretization is shown in dotted lines and the space discretized nodes are shown in capital letters.}
\label{fig:rides-discretized}
\end{figure}
Since we are interested in the cost of the overlapping segments, we represent rides by the set of space discretized edges in their routes instead of the sequence of space discretized nodes. We call this the spatial set representation of the ride. (Note that this is equivalent to 2-shingling, viewing the route as a document; shingling is a popular technique in document similarity search.) Representing a ride by its set of edges has the additional benefit that we get directionality for free. For example, two rides with exactly reversed routes of each other will have no edges in common.
Representing ride $r$ with the edge set $\{SA, AB, BC, CD, DT\}$ and ride $r'$ with $\{S'A, AB, BC, CT'\}$, the utility of matching them is roughly the sum of the costs of the edges in their intersection. Here, the cost of an edge between two space discretized nodes can be approximated as the cost between the mid-points of the discretizations. (For this approximation to work well, we must have a sufficiently fine space discretization.)
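A minimal Python sketch of this construction follows (using a plain latitude/longitude grid as a stand-in for geohash or S2 discretization; the cell size is an illustrative assumption, not a recommended value):
\begin{verbatim}
def discretize(point, cell=0.002):
    """Map a (lat, lng) point to a grid cell id; the ~0.002 degree
    cell is an illustrative stand-in for geohash-7 or an S2 level."""
    lat, lng = point
    return (int(lat // cell), int(lng // cell))

def spatial_set(route, cell=0.002):
    """Spatial set representation: the set of directed edges between
    consecutive space discretized nodes of the route, collapsing
    consecutive duplicates."""
    nodes = []
    for p in route:
        c = discretize(p, cell)
        if not nodes or nodes[-1] != c:
            nodes.append(c)
    return {(a, b) for a, b in zip(nodes, nodes[1:])}

# with unit edge costs, the similarity is the intersection size:
# S = len(spatial_set(r) & spatial_set(q))
\end{verbatim}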
In view of the above discussion, ignoring match feasibility, approximate PMS can be solved by solving the approximate {\em Spatial Overlapping Match Search}:
\begin{defn}
\label{defn:apx-soms}
Let $R$ be a set of rides. Let $S$ be the similarity measure between two rides defined as the sum of the costs of the edges in the intersection of their spatial set representations. The $(c, s, k)$-approximate Spatial Overlapping Match Search (SOMS) for $c < 1$ with failure probability $f$ is to construct a data structure over the set $R$ supporting the following query:
given any query ride $q$, if there exist $k$ rides $r_1, r_2, \ldots, r_k \in R$ such that $S(q, r) \geq s \ \forall r \in \{r_1, r_2, \ldots, r_k\}$, then report some $k$ rides $r'_1, r'_2, \ldots, r'_k \in R$ such that $S(q, r') \geq cs \ \forall r' \in \{r'_1, r'_2, \ldots, r'_k\}$, with probability $1 - f$.
\end{defn}
For simplicity of exposition, let us assume that each edge has unit cost. This assumption is somewhat justified if it can be ensured that the routing engine returns routes with approximately equidistant adjacent nodes; however, we will relax it soon. With this assumption, the similarity between two rides is simply the cardinality of the intersection of their spatial set representations. Since the cardinality of a set intersection can be computed as the inner product of the corresponding characteristic vectors, we represent each ride by the characteristic vector of its spatial set representation. In other words, the ambient space is a high-dimensional space where each dimension is indexed by a space discretized edge. The spatial vector representation of a ride is a vector in this space with all $0$'s except a $1$ along a dimension iff the corresponding edge is in the spatial set representation of the ride. The cardinality of the intersection between the set representations of two rides is the inner product between their vector representations. Approximate SOMS can thus be solved by solving approximate MIPS.
\paragraph{Relaxing unit edge cost assumption.} Let us now relax the assumption of unit cost per edge. We will define two vector representations for each ride: {\em spatial preprocessing vector representation} used while creating the asymmetric LSH dataset, and {\em spatial query vector representation} used while querying the asymmetric LSH dataset to find overlapping matches for the ride. Both the vector representations are defined in the same ambient space as defined above.
\begin{itemize}
\item {\bf spatial preprocessing vector representation:} A vector with all $0$'s except for along dimensions for which the corresponding edge is in the set representation of the ride, in which case it is the cost of that edge.
\item {\bf spatial query vector representation:} A vector with all 0's except for along dimensions for which the corresponding edge is in the set representation of the ride, in which case it is $1$.
\end{itemize}
It is easy to see that with the above representations, the similarity between two rides $r$ and $q$ can be computed as the inner product between the spatial preprocessing vector representation of $r$ and the spatial query vector representation of $q$. Thus, like before, approximate SOMS can be solved via approximate MIPS.
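A sketch of the two representations as sparse dictionaries, building on the \texttt{spatial\_set} function sketched earlier, together with the inner product that recovers the similarity $S$ (here \texttt{edge\_cost} is a placeholder for the cost of a space discretized edge, e.g., the travel cost between cell mid-points):
\begin{verbatim}
def preprocessing_vector(route, edge_cost, cell=0.002):
    """Sparse spatial preprocessing vector: edge -> edge cost."""
    return {e: edge_cost(e) for e in spatial_set(route, cell)}

def query_vector(route, cell=0.002):
    """Sparse spatial query vector: edge -> 1."""
    return {e: 1.0 for e in spatial_set(route, cell)}

def inner_product(p, q):
    """<p, q> = total cost of the overlapping discretized edges."""
    return sum(p[e]*q[e] for e in p.keys() & q.keys())
\end{verbatim}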
\subsection{Spatio-temporal ride match}
\label{subsec:spatio-temporal}
Let us now turn our attention to accounting for match feasibility. Let $\Delta T$ be the maximum allowable delay for any ride. We discretize time into intervals of length $2\Delta T$. We define a space-time discretized node as a pair consisting of a space discretization and a time discretization. A space-time discretized edge is simply an ordered pair of such nodes. The space discretized nodes in the route of a ride can be annotated with the discretized time of reaching there if served without any delay. Thus, the route of a ride can be represented as a sequence of space-time discretized nodes, or equivalently, as a set of space-time discretized edges. We call this the spatio-temporal set representation of the ride.
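Concretely, this extends the earlier spatial sketch by annotating each discretized node with its time bucket (a sketch reusing the \texttt{discretize} helper from above; \texttt{times} holds the undelayed arrival time at each route point):
\begin{verbatim}
def spatio_temporal_set(route, times, dT, cell=0.002):
    """Spatio-temporal set representation of a ride.

    route : list of (lat, lng) points along the route
    times : undelayed arrival time (seconds) at each point
    dT    : maximum allowable delay; buckets have length 2*dT
    """
    nodes = []
    for p, t in zip(route, times):
        node = (discretize(p, cell), int(t // (2*dT)))
        if not nodes or nodes[-1] != node:
            nodes.append(node)
    return {(a, b) for a, b in zip(nodes, nodes[1:])}
\end{verbatim}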
If rides $r$ and $r'$ are feasible to match and an overlapping segment contributes to the matching utility between them, the segment must be traveled by the unmatched rides within time $\Delta T$ of each other. This implies that there is a good chance that the spatio-temporal set representations of the two rides share the corresponding space-time discretized edge. Thus, approximate PMS can be solved by solving the approximate {\em (Spatio-Temporal) Overlapping Match Search}:
\begin{defn}
\label{defn:apx-oms}
Let $R$ be a set of rides. Let $S$ be the similarity measure between two rides defined as the sum of the costs of the edges in the intersection of their spatio-temporal set representations. The $(c, s, k)$-approximate (Spatio-Temporal) Overlapping Match Search (OMS) for $c < 1$ with failure probability $f$ is to construct a data structure over the set $R$ supporting the following query:
given any query ride $q$, if there exist $k$ rides $r_1, r_2, \ldots, r_k \in R$ such that $S(q, r) \geq s \ \forall r \in \{r_1, r_2, \ldots, r_k\}$, then report some $k$ rides $r'_1, r'_2, \ldots, r'_k \in R$ such that $S(q, r') \geq cs \ \forall r' \in \{r'_1, r'_2, \ldots, r'_k\}$, with probability $1 - f$.
\end{defn}
Like before, we can construct two spatio-temporal vector representations for a ride. Each dimension of the ambient vector space is now indexed by a space-time discretized edge. The {\em spatio-temporal preprocessing vector representation} of a ride is a vector of all $0$'s in this space except for any dimension present in its spatio-temporal set representation where it is the cost of the edge. Similarly, the {\em spatio-temporal query vector representation} of a ride is a vector of all $0$'s except for any dimension present in its spatio-temporal set representation where it is $1$. It is now easy to see that we can solve approximate OMS via MIPS using these two vector representations.
\subsection{The algorithm}
\label{subsec:alg}
We are now ready to formally present our algorithm for approximate potential match search. As noted in the previous subsection, our strategy is to solve it via solving approximate OMS. The algorithm is formally presented in Algorithm~\ref{alg:apx-oms}.
\begin{algorithm}
\KwIn{A set $R$ of $n$ rides.}
\KwOut{For each ride $r$, a set $S_r$ with $|S_r| = k$.}
Let $U = 0.75$\;
\For{each ride $r$ in R}{
Construct the spatio-temporal preprocessing vector representation $p_r$ of ride $r$\;
Normalize the vector $p_r$ to have $||p_r||_2 \leq U$ (see~\cite{SL15a})\;
Apply the preprocessing transformation of~\cite{SL15a} (see Section~\ref{sec:prelims}) on $p_r$ to get $P(p_r)$ with $m = 2$\;
}
Construct the LSH dataset using cross-polytope LSH of~\cite{AILRS15} with vectors $P(p_r)$, $\forall r \in R$\;
\For{each ride $r$ in R}{
Construct the spatio-temporal query vector representation $q_r$ of ride $r$\;
Normalize the vector $q_r$ to have $||q_r||_2 = 1$\;
Apply the query transformation of~\cite{SL15a} (see Section~\ref{sec:prelims}) on $q_r$ to get $Q(q_r)$ with $m = 2$\;
Construct $S_r$ by retrieving $k$ nearest neighbors of $Q(q_r)$ from the LSH dataset\;
}
\caption{Algorithm for approximate PMS.}
\label{alg:apx-oms}
\end{algorithm}
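For concreteness, a numpy sketch of the two transformations with $U=0.75$ and $m=2$ is given below; the explicit form of $P(\cdot)$ and $Q(\cdot)$ follows our reading of~\cite{SL15a} and should be checked against that paper:
\begin{verbatim}
import numpy as np

U, m = 0.75, 2

def P(x, max_norm):
    """Preprocessing transform: rescale the data so that
    ||x|| <= U, then append 1/2 - ||x||^2 and 1/2 - ||x||^4
    (the m = 2 extra coordinates)."""
    x = (U/max_norm)*np.asarray(x, dtype=float)
    nrm = np.linalg.norm(x)
    return np.concatenate([x, [0.5 - nrm**2, 0.5 - nrm**4]])

def Q(q):
    """Query transform: normalize to unit norm, append m zeros."""
    q = np.asarray(q, dtype=float)
    return np.concatenate([q/np.linalg.norm(q), np.zeros(m)])

# <Q(q), P(x)> is proportional to <q, x>, so cosine similarity
# search over the transformed vectors approximates MIPS.
\end{verbatim}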
We now show that our algorithm is efficient and has high probability of success.
\begin{thm}
\label{thm:apx-oms}
Given a set $R$ of $n$ rides, Algorithm~\ref{alg:apx-oms} solves $(c, s, k)$-approximate OMS requiring $O(n^\rho (k + \log n) \log k)$ time per query and $O(n^{1 + \rho} \log k)$ total space for some $\rho < 1$ depending on $c$ and $s$. Therefore, the total running time for the algorithm is $O(n^{1+ \rho} (k + \log n) \log k)$.
\end{thm}
\begin{proof}
Let $\mathcal{H}'$ be the family of cross-polytope LSH functions of~\cite{AILRS15}. By arguments similar to those in~\cite{SL15a}, coupled with the preprocessing and query transformations, these functions form a $(c, s, p_1, p_2)$-sensitive asymmetric LSH family $\mathcal{H}$ for MIPS, where $p_1, p_2$ depend on $c$ and $s$. The authors of~\cite{SL15a} show that $U = 0.75$ and $m = 2$ are reasonable parameter choices, which we fix in our algorithm.
We use the technique of amplification by concatenating $t = \log n/\log (1/p_2)$ randomly picked hash functions from $\mathcal{H}$ to get the family of hash functions $\mathcal{G}$. If the similarity between two rides $q$ and $r$, as defined in Definition~\ref{defn:apx-oms}, is at least $s$, we call them ``similar''. On the other hand, if the similarity between them is at most $cs$, we call them ``dissimilar''. We have:
\begin{itemize}
\item if $q$ and $r$ are similar, then $\Pr_{g \in \mathcal{G}}[g(q) = g(r)] \geq p_1^t = n^{-\rho}$ where $\rho = \log (1/p_1)/\log (1/p_2) < 1$.
\item if $q$ and $r$ are dissimilar, then $\Pr_{g \in \mathcal{G}}[g(q) = g(r)] \leq p_2^t = 1/n$
\end{itemize}
We construct the LSH data structure by using $L$ hash functions from $\mathcal{G}$ to create $L$ hash tables. The LSH dataset is constructed by hashing the spatio-temporal preprocessing vector representations of the rides in $R$ to each of these $L$ hash tables. Fix a query ride $q$. For a "similar" ride $r$, probability of no hash collision in any of the $L$ hash tables is $(1 - n^{-\rho})^L$. We want this probability to be at most $f/k$. Setting $L = n^\rho \ln(k/f)$, we have:
\begin{align*}
(1 - n^{-\rho})^L &\leq (e^{-n^{-\rho}})^L \\
&= (e^{-n^{-\rho}})^{n^\rho \ln(k/f)} \\
&= f/k
\end{align*}
By union bound, the probability that for any of the $k$ similar rides $r_1, r_2, \ldots, r_k$, there is no hash collision in any of the $L$ tables is upper bounded by $f$.
For a "dissimilar" ride $r$, the probability of hash collision in any of the $L$ hash tables is upper bounded by $L/n$.
Given a query ride, we hash the spatio-temporal query vector representation of $q$ using each of the $L$ hash functions to get the corresponding $L$ hash buckets. We retrieve all the rides from the those $L$ hash buckets. We check the similarity of each of these rides with $q$ and return $k$ non-dissimilar ones (i.e. similarity $> cs$ or "approximately similar").
Time needed for hashing the query is $O(Lt) = O(n^\rho \log k \log n)$. Time required for processing the first $k$ non-dissimilar rides is $O(kL) = O(k n^\rho \log k)$, as the same non-dissimilar ride can be retrieved at most $L$ times. Since the expected number of dissimilar rides is at most $n(L/n) = L$, the time needed for processing the dissimilar rides is $O(L) = O(n^\rho \log k)$. Therefore, our algorithm solves $(c, s, k)$-approximate OMS requiring $O(n^\rho (k + \log n) \log k)$ time per query. The space requirement of the algorithm is $O(nL) = O(n^{1 + \rho} \log k)$.
\end{proof}
\section{Experiments}
\label{sec:exp}
In order to validate our algorithm and the overall approach of solving approximate PMS via solving approximate OMS, we conducted extensive experimentation and evaluated the performance of our algorithm against benchmark approaches.
\subsection{Setup and methodology}
\label{subsec:setup_method}
\paragraph{Objective.} The problem we set out to experiment on is to search for approximate potential matches where the cost function is {\em travel duration}. Hence, the matching utility of two rides was defined as the savings in terms of travel duration achievable by matching and serving the two rides together.
\paragraph{Benchmarks.} We evaluated the performance of our algorithm (which we call LSH) against three state-of-the-art benchmark heuristics: CLOSEBY, HAVERSINE, and CLOSEBY-HAVERSINE. These heuristics are described in Section~\ref{sec:prelims}. Additionally, we computed the optimal matching utility achievable by any approach given unbounded time, and measured the performance of each approach against this optimum.
\paragraph{Metrics.} The metrics of interest to us are (i) total matching utility, (ii) computation time to construct the shareability network, and (iii) number of routing calls. Ideally, we want an algorithm that achieves high matching utility and is efficient both in terms of time for constructing edges of the shareability network and number of routing calls.
Routing calls are different from the other computations done during the construction of the shareability network. This is because services often need to pay for these calls in terms of network latency as well as usage cost to a routing service provider. It is common for the routing service provider to allow batch routing calls. We accounted for $10$ ms (including computation time and network latency) for every batch routing call with at most 100 routing requests. This choice of accounting insulates our results against any time-varying irregularities in the execution efficiency of these calls. Additionally, we accounted for the usage cost by also tracking the number of routing calls separately.
\paragraph{Experiments.} We wanted to evaluate our algorithm under different traffic patterns and varying load. For traffic pattern, we constructed two experiment scenarios:
\begin{itemize}
\item {\bf Morning commute:} This scenario consisted of 21310 rides from NY yellow taxi trip data requested on 2016-06-08 (Wed) between 8:00 AM and 9:00 AM local time.
\item {\bf Evening commute:} This scenario consisted of 19610 rides from NY yellow taxi trip data requested on 2016-06-08 between 6:00 and 7:00 PM local time.
\end{itemize}
For varying load, we subsampled the rides in each scenario at subsampling rates of $20\%$, $40\%$, $60\%$, $80\%$ and $100\%$ (full dataset). Thus we ran a total of 10 experiments, 5 for each scenario.
To model feasibility of ride matching, we enforced a match constraint of maximum allowable pickup delay which rendered a match infeasible if either of the rides in the match incurred a delay of more than $10$ minutes due to the matching. We used travel duration between pickup locations as pickup delay.
\paragraph{Implementation.} We implemented our algorithm as well as the three benchmark approaches to find top-k ($k = 10$) ride matches for each ride in the experiment.
The CLOSEBY approach was implemented using the Ball-Tree algorithm for near neighbor search, using the Haversine distance between pickup locations as the distance measure.
The HAVERSINE approach was implemented by computing pairwise Haversine matching utilities for every pair of rides, and then selecting the $k$ highest utility matches for each ride. For the CLOSEBY-HAVERSINE approach, we first found $1000$ matches for each ride using the CLOSEBY approach, and then selected $k$ matches among them with highest Haversine matching utility with the ride.
The LSH approach was implemented with geohash-$7$ space discretization and time discretization of $20$ minute intervals. The routes were obtained by running an instance of Open Source Routing Machine (OSRM) backend server locally. We set up the LSH data structure using the cross-polytope LSH family of~\cite{AILRS15} that is implemented in the FALCONN library. For this, we used $100$ hash tables and $14$ hash bits per table.
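For intuition, a single cross-polytope hash evaluation can be sketched in a few lines of numpy (we use one dense random rotation for clarity; FALCONN instead uses fast pseudo-random rotations and concatenates several hashes per table):
\begin{verbatim}
import numpy as np

def cross_polytope_hash(x, rot):
    """Map a unit vector to the closest signed standard basis
    vector after a random rotation: one hash of [AILRS15]."""
    y = rot @ x
    i = int(np.argmax(np.abs(y)))
    return (i, 1 if y[i] >= 0 else -1)

rng = np.random.default_rng(0)
d = 128
# random rotation via QR decomposition of a Gaussian matrix
rot, _ = np.linalg.qr(rng.standard_normal((d, d)))
x = rng.standard_normal(d)
x /= np.linalg.norm(x)
print(cross_polytope_hash(x, rot))
\end{verbatim}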
\paragraph{Evaluation.} The total matching utility achieved by an approach was computed by solving non-bipartite matching on the shareability network constructed. To recap, the nodes of the network were the rides in the experiment. A weighted (undirected) edge connecting two ride nodes was added to the network if either ride appeared as a match for the other. The edge was assigned a weight equal to the matching utility of the match between the two rides. The maximum weight matching obtained by solving this instance of non-bipartite matching gave us the total matching utility.
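A sketch of this evaluation step using the networkx library follows (\texttt{matching\_utility} is a placeholder for the routing based utility computation, returning $0$ for infeasible pairs):
\begin{verbatim}
import networkx as nx

def total_matching_utility(rides, top_k_matches, matching_utility):
    """Build the shareability network and solve non-bipartite
    maximum weight matching on it.

    top_k_matches[r] : the k candidate partners found for ride r
    matching_utility : function (r, r2) -> savings, 0 if infeasible
    """
    G = nx.Graph()
    G.add_nodes_from(rides)
    for r in rides:
        for r2 in top_k_matches[r]:
            w = matching_utility(r, r2)
            if w > 0:
                G.add_edge(r, r2, weight=w)
    pairs = nx.max_weight_matching(G)  # set of matched ride pairs
    return sum(G[u][v]["weight"] for u, v in pairs)
\end{verbatim}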
\subsection{Results}
\label{subsec:results}
Matching utilities obtained using different approaches under varying load for the morning commute scenario is presented in Figure~\ref{fig:morning_commute_utility}, and for the evening commute scenario in Figure~\ref{fig:evening_commute_utility}.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{morning_commute_utility.png}
\caption{Matching utility as a fraction of the optimal for the morning commute scenario under varying load.}
\label{fig:morning_commute_utility}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{evening_commute_utility.png}
\caption{Matching utility as a fraction of the optimal for the evening commute scenario under varying load.}
\label{fig:evening_commute_utility}
\end{figure}
Time to compute the shareability network using different approaches under varying load for the morning commute scenario is presented in Figure~\ref{fig:morning_commute_timing}, and for the evening commute scenario in Figure~\ref{fig:evening_commute_timing}.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{morning_commute_timing.png}
\caption{Shareability network computation time for the morning commute scenario under varying load.}
\label{fig:morning_commute_timing}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{evening_commute_timing.png}
\caption{Shareability network computation time for the evening commute scenario under varying load.}
\label{fig:evening_commute_timing}
\end{figure}
As for the number of routing calls, the LSH approach does require one upfront routing call per ride to get its route, while the other approaches only need to make routing calls after their match search process (i.e., once the edges of the shareability network are found and the edge weights need to be computed). However, it should be noted that the total number of routing calls is the same for all the approaches. To see this, note that computing the matching utility between two rides $r$ and $r'$ requires $8$ segment costs: $r_s$ to $r_t$, $r'_s$ to $r'_t$, $r_s$ to $r'_s$, $r'_s$ to $r_s$, $r_t$ to $r'_t$, $r'_t$ to $r_t$, $r_s$ to $r'_t$, and $r'_s$ to $r_t$. Thus for the entire network, we need $1$ routing call per node, and $6$ routing calls per edge. The calls per node are not dependent upon the outcome of the match search process and are known in advance. Hence those calls can be made upfront. These are precisely the routing calls the LSH approach requires for its match search process.
\subsection{Discussion}
\label{subsec:discussion}
In each of the 10 experiments, the optimal matching utility was found to be around $40\%$ of the total ride cost, suggesting huge potential for sharing rides and reducing cost.
Out of the four approaches, CLOSEBY is the fastest, as it simply needs to find near neighbors based on the Haversine distance between pickup locations. Not surprisingly, it performs the worst in terms of matching utility. HAVERSINE is impractically slow, as it must perform an exhaustive search over a quadratic search space, even without routing calls. Perhaps surprisingly, even with this exhaustive search, its matching utility is worse than that of CLOSEBY-HAVERSINE. This is because an exhaustive search finds matches that may have better overlap with a given ride but are not feasible to match with it.
In each of the experiments, LSH consistently achieved about $6\%$ higher matching utility than CLOSEBY-HAVERSINE. This adds up to significant cost savings that can be shared by the riders, the drivers and the platform. It is important to note that CLOSEBY-HAVERSINE misses this opportunity not because of restricted computation time. In fact, it is faster than LSH, growing linearly with load while LSH grows slightly superlinearly. CLOSEBY-HAVERSINE suffers from its lack of knowledge of the underlying road network, which forces it to consistently miss out on achievable efficiency. This can be confirmed by noting that the matching utility achieved by CLOSEBY-HAVERSINE stays almost constant at slightly over $90\%$ even as the load decreases to $20\%$. Since the number of CLOSEBY candidate matches stays constant at $1000$, this means CLOSEBY-HAVERSINE cannot achieve additional matching utility even when it searches over a higher fraction of the total rides.
We further note that the computation time for LSH, although slightly larger than that of CLOSEBY-HAVERSINE for higher loads, is still practical. The algorithm is completely parallelizable, and the computation time for the shareability network can be brought down to about 30 seconds with 10 parallel workers even for a very large ride pool of 20000 rides. One can also employ the dimensionality reduction techniques mentioned in the next section.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{utility_alt_routes.png}
\caption{Matching utility as a fraction of the optimal with alternate routes.}
\label{fig:utility_alt_routes}
\end{figure}
\paragraph{Alternate routes.} Having access to alternate routes for a ride helps the LSH approach find better matches. A ride often has multiple routes with nearly equal route costs, and a feasible match may have a very good overlap with one of the routes other than the minimum-cost one. The computation time does increase with the number of alternate routes, however. In our experiments, we used up to one additional alternate route whenever available (available for about $40\%$ of rides). Figure~\ref{fig:utility_alt_routes} compares LSH performance with one route and up to two routes.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{utility_geohash.png}
\caption{Matching utility as a fraction of the optimal with varying geohash discretization.}
\label{fig:utility_geohash}
\end{figure}
\paragraph{Discretizations.} It is important to choose the space discretization for the LSH algorithm carefully. Choosing too coarse a discretization improves computation time, as the dimension decreases, but it hurts matching utility both by finding spurious route overlaps (segments that are quite dissimilar look the same after discretization) and by missing some genuine ones (endpoints of an overlapping segment fuse after discretization). This becomes more pronounced with increasing load, as the chance of finding spurious overlaps increases. Choosing too fine a discretization increases the computation time significantly. In some cases, it might also hurt matching utility by causing the algorithm to miss some useful overlaps. Figure~\ref{fig:utility_geohash} compares performance with different space discretizations. In general, choosing a space discretization proportional to the typical length of a route edge works well (geohash-$7$ in our case). For time discretization, it is best to choose a large interval. This does not increase the dimension and avoids missing feasible matches, but can lead to some infeasible matches which can later be filtered out.
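To make the discretization concrete, here is a small sketch of turning a route into a sparse spatio-temporal vector; the \texttt{pygeohash} library, the precision, and the half-hour time bucket are illustrative choices, not the exact settings of our implementation.
\begin{verbatim}
import pygeohash as pgh

def route_to_sparse_vector(route, precision=7, time_bucket_secs=1800):
    # route: list of (lat, lon, eta_secs, edge_cost) along the route.
    # Returns {dimension_key: weight}; each dimension is a
    # (geohash cell, time bucket) pair, weighted by the route cost
    # spent in that cell.
    vec = {}
    for lat, lon, eta, cost in route:
        cell = pgh.encode(lat, lon, precision=precision)
        bucket = int(eta // time_bucket_secs)
        key = (cell, bucket)
        vec[key] = vec.get(key, 0.0) + cost
    return vec
\end{verbatim}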
\section{Optimizations and Extensions}
\label{sec:opt_ext}
In this final section, we present some optimizations we learned while carrying out our experiments that we believe can provide a significant performance boost in a real-world implementation. We also present several ideas for extensions that we believe can be exploited to extract additional value from our approach.
\subsection{Optimizations}
\label{subsec:opt}
As with machine learning algorithms, LSH-based algorithms can enjoy a significant performance boost with the right tuning of parameters and domain-based insights.
\paragraph{Number of tables.} Increasing the number of tables in the LSH data structure improves success probability for finding nearest neighbors but also increases the space requirement and the query time. It is advisable to first choose a suitable value based on space availability and the dataset size.
\paragraph{Number of hash functions.} Increasing the number of hash functions per table causes fewer hash collisions, improving query time, but also decreases the success probability and necessitates a larger number of tables. Fortunately, this can be handled using the multi-probe query strategies described next. This parameter should be chosen roughly equal to the logarithm of the dataset size, and further tuning should be done by performing a parameter search jointly with the number of query probes.
\paragraph{Multi-probe query.} It is possible to query a table multiple times using multi-probe query strategies to improve success probability without increasing the number of tables. The total number of probes across all tables per query should be chosen jointly with the number of hash functions per table. This can be achieved by iterating over the latter and, for each value, doing a binary search over the former to achieve the desired success probability and efficiency; a sketch is given below.
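A minimal sketch of this joint search, assuming the recall achieved grows monotonically with the number of probes; \texttt{build\_index}, \texttt{recall\_at\_k} and \texttt{query\_cost} are hypothetical callbacks wrapping the LSH library and the evaluation harness, not a specific library API.
\begin{verbatim}
import math

def tune_lsh(n_points, build_index, recall_at_k, query_cost,
             target_recall=0.9, k=10):
    # build_index, recall_at_k and query_cost are user-supplied
    # callbacks (hypothetical interfaces, not a library API).
    best = None
    base = max(1, round(math.log2(n_points)))   # ~log of dataset size
    for num_hashes in range(max(1, base - 2), base + 3):
        index = build_index(num_hashes)
        lo, hi = 1, 4096          # binary search for the fewest probes
        while lo < hi:            # that reach the target recall
            mid = (lo + hi) // 2
            if recall_at_k(index, mid, k) >= target_recall:
                hi = mid
            else:
                lo = mid + 1
        cost = query_cost(index, lo)
        if best is None or cost < best[0]:
            best = (cost, num_hashes, lo)
    return best  # (query cost, hash functions per table, probes)
\end{verbatim}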
\paragraph{Dimension reduction.} The ambient vector space of ride representations has very large dimension and the data is very sparse along most dimensions. This is because most rides happen around busy areas while the remote areas only see few rides. This makes our approach ripe for employing dimensionality reduction techniques to transform the ride vectors to a low dimensional space before storing them in the LSH data structure. This can boost efficiency significantly without hurting success probability much.
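A sketch of this step, assuming the ride vectors are stacked row-wise in a sparse matrix; the target dimension of $512$ is an illustrative choice, and a sparse random projection is one suitable transform since random projections approximately preserve inner products, so the MIPS structure survives the reduction.
\begin{verbatim}
from sklearn.random_projection import SparseRandomProjection

def reduce_ride_vectors(ride_matrix, n_components=512):
    # ride_matrix: sparse (n_rides x ambient_dim) matrix of ride
    # vectors; the projected vectors are stored in the LSH structure.
    projector = SparseRandomProjection(n_components=n_components,
                                       random_state=0)
    return projector.fit_transform(ride_matrix)
\end{verbatim}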
\paragraph{Normalization.} Normalizing the dataset by centering the ride vectors along each dimension provides a significant gain in success probability.
\subsection{Extensions}
\label{subsec:ext}
Our approach can be extended in several interesting ways.
\paragraph{Driver matching.} Although we presented our approach and conducted our experiments with a match pool of only rides, our algorithm can be extended to also match drivers with rides. For this, each existing driver route (with possibly multiple passengers in the car) can be encoded by a vector representation in the same way. The algorithm can then find matches between drivers and rides as well as between rides.
\paragraph{General linear cost function.} Our approach works for any cost function that is linear in the route traveled and utility defined as savings in terms of that cost function. Here, linearity means total cost of the route is the sum of costs incurred along the edges of the route. Examples of such cost functions include travel duration, travel distance, costs incurred along the route such as tolls and taxes, or any linear combination of such linear cost functions. This is useful as the cost of serving a ride is often estimated by a combination of time, distance and other factors.
\paragraph{Time varying cost function.} Our approach can naturally adapt to time-varying route costs. For example, travel durations vary significantly with time. As another example, a smart transportation system can impose time-varying tolls to regulate flow of traffic. As long as we have the accurate edge weights, our algorithm can find near-optimal matches for the current time, while the heuristic methods cannot adapt to these changes intelligently.
\paragraph{Incremental computation.} Our algorithm can be implemented to do incremental computation where the LSH data structure is persisted, and gets updated as new rides come (by adding them to the data structure) and rides get matched (by replacing them with the combined driver route). Dynamic LSH schemes are excellent candidates for such an implementation.
\paragraph{Hybrid rideshare.} A rideshare platform can have a ride pool where some rides are on-demand and real-time while others are scheduled ahead of time. Since our algorithm employs a spatio-temporal search, it can intelligently match rides from such hybrid pools respecting pickup constraints. For example, it can match a current ride with another scheduled to arrive 20 minutes from now, if the current ride is expected to reach near the pickup of the future ride in that time and they have good route overlap thereafter.
\section{Introduction}
\label{sec:intro}
In this paper, we consider carpooling, or sharing a ride with other passengers. The importance of this problem for planning urban transportation and designing ``smart cities'' lies in its promise to provide a solution for serious urban issues of excessive traffic congestion, resource consumption and air pollution~\cite{AS94,CAWYB13}.
Real-time information and monitoring of urban mobility and the ability to do large scale computation on real-time data together allow ridesharing platforms to enable sharing of rides to unprecedented levels by exploiting unused vehicle capacity. A number of works~\cite{S14,AESW11,MZW13} have studied urban traffic and confirmed the tremendous potential of real-time urban-scale ridesharing to reduce the burden on urban transportation. In order to balance the costs (detours, increased travel time, loss of privacy) with the utility (traffic decongestion, decreased resource consumption and air pollution, cheaper rides, use of high-occupancy lanes, marketplace efficiency) of ridesharing, matching rides in a way that minimizes cost and maximizes utility of sharing is of paramount importance for the success of ridesharing.
The graph-theoretic framework of {\em shareability networks}, introduced in~\cite{S14} and extended in the influential work of Alonso-Mora et al.~\cite{ASWFR17}, has been the most promising approach to tackle the challenge of urban-scale, real-time ride matching. The nodes of the network are either rides or driver routes. An edge between two rides represents the feasibility of matching them together, while an edge between a ride and a driver route represents the feasibility of adding the ride to the driver route. Here, feasibility means that the platform-defined user experience constraints (like maximum pickup wait, maximum detour, etc.) are satisfied for all the rides concerned. This basic building block has been used in several research works~\cite{ASWFR17,SMG19,S14}, which build on top of the shareability network framework and solve various matching problems (bipartite, non-bipartite, hypergraph) to enable real-time ridesharing.
Computing the real-time shareability network, however, is a computationally intensive task. Comparing nodes with each other for feasibility has a time complexity of $O(n^2)$ and requires $O(n^2)$ calls to a routing service for a match pool with $n$ nodes (rides or driver routes). For a ride-sharing platform, it is common to have several thousand rides in the match pool for a large, densely populated urban region at peak commute times. Such platforms also employ features such as batching of arriving ride requests (for example, Lyft shared saver, Uber express pool) and swapping already-made matches, which further thicken the match pool. Generating the shareability network via a quadratic-complexity search is therefore ruled out.
Heuristic-based methods have been proposed and comprise the state of the art for dealing with this combinatorial challenge. These methods output a small set of potential matches for each node using some heuristic search algorithm. For example, \cite{ASWFR17} suggested constructing edges only between ride requests with nearby pick-up locations. However, it is not uncommon to find a large number of co-located ride requests at roughly the same time (e.g., outside train stops when trains arrive, or outside event venues when events end). Moreover, a service that lets passengers wait longer for cheaper rides or lets passengers schedule their rides in advance should be able to make matches between rides with far-away pickups. Another class of heuristics involves computing the Haversine overlap between matches in the hope that it captures the true utility of matching. These methods need a way to contain the search space (brute force is still quadratic) and suffer from ignoring the real road network.
\subsection{Our contributions}
\label{subsec:contribution}
In this work, we propose a principled approach to tackle this combinatorial challenge. We devise an efficient randomized spatio-temporal search algorithm to construct a sparse, utility-aware shareability network.
In order to efficiently find a small set of high utility potential matches for each node, we use the theory of locality sensitive hashing (LSH). In particular, we use an LSH construction for Maximum Inner Product Search (MIPS). At a high level, our approach works by finding suitable vector representations for rides and driver routes in a high-dimensional ambient vector space that capture their physical routes. Next, we define a similarity metric for this space that respects matching utility: larger similarity between vector representations implies higher matching utility between nodes. We then construct a locality sensitive hashing data structure for storing the vector representations that allows efficient search for similar vectors: for every vector, we can find $k$ (say $10$) approximately most-similar vectors (according to our similarity measure) efficiently in time $O(n^{1 + \rho} (k + \log n) \log k)$ and space $O(n^{1 + \rho} \log k)$ for a small $\rho < 1$. In the process, we only make $n$ calls to a routing service, one for each ride. This is sufficient to allow our algorithm to exploit the knowledge of the real road network.
In the rest of the paper, for simplicity of exposition, we only talk of matching rides, and describe how to handle driver routes in the last section. Our algorithm can be extended in several interesting ways. It works for matching utility defined in terms of any cost function which is linear in the route (e.g. travel duration, travel distance, tolls etc.). Moreover, it can handle time-varying cost functions (e.g. varying with traffic). It can work on a hybrid ridesharing system that provides real-time, on-demand rides as well as pre-scheduled ones. It even allows incremental computation. All these extensions are sketched in Section~\ref{sec:opt_ext}.
In order to demonstrate the practicality of our algorithm, we conducted large-scale experiments using publicly available NY yellow taxi trip datasets under different traffic patterns and varying load. Results showed that our algorithm consistently outperforms the best state-of-the-art heuristic method by as much as $6\%$.
To summarize, our contributions include:
\begin{enumerate}
\item A novel principled approach for an efficient randomized construction of a sparse, real-time, utility-aware shareability network.
\item Several interesting and useful extensions to the proposed algorithm that makes it even more practically appealing.
\item Demonstration of practical utility of the algorithm via large-scale experiments.
\end{enumerate}
\section{Preliminaries}
\label{sec:prelims}
\subsection{LSH and nearest neighbor search}
\label{subsec:lsh-nns}
For a similarity measure $S$ on a domain $D$, define a ball of radius $s$ centered around point $q$ as $B(q, s) \coloneqq \{p : S(q, p) \geq s\}$.
\begin{defn}[Locality sensitive hashing]
\label{defn:lsh}
A family of functions $\mathcal{H} = \{h : D \rightarrow U\}$ is called $(c, s, p_1, p_2)$-sensitive for domain $D$ with similarity measure $S$ if for any $p, q \in D$:
\begin{itemize}
\item if $p \in B(q, s)$ then $\Pr_{h \in \mathcal{H}}[h(q) = h(p)] \geq p_1$,
\item if $p \notin B(q, cs)$ then $\Pr_{h \in \mathcal{H}}[h(q) = h(p)] \leq p_2$.
\end{itemize}
\end{defn}
In order for an LSH family to be useful, it must satisfy inequalities $p_1 > p_2$ and $c < 1$. Note that locality sensitive hashing can be defined both in terms of a similarity measure or a distance measure. The similarity measure definition is convenient for our purposes in this paper.
An LSH family can be used to solve approximate near neighbor search.
\begin{defn}[Approximate near neighbor search]
\label{defn:apx-near}
The $(c, s)$-approximate near neighbor search problem for $c < 1$ with failure probability $f$ is to construct a data structure over a set of points $P$ in domain $D$ with similarity measure $S$ supporting the following query:
given any query point $q \in D$, if $p \in B(q, s)$ for some $p \in P$, then report some $p' \in P \cap B(q, cs)$, with probability $1 - f$.
\end{defn}
\begin{thm}[Restatement of Theorem 3.4 in~\cite{HIM12}]
\label{thm:lsh2nn}
Let $p_1, p_2 \in (0,1)$, $c < 1$ and $\rho = \log(1/p_1)/\log(1/p_2) < 1$. Given a $(c, s, p_1, p_2)$-sensitive family $\mathcal{H}$ for a domain $D$, there exists a data structure for $(c, s)$-approximate near neighbor search over a set $P \subset D$ of at most $n$ points with constant failure probability requiring $O(n^\rho \log n)$ query time and $O(n^{1+\rho})$ space.
\end{thm}
The notion of asymmetric LSH was defined in~\cite{SL14}, where it was also shown that approximate near neighbor search can be solved using asymmetric LSH.
\begin{defn}[Asymmetric LSH]
A family of functions $\mathcal{H}$, along with the two vector functions $P: R^d \rightarrow R^{d'}$ ({\em preprocessing transformation}) and $Q: R^d \rightarrow R^{d'}$ ({\em query transformation}), is called $(c, s, p_1, p_2)$-sensitive for $R^d$ with similarity measure $S$ if for any $p, q \in R^d$:
\begin{itemize}
\item if $S(q, p) \geq s$, then $\Pr_{h \in \mathcal{H}}[h(Q(q)) = h(P(p))] \geq p_1$,
\item if $S(q, p) \leq cs$, then $\Pr_{h \in \mathcal{H}}[h(Q(q)) = h(P(p))] \leq p_2$.
\end{itemize}
\end{defn}
\begin{defn}[Approximate maximum inner product search]
\label{defn:apx-mips}
Given a collection $P \subset R^d$ of size $n$, the $c$-approximate Maximum Inner Product Search (MIPS) with failure probability $f$ is to support the following query: given an input query point $q \in R^d$, find $p' \in P$ such that
\[ q^Tp' \geq c \max_{p \in P}q^T p,\]
with probability $1-f$.
\end{defn}
A connection between inner product and cosine similarity was established by Shrivastava and Li~\cite{SL15a}, who defined the following preprocessing and query transformations from $R^d$ to $R^{d+m}$ ($m = 3$ suffices):
\[P(x) \coloneqq [x; 1/2 - ||x||_2^2; 1/2 - ||x||_2^4; \ldots; 1/2 - ||x||_2^{2^m}], \]
\[Q(x) \coloneqq [x; 0; 0; \ldots; 0]. \]
This lets one use an LSH scheme for cosine similarity to obtain an asymmetric LSH scheme for inner product, and thereby solve approximate MIPS. In this paper, we use the cross-polytope LSH of Andoni et al.~\cite{AILRS15} for cosine similarity along with the preprocessing and query transformations of~\cite{SL15a} for this purpose.
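For concreteness, a sketch of these two transformations in Python; it assumes, as required by the construction of~\cite{SL15a}, that the data have already been rescaled so that $||x||_2 < 1$.
\begin{verbatim}
import numpy as np

def preprocess(x, m=3):
    # P(x) = [x; 1/2 - ||x||^2; 1/2 - ||x||^4; ...; 1/2 - ||x||^(2^m)]
    # assumes x has been rescaled so that ||x||_2 < 1
    out, sq = [x], np.dot(x, x)
    for _ in range(m):
        out.append(np.array([0.5 - sq]))
        sq = sq * sq                    # ||x||^(2^(i+1))
    return np.concatenate(out)

def query(x, m=3):
    # Q(x) = [x; 0; 0; ...; 0]
    return np.concatenate([x, np.zeros(m)])
\end{verbatim}
With these transforms, the cosine similarity of $P(p)$ and $Q(q)$ is monotone in the inner product $q^Tp$, which is what allows a cosine-similarity LSH to answer MIPS queries.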
\subsection{Potential match search}
\label{subsec:prob}
Let $p_1, p_2, \ldots, p_l$ be points on earth. Let $<p_1, p_2, \ldots, p_l>$ denote a route that starts at point $p_1$, traverses $p_2, \ldots, p_{l-1}$ in order and ends at $p_l$, and let $C(<p_1, p_2, \ldots, p_l>)$ denote its cost. Given a ride $r$, let $r_s$ and $r_t$ denote its pickup and dropoff locations respectively. The cost of the ride $r$, denoted by $C(r)$ abusing notation, is defined as $C(<r_s, r_t>)$. The cost can be measured in terms of travel distance, travel duration or any other metric linear in the route.
Fix a cost function. The utility of matching two rides comes from the cost savings achieved by serving them together compared to serving them individually. More formally, abusing notation, let $C(\{r, r'\})$ denote the cost of serving rides $r$ and $r'$ together:
\begin{equation*}
\begin{split}
C(\{r, r'\}) = \min\{& C(<r_s, r'_s, r_t, r'_t>), C(<r_s, r'_s, r'_t, r_t>), \\
& C(<r'_s, r_s, r'_t, r_t>), C(<r'_s, r_s, r_t, r'_t>)\}.
\end{split}
\end{equation*}
We will assume that it is feasible to match two rides $r$ and $r'$ only if the matching satisfies a maximum allowable delay constraint for each of the rides. The feasibility function $F(\{r, r'\})$ is $1$ if it is feasible to match $r$ and $r'$, $0$ otherwise. The matching utility $U$ of matching rides $r$ and $r'$ together is then defined as:
\[ U(\{r, r'\}) = \left( C(r) + C(r') - C(\{r, r'\}) \right) \cdot F(\{r, r'\}).\]
Note that in order to maximize the total matching utility from all the matches made, it may not be desirable to match a given ride with the one with which it has the highest matching utility. Therefore, the approximate potential match search is to find $k$ matches for every ride that approximately maximize matching utility, for a small enough $k$ so that a sparse, utility-aware shareability network can be constructed. An optimal (non-bipartite) matching on this sparse shareability network then yields approximately optimal total matching utility.
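For reference, a direct sketch of this computation from the definitions above; \texttt{route\_cost} stands in for the routing service and \texttt{feasible} for the platform's delay checks, both hypothetical interfaces.
\begin{verbatim}
def pairing_cost(r, rp, route_cost):
    # r = (pickup, dropoff); route_cost(a, b) queries the routing
    # service. C({r, r'}) is the cheapest of the four orders in
    # which each pickup precedes its own dropoff.
    (rs, rt), (ps, pt) = r, rp
    orders = [(rs, ps, rt, pt), (rs, ps, pt, rt),
              (ps, rs, pt, rt), (ps, rs, rt, pt)]
    return min(sum(route_cost(a, b) for a, b in zip(o, o[1:]))
               for o in orders)

def matching_utility(r, rp, route_cost, feasible):
    # U({r, r'}) = (C(r) + C(r') - C({r, r'})) * F({r, r'})
    if not feasible(r, rp):
        return 0.0
    solo = route_cost(r[0], r[1]) + route_cost(rp[0], rp[1])
    return solo - pairing_cost(r, rp, route_cost)
\end{verbatim}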
\begin{defn}[Approximate potential match search]
\label{defn:apx-pms}
Let $R$ be a set of rides. Let $S$ be the similarity measure between two rides defined as the matching utility between them. The $(c, s, k)$-approximate Potential Match Search (PMS) for $c < 1$ with failure probability $f$ is to construct a data structure over the set $R$ supporting the following query:
given any query ride $q$, if there exist $k$ rides $r_1, r_2, \ldots, r_k \in R$ such that $S(q, r) \geq s \ \forall r \in \{r_1, r_2, \ldots, r_k\}$, then report some $k$ rides $r'_1, r'_2, \ldots, r'_k \in R$ such that $S(q, r') \geq cs \ \forall r' \in \{r'_1, r'_2, \ldots, r'_k\}$, with probability $1 - f$.
\end{defn}
\subsection{Heuristic methods}
\label{subsec:heuristic}
To the best of our knowledge, search for approximate ride matches has not been formalized before this work and there are no principled methods known. The state-of-the-art heuristic methods include:
\begin{itemize}
\item {\bf CLOSEBY:} $k$ nearest rides to the given ride with respect to pickup location are chosen as potential matches.
\item {\bf HAVERSINE:} $k$ rides with the highest Haversine matching utilities with the given ride are chosen as the potential matches. Haversine matching utility is defined with respect to the Haversine cost function, which is the Haversine distance between two points.
\item {\bf CLOSEBY-HAVERSINE:} A hybrid approach of the above two where a sufficiently large number of rides nearest to the given ride with respect to pickup location are first selected, and the $k$ among them with highest Haversine matching utilities with the given ride are then chosen as potential matches.
\end{itemize}
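A minimal sketch of the CLOSEBY-HAVERSINE baseline; \texttt{scikit-learn}'s ball tree with the haversine metric is one convenient way to retrieve nearby pickups, and \texttt{haversine\_utility} is a hypothetical callback computing the Haversine matching utility of a candidate pair.
\begin{verbatim}
from sklearn.neighbors import BallTree

def closeby_haversine(pickups_rad, haversine_utility,
                      n_candidates=1000, k=10):
    # pickups_rad: (n, 2) array of pickup (lat, lon) in radians;
    # assumes the pool contains more than n_candidates rides.
    tree = BallTree(pickups_rad, metric="haversine")
    _, idx = tree.query(pickups_rad, k=n_candidates + 1)  # +1 drops self
    matches = []
    for i, cand in enumerate(idx):
        cand = [int(j) for j in cand if j != i]
        # rank the CLOSEBY candidates by Haversine matching utility
        cand.sort(key=lambda j: haversine_utility(i, j), reverse=True)
        matches.append(cand[:k])
    return matches
\end{verbatim}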
\section{Related literature}
\label{sec:lit_review}
The cost and effects of congestion for urban transportation has been widely studied~\cite{AS94,CAWYB13,PH13,CL12}.
Many of the studies related to mobility-on-demand and fleet management consider ridesharing without pooling rides, e.g~\cite{PSFR12,ZP14,SSGF16}. Heuristic-based solutions to matching problems were studied in~\cite{AESW11,MZW13}. There has also been a lot of research interest in studying autonomous ride-sharing systems~\cite{PSFR12,SSGF16,CA16}.
Recently, there has been a lot of research interest in carpooling and associated challenges. Quite a few studies have analyzed urban traffic and estimated huge potential for sharing rides~\cite{S14,AESW11,MZW13}. In fact, Santi et al~\cite{S14} showed about $80\%$ of rides in Manhattan can be shared by two riders.
Alonso-Mora et al.~\cite{ASWFR17} studied real-time high-capacity ride-sharing by extending the framework of shareability networks and proposed heuristic methods to construct the network based on spatial proximity of ride pickups. Simonetto et al.~\cite{SMG19} proposed a heuristic to speed up matching at a loss of efficiency, matching arriving ride requests in small chunks by solving a bipartite matching at very high frequency. The greedy nature of their algorithm makes the system increasingly inefficient with an increasing batch window of matching, such as employed in Lyft shared saver and Uber express pool. Other approaches for sharing rides have been proposed in~\cite{BS15,STY15}. Very recently, carpooling has also been formulated and studied in formal graph-theoretic online matching settings~\cite{HKTWZZ18,ABDJSS19}.
Nearest neighbor search (NNS) is a problem of major importance in several areas of science and engineering. Approximate nearest neighbor search techniques based on locality sensitive hashing (LSH) were introduced in~\cite{IM98,GIM99}. Owing to being parallelizable and suitable for high dimensional data, these techniques have found widespread use in research and industrial practice~\cite{GSM03,B01,JDS11,STSMIMD13}. However, LSH based techniques have not been introduced for the ride matching problem yet.
The approximate Maximum Inner Product Search (MIPS) is a fundamental problem with a variety of applications in areas such as recommendation systems~\cite{LCYM17,SRJ05}, deep learning~\cite{SS17}, etc. The concept of Asymmetric LSH (ALSH) was proposed in~\cite{SL14} and several ALSH schemes for MIPS have been proposed since~\cite{SL14,SL15a,HMFFT18}. In our work, we use the ALSH construction of~\cite{SL15a} that reduces MIPS to Maximum Cosine Similarity Search (MCSS). We then use the cross-polytope LSH of~\cite{AILRS15} to solve MCSS efficiently.
\section{Double pion photoproduction on the nucleon.}
The $\gamma N \rightarrow \pi^+ \pi^- N$ reaction is attracting the attention of both theoretical and experimental groups and is bound to play a significant role in photonuclear reactions, much as the $\gamma N \rightarrow \pi N$ reaction did in the past.
Apart from the work of the Valencia group which I will report here, there
is work by other groups. A simplified model containing many of the
important features of the reaction was worked out in \cite{a1} and
improved in \cite{a2}, \cite{1}. The model of \cite{1} contains more
mechanisms than the one of \cite{a2}, presently under revision
\cite{a3}. On the other hand, the model of \cite{a2} incorporates an
approximate unitarization prescription which allows one to go to higher
energies with the model. In ref. \cite{a4} a simplified model is also used
incorporating, however, some $\rho$ decay channels. This work has been
revised in \cite{a5} in view that some mass distribution was in
disagreement with the data, and a new parametrization is offered, which
relies on a range parameter for the $\rho$ of the order of
$200 \, MeV$, which would not accommodate easily other known
facts of phenomenology as the isovector $\pi N$ s-wave amplitude.
The model of \cite{1} contains parameters determined solely from $\gamma$
and $\pi$ couplings to nucleons and resonances plus known properties
of resonance decay with some undetermined sign borrowed from
quark models.
The $(\gamma, \pi \pi)$ reaction has also been studied at threshold with
the aim of testing chiral perturbation theory \cite{a6,a7},
particularly the $\gamma p \rightarrow \pi^0 \pi^0 p$ reaction where
chiral loops are very important.
The $\gamma p \rightarrow \pi^+ \pi^- p$ reaction was studied in ref.
\cite{1} using effective Lagrangians, which incorporate the couplings
of the photon and pion to the nucleon and resonances. The $N$
and the $\Delta (1232), \, N^* (1440)$ and $N^* (1520)$ (or $N'^*$)
resonances were taken into account.
Furthermore, the $\rho$ as an intermediate state coupling to two pions
was also considered. The model reproduces fairly well the experimental
cross section \cite{2}. The model is further improved in \cite{3} to account for $s$ and $d$ waves in the $N'^* \rightarrow \Delta \pi$ decay, while at the same time reducing the number of Feynman diagrams needed to study the reaction in the range of Mainz energies
$E_\gamma \leq 800\; MeV$. In ref. \cite{3} this simplified model is
used to evaluate cross sections for all other charge channels:
$\gamma p \rightarrow \pi^+ \pi^0 n \, , \,\; \gamma p \rightarrow
\pi^0 \pi^0 p \, , \,\; \gamma n \rightarrow \pi^+ \pi^- n \, , \; \,
\gamma n \rightarrow \pi^- \pi^0 p \, , \,\; \gamma n \rightarrow
\pi^0 \pi^0 n $.
The agreement with the data is overall good but some discrepancies
remain in the peak of the $\gamma p \rightarrow \pi^+ \pi^0 n$
reaction and its charge conjugate one, the $\gamma n \rightarrow
\pi^- \pi^0 p$ reaction, recently measured \cite{4}.
The relevance of this reaction for the resonance field is the novel
information that it provides on the $N^* (1520)$ resonance, which
I try to explain here. In Fig. 1a, I show the dominant diagram in the
$\gamma p \rightarrow\pi^+ \pi^- p$ reaction. It is the
$\Delta N \pi \gamma$ Kroll Ruderman or gauge term. On the other hand in
Fig. 1b, I show a diagram where the
$N^* (1520,J^\pi = \frac{3}{2}^-)$ is photoexcited from the nucleon and
then it decays into $\Delta \pi$, the $\Delta $ decaying later into $N \pi$.
\begin{figure}[h]
\centerline{\protect\hbox{
\psfig{file=fig1.eps,height=3.0cm,width=8.0cm}}}
\caption{Feynman diagrams}
\label{fig1}
\end{figure}
\noindent
From the 1/2 and 3/2 experimental $N^* (1520)$ helicity amplitudes we can construct an effective Lagrangian, from which we obtain a transition operator given by
\begin{equation}
- i \delta H = i g_\gamma \vec{S}\,^\dagger \vec{\epsilon}
+ g_\sigma
(\vec{S}\,^\dagger \times \vec{\sigma}\,) \vec{\epsilon} \ ,
\end{equation}
\noindent
where $S^\dagger $ is a spin transition operator from spin 1/2 to spin
3/2. Furthermore, we write the $N'^* \rightarrow \Delta \pi$
transition operator as
\begin{equation}
- i \delta H = - [\tilde{f} + \frac{\tilde{g}}{\mu^2} (\vec{S}\,^\dagger
\vec{q}\;) \; (\vec{S} \vec{q} \,)\; ] T^{\dagger \lambda} \quad + \;h. c. \ ,
\end{equation}
\noindent
where $T^\dagger $ is the isospin 1/2 to 3/2 transition operator and
$\mu$ the pion mass.
The choice of eq. (2) is not arbitrary. It allows $N'^* \rightarrow \Delta \pi$ decay in $s$ and $d$ waves and provides a $q$ dependence of the amplitudes ($q$ is the CM pion momentum) which yields the best agreement with experiment. By means of eq. (2) we can write the $s$ and $d$ wave decay amplitudes in
$N'^* \rightarrow \Delta \pi$. We find
\begin{equation}
\begin{array}{l}
A_s = - \sqrt{4 \pi} (\tilde{f} + \frac{1}{3} \tilde{g} \frac{\vec{q}\,^2}{
\mu^2}) \ ,\\[2ex]
A_d = \frac{\sqrt{4 \pi}}{3} \; \tilde{g}\; \frac{\vec{q}\,^2}{\mu^2} \ ,
\end{array}
\end{equation}
\noindent
and the width is given by
\begin{equation}
\Gamma_{N'^* \rightarrow \Delta \pi} = \frac{1}{4 \pi^2} \frac{m_\Delta}{
m_{N'^*}} \, q (|A_s|^2 + | A_d|^2) .
\end{equation}
From the analysis of the $\pi N \rightarrow \pi \pi N$ reaction of ref. \cite{5} one has information on $\Gamma_s$ and $\Gamma_d$, plus another ingredient: the relative sign of $A_s$ and $A_d$, which is positive. With this information we obtain $A_s$ and $A_d$ up to a global sign (a sign relative to the $\gamma N \rightarrow N^*$ amplitudes).
This sign is the first novel thing that the $\gamma p \rightarrow
\pi^+ \pi^- p $ reaction provides. Indeed we can see in Fig. 2 the
results with the two signs
and we observe that while one of the signs is in good agreement
with the experiment (a), the other choice leads to unacceptable results (b).
\begin{figure}[h]
\centerline{\protect\hbox{
\psfig{file=fig2.eps,width=6.0cm,height=5.0cm,angle=-90}}}
\caption{Total cross section for $\gamma p\rightarrow \pi^+ \pi^- p $ reaction
for different global sign}
\label{fig2}
\end{figure}
The reason for such different results with the two signs is that the
$N^* (1520)$ mechanism of Fig. 1b interferes with the dominant one of Fig.
1a. The two amplitudes (by taking the s-wave part of the $N'^*
\rightarrow \Delta \pi$ decay) have the same momentum and spin structure,
and the $N^* (1520)$ piece can be accounted for by making
a simple substitution in the $N \Delta \pi \gamma$ Kroll Ruderman piece:
\begin{equation}
e \frac{f^*}{\mu} \rightarrow - (g_\gamma - g_\sigma)
(\tilde{f} + \frac{1}{3} \tilde{g} \frac{\vec{q}\,^2}{\mu^2}\;)
D_{N'^*} (s) \ ,
\end{equation}
\noindent
where $D_{N'^*}$ is the $N^* (1520)$ propagator. We can see that with the
value $g_\gamma - g_\sigma = 0.157 >0$ and $\tilde{f} + \frac{1}{3}
\tilde{g} \frac{\vec{q}\,^2}{\mu^2} > 0$ one gets a constructive
interference before the $N'^*$ pole and a destructive one after it.
This is what can be observed in Fig. 2.
I shall not discuss the other channels here. Some results and comments can be found in the talk of Krusche in this Workshop \cite{6}. The $\gamma p \rightarrow \pi^0 \pi^0 p$ channel is well reproduced, and here the $N^* (1520)$ term shows up not through interference but as the main term by itself. On the other hand, there are some discrepancies in the $\gamma p \rightarrow \pi^+ \pi^0 n$ channel which we cannot explain so far.
\section{Repercussion of the $N^* (1520)$ findings on quark models.}
With the values of $\tilde{f} = 0.911$ and $\tilde{g} = - 0.552$ obtained from a fit to the $s$ and $d$ wave partial decay widths of $N^* (1520) \rightarrow \Delta \pi$, and the global sign given by the $\gamma p \rightarrow \pi^+ \pi^- p$ experiment, the amplitudes $A_s$ and $A_d$ of eq. (3) acquire a definite $q$ dependence. As mentioned, this $q$ dependence is the one providing an optimal fit to the experiment. We have checked that any other $q$ dependence of the s-wave amplitude, consistent with the value for the on-shell decay width, provides a worse agreement.
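To make these numbers concrete, eqs. (3) and (4) can be checked with a few lines of code; this is a minimal sketch assuming standard particle masses in MeV and on-shell two-body kinematics for the pion momentum, so the output is only an illustrative cross-check, not part of the fit.
\begin{verbatim}
import math

f_t, g_t = 0.911, -0.552                 # couplings quoted above
mu, m_D, m_NS = 139.57, 1232.0, 1520.0   # pi, Delta, N*(1520) masses (MeV)

# on-shell CM pion momentum for N*(1520) -> Delta pi (two-body kinematics)
q = math.sqrt((m_NS**2 - (m_D + mu)**2) *
              (m_NS**2 - (m_D - mu)**2)) / (2.0 * m_NS)

A_s = -math.sqrt(4 * math.pi) * (f_t + g_t * q**2 / (3 * mu**2))  # eq. (3)
A_d = math.sqrt(4 * math.pi) / 3 * g_t * q**2 / mu**2
Gamma = (m_D / m_NS) * q * (A_s**2 + A_d**2) / (4 * math.pi**2)   # eq. (4)
# gives q ~ 228 MeV and Gamma ~ 25 MeV, of the order of the measured
# N*(1520) -> Delta pi partial width; note A_s * A_d > 0 as required.
\end{verbatim}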
At this point it is worth mentioning the repercussion of these results on quark models. This has been shown recently \cite{8}. In this work a nonrelativistic constituent quark model using the input of Badhuri's
model \cite{9}, adapted by Silvestre-Brac to the baryonic sector
\cite{10}, is employed, and the decay amplitudes $B \rightarrow B' \pi$
are evaluated. For this purpose one starts with a coupling of pions
to quarks
\begin{equation}
H_{q q \pi} = \frac{f_{qq \pi}}{\mu} \bar{\psi}_q \gamma^\mu \gamma_5
\vec{\tau} \psi_q \partial_\mu \vec{\phi}
\end{equation}
\noindent
and makes a nonrelativistic expansion keeping recoil terms
\begin{equation}
H_{q q \pi} = f_{q q \pi} [ \vec{\sigma} \vec{q} e^{- i \vec{q} \vec{r}}
- \frac{\omega_\pi}{2 m_q} \vec{\sigma} (\vec{p} e^{-i \vec{q} \vec{r}}
+ e^{- i \vec{q} \vec{r}} \vec{p} ) ] .
\end{equation}
Now, when evaluating the $N^*, \Delta$ transition matrix elements, since one has a radial excitation in the $N^*$ state, one needs to expand the exponential in the first term of eq. (7) (direct term) up to order $\vec{q}$. On the other hand, the second term of eq. (7) (recoil term) already gives a contribution when only the unity is kept in the expansion of the exponential. Hence, we find
\begin{equation}
\hbox{DIR} \; \propto \; \vec{q}\,^2 \quad ; \quad \hbox{REC} \; \propto
\; 1
\end{equation}
A direct evaluation of the $s$ and $d$-wave amplitudes
for the $N'^* \rightarrow \Delta \pi$ decay gives
\begin{equation}
\frac{A_d}{A_s} = \frac{\hbox{DIR}}{ 2 \hbox{REC} - \hbox{DIR}} \ ,
\end{equation}
\noindent
which implies
\begin{equation}
\begin{array}{rl}
A_d \, \propto & \hbox{DIR} \, \propto \; \vec{q}\,^2\\[2ex]
A_s + A_d \, \propto & \hbox{REC} \; \propto \; 1
\end{array}
\end{equation}
Hence, the nonrelativistic constituent quark model keeping recoil terms makes very clear predictions on the $q$ dependence of the amplitudes. Now, by looking at the $q$ dependence demanded by the $\gamma p \rightarrow \pi^+ \pi^- p$ reaction, expressed in eq. (3), we obtain
\begin{equation}
A_d = \frac{\sqrt{4 \pi}}{3} \tilde{g}\, \frac{\vec{q}\,^2}{\mu^2} \;
; \, A_s + A_d = - \sqrt{4 \pi} \, \tilde{f} \ ,
\end{equation}
\noindent
which is exactly the $\vec{q}$ dependence provided by the quark
model with recoil terms, eq. (10).
The global sign of these amplitudes preferred by the $\gamma p \rightarrow \pi^+ \pi^- p$ experimental data is also the one provided by the quark model. This is another accomplishment of these quark models, but one should recall that not all variants of nonrelativistic or relativized quark models will satisfy these new constraints. This is important to note since problems still remain when one comes to the absolute values of these amplitudes \cite{8}. Extra work is needed to explain these discrepancies, but it is important that these improvements are made respecting the new constraints found thanks to the $\gamma p \rightarrow \pi^+ \pi^- p$ reaction.
Actually, a treatment similar to the present one but making an expansion in
terms of $\omega_\pi/ E_q$ instead of $\omega_\pi / m_q$ seems
to lead to very much improved results, while keeping the consistency
with the findings discussed in this section \cite{11}.
\section{Meson exchange current and coherent $2 \pi$ photoproduction}
Assume the $\gamma N \rightarrow \pi \pi N$ reaction occurs inside a nucleus and one of the pions, say the $\pi^-$, is produced off shell and absorbed by a second nucleon. One then obtains meson exchange current mechanisms which contribute to the $(\gamma, \pi^+)$ reaction in nuclei and which would be represented by diagrams like those in Fig. 1 with the $\pi$ line attached to a nucleon line. This
mechanism has already been explored in \cite{b1} where it was found
to contribute significantly to the $\gamma \; ^3He \rightarrow t \; \pi^+$
reaction at large momentum transfer.
In addition, the coherent $2 \pi$ photoproduction process in nuclei
has been studied in \cite{b2} and has shown very interesting features
tied to the isospin structure of the amplitudes. A photon coupling to
a nucleon can have an isoscalar and isovector component. Assume we
have the coherent reaction occurring in isospin $I = 0$ nuclei
\begin{equation}
\begin{array}{ll}
\gamma + A \rightarrow & \pi^+ \pi^- \quad + A_{g.s.}\\
& \pi^0 \pi^0 \quad \; + A_{g.s.}
\end{array}
\end{equation}
\noindent
and let us take the isoscalar part of the amplitude. This will force
the $\pi^+ \pi^- (\pi^0 \pi^0)$ system to have $I = 0$ and, because
of symmetry, even angular momentum $L = 0, 2 \ldots$ The isovector part
will force the $\pi^+ \pi^- (\pi^0 \pi^0)$ system into $I = 1$. This
is forbidden for the $\pi^0 \pi^0$ system, so only the
$\pi^+ \pi^-$ can go with $I = 1$, which forces $L = 1, 3 \ldots $ The
dynamics of the elementary reaction is such that the $\gamma
N \rightarrow \pi^+ \pi^- N$ reaction is dominated by the diagram of Fig. 1a,
where the photon behaves as an isovector, while this mechanism is forbidden for
$\pi^0 \pi^0$ production. Obvious consequences of that are that
the $\pi^+ \pi^-$ system is largely suppressed when the pions travel
together ($L = 0$ and hence $I = 0$).
Similarly the $\pi^0 \pi^0$ system is only produced in $I = 0$ and
hence the pions prefer to travel together. On the other hand the
strength of the isoscalar part of the $\gamma N \rightarrow \pi \pi N$
amplitude is much smaller than the isovector part in the model of \cite{3}, and the consequence is that the maximum of the $\pi^0 \pi^0$ cross section is about three orders of magnitude smaller than the maximum of the $\pi^+ \pi^-$ one. These are very strong tests of the model, which should encourage experimentalists to perform such measurements.
\section{Two pion electroproduction.}
The model of ref. \cite{2} can be extended to virtual photons coming from the
$(e, e')$ vertex. These reactions are presently under experimental
investigation at TJNAF \cite{d1,d2}. We have studied the $2 \pi$
production processes where there is a $\Delta \pi$ in the final state, i.e., $e p \rightarrow e' \Delta^{++} \pi^-$ and
$e p \rightarrow e' \Delta^0 \pi^+$ (with $\Delta^0 \rightarrow \pi^- p$).
Only 8 diagrams of the model of \cite{3} contain a $\Delta$ in the final
state, and we depict these diagrams in Fig. 3.
\begin{figure}[h]
\centerline{\protect\hbox{
\psfig{file=fig3.eps,width=10.0cm,height=4.5cm}}}
\caption{Feynman diagrams used in the model for $\gamma_v p\rightarrow
\pi\Delta$}
\label{fig3}
\end{figure}
The evaluation of the amplitudes for these reactions requires the
extension of the model of \cite{3} to account for the zeroth component
of the electromagnetic current and the implementation of adequate form
factors. This task has been undertaken in \cite{d3}. In Fig. 4 we show
the results obtained for the cross section of the virtual photons,
defined in the standard way
\begin{equation}
\frac{d \sigma}{d \Omega'_e d E'_e} = \Gamma (\sigma_{\gamma_v}^T +
\epsilon \sigma^L_{\gamma_v}) = \Gamma \sigma_{\gamma_v} \ ,
\end{equation}
\noindent
with $\Gamma$ and $\epsilon$ the flux factor and the polarization
of the virtual photon \cite{d4}. The results are shown for the $e p \rightarrow e' \Delta^{++} \pi^-$ reaction as a function of the $\gamma_v p$ center of mass energy, averaged over the range $0.3 < Q^2 < 1.4 \, GeV^2$ of $Q^2 = - q^2$ in order to compare with the data.
As one can see, the agreement is fair but more precise data are
expected to come soon which will impose stronger constraints on the
theory.
\begin{figure}[h]
\centerline{\protect\hbox{
\psfig{file=fig4.eps,height=5cm,width=11.5cm,angle=-90}}}
\caption{Cross sections for $\gamma_v p\rightarrow\Delta^{++} \pi^-$ as a
function of the $\gamma_v p$ center of mass energy}
\label{fig4}
\end{figure}
\section{Application of isoscalar $N^*$ excitation in the $NN \rightarrow
NN \pi \pi $ reaction.}
We have recently developed a model for the $NN \rightarrow NN \pi \pi$
reaction which contains terms coming from chiral Lagrangians, $\Delta$
excitation and Roper excitation \cite{e1}. The model is depicted
in Fig. 5. The excitation on the second nucleon and antisymmetry are
incorporated in addition. Summarizing the results, we find that the $N^*$ excitation terms (4--7), where the $N^*$ decays into $N (\pi \pi)^{I= 0}_{s-wave}$, are largely dominant close to threshold
in the channels where
the two pions can be in $I = 0$ in the final state, like the
$p p \rightarrow p p \pi^+ \pi^-$ reaction. On the other hand in the
$p p \rightarrow p n \pi^+ \pi^0$ reaction the $\Delta \Delta$ excitation
terms are the most important. The comparison of these two channels allows
us to appreciate the role played by $N^*$ excitation in some of the isospin
channels which, as we can see, is essential to understand the
experiment at low energies. In Fig. 6, we show the cross sections for
the $ p p \rightarrow p p \pi^+ \pi^-$ and $p p \rightarrow p n \pi^+
\pi^0$ reactions. The calculations are done with plane
waves, but the results are increased at lower energies when final state
interaction is considered, and the agreement with experiment is improved.
In the figures, the total cross sections are given by the solid lines,
corresponding to two different options for the $N (\pi \pi)^{I= 0}_{s-wave}$
decay \cite{e1}.
\begin{figure}[h]
\centerline{\psfig{file=fig5.eps,height=4.5cm,width=12.cm}}
\caption{Complete set of Feynman diagrams of our model.}
\label{fig5}
\end{figure}
\begin{figure}[h]
\begin{minipage}{.47\linewidth}
\centerline{\psfig{file=fig6.eps,height=5.cm,width=5.cm}}
\end{minipage}
\hfill
\begin{minipage}{.47\linewidth}
\centerline{\psfig{file=fig7.eps,height=5.cm,width=5.cm}}
\end{minipage}
\caption{Total cross sections for two of the channels, as a function of the
incoming proton kinetic energy in lab. frame }
\label{fig6}
\end{figure}
One of the important ingredients in this reaction is that the largest
strength for $N^*$ excitation comes from isoscalar exchange. The
strength of this transition was obtained from a theoretical analysis
\cite{e2} of the $(\alpha, \alpha')$ reaction on proton targets
exciting the Roper resonance \cite{e3}.
In conclusion, we have seen several reactions involving two pions in the final state. In all of them the $N^*$ resonances play an important role, and we have clarified the links between some resonance properties and observables in $2 \pi$ production reactions. Further investigations, both theoretical and experimental, extending the work to higher energies, also look like fertile ground for extending our knowledge of $N^*$ resonance properties.
\section{Introduction}
The Gaussian graphical model is commonly used for describing the joint distribution of multiple random variables (Lauritzen, 1996).
The graph structure induced by this model not only delineates the conditional dependence between these variables, but is also critical for improving estimation precision. In estimating regression parameters in generalized estimating equations (GEE), for example, Zhou $\&$ Song (2016) found that incorporating a suitable dependence structure can improve estimation efficiency, sometimes substantially. In another example, Li $\&$ Li (2008) showed that a correctly specified dependence structure is also useful for improving estimation efficiency in regularized estimation and variable selection in linear regression models. On the other hand, however, a misspecified dependence structure affects efficiency negatively (Zhou $\&$ Song, 2016).
Therefore, specifying an appropriate graph is critical for efficiently estimating a parameter of interest.
In practice, the underlying graph structure of a given data set is rarely known and is often estimated or assumed a priori. Regardless of how this graph becomes available, a natural question is whether it is adequate for describing the data from a statistical perspective. This paper aims to develop a novel goodness-of-fit test to address this issue, in the context of high-dimensional data in which the dimensionality can exceed the sample size.
There is abundant literature focusing on estimating the underlying graph in the Gaussian graphical model. For fixed-dimensional data, Edwards (2000) studied this problem by using a model selection approach that employs stepwise likelihood ratio tests, while Drton $\&$ Perlman (2004) developed a multiple testing procedure using partial correlations. For high-dimensional data, a popular approach is to employ a penalized likelihood approach, with a penalty explicitly formulated to encourage the sparsity of the resulting precision matrix that induces the underlying dependence structure. On this, we refer to Yuan $\&$ Lin (2007), Friedman et al. (2007), Cai et al. (2011), Liu $\&$ Wang (2017), and Eftekhari et al. (2021), among many others. On testing the graphical structure itself, there exist methods for testing elements of the graphical structure. For example, Liu (2013) proposed a bias-corrected estimator of the precision matrix and applied it to test individual components of the precision matrix.
Similar tests for individual components in a
precision matrix are also discussed in Jankov\'a and Geer (2017), Ren et al. (2015) and Ning $\&$ Liu (2017). There are also some existing global tests for precision matrices, for example, Xia et al. (2015) and Cheng et al. (2017). However, there is no existing general specification test for precision matrices, and the existing methods cannot be directly applied to the global specification test considered in this paper for testing the entire graph structure.
Our work is also related to a growing body of literature on testing specific covariance structures for high-dimensional data. For example, Chen et al. (2010) considered testing sphericity and identity structures, Qiu $\&$ Chen (2012) and Wang et al. (2022) developed tests for bandedness structures, Zhong et al. (2017) developed tests for some parameterized covariance structures such as autoregressive and moving average structures, Zheng et al. (2019) considered tests on linear structures, and Gou $\&$ Tang (2021) considered specification tests for covariance matrices with nuisance parameters in regression models. These tests are not applicable to testing graph structures.
Moreover, compared with the above tests, which usually involve the estimation of a finite number of nuisance parameters, one significant challenge associated with testing the graph structure in this paper is the need to estimate a high-dimensional nuisance parameter.
The main novelty of this paper lies in a new goodness-of-fit test that explores the difference between a graph structure specified under the null and the true underlying graph structure, based on an appropriate maximum norm distance. We overcome the challenge of estimating the high-dimensional nuisance parameter by employing a simple and direct plug-in method, thus bypassing the need to choose tuning parameters that many regularization methods in the literature require for estimating a graph. Despite its simplicity, our test has a limitation in that it is not consistent whenever the graph under the null encompasses but is not equal to the true graph. To tackle this, we carefully develop a novel consistency-empowered test statistic by amplifying the noise, in the sense that small stochastic noises resulting from estimating zero entries in the graph are enlarged. This modified test statistic is shown to be consistent for testing all types of graphs.
\setcounter{equation}{0}
\section{Basic Setting and Our Proposed Test Statistic}
Let $\mathbf{X}_1,\ldots,\mathbf{X}_n$ be independent and identically distributed realizations of a $p$-dimensional random vector $\mathbf{X}$ with mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}^*=(\sigma_{ij}^*)$. The corresponding precision matrix is denoted as $\boldsymbol{\Omega}^*=(\omega_{ij}^*)=\boldsymbol{\Sigma}^{*-1}$. It is known that $\boldsymbol{\Omega}^*$ naturally induces a graph denoted as $\mathcal{G}^*=(\mathcal{V}, \mathcal{E}^*)$, where $\mathcal{V}=\{1,\ldots, p\}$ is the set of nodes and $\mathcal{E}^* =\{(i,j): \omega_{ij}^*\not= 0 \}
\subset \mathcal{V} \times \mathcal{V}$ is the set of edges consisting of node pairs whose corresponding entries in $\boldsymbol{\Omega}^*$ are not zero. The absence of a pair of nodes in $\mathcal{E}^*$ indicates that the corresponding variables are conditionally independent given all the others (Lauritzen, 1996).
While the graph $\mathcal{E}^*$ is rarely known in practice, it can be estimated via the penalized likelihood methods discussed in the Introduction or assumed a priori. For the latter, when the dimension $p$ is high, a convenient assumption popular in the literature is that $\boldsymbol{\Omega}^*$ admits some simple structure such as a banded or a block-diagonal structure. We will denote the corresponding graph under this assumption by $\mathcal{E}_0$, and the main aim of this paper is to ascertain whether the assumption is valid. That is, we
consider the following hypothesis
$$H_0: \mathcal{E}^*=\mathcal{E}_0 \quad\mbox{vs.}\quad H_1: \mathcal{E}^*\neq\mathcal{E}_0,$$
where in our high-dimensional setup, $\mathcal{E}_0$ usually has a cardinality much smaller than $p^2$. Our hypothesis corresponds to a hypothesis for testing the precision matrix $\boldsymbol{\Omega}^*$ with its non-zero elements completely unspecified. The number of unknown parameters under the null is allowed to grow with $p$, which is drastically different from existing tests in the literature for testing a covariance matrix $\boldsymbol{\Sigma}^*$ with its inverse under the null often specified up to a finite number of unknown parameters (e.g., Zhong et al. (2017); Zheng et al. (2019)).
Let $\boldsymbol{\Omega}_0=(w_{ij,0})$ be a $p\times p$ precision matrix of $\mathbf{X}$ compatible with $\mathcal{E}_0$ under the null, in the sense that if $(i, j) \not\in \mathcal{E}_0$, then $w_{ij, 0}=0$. The exact definition of $\boldsymbol{\Omega}_0$ is not important for the discussion below. Our main idea is that if $\mathcal{E}_0$ is correctly specified, $\boldsymbol{\Omega}_0$ will be equal to $\boldsymbol{\Omega}^*$; that is,
\[ \boldsymbol{\Sigma}^* \boldsymbol{\Omega}_0- \mathbf{I}_p =\boldsymbol{0}_p,
\]
where $\mathbf{I}_p$ is the $p$-dimensional identity matrix and $\boldsymbol{0}_p$ is a $(p\times p)$-dimensional matrix with entries all being zero. That is, if $\mathcal{E}_0=\mathcal{E}^*$, we can write the above equation elementwise as
\begin{equation}\label{eq:1}
\max_{1\leq i, j\leq p}{\vert } \mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} }\boldsymbol{\Sigma}^*\mathbf{w}_{i,0} - \mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} }\mathbf{e}_i {\vert } =0,
\end{equation}
where $\boldsymbol{\Omega}_0=(\mathbf{w}_{1,0},\ldots, \mathbf{w}_{p,0})$ by denoting $\mathbf{w}_{i,0}$ as the $i$-th column of $\boldsymbol{\Omega}_0$ and $\mathbf{I}_p=(\mathbf{e}_1,\ldots, \mathbf{e}_p)$ with $\mathbf{e}_i$ being the $i$-th basis vector. On the other hand, if $\mathcal{E}_0$ is not correctly specified in the sense that $\mathcal{E}_0\not=\mathcal{E}^*$, the maximum element of $ \boldsymbol{\Sigma}^* \boldsymbol{\Omega}_0- \mathbf{I}_p$ may be different from zero.
Thus, assessing whether $H_0$ is true is equivalent to checking \eqref{eq:1}. If $\boldsymbol{\Omega}_0$, and hence $\mathbf{w}_{i,0}$, were known in advance,
an estimator of $(\mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} }\boldsymbol{\Sigma}^*\mathbf{w}_{i,0} - \mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} }\mathbf{e}_i)^2$ may be obtained by replacing $\boldsymbol{\Sigma}^*$ by the sample covariance matrix
$\boldsymbol{S}_n =\sum_{i = 1}^n (\mathbf{X}_i-\bar\mathbf{X}) (\mathbf{X}_i-\bar{\mathbf{X}})^{ \mathrm{\scriptscriptstyle T} }/(n-1)$ with $\bar{\mathbf{X}}=\sum_{i=1}^n \mathbf{X}_i/n$. Then, we may use the following statistic $D_n$ to distinguish $H_0$ and $H_1$,
$$D_n= \max_{1\leq i, j\leq p} D_{ij}^2, \quad D_{ij}^2:= ( \mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} } \boldsymbol{S}_n \mathbf{w}_{i,0}-\mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} } \mathbf{e}_i)^2 /\theta_{ij, 0},$$
where $\theta_{ij, 0} = \mbox{var}(\mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} } \boldsymbol{S}_n \mathbf{w}_{i,0} -\mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} }\mathbf{e}_i)$, whose leading order term is given in the following Lemma.
\begin{lemma*}\label{covlm1}
For all $1 \leq i, j \leq p$, we have (a) $\mbox{var}({\mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} }\boldsymbol{S}_n \mathbf{w}_{i,0}-\mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} }\mathbf{e}_i})=\omega_{ii}^*\sigma_{jj}^*/n$, for $1 \leq i \neq j \leq p$
and (b) $\mbox{var}({\mathbf{e}_i^{ \mathrm{\scriptscriptstyle T} }\boldsymbol{S}_n \mathbf{w}_{i,0}-\mathbf{e}_i^{ \mathrm{\scriptscriptstyle T} }\mathbf{e}_i})=(\omega_{ii}^*\sigma_{ii}^* + 1)/n,$ for $1 \leq i \leq p$.
\end{lemma*}
However, $D_{n}$ is not directly applicable because several quantities involved are unknown. Noting that
under the null hypothesis, $\boldsymbol{\Omega}_0$ is a sparse matrix, we denote the number of nonzero entries in the $j$th column of $\boldsymbol{\Omega}_0$ as $s_j$, where $\max_{1\le j\le p} s_j=o(\surd{n})$ is a typical assumption made in estimating high-dimensional precision matrices (Cai et al., 2011; Liu $\&$ Wang, 2017). Let $\mathbf{w}_{i1, 0}$ and $\mathbf{w}_{i0, 0}$ represent the sub-vectors of $\mathbf{w}_{i, 0}$ with only nonzero and only zero elements respectively, and let $\mathbf{B}_{i, 0}$ be a $p\times s_i$ matrix with elements being either 0 or 1 such that $\mathbf{B}_{i, 0} \mathbf{w}_{i1, 0} = \mathbf{w}_{i, 0}$.
The precision matrix $\boldsymbol{\Omega}_0$ can be estimated in the following column-by-column fashion.
Denote $\boldsymbol{\Sigma}_0=\boldsymbol{\Omega}_0^{-1}$. By definition, $\boldsymbol{\Sigma}_0\mathbf{w}_{i,0}=\boldsymbol{\Sigma}_0\mathbf{B}_{i, 0}\mathbf{w}_{i1,0}=\mathbf{e}_i$
and then $\mathbf{B}_{i, 0}^{ \mathrm{\scriptscriptstyle T} }\boldsymbol{\Sigma}_0\mathbf{B}_{i, 0}\mathbf{w}_{i1,0}=\mathbf{B}_{i, 0}^{ \mathrm{\scriptscriptstyle T} }\mathbf{e}_i$. Thus, $\mathbf{w}_{i1,0}=(\mathbf{B}_{i, 0}^{ \mathrm{\scriptscriptstyle T} }\boldsymbol{\Sigma}_0\mathbf{B}_{i, 0})^{-1}\mathbf{B}_{i, 0}^{ \mathrm{\scriptscriptstyle T} }\mathbf{e}_i$. Under $H_0$,
$\mathbf{B}_{i, 0}^{ \mathrm{\scriptscriptstyle T} }\mathbf{X}_1,\ldots,\mathbf{B}_{i, 0}^{ \mathrm{\scriptscriptstyle T} }\mathbf{X}_n$ are $s_i$-dimensional independent and identically distributed random vectors with covariance $\mathbf{B}_{i, 0}^{ \mathrm{\scriptscriptstyle T} }\boldsymbol{\Sigma}_0\mathbf{B}_{i, 0}$.
Because $s_i$ are of smaller order of $\surd{n}$, $\mathbf{B}_{i, 0}^{ \mathrm{\scriptscriptstyle T} }\boldsymbol{\Sigma}_0\mathbf{B}_{i, 0}$ can be consistently estimated by the sample covariance of $\mathbf{B}_{i, 0}^{ \mathrm{\scriptscriptstyle T} }\mathbf{X}_1,\cdots,\mathbf{B}_{i, 0}^{ \mathrm{\scriptscriptstyle T} }\mathbf{X}_n$ given by
$\mathbf{B}_{i, 0}^{ \mathrm{\scriptscriptstyle T} } \boldsymbol{S}_n \mathbf{B}_{i, 0}$ under $H_0$. Then, $\hat{\mathbf{w}}_{i1,0}=(\mathbf{B}_{i, 0}^{ \mathrm{\scriptscriptstyle T} } \boldsymbol{S}_n \mathbf{B}_{i, 0} )^{-1} \mathbf{B}_{i, 0}^{ \mathrm{\scriptscriptstyle T} } \mathbf{e}_i$
and $\hat{\mathbf{w}}_{i,0} = \mathbf{B}_{i, 0} \hat{\mathbf{w}}_{i1,0}$ is a consistent estimator of $\mathbf{w}_{i,0}$ under $H_0$. By assembling $\hat{\mathbf{w}}_{i,0}$ as $\hat{\boldsymbol{\Omega}}_0$, we have a consistent estimator of ${\boldsymbol{\Omega}}_0$.
The technical detail of the preceding argument can be found in Le and Zhong (2021).
Based on Lemma \ref{covlm1}, we can then estimate $\theta_{ij, 0}$ as
$\hat{\theta}_{ij,0} = (\hat{\omega}_{ii,0}s_{jj}+\delta_{ij})/n$, where $\hat{\omega}_{ii,0}$ is the
$(i,i)$th element of $\hat{\boldsymbol{\Omega}}_0$, $s_{jj}$ is the $(j,j)$th element of the matrix $\boldsymbol{S}_n$, and, in accordance with the Lemma, $\delta_{ij}=1$ if $i=j$ and $\delta_{ij}=0$ otherwise.
Replacing the unknown parameters by their estimators, we construct a
test statistic $\hat{D}_n$ using the plug-in estimators
$\hat{\mathbf{w}}_{i,0}$,
$$\hat{D}_n =\max_{1\leq i, j\leq p} {\hat{D}_{ij}}^2,$$
with $\hat{D}_{ij}^2 =(\mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} } \boldsymbol{S}_n \hat{\mathbf{w}}_{i,0} - \mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} } \mathbf{e}_i)^2/\hat{\theta}_{ij,0}$.
The test statistic $\hat{D}_n$ is free of tuning parameters and extremely easy to calculate for practical use. These advantages stand in contrast to penalized likelihood methods, for which the choice of tuning parameters in estimating ${\boldsymbol{\Omega}}_0$ is critical to the performance of the resulting estimator.
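For concreteness, the whole construction amounts to a column-wise linear solve followed by a standardised residual check. The following minimal sketch in Python with NumPy is an illustration of ours, not code from the paper; it assumes the hypothesised graph is supplied as a boolean adjacency matrix \texttt{E0} whose diagonal entries are \texttt{True}:
\begin{verbatim}
import numpy as np

def D_hat(X, E0):
    # X: n x p data matrix; E0: boolean p x p adjacency of E_0,
    # with E0[i, i] = True for all i.
    n, p = X.shape
    S = np.cov(X, rowvar=False)          # S_n (divisor n-1; immaterial here)
    W = np.zeros((p, p))                 # column i holds w_hat_{i,0}
    for i in range(p):
        supp = np.flatnonzero(E0[:, i])  # support selected by B_{i,0}
        rhs = (supp == i).astype(float)  # B_{i,0}^T e_i
        W[supp, i] = np.linalg.solve(S[np.ix_(supp, supp)], rhs)
    R = S @ W - np.eye(p)    # R[j, i] = e_j^T S_n w_hat_{i,0} - e_j^T e_i
    theta = (np.outer(np.diag(W), np.diag(S)) + np.eye(p)) / n
    return np.max(R.T ** 2 / theta)      # max over (i, j) of D_ij^2
\end{verbatim}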
Despite the above advantages, $\hat{D}_n$ has little power to reject $\mathcal{E}_0$ if $\mathcal{E}^* \subsetneq \mathcal{E}_0 $; that is, when $\mathcal{E}^*$ is a proper subset of $\mathcal{E}_0$.
An example is given in Figure \ref{networkprespecified}, where $\hat{D}_n$ has no power to reject the $\mathcal{E}_0$ in panel (b). This is simply because, under the null hypothesis that $\mathcal{E}=\mathcal{E}_0$, any reasonable estimator $\hat{\boldsymbol{\Omega}}_0$ of $\boldsymbol{\Omega}^*$, including the one discussed above, will asymptotically converge to $\boldsymbol{\Omega}^*$, making $\boldsymbol{\Sigma}^* \hat{\boldsymbol{\Omega}}_0- \mathbf{I}_p$ stochastically very small.
\begin{figure}[h]
\centering
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{Truenetwork.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{Includednetwork.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{Nestednetwork.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{Wrongnetwork.pdf}
\caption{}
\end{subfigure}
\caption{Different dependence structures: (a) the true graph $\mathcal{E}^*$; (b) $\mathcal{E}_0$ that satisfies $\mathcal{E}_0 \supsetneq \mathcal{E}^*$; (c) $\mathcal{E}_0$ that satisfies $\mathcal{E}_0 \subsetneq \mathcal{E}^*$; (d) $\mathcal{E}_0$ that is not nested within or outside $\mathcal{E}^*$. }
\label{networkprespecified}
\end{figure}
\subsection{\color{black} A novel consistency-empowered test statistic}
When $\mathcal{E}_0 \supsetneq \mathcal{E}^*$, as illustrated in panel (b) of Figure \ref{networkprespecified}, the compatible estimator $\hat{\boldsymbol{\Omega}}_0$ will, loosely speaking, be close to $\boldsymbol{\Omega}^*$. Thus, if $(i, j)\in \mathcal{E}_0$ but $(i, j)\not\in \mathcal{E}^*$, $\hat\omega_{ij,0}$ will be close to zero. For the test to have power, we need to offset the effect of those small estimates. Our idea is to augment those small estimates with a constant that is just large enough for us to reject the null. In a certain sense, therefore, we amplify those small noises as a means to empower the consistency of a new test statistic.
Of course, how close to zero is close enough for a small noise should be gauged against its standard error, which motivates the development of the following {\color{black} consistency-empowered} test statistic.
Let $\hat{\omega}_{i1,0}^{(j)}$ be the $j$th component of $\hat{\mathbf{w}}_{i1,0}$ with the associated standard error $\sigma_{i1,0}^{(j)}$, where
$\hat{\mathbf{w}}_{i1,0}$ is defined in the previous section.
Let $\hat{\sigma}_{i1,0}^{(j)}$ be a consistent estimator of $\sigma_{i1,0}^{(j)}$ which will be defined shortly. Define
$\tilde{\mathbf{w}}_{i1,0}=(\tilde{\omega}_{i1,0}^{(1)},\ldots, \tilde{\omega}_{i1,0}^{(s_i)})^{ \mathrm{\scriptscriptstyle T} }$ where
$$
\tilde{\omega}_{i1,0}^{(j)}=\hat{\omega}_{i1,0}^{(j)}+\Delta_{i1}^{(j)},
$$
with
\[\Delta_{i1}^{(j)}=C_nI\{|\hat{\omega}_{i1,0}^{(j)}|/\hat{\sigma}_{i1,0}^{(j)}\leq\delta_n\}.\]
Here $C_n\neq 0$ and $\delta_n$ are tuning parameters which will be discussed in the next section. Clearly, what this procedure does is to add a constant to those elements of $\hat{\mathbf{w}}_{i1,0}$ that are stochastically small. Or put differently, it simply amplifies the noise, as opposed to the usual notion of filtering out noises for better estimation accuracy.
Recall the definition of $\mathbf{B}_{i, 0}$ in the last section.
Let $\tilde{\mathbf{w}}_{i,0}=\mathbf{B}_{i,0}\tilde{\mathbf{w}}_{i1,0}$ and $\Delta_{i}=\mathbf{B}_{i,0}\Delta_{i1}$ where $\Delta_{i1}=(\Delta_{i1}^{(1)},\ldots, \Delta_{i1}^{(s_i)})^{ \mathrm{\scriptscriptstyle T} }$.
Our proposed consistency-empowered test statistic is then
$$\tilde{D}_n = \max_{1\leq i, j\leq p}{(\mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} } \boldsymbol{S}_n \tilde{\mathbf{w}}_{i,0} - \mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} }\mathbf{e}_i)^2}/{\hat{\theta}_{ij,0}}=\max_{1\leq i, j\leq p} {\tilde{D}_{ij}}^2,$$
where $\tilde{D}_{ij}^2 =(\mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} } \boldsymbol{S}_n \tilde{\mathbf{w}}_{i,0} - \mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} } \mathbf{e}_i)^2/\hat{\theta}_{ij,0}$. {\color{black} Interestingly, the form of the consistency-empowered estimator $\tilde{\omega}_{i1,0}^{(j)}$ appears on the surface to be similar to the
power-enhanced test statistic proposed by Fan et al. (2015). In essence, however, they are fundamentally different:
our modified estimator ensures the consistency of the proposed test for all types of null hypotheses,
while Fan et al. (2015) focuses on improving power against certain types of alternatives. Having said this, we point out that the test statistic $\hat{D}_n$ without consistency empowerment is powerless for testing those null hypotheses in which the true graph is nested within the graph under the null, regardless of the sample size.}
We now discuss a consistent estimator of $\sigma_{i1,0}^{(j)}$ required
in $\Delta_{i1}^{(j)}$.
Any non-zero element $\hat{\omega}_{i1,0}^{(j)}$ of $\hat{\mathbf{w}}_{i1,0}$ corresponds to some $\hat{\omega}_{ik,0}$ ($1 \le k \le p$), the $(i,k)$th component of $\hat{\boldsymbol{\Omega}}_0$, such that $\hat{\omega}_{i1,0}^{(j)} = \hat{\omega}_{ik,0}$. Le $\&$ Zhong (2021) established that $\hat{\omega}_{i1,0}^{(j)}$ is asymptotically normal, in the sense that
\begin{equation}
\label{asynormOmg}
\surd{n} (\hat{\omega}_{i1,0}^{(j)} -\omega_{i1,0}^{(j)}) = \surd{n} (\hat{\omega}_{ik,0} -\omega_{ik,0}) \to N(0,h_{ik})
\end{equation}
in distribution,
where
$h_{ik}= \omega_{ii}^* \omega_{kk}^* +\omega_{ik}^{*2}$.
Thus, a consistent estimator of $\sigma_{i1,0}^{(j)}$ is simply $\hat{\sigma}_{i1,0}^{(j)} = \surd{(\hat{\omega}_{ii,0} \hat{\omega}_{kk,0} +
\hat{\omega}_{ik,0}^2)}/\surd{n}$.
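For illustration, the consistency-empowerment step can be grafted onto the earlier sketch as follows (again a sketch of ours, not the authors' code, using the standard error estimate just derived):
\begin{verbatim}
import numpy as np

def empower(W, n, E0, C_n, delta_n):
    # W: plug-in estimate whose column i is w_hat_{i,0}; returns
    # the augmented estimate whose column i is w_tilde_{i,0}.
    Wt = W.copy()
    for i in range(W.shape[0]):
        for k in np.flatnonzero(E0[:, i]):
            se = np.sqrt((W[i, i] * W[k, k] + W[k, i] ** 2) / n)
            if abs(W[k, i]) / se <= delta_n:   # stochastically small
                Wt[k, i] += C_n                # augment by C_n
    return Wt
\end{verbatim}
Replacing \texttt{W} by \texttt{empower(W, n, E0, C\_n, delta\_n)} in the computation of the maximum statistic then yields $\tilde{D}_n$.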
\section{Asymptotic Distributions}
In this section, we study the asymptotic distributions of $\hat{D}_n$ and the consistency-empowered test statistic $\tilde{D}_n$, when $p\to\infty$ as $n\to\infty$. We assume the following regularity conditions.
\begin{itemize}
\item[] (C1) There exist constants $C_1, C_2 > 0$ such that $\vert \vert \boldsymbol{\Sigma}^* \vert \vert_1 \leq C_1$ and $\vert \vert \boldsymbol{\Omega}^* \vert \vert_1 \leq C_2$,
where $\vert \vert \mathbf{M} \vert \vert_1= \max_{1\leq j\leq n}\sum_{i=1}^m {\vert } M_{ij}{\vert }$ for any $m \times n$ matrix $\mathbf{M}=(M_{ij})$;
\item[] (C2) $s_0\surd{(\log p/n)} = o(1)$, where $s_0=\max_{1\le j\le p} s_j$.
\end{itemize}
These two conditions are commonly employed in the literature (see, e.g., Zhou et al. (2011); Liu $\&$ Lou (2015)).
Many commonly assumed precision matrix structures such as polynomial decay,
exponential decay, banded and factor models satisfy Condition (C1). See the
Appendix for details.
A node is called \textit{isolated} if it does not connect with any other nodes. That is, node $i$ is isolated if and only if $\omega_{ij}^*= 0$ for all $ j \neq i$. We have the following results on the asymptotic distribution of $\hat{D}_n$.
\begin{theorem}
\label{generalTheo}
Under conditions (C1)-(C2), if the graph $\mathcal{E}^*$ has $k$ isolated nodes where
$\lim_{p \rightarrow \infty} {k}/{p}
= \beta, 0 \leq \beta<1$, then under the null $H_0$,
$$ \mbox{pr}\{ \hat{D}_n - 4\log p + \log(\log p) \leq t \} \to \exp\{-\exp(-{t}/{2})/\surd{(2\gamma\pi)}\}$$
where $\gamma = (1 - {\beta^2}/{2})^{-2}$.
\end{theorem}
Interestingly, the asymptotic distribution of $\hat{D}_n$ depends on $k$, the number of isolated nodes in $\mathcal{E}^*$.
From Theorem \ref{generalTheo}, if the number of isolated nodes $k$ is of a smaller order of the number of variables $p$ as $k = o(p)$, then
$\hat{D}_n$ converges to the following Gumbel distribution
\begin{equation}
\label{TestProblem}
\mbox{pr}\{ \hat{D}_n - 4\log p + \log(\log p) \leq t \} \to \exp\{-\exp(-{t}/{2})/\surd{(2\pi)}\}.
\end{equation}
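The limiting law in Theorem \ref{generalTheo} can be inverted in closed form: setting the distribution function equal to $1-\alpha$ gives the level-$\alpha$ critical value $4\log p - \log(\log p) - 2\log\{-\surd{(2\gamma\pi)}\log(1-\alpha)\}$. The following one-liner (a sketch of ours, not from the paper) computes it:
\begin{verbatim}
import numpy as np

def critical_value(p, alpha=0.05, gamma=1.0):
    # Invert exp{-exp(-t/2)/sqrt(2*gamma*pi)} = 1 - alpha for t,
    # then add back the centering 4 log p - log(log p).
    t = -2.0 * np.log(-np.sqrt(2.0 * np.pi * gamma)
                      * np.log(1.0 - alpha))
    return 4.0 * np.log(p) - np.log(np.log(p)) + t
\end{verbatim}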
Example 1 and Example 2 below further illustrate Theorem \ref{generalTheo}.
\begin{example}
Assume that $\mathcal{E}^*$ has a Toeplitz structure
$\mathcal{E}^* = \{(i,j): {\vert } i - j {\vert } \leq s_0 \}$.
The number of isolated nodes in $\mathcal{E}^*$ is 0 and hence $\gamma=1$. Then under the null $H_0:\mathcal{E}_0=\mathcal{E}^*$, we have
$\mbox{pr}\{ \hat{D}_n - 4\log p + \log(\log p) \leq t\} \rightarrow \exp\{-\exp({-t}/{2})/{\surd{(2\pi)}}\}.$
The limiting distribution of $\hat{D}_n$ is $\mbox{Gumbel}(-\log 2\pi, 2)$.
\end{example}
\begin{example}
Assume that $\mathcal{E}^*$ follows a factor model structure with
$\boldsymbol{\Omega}^*= \mathbf{I}_p + \boldsymbol{u}_1 \boldsymbol{u}_1^{ \mathrm{\scriptscriptstyle T} }$, where $\boldsymbol{u}_1 = (1,1,1,0,\ldots,0)^{ \mathrm{\scriptscriptstyle T} } \in \mathbb{R}^{p}$.
The limiting distribution of $\hat{D}_n$ satisfies
$\mbox{pr}\{\hat{D}_n - 4\log p + \log(\log p) \leq t \}\to \exp\{-\exp({-t}/{2})/{\surd{(8\pi)}}\}.$ In this case, the limiting distribution of $\hat{D}_n$ is $\mbox{Gumbel}(- \log 8\pi, 2)$.
\end{example}
As an application of Theorem \ref{generalTheo}, if one is interested in assessing the local graph structure associated with the $i$-th node by testing
\begin{equation}
\label{coltest}
H_0: \mathcal{E}_i=\mathcal{E}_{0i} \quad\mbox{vs.}\quad H_1: \mathcal{E}_{i}\neq \mathcal{E}_{0i}
\end{equation}
where $\mathcal{E}_{0i}:=\{(i, j): (i, j) \in \mathcal{E}^*, \forall j \}$ is the graph structure specified for the $i$-th node, we can modify our test statistic as
$$\hat{D}_{ni} = \max_{1\leq j\leq p}{( \mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} } \boldsymbol{S}_n \hat{\mathbf{w}}_{i,0}-\mathbf{e}_j^{ \mathrm{\scriptscriptstyle T} }\mathbf{e}_i)^2}/{\hat{\theta}_{ij,0}}.$$
The asymptotic distribution of the test statistic $\hat{D}_{ni}$ is summarized in the following corollary.
\begin{corollary}
For the testing problem in (\ref{coltest}), if conditions (C1)-(C2) are satisfied, under the null hypothesis $\hat{D}_{ni}$ converges to the following Gumbel distribution
$$ \mbox{pr}\{\hat{D}_{ni} - 2\log p + \log(\log p) \leq t\}\to \exp\{-\exp(-{t}/{2})/\surd{\pi}\}. $$
\end{corollary}
We are ready to show the asymptotic distribution of the consistency-empowered test statistic $\tilde{D}_n$ under $H_0$ and discuss the choices of tuning parameters $C_n $ and $\delta_n$ involved in its definition. Recall that
the main reason for introducing $C_n$ and $\delta_n$ is to detect network structures whose edge sets are strictly supersets of that of the true graph.
Due to this, we need to choose $C_n$ and $\delta_n$ such that $\tilde{D}_n$ and $\hat{D}_n$ have the same asymptotic distribution under $H_0$, while the
test based on $\tilde{D}_n$ rejects network structures $\mathcal{E}_0\supsetneq \mathcal{E}^*$ with probability one.
Under $H_0$, when $\mathcal{E}=\mathcal{E}_0$, the $\omega_{i1,0}^{(j)}$ ($j=1,\ldots, s_i$) are all non-zero. However, if
$\mathcal{E}_0\supsetneq \mathcal{E}^*$, the entries $\omega_{i1,0}^{(j)}$ ($j=1,\ldots, s_i$) are allowed to be non-zero by the specification of $\boldsymbol{\Omega}_0$, but some of them are in fact zero and will be estimated as such asymptotically. Based on the asymptotic normality in (\ref{asynormOmg}), we have
$\hat{\omega}_{i1,0}^{(j)} = \omega_{i1,0}^{(j)} + O_p(1/\surd{n})$. This result holds whether the true value of $\omega_{i1,0}^{(j)}$ is zero or non-zero.
If $\omega_{i1,0}^{(j)}\neq 0$, then $\hat{\omega}_{i1,0}^{(j)}/{\sigma}_{i1,0}^{(j)}= O_p(\surd{n})$ where ${\sigma}_{i1,0}^{(j)}$ is
defined in equation (\ref{asynormOmg}). If $\omega_{i1,0}^{(j)}=0$, then $\hat{\omega}_{i1,0}^{(j)}/{\sigma}_{i1,0}^{(j)}= O_p(1)$.
Based on these observations, we may choose $C_n=C\surd{\log(p)}$ for some $C>0$ and $\delta_n = \surd{\log(n)}$ so that $\tilde{D}_n$ and $\hat{D}_n$ have the same asymptotic distribution under $H_0$,
and $\tilde{D}_n$ rejects any network structure satisfying $\mathcal{E}_0\supsetneq \mathcal{E}^*$ with probability one. Theorem \ref{generalTheoThreshold} below formally provides the asymptotic distribution of $\tilde{D}_n$.
\begin{theorem}
\label{generalTheoThreshold}
Under conditions (C1)-(C2), if the true structure has $k$ isolated nodes where
$\lim_{p\to\infty} {k}/{p}=\beta$ for some $0 \leq \beta <1$, $\delta_n = \surd{\log(n)}$ and $C_n=C\surd{\log(p)}$ for some constant $C > 0$,
then under the null $H_0$,
$$ \mbox{pr}\{\tilde{D}_n - 4\log p + \log(\log p) \leq t \} \to \exp\{-\exp(-{t}/{2})/\surd{(2\gamma\pi)}\},$$
where $\gamma = (1 - {\beta^2}/{2})^{-2}.$ Moreover,
if $\mathcal{E}_0\supsetneq \mathcal{E}^*$, by choosing $C>\max_{i,j}4(\omega_{ii}^*\sigma_{ii}^* + 1)/(\sigma_{ii}^*\sigma_{jj}^*+2\sigma_{ij}^{*2})$,
the consistency-empowered test statistic $\tilde{D}_n$ rejects the null hypothesis $H_0$ with probability tending to one.
\end{theorem}
\section{Simulation}
\subsection{Numerical performance of the test statistic $\hat{D}_n$}
We perform a numerical study to evaluate the finite sample performance of the proposed test statistic $\hat{D}_n$ in terms of its size and power properties. We generate $n$ independent and identically distributed multivariate normal $p$-dimensional random vectors with mean vector 0 and covariance matrix $\boldsymbol{\Sigma}^*$, whose corresponding graph $\mathcal{E}^*$ admits a banded structure such that
$\mathcal{E}^*=\{(i,j): |i-j|<s_0\}$.
Let $\boldsymbol{\Omega}^*={\boldsymbol{\Sigma}^*}^{-1}=(\omega_{ij}^*)_{p \times p}$ be the precision matrix. Because different precision matrices can correspond to the same underlying graph,
we specify two precision matrices to examine the performance of the proposed test statistic. For the first precision matrix, we set it as
banded with its non-zero components decaying at an exponential rate away from its diagonal. More specifically, we set $\omega_{ij}^*= 0\!\cdot\! 6^{ {\vert } i -j {\vert }}$ for $ {\vert } i -j {\vert } < s_0$
and $\omega_{ij}^*= 0$ otherwise. For the other precision matrix, we again set it as banded, with its non-zero components decaying at a polynomial rate; that is, $\omega_{ij}^*=(1 + {\vert } i-j {\vert })^{-2}$ for $ {\vert } i -j {\vert } < s_0$ and $\omega_{ij}^*= 0$ otherwise. We consider two
different sparsity levels as $s_0 = 4$ or $6$.
To evaluate the empirical size and power of the proposed test, we consider various specifications of $\mathcal{E}_0$, the structure specified in the null hypothesis.
For evaluating the empirical size, we consider $\mathcal{E}_0=\mathcal{E}^{*}$.
To evaluate the power of the proposed test, we consider the following four different specifications of $\mathcal{E}_0$.
\begin{itemize}
\item[1)] (Isolated structure) Set $\mathcal{E}_{0}=\mathcal{E}_{0,1}=\{(i,j): i=j\}$. All the nodes are isolated.
\item[2)] (Nested structure) Set $\mathcal{E}_{0}=\mathcal{E}_{0,2}=\{(i,j):|i-j|<3\}$. This structure is nested in the true network structure $\mathcal{E}^*$.
\item[3)] (1-diff: structure with edges to node 1 different) Set $\mathcal{E}_{0}=\mathcal{E}^{*}_1 \cup\mathcal{E}_{0,3}$,
where $\mathcal{E}^{*}_1 = \mathcal{E}^{*}$ on the set of edges $\{ (i,j), i, j \neq 1$\}, and $\mathcal{E}_{0,3}=\{(1,3),(1,7),(1,8),(1,9)\}$ are edges connected with node 1.
\item[4)] (2-diff: structure with edges to 2 nodes different) Set $\mathcal{E}_{0}=\mathcal{E}^{*}_2\cup\mathcal{E}_{0,4}$, where
$\mathcal{E}^{*}_2 = \mathcal{E}^{*}$ on the set of edges $\{ (i,j), i, j \neq 1, 2$\}, and $\mathcal{E}_{0,4}=\{(1,3),(1,7),(1,8),(1,9),(2,4),(2,9), (2,12)\}$ are edges connected to nodes 1 and 2.
\end{itemize}
To understand the effect of sample size and data dimension, we choose two different sample sizes $n =300$ and $n =1000$.
For each sample size, data dimension is changed by setting $p/n$ at three different values $0\!\cdot\!5, 1$, and 2.
Because the true precision matrix $\boldsymbol{\Omega}^*$ specified above satisfies Conditions (C1)-(C2), we applied the results in Theorem \ref{generalTheo}.
More specifically, we rejected the hypothesis if the test statistic value $\hat{D}_n$ was greater than $4 \log p - \log ( \log p ) + \mbox{Gumbel}_{.95}(- \log 2 \pi,2) $,
where $\mbox{Gumbel}_{.95}(- \log 2 \pi,2)$ is the 95 $\%$ quantile value of the Gumbel distribution with location
parameter $- \log 2 \pi$ and scale parameter 2. Simulation results are reported based on 500 simulation replications.
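As a concrete illustration (our calculation): when $p = 300$ and $\gamma = 1$, $\mbox{Gumbel}_{.95}(- \log 2 \pi,2) = -2\log\{-\surd{(2\pi)}\log 0\!\cdot\!95\} \approx 4\!\cdot\!1$, so the test rejects whenever $\hat{D}_n > 4\log 300 - \log(\log 300) + 4\!\cdot\!1 \approx 25\!\cdot\!2$.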
Table \ref{Power of the test Toeplitz} reports the empirical sizes and power of the proposed test statistic $\hat{D}_n$ for testing different structures $\mathcal{E}_{0}$ specified in the above 1)-4).
It can be seen that our proposed test controls the type I error rate well at the nominal level $0\!\cdot\!05$ under various settings. The proposed test statistic is consistent, as the power of the test is one in many scenarios.
Based on the pattern of empirical power, we see that the power of the proposed test $\hat{D}_n$ increases as $n$ increases or $p$ decreases.
Table \ref{Power of the test Toeplitz} also shows that the sparsity level has some impact on the power of the test statistic: increasing $s_0$ leads to decreasing power.
\begin{table}[tbhp!]
\begin{center}
\captionof{table}{Type I error and power of the proposed test statistic $\hat{D}_n$ under different alternatives when the precision matrix
has banded structure and decays at an exponential rate. \label{Power of the test Toeplitz}}
\begin{tabular}{ccrccccc}
& & & \multicolumn{1}{c}{Empirical} & \multicolumn{4}{c}{Power of the Test $\hat{D}_n$}\\
$s_0$ & $n$ & ${p}/{n}$ & Size & Isolated & Nested & 1-diff & 2-diff \\
4 & 300 & 0$\cdot $5 & 0$\cdot$034 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 \\
& & 1 & 0$\cdot $042 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot$000 & 1$\cdot $000 \\
& & 2 & 0$\cdot $030 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 \\
& 1000 & 0$\cdot $5 & 0$\cdot $038 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 \\
& & 1 & 0$\cdot$032 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 \\
& & 2 & 0$\cdot$044 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 \\
6 & 300 & 0$\cdot $5 & 0$\cdot$026 & 1$\cdot $000 & 0$\cdot $824 & 1$\cdot $000 & 1$\cdot $000 \\
& & 1 & 0$\cdot$032 & 1$\cdot $000 & 0$\cdot $766 & 1$\cdot $000 & 1$\cdot $000 \\
& & 2 & 0$\cdot$028 & 1$\cdot $000 & 0$\cdot $648 & 1$\cdot $000 & 1$\cdot $000 \\
& 1000 & 0$\cdot $5 & 0$\cdot$032 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 \\
& & 1 & 0$\cdot$046 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 \\
& & 2 & 0$\cdot$036 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 & 1$\cdot $000 \\
\end{tabular}
\end{center}
\end{table}
\begin{table}[tbhp!]
\begin{center}
\captionof{table}{Type I error rate and power of the proposed test statistic $\hat{D}_n$ under different alternatives where the precision matrix has banded structure and decays at a polynomial rate. \label{Power of the test Polynomial decay}}
\begin{tabular}{ccrccccc}
& & & \multicolumn{1}{c}{Empirical} & \multicolumn{4}{c}{Power of the Test $\hat{D}_n$}\\
$s_0$ & $n$ & ${p}/{n}$ & Size & Isolated & Nested & 1-diff & 2-diff \\
4 & 300 & 0$\cdot $5 & 0$\cdot$022 & 1$\cdot $000 & 0$\cdot$034 & 0$\cdot $348 & 0$\cdot $468 \\
 & & 1 & 0$\cdot$034 & 1$\cdot $000 & 0$\cdot$038 & 0$\cdot $226 & 0$\cdot $336 \\
& & 2 & 0$\cdot$024 & 1$\cdot $000 & 0$\cdot$024 & 0$\cdot $170 & 0$\cdot $248\\
& 1000 & 0$\cdot $5 & 0$\cdot$034 & 1$\cdot $000 & 0$\cdot $274 & 0$\cdot $998 & 1$\cdot $000 \\
& & 1 & 0$\cdot$046 & 1$\cdot $000 & 0$\cdot $210 & 0$\cdot $994 & 1$\cdot $000 \\
& & 2 & 0$\cdot$038 & 1$\cdot $000 & 0$\cdot $204 & 0$\cdot $988 & 1$\cdot $000 \\
6 & 300 & 0$\cdot $5 & 0$\cdot$032 & 1$\cdot $000 &0$\cdot$034& 0$\cdot $320 &0$\cdot $474\\
& & 1 & 0$\cdot$030 & 1$\cdot $000 & 0$\cdot$028& 0$\cdot $248& 0$\cdot $328\\
& & 2 & 0$\cdot$024 & 1$\cdot $000 & 0$\cdot$024 &0$\cdot $164& 0$\cdot $224\\
& 1000 & 0$\cdot $5 & 0$\cdot$046 & 1$\cdot $000 & 0$\cdot $130 & 0$\cdot $992 & 1$\cdot $000 \\
& & 1 & 0$\cdot$032 & 1$\cdot $000 & 0$\cdot $106 & 0$\cdot $994 & 1$\cdot $000 \\
& & 2 & 0$\cdot$024 & 1$\cdot $000 & 0$\cdot$084 & 0$\cdot $994 & 1$\cdot $000 \\
\end{tabular}
\end{center}
\end{table}
Table \ref{Power of the test Polynomial decay} summarizes the empirical size and power of the proposed test $\hat{D}_n$ under the polynomial rate decay structure.
We see that the patterns are similar to those in Table \ref{Power of the test Toeplitz}: the empirical power increases as the sample size increases, and decreases as $p$ or $s_0$ increases.
We also observe that in this case the power is not as high as in Table \ref{Power of the test Toeplitz}, where the precision matrix decays at an exponential rate.
This is expected because the signals of the precision matrix in Table \ref{Power of the test Polynomial decay} are weaker than those in the previous example.
For example, when $s_0 = 4$, the non-zeros in the first column of the polynomially decaying precision matrix $\boldsymbol{\Omega}^*$ are $(1, 0\!\cdot\!25, 0\!\cdot\!11, 0\!\cdot\!06)^{ \mathrm{\scriptscriptstyle T} }$,
while the first column non-zeros are $(1, 0\!\cdot\!6, 0\!\cdot\!36, 0\!\cdot\!216)^{ \mathrm{\scriptscriptstyle T} }$ in the exponentially decaying $\boldsymbol{\Omega}^*$.
\subsection{Numerical comparison of $\hat{D}_n$ and $\tilde{D}_n$}
In this simulation study, we evaluate the finite sample performance of $\hat{D}_n$ and $\tilde{D}_n$ in terms of empirical size and power in detecting nested and included network structures.
As in the previous subsection, we generate $n$ independent and identically distributed multivariate normal $p$-dimensional random vectors with mean vector 0 and covariance matrix $\boldsymbol{\Sigma}^*$.
The corresponding precision matrix is $\boldsymbol{\Omega}^*={\boldsymbol{\Sigma}^*}^{-1} = (\omega_{ij}^*)_{p \times p}$ where $\omega_{ij}^*= 0\!\cdot\! 6^{ {\vert } i -j {\vert }}$ for $ {\vert } i -j {\vert } < s_0$ and $\omega_{ij}^*= 0$ otherwise.
For evaluating the empirical sizes, we consider $\mathcal{E}_0=\mathcal{E}^*$.
We consider the following two specified structures for the null hypothesis in the simulation.
\begin{itemize}
\item[5)] (Nested structure) Set $\mathcal{E}_0=\mathcal{E}_{0,5}=\{(i,j): |i-j|<s_0-1 \}$. The structure $\mathcal{E}_0$ is nested in the true structure $\mathcal{E}^*$.
\item[6)] (Included structure) Set $\mathcal{E}_0=\mathcal{E}_{0,6}=\{(i,j): |i-j|<s_0+1 \}$. The structure $\mathcal{E}_0$ includes the true structure $\mathcal{E}^*$.
\end{itemize}
Table \ref{Thresholdtestpower} summarizes the empirical sizes and powers of the tests based on $\hat{D}_n$ and $\tilde{D}_n$. It demonstrates that both tests have similar power in rejecting the nested
structure and control the type I error rate. The modified test statistic $\tilde{D}_n$ is able to reject pre-specified network structures that include the true network structure, while the test statistic $\hat{D}_n$
has no power against included networks. We chose $\delta_n=\surd{\log(n)}$ and $C_n=0\!\cdot\!05$ for the test statistic $\tilde{D}_n$ in our simulation studies.
\begin{table}[tbhp!]
\begin{center}
\captionof{table}{Type I error and empirical power of the test statistics $\hat{D}_n$ and $\tilde{D}_n$ for both nested
and included structures \label{Thresholdtestpower}}
\begin{tabular}{ccccccccc}
& & & \multicolumn{3}{c}{$\hat{D}_n$} & \multicolumn{3}{c}{$\tilde{D}_n$}\\
& & & \multicolumn{1}{c}{Empirical} & \multicolumn{2}{c}{Power of $\hat{D}_n$} & \multicolumn{1}{c}{Empirical} & \multicolumn{2}{c}{Power of $\tilde{D}_n$}\\
$s_0$ &$n$ & $p/n$ & Size & Nested & Included & Size & Nested & Included \\
4&500 & 0$\cdot $50 & 0$\cdot$034 & 1$\cdot $000 & 0$\cdot$032 & 0$\cdot$034 & 1$\cdot $000 & 0$\cdot $986 \\
& & 1$\cdot $00 & 0$\cdot$038 & 1$\cdot $000 & 0$\cdot$036 & 0$\cdot$038 & 1$\cdot $000 & 0$\cdot $996 \\
& & 2$\cdot$00 & 0$\cdot$038 & 1$\cdot $000 & 0$\cdot$038 & 0$\cdot$038 & 1$\cdot $000 & 1$\cdot $000 \\
&1000 & 0$\cdot $25 & 0$\cdot$030 & 1$\cdot $000 & 0$\cdot$036 & 0$\cdot$030 & 1$\cdot $000 & 1$\cdot $000 \\
& & 0$\cdot $50 & 0$\cdot$040 & 1$\cdot $000 & 0$\cdot$040 & 0$\cdot$040 & 1$\cdot $000 & 1$\cdot $000 \\
& & 1$\cdot $00 & 0$\cdot$032 & 1$\cdot $000 & 0$\cdot$034 & 0$\cdot$032 & 1$\cdot $000 & 1$\cdot $000 \\
6&500 & 0$\cdot $50 & 0$\cdot$026 & 0$\cdot $116 & 0$\cdot$030 & 0$\cdot$030 & 0$\cdot $146 & 0$\cdot $696 \\
& & 1$\cdot $00 & 0$\cdot$042 & 0$\cdot$092 & 0$\cdot$044 & 0$\cdot$048 & 0$\cdot $116 & 0$\cdot $660 \\
& & 2$\cdot$00 & 0$\cdot$036 & 0$\cdot$066 & 0$\cdot$040 & 0$\cdot$032 & 0$\cdot$090 & 0$\cdot $578 \\
&1000 & 0$\cdot $25 & 0$\cdot$036 & 0$\cdot $810 & 0$\cdot$034 & 0$\cdot$042 & 0$\cdot $818 & 1$\cdot $000 \\
& & 0$\cdot $50 & 0$\cdot$036 & 0$\cdot $740 & 0$\cdot$034 & 0$\cdot$034 & 0$\cdot $688 & 0$\cdot $998 \\
& & 1$\cdot $00 & 0$\cdot$040 & 0$\cdot $614 & 0$\cdot$044 & 0$\cdot$044 & 0$\cdot $600 & 1$\cdot $000 \\
\end{tabular}
\end{center}
\end{table}
\section{Real Data Analysis}
We illustrate the use of the proposed test statistics for identifying the structure of a graphical model by applying them to a correlated data analysis.
Towards this, we examined a COVID-19 dataset provided by The New York Times (NYT (2022)), publicly available at https://github.com/nytimes/covid-19-data. The data set includes daily confirmed COVID-19 cases observed over 51 states of the U.S. from January 1, 2021 to December 31, 2021. We aggregated the data on a weekly basis so that the data contain
52 weekly confirmed cases in thousands from 51 states, which is denoted as
a matrix of size 51$\times$ 52.
Our interest was to understand how the numbers of COVID cases depend on geographical locations. Towards this, we coded three dummy variables according to whether a state is in the North East, West, Mid West, or the South. The following linear regression was postulated
\begin{equation}
\label{GEE}
E(y_{ij}|x_i) = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \beta_3 x_{i3}, (i = 1,\ldots,51; j = 1,\ldots,52),
\end{equation}
where $x_{i1}$ is an indicator variable whether the state is in the North East, $x_{i2}$ is for the Mid West, and $x_{i3}$ is for the West. Denote
$\boldsymbol{Y_i} = (y_{i1},\ldots,y_{i52})^{ \mathrm{\scriptscriptstyle T} }$, $\boldsymbol{\beta} = (\beta_0, \beta_1, \beta_2, \beta_3)^{ \mathrm{\scriptscriptstyle T} }$, $\boldsymbol{X_i} =\boldsymbol{1} \otimes (1,x_{i1},x_{i2},x_{i3})$, where $\boldsymbol{1} = (1,\ldots,1)^{ \mathrm{\scriptscriptstyle T} }$ is a $52 \times 1$ vector, $\otimes$ is the Kronecker product, and $\boldsymbol{X_i}$ is a $52 \times 4$ matrix. Since the components of $\boldsymbol{Y_i}$ are correlated, we applied the method of generalized estimating equations (Liang $\&$ Zeger, 1986) for estimating $\boldsymbol{\beta}$ by incorporating the correlation structure of $\boldsymbol{Y_i}$. That is, we estimate $\boldsymbol{\beta}$ by solving
\begin{equation}\label{GEE1}
\sum_{i=1}^{51}\boldsymbol{X_i}^{ \mathrm{\scriptscriptstyle T} }
\boldsymbol{V}^{-1}(\boldsymbol{Y_i} - \boldsymbol{X_i} \boldsymbol{\beta} ) = 0,
\end{equation}
where $\boldsymbol{V}$ is the so-called working covariance matrix. It is known that correct specification of $\boldsymbol{V}$ improves the estimation efficiency of the resulting estimator.
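Because the mean model (\ref{GEE}) is linear, equation (\ref{GEE1}) has the closed-form solution $\hat{\boldsymbol{\beta}}=(\sum_{i}\boldsymbol{X_i}^{ \mathrm{\scriptscriptstyle T} }\boldsymbol{V}^{-1}\boldsymbol{X_i})^{-1}\sum_{i}\boldsymbol{X_i}^{ \mathrm{\scriptscriptstyle T} }\boldsymbol{V}^{-1}\boldsymbol{Y_i}$. A minimal sketch of ours (not the code used in the analysis) is:
\begin{verbatim}
import numpy as np

def gee_beta(Y, X, V):
    # Y: list of 51 response vectors of length 52; X: list of the
    # corresponding 52 x 4 design matrices; V: 52 x 52 working
    # covariance matrix.
    Vinv = np.linalg.inv(V)
    A = sum(Xi.T @ Vinv @ Xi for Xi in X)
    b = sum(Xi.T @ Vinv @ Yi for Xi, Yi in zip(X, Y))
    return np.linalg.solve(A, b)   # solves the estimating equation
\end{verbatim}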
To choose an appropriate graph corresponding to $\boldsymbol{\Omega}=\boldsymbol{V}^{-1}$, we test whether one of the following specified graphical structures fits the data well.
\begin{itemize}
\item[a)] (Isolated structure) Set $\mathcal{E}_0=\mathcal{E}_{0,1}=\{(i,j), i = j\}$.
\item[b)] (Banded structure with bandwidth 3, denoted as Band(3)) Set $\mathcal{E}_0=\mathcal{E}_{0,2}=\{(i,j), |i-j| < 3\}$.
\item[c)] (Structure learned from the TIGER method, denoted as TIGER) Set $\mathcal{E}_0=\mathcal{E}_{0,3}$, where $\mathcal{E}_{0,3}$ is the network learned by the TIGER method using the default settings of the flare R package (Li et al. (2020)).
\item[d)] (Structure learned from the GLASSO method, denoted as GLASSO) Set $\mathcal{E}_0=\mathcal{E}_{0,4}$, where $\mathcal{E}_{0,4}$ is the network learned from the GLASSO R package (Friedman et al. (2019)) with tuning value $\rho = 10$.
\end{itemize}
We applied our proposed methods to test the above hypothetical structures.
Since the number of
isolated nodes of the true structure is unknown, we chose $\gamma=1$ for the limiting distribution in Theorem \ref{generalTheo}, which makes our test
conservative: we
only reject the null hypothesis if the test statistic value is sufficiently large.
The test statistic values and its corresponding p-values (in parentheses) for testing Isolated, Band(3), TIGER and GLASSO are, respectively, $ 49\!\cdot\!57 (<0\!\cdot\!0001)$, $17\!\cdot\!61 (0\!\cdot\!08)$, $32\!\cdot\!58(<0\!\cdot\!0001)$ and $16\!\cdot\!34 (0\!\cdot\!14)$.
Therefore we reject the null hypotheses that the true structure is the Isolated or the TIGER structure at the 95$\%$ confidence level. However, we cannot reject the null hypothesis that the true structure is the Band(3) network or the GLASSO network at the same level.
\begin{table}[tbhp!]
\begin{center}
\captionof{table}{Estimated coefficients under four different pre-specified structures, with standard errors of the estimates in parentheses; * denotes a p-value less than 0$\cdot$05. \label{coefficients}}
\begin{tabular}{crlll}
Coefficients & \multicolumn{1}{c}{Isolated} & \multicolumn{1}{c}{Band(3)} &\multicolumn{1}{c}{TIGER} & \multicolumn{1}{c}{GLASSO}\\
${\beta}_1$ & -0$\cdot $78 (1$\cdot$59) & 0$\cdot$64 (0$\cdot$73) & 1$\cdot $58 (1$\cdot$07) & 0$\cdot$56 (0$\cdot$47) \\
${\beta}_2$ & -0$\cdot $68 (1$\cdot$30) & 1$\cdot$01 (0$\cdot$80) & 1$\cdot $91 (1$\cdot$00) & 1$\cdot$99 (0$\cdot$47)* \\
${\beta}_3$ & -0$\cdot $85 (1$\cdot$47) & 1$\cdot$91 (0$\cdot$66)* & 2$\cdot $80 (1$\cdot$04)* & 1$\cdot$34 (0$\cdot$62)* \\
\end{tabular}
\end{center}
\end{table}
We then used these four pre-specified structures to obtain the coefficient estimates for model (\ref{GEE}). Table \ref{coefficients} reports the estimated results for model (\ref{GEE}), including the estimated coefficients, their standard errors, and their statistical significance (p-value less than 0$\cdot$05). When using the Isolated structure, none of the coefficients $\beta_1, \beta_2$ and $\beta_3$ is significant, which indicates
that there is no significant difference in COVID cases among the four regions of the U.S. Under the pre-specified GLASSO structure, however, there are significant differences in COVID cases between the Mid West and the South, and between the West and the South. In addition, the estimated standard errors under the GLASSO pre-specified network
are also the smallest, followed by those under the Band(3), TIGER, and Isolated structures. These results are consistent with our proposed test statistics, since they indicate that the GLASSO structure fits
the data best, followed by the Band(3), TIGER and Isolated structures.
This shows that the proposed test statistics can be used as a powerful tool to identify a good pre-specified structure for further analysis.
Finally, we applied a bootstrap method to further
evaluate the standard errors of the
coefficient estimators, and compared
the efficiency gain in terms of the standard errors when using different pre-specified network structures. More
specifically, we subsampled 40 of the 51 states without replacement 100 times. Each time we used the subsampled data in the GEE equation (\ref{GEE1}) to estimate the coefficients. Note that, to increase the stability of the procedure, we reused the estimated precision matrix $\boldsymbol{V}^{-1}$ based on the data from all 51 states. At the $i$th replication, we denote the corresponding standard errors of the coefficients by $(\mbox{Sd}_{1,i},\mbox{Sd}_{2,i}, \mbox{Sd}_{3,i})$. To evaluate the variability of the standard errors from the subsampling process, we then calculate the corresponding means and standard deviations of $(\mbox{Sd}_{1,i},\mbox{Sd}_{2,i}, \mbox{Sd}_{3,i})$, for $i = 1,\ldots,100$, as follows:
\begin{equation*}
\mbox{AVE}_j = \sum_{i=1}^{100}\mbox{Sd}_{j,i} / 100, \
\mbox{SD}_j = \surd{\big\{ \sum_{i=1}^{100}(\mbox{Sd}_{j,i} - \mbox{AVE}_j)^2/ 100\big\}}\; \mbox{for} \; j = 1,2,3.
\end{equation*}
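The subsampling loop may be sketched as follows (ours; \texttt{gee\_se} is a hypothetical wrapper returning the three coefficient standard errors from a GEE fit such as the one sketched above, and \texttt{Y}, \texttt{X}, \texttt{V} are as before):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2021)            # seed is arbitrary
sd = np.empty((100, 3))
for i in range(100):
    idx = rng.choice(51, size=40, replace=False)  # subsample states
    sd[i] = gee_se([Y[k] for k in idx], [X[k] for k in idx], V)
AVE = sd.mean(axis=0)       # AVE_j, j = 1, 2, 3
SD = sd.std(axis=0)         # SD_j, population divisor 100
\end{verbatim}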
Figure \ref{Standarderror} shows the mean and standard deviation of the coefficient standard errors obtained from the above subsampling procedure. Both panels (a) and (b) of the figure demonstrate that the standard errors obtained from the pre-specified GLASSO structure are the smallest, followed by the Band(3), TIGER, and Isolated networks. The result again agrees with our test statistics and confirms that choosing a good pre-specified graphical structure is essential for efficiency gain when using the GEE method. Therefore, the proposed test statistics can serve as a valuable tool to help select a reliable pre-specified graphical structure.
\begin{figure}
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{BootstrapMean1.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{BootstrapSD1.pdf}
\caption{}
\end{subfigure}
\caption{Average (a) and standard error (b) of absolute prediction errors obtained
from four graphic structures: Isolated, Band(3), TIGER and GLASSO. }
\label{Standarderror}
\end{figure}
\vskip 14pt
\noindent {\large\bf Acknowledgements}
This research was partially supported by an NSF grant. The authors thank Dr. Peter Song of University of Michigan for sharing the code for the real data analysis.
\section*{Supplementary material}
\label{SM}
The supplementary material includes detailed proofs of the main results and examples of precision structures satisfying technical condition (C1).
\section{Introduction}
The X-ray emitting gas at the centers of clusters of galaxies can have
very short cooling times. Without a heating source this gas should cool and
condense at rates of up to thousands of solar masses per year. However, high resolution
X-ray spectroscopy has been unable to detect large
quantities of gas cooling below one third of the cluster virial
temperature (for a review see \citealt{peterson2006}). Observed rates
of star formation and quantities of cool and cold gas in brightest cluster
galaxies (BCGs), the most massive galaxies known, are far lower than expected
(see for example \citealt{mcnamara2006}). For the hot gas to be radiating
but not cooling in the predicted quantities a heating mechanism is required.
To determine how the heating and cooling balance in clusters of
galaxies is maintained requires a multi-wavelength approach to probe the many
phases of matter. The hot intra-cluster medium (ICM), seen in X-rays,
is at a temperature of $\sim10^{7}-10^{8}$~K. Cool, optical
emission line gas, found to surround the majority of BCGs in cool core
clusters is seen at temperatures of $\sim10^{4}$~K. Cold molecular gas
reservoirs have also been observed, however observations of gas at
intermediate temperatures ($10^{5}-10^{6}$~K)
have thus far proved elusive.
High-ionisation, collisionally-excited optical coronal lines are emitted
at temperatures between $10^{5}-10^{6}$~K making them important tracers
of intermediate temperature gas. Optical emission lines
have the advantage of being observable from the ground. The high spectral
and spatial resolution in principle allow us to determine and trace the
velocity structure, and therefore examine the transport processes, in
the gas. Emission lines from these plasmas can also provide a direct means of
measuring the mass flow rate \citep{cowie1981} and therefore the rate at which
the ICM is condensing. \cite{graney1990}, \cite{sarazin1991} and \cite{voit1994} have
modelled coronal line emission in conditions appropriate to those of the cores
of galaxy clusters.
There have been several attempts to detect coronal line emission
from cooling gas in clusters of galaxies, however reports of a significant
detection remain inconclusive.
\cite{hu1985} observed the core regions of 14 clusters
of galaxies but were unable to detect the presence of the [Fe {\small XIV}]$\lambda$5303 line
in any of their long slit spectra. Similarly \cite{heckman1989} attempted
detection of another coronal line, [Fe {\small X}]$\lambda$6374, in a sample of nine
suspected `cooling flow' clusters. Likewise they found no detectable
emission.
\begin{figure*}
\begin{center}
\includegraphics[width=0.4\textwidth]{Xray_T_coronal_1.eps}
\includegraphics[width=0.4\textwidth]{Xray_z_coronal_1.eps}
\includegraphics[width=0.4\textwidth]{scan_coronal_3_1.eps}
\caption{Top left: X-ray temperature map of the centre of the Centaurus cluster \protect \cite{fabian2005b}.
The colour bar is in units of \rm\thinspace keV. Box 1 and 2 are situated at the regions of
lowest X-ray temperature. Top right: X-ray metallicity map based on Fe abundance.
The colour bar is in units of solar metalicity. The $0.4-0.8$~\rm\thinspace keV\ X-ray temperature contours
are overlaid. Bottom: [N {\small II}]$\lambda6583$ emission in NGC 4696 overlaid with X-ray
contours of $0.4-0.8$~\rm\thinspace keV\ gas. All images are on the same scale
(10'' $=2.1$\rm\thinspace kpc). The black X indicates the centre of the radio emission. Box
1 is at position RA $12^{h}48^{m}49^{s}.4$, Dec. $-41^{\circ}$ 18' 48.9'' (J2000) and
box 2 is situated at RA $12^{h}48^{m}48^{s}.7$, Dec. $-41^{\circ}$ 18' 38.6'' (J2000). Both boxes are
5 by 5 fibres in size, corresponding to 3.4 arcseconds (0.7~\rm\thinspace kpc) across. \label{maps}}
\end{center}
\end{figure*}
The first reported detection of coronal emission was by \cite{anton1991} in Abell 1795.
The authors claimed to have found a 13$\sigma$ detection of the [Fe {\small X}]$\lambda$6374
line in the core of the cluster with an aperture centred on the central
2'' of the cD galaxy.
A later study of the same object by \cite{donahue1994} failed to confirm this
detection. The sample
used by \cite{donahue1994} contained five massive cool core clusters and in this
same paper they report three more non-detections (A2199, 2A0335+096 and A2597) and one
marginal (3$\sigma$) detection in PKS 0745-191. This detection was also
not confirmed in a later study by \cite{yan1995}.
The emission line nebulosities around NGC 1275, the BCG
in the Perseus cluster were studied by \cite{shields1992}. They searched for
[Fe {\small X}]$\lambda$6374 emission in off-nuclear regions, away from the central AGN, but also
obtained no detection.
Up until now the two most recent searches for coronal line emission have been by
\cite{yan1995} and \cite{sabra2000}. \cite{yan1995} searched for emission from
[Fe {\small X}]$\lambda$6374 in Abell 1795 and PKS 0745-191, the only two clusters where
detections of coronal line emission had previously been reported. Contrary
to these results they did not find any significant detection in either cluster but
were able to place upper limits on the surface brightness a factor of 10 deeper.
In a larger program studying the properties of the emission line nebulosity in
NGC 1275, \cite{sabra2000} searched for coronal lines over a wide
wavelength range using Keck. Six coronal line transitions were covered in this range, from
[Ni {\small XII}]$\lambda$4232 to [Ni {\small XV}]$\lambda$6702. No coronal line emission was seen
in their data.
This work reports on deep, high spectral resolution, integral field spectroscopy (IFS)
observations of NGC 4696 (RA $12^{h}48^{m}49^{s}$, Dec. $-41^{\circ}$ 18' 40'' J2000), the BCG in the Centaurus cluster, Abell 3526. The
Centaurus cluster at redshift $z=0.0104$ is the second nearest
example of a cool core cluster. The heating and cooling in this cluster
is apparently very well balanced despite the short central cooling time of only
200~\rm\thinspace Myr. NGC 4696 houses a radio source and multiple bubbles.
These are accompanied
by soft X-ray filaments and a sharp rise in the metal abundance in the central
30~\rm\thinspace kpc, among the highest seen in any cluster, $\sim$~twice solar
\citep{sanders2002, fabian2005b, sanders2006}. Centaurus also has the broadest range
of X-ray temperatures seen, containing gas from 0.35 to 3.7~\rm\thinspace keV, over a
factor of 10 in temperature \citep{sanders2008b}.
\cite{crawford2005} presented images showing the extensive,
filamentary H$\alpha$ nebulosity surrounding NGC 4696. These
share the morphology of the soft X-ray filaments, and of a prominent dust lane
\citep{sparks1989}. The low X-ray temperatures, short cooling times and
exceptionally high metallicities in this nearby cluster make it the ideal target
for a deep search for coronal line emission.
The observations are briefly described in $\S$ 2; the method, analysis and
limits on coronal line emission are given in $\S$ 3; and in $\S$ 4 we discuss
the implications of these limits. We summarise our results and main
conclusions in $\S$ 5.
At the redshift of the Centaurus cluster ($z=0.0104$, 44.3~\rm\thinspace Mpc) one \rm\thinspace arcsec\ corresponds
to 0.210~\rm\thinspace kpc\ (throughout this paper we adopt H$_0=71$~\hbox{$\kmps\Mpc^{-1}\,$}, $\Omega_{\mathrm{M}}=0.27$ and
$\Omega_{\Lambda}=0.73$). Abundances given in this paper are relative to the
solar metallicities of \cite{anders1989}.
\section{Observations and Data Reduction}
Observations were made on 2009 March 27-30 using the VIsible MultiObject Spectrograph
(VIMOS) on the VLT at Paranal, Chile (see \citealt{lefevre2003} and \citealt{zanichelli2005} for a
description of the VIMOS IFU and a discussion of data reduction techniques). We obtained High
Resolution Orange (HRO) data using the VIMOS Integral Field Unit (IFU). We used the larger 0.67'' fibres
giving a field of view of 27''$\times$27'' with the HR grism.
We acquired 10.5 hours exposure centred on the inner region of NGC 4696 (12$^{\mathrm{h}}$48'49.3'', -41$^{\circ}$18'40''),
the deepest observations thus far to try to detect emission from cooling hot gas in these objects. We also
took six 15 minute exposures using the same set up to image the entire galaxy including many
sky fibres.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{SSP_model_fit_5800_6650.eps}
\caption{The best-fit BC03 SSP model (red) to the observed spectrum from
box 1 (black) between $5700-6700$~$\mathrm{\text{\AA}}$, together with the
fit residuals. Regions where we expect object emission lines and regions of poor sky subtraction
were masked out in the fit and are shown above in grey. \label{ssp}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{hist_FeX_2.eps}
\caption{Top: The range of \hbox{\sc STARLIGHT}\ fits over the [Fe {\small X}]$\lambda$6374 region,
in box 1, determined by perturbing and re-fitting the observed spectrum.
Bottom: The distribution of integrated flux values for 200 \hbox{\sc STARLIGHT}\ fits over the
[Fe {\small X}]$\lambda$6374 region. \label{ssperr}}
\end{center}
\end{figure}
The data were reduced by the VIPGI\footnote{VIPGI-VIMOS Interactive Pipeline Graphical Interface,
obtained from http://cosmos.iasf-milano.inaf.it/pandora/.} pipeline (see \citealt{scodeggio2005} for a
description of VIPGI). The 3D-datacubes were combined and analysed with a set of IDL routines (R.
Sharp, private communication).
Instrument flexures in VIMOS are very dependent on rotator position and suffer
from hysteresis \citep{Amico2008}. For this reason we took calibration frames after
each observation block.
Cosmic-ray rejection, final fibre-to-fibre transmission corrections, sky subtraction,
correction for Galactic extinction and shifting to the object rest frame
were performed outside of VIPGI using IDL routines. The data reduction procedure will
be explained thoroughly in a forthcoming paper, Canning et al. (in prep), detailing
observations of the emission line nebulae surrounding NGC 4696.
\section{Method, Analysis, Detections and Limits}
Our HR Orange spectra cover the wavelengths of a number of coronal lines, specifically
[Fe {\small XIV}]$\lambda5303$, [Ca {\small XV}]$\lambda5445$, [Ca {\small XV}]$\lambda5694$, [Fe {\small X}]$\lambda6374$
and [Ni {\small XV}]$\lambda6702$.
The expected location of the [Fe {\small X}]$\lambda$6374 feature coincides with
the red wing of the [O {\small I}]$\lambda$6363 object emission and so requires special
attention (see Section \ref{sec:FeX}). None of our other expected coronal lines coincide
with object emission lines or sky lines.
\subsection{Continuum subtraction}
\label{sec:subtraction}
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\textwidth]{box_1_and_2_together_2.eps}
\includegraphics[width=0.4\textwidth]{FeX_box_1_and_2_together_2.eps}
\caption{Top: Spectra from box 1 (black) and box 2 (red) without continuum subtraction. All
sky lines except [O {\small I}]$\lambda$6300 and [O {\small I}]$\lambda$6363 sky emission have
been subtracted in order to more clearly see object emission line features. Bottom: A blow-up of the expected [Fe {\small X}]$\lambda$6374 region.
Emission from cool gas in box 2 has a broader velocity width than box 1 in general except over the
[O {\small I}]$\lambda$6363 and [Fe {\small X}]$\lambda$6374 region. This demonstrates the excess flux on the red wing
of oxygen emission in box 1. This can not be accounted for by subtraction of a template from the object
[O {\small I}]$\lambda$6300 emission line.\label{box1and2}}
\end{center}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{FeX_smooth_2.eps}
\caption{Spectrum from box 1, after subtraction of a smooth continuum, showing the fit to the [O {\small I}]$\lambda$6300 object emission
line and the scaled template for the [O {\small I}]$\lambda$6363 emission (red). The blue
Gaussian is the result of a fit to the remaining excess flux after subtraction of the
oxygen line. The dashed spectrum is the original continuum subtracted data and the solid
line shows the effect of the [O {\small I}]$\lambda$6363 line subtraction. Here also two neighbouring
sky lines are fit and subtracted. The continuum is allowed to vary from zero and is fit locally
around the emission lines of interest. \label{fit}}
\end{center}
\end{figure}
When determining flux from possibly broad and faint emission lines, errors in
continuum subtraction are likely to dominate. For this reason continuum
subtraction was attempted in a number of ways. The first method we employ is to
use a small sample of
fibres from within the galaxy, in a region where no significant coronal
emission is seen. A second method is to subtract a smoothed continuum from the spectra
(as in \citealt{donahue1994}), and finally we model the continuum with simple
stellar population (SSP) models from \cite{bc03}, hereafter referred to as BC03.
These methods are described below.
1. In an initial search we binned the spectra based on X-ray temperature; the
lowest temperature regions (boxes 1 and 2, Fig. \ref{maps}) overlap with the
temperature range at which some coronal lines are emitted, specifically
the lines of [Ca {\small XV}] (5 million K).
In box 1 we found evidence for a flux excess on the red wing of the [O {\small I}]$\lambda$6363
emission line, blueshifted slightly from 6374~$\mathrm{\text{\AA}}$, where we would expect
[Fe {\small X}]$\lambda$6374. We found
no significant excess in box 2. We then use the spectrum
of box 2 to correct the continuum of box 1 and search for significant
flux excess in regions of expected coronal emission.
2. A smooth continuum is fitted to the sky subtracted spectra with
all regions where emission lines are expected and regions of poor sky
subtraction masked out. The smooth continuum is then subtracted from those spectra.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{sky_spectra.eps}
\caption{The sky spectra extracted from fibres at the edge of the field $\sim$~35'' from the
centre of NGC 4696. The [O {\small I}]$\lambda$6363 sky emission line is labelled as are the
regions where we would expect the object [O {\small I}] emission lines and the [Fe {\small X}]
emission. The spectrum has been shifted to the rest-wavelength of NGC 4696. \label{sky}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{TB1_TB2.eps}
\includegraphics[width=0.48\textwidth]{TB1_smooth.eps}
\includegraphics[width=0.48\textwidth]{fex_ssp_new.eps}
\caption{Top: Box 1 spectra after continuum subtraction
using the continuum from box 2 spectra.
Middle: Box 1 spectra with smooth continuum subtracted.
Bottom: Box 1 spectra with
best fit \hbox{\sc STARLIGHT}\ continuum model subtracted.
In each plot the top panel shows the data over
the wavelength region containing the [O {\small I}]$\lambda$6363 object emission and probable
[Fe {\small X}]$\lambda$6374 object emission. The red line shows the scaled [O {\small I}]$\lambda$6300
emission used as a template to subtract the contribution of the [O {\small I}]$\lambda$6363 line.
The bottom panel, in each case, shows the residuals from a subtraction of the [O {\small I}]$\lambda$6363
template. In each plot an excess of flux can be seen around a wavelength of 6368$\mathrm{\text{\AA}}$.
The two emission lines that have been greyed out, to the left of the [O {\small I}]$\lambda$6363
line, are sky lines. These were fit and subtracted from the spectrum before a fit to the
remaining excess flux was performed.\label{FeXfits}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{FeX_fits_TB2.eps}
\caption{Box 2 spectrum with a smooth continuum subtracted. There is no
significant excess emission seen on the red wing of the object [O {\small I}]$\lambda$6363
emission line. \label{TB2_fit}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{blue.eps}
\caption{The \hbox{\sc STARLIGHT}\ fit in the bluest region of our spectra. After
subtraction of the model stellar continuum we are left with variations in
the residual spectrum on the scale of $10-20~\mathrm{\text{\AA}}$. This is comparable
to the width of the feature we find on the red wing of the [O {\small I}]$\lambda$6363
object emission line. For comparison the line shows the $\pm$1$\sigma$ width
of a 475~\hbox{$\km\s^{-1}\,$}\ FWHM velocity width line at the rest wavelength of the
[Fe {\small XIV}]$\lambda$5303 coronal emission line. \label{blue}}
\end{center}
\end{figure}
3. The continuum shape is fit using the \hbox{\sc STARLIGHT}\ software \citep{cidfernandes2005, cidfernandes2009}
and basefiles made up of 240 BC03 models spanning 6 metallicities, covering the
range $0.005-2.5~Z_{\odot}$, and 40 ages, covering the range $0-20$~Gyr. Using
this software requires that we re-grid our spectra to integer wavelengths.
\hbox{\sc STARLIGHT}\ uses a Markov Chain approach to fitting the data. We determine the
parameters to use by fitting the data many times and looking at the distribution
of fits over the region of interest. The fit with the
best $\chi^{2}$ value is taken as the continuum
and subtracted from the spectrum.
An example of a fit with emission lines masked out is shown in Fig. \ref{ssp}.
\hbox{\sc STARLIGHT}\ does not allow us to examine the uncertainty in the parameters
from a fit and so we use a Monte Carlo approach to determine the error.
We quantify the additional uncertainty introduced due to the continuum subtraction
by perturbing our data points by a Gaussian random number with
mean and variance given by the data and associated poission error. These
new perturbed spectra are then
fit with the \hbox{\sc STARLIGHT}\ software using the same parameters as before. We repeat this
200 times for our box 1 spectrum, the range of fits over the [Fe {\small X}]$\lambda$6374 region can
be seen in the top panel of Fig. \ref{ssperr}. The lower panel of Fig. \ref{ssperr}
shows the distribution of the integrated flux (counts-model) over the
wavelength region $6355-6380~\mathrm{\text{\AA}}$. The Gaussian shape of this
distribution implies we have repeated the fits enough times to properly
sample the range of models which could provide a good fit to the data.
The 200 model continua are then re-gridded to the same wavelength
grid as our original data.
We estimate the additional uncertainty in each spaxel as the one sigma
deviation in the range of models. This uncertainty depends on the error
spectrum over the entire wavelength range of the spectra being fit and as
such will only be weakly correlated with the Poissonian error on each spaxel.
We therefore estimate the total
uncertainty as the quadrature addition of the Poissonian error and the
uncertainty in the continuum model.
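In outline the procedure is as follows (a minimal sketch of ours in Python, where \texttt{fit\_continuum} stands in for a call to the external \hbox{\sc STARLIGHT}\ fit and is not a real function of that package):
\begin{verbatim}
import numpy as np

def total_error(flux, poisson_err, fit_continuum, n_real=200):
    rng = np.random.default_rng()
    models = np.empty((n_real, flux.size))
    for k in range(n_real):
        # perturb each spaxel by its Poisson error and re-fit
        perturbed = rng.normal(loc=flux, scale=poisson_err)
        models[k] = fit_continuum(perturbed)
    sigma_model = models.std(axis=0)    # 1 sigma spread of models
    # quadrature addition of Poisson and continuum-model errors
    return np.sqrt(poisson_err**2 + sigma_model**2)
\end{verbatim}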
An advantage of using synthetic spectra for the continuum subtraction is that we
can ensure that there is no coronal line emission in the fit, so we are not subtracting
any signal. The stellar spectra are much more complicated than a smoothed profile
and we would like to test whether any apparent excess flux found where we would
expect coronal line emission could be due to a `bump' in the stellar spectra. However the
spectral models assume the relative abundances are the same which may not be appropriate
to NGC 4696. The stellar continuum will also vary across the field and in binning spectra
over a large region of the galaxy we would expect the SSP model fit to deteriorate. For
these reasons fits were run separately on all spectra, i.e. we did not use the best fit
\hbox{\sc STARLIGHT}\ model of box 1 to correct for the stellar continuum of any spectra other
than box 1.
\subsection{[Fe {\small X}]$\lambda$6374}
\label{sec:FeX}
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\textwidth]{all_tb1.eps}
\caption{The 90 per cent upper limits overplotted on the
continuum subtracted spectra from box 1 for the lines of [Fe {\small XIV}]$\lambda5303$,
[Ca {\small XV}]$\lambda5445$, [Ca {\small XV}]$\lambda5694$ and
[Ni {\small XV}]$\lambda6702$. Here the continuum was subtracted using SSP models
fit with the \hbox{\sc STARLIGHT}\ software package. The redshift and velocity width of the
emission lines are constrained to be the same as that of the detected
[Fe {\small X}]$\lambda$6374 line. \label{all}}
\end{center}
\end{figure*}
The [Fe {\small X}]$\lambda$6374 coronal emission feature coincides with the red wing of
the [O {\small I}]$\lambda$6363 emission and as such warrants special treatment (see Fig. \ref{fit}). Unfortunately
the object [O {\small I}]$\lambda$6300 emission also coincides with the sky [O {\small I}]$\lambda$6363
emission feature.
We simultaneously fit Gaussians to the lines of [O {\small I}], the [N {\small II}] doublet,
the [S {\small II}] doublet, H$\alpha$ and the [Fe {\small X}] emission lines. This
is in order to allow for the errors in scaling and removing the [O {\small I}]$\lambda$6363
object emission line. A single Gaussian for each emission line in the optical nebulosity
gives a good fit in the regions of boxes 1 and 2 (see upper panel of Fig. \ref{box1and2}).
We tie the redshift and velocity width of the object optical emission
line nebulae ([N {\small II}] doublet, [S {\small II}] doublet and
H$\alpha$ and [O {\small I}]) emission lines. The redshift and velocity
width of the sky [O {\small I}] lines are also tied. The integrated
flux of the [N {\small II}] doublet and that of the two [O {\small I}]
lines can be tied to each other, the scaling in each case being dictated
by atomic parameters \citep{osterbrock2006}. The integrated flux
of the [Fe {\small X}]$\lambda$6374 emission line is fixed to be a
positive value but the redshift, and velocity width are allowed to be
free in the fit as there is no evidence yet for a spatial correspondence
between the optical nebulosity and any coronal line emitting plasma.
The continuum level for each line is fit in a region
local to the emission line. An example of the fitting process with
the fit to the object [O {\small I}] emission lines and the
[Fe {\small X}]$\lambda$6374 emission line is shown in Fig. \ref{fit}.
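The following greatly simplified Python sketch illustrates the structure of such a simultaneous fit: the object [O {\small I}] doublet with tied redshift and width and a fixed doublet flux ratio (taken here as $\sim$3:1, the value set by atomic parameters), plus a free, positive [Fe {\small X}] component on a local linear continuum. It is illustrative only; the actual fits use MPFIT over all of the lines listed above, and the redshifts, fluxes and noise level below are toy values.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

C = 299792.458  # km/s

def gauss(wave, flux, centre, sigma):
    # Gaussian with integrated area `flux`.
    return flux * np.exp(-0.5 * ((wave - centre) / sigma) ** 2) \
        / (np.sqrt(2.0 * np.pi) * sigma)

def model(p, wave):
    # p = [z_obj, sig_kms, F6300, F_FeX, z_FeX, sig_FeX_kms, c0, c1]
    z, sig, f6300, ffex, zfex, sigfex, c0, c1 = p
    out = c0 + c1 * (wave - wave.mean())                 # local linear continuum
    for rest, flux in [(6300.30, f6300), (6363.78, f6300 / 3.0)]:
        cen = rest * (1.0 + z)                           # tied redshift...
        out += gauss(wave, flux, cen, cen * sig / C)     # ...and tied width
    cen = 6374.51 * (1.0 + zfex)                         # [Fe X]: free kinematics
    out += gauss(wave, ffex, cen, cen * sigfex / C)
    return out

def residuals(p, wave, data, err):
    return (model(p, wave) - data) / err

wave = np.linspace(6320.0, 6470.0, 300)
rng = np.random.default_rng(1)
truth = [0.0104, 200.0, 10.0, 1.5, 0.0096, 400.0, 1.0, 0.0]
data = model(truth, wave) + rng.normal(0.0, 0.05, wave.size)
lo = [0.0, 50.0, 0.0, 0.0, 0.0, 50.0, -np.inf, -np.inf]  # [Fe X] flux >= 0
fit = least_squares(residuals,
                    x0=[0.01, 150.0, 5.0, 0.5, 0.01, 300.0, 1.0, 0.0],
                    bounds=(lo, np.inf), args=(wave, data, 0.05))
\end{verbatim}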
\subsection{Detections and upper limits}
After continuum subtraction, detections and upper limits are established by fitting a Gaussian
line profile to the spectra using MPFIT \citep{more1978, markwardt2009}. The
90 per cent flux limits are found by increasing the integrated flux in the fit until
$\Delta\chi^{2}$ has grown by 2.7; these results are presented in columns 2-4
of table \ref{results}. The last three columns show, for two cooling rates
and metallicities, the predicted luminosity of
the coronal lines assuming they are the product of cooling from the hot ICM
(see \citealt{sarazin1991}).
During the fit we fix the Gaussian normalisation to be positive. When determining
the upper limits we also constrain
the redshift and velocity width to be the same as
that of the [Fe {\small X}]$\lambda$6374 emission line. The coronal lines probe a
broad range of temperatures so it is perhaps not obvious that they should have an
identical velocity. The continuum is fit
in a region local to the emission lines.
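A minimal sketch of such a $\Delta\chi^{2}$ scan is given below (synthetic, line-free data with a fixed line shape; the real fits profile over the nuisance parameters with MPFIT rather than holding the profile fixed):
\begin{verbatim}
import numpy as np

def upper_limit(flux_grid, chi2_of_flux, dchi2=2.7):
    # Smallest trial flux whose chi^2 exceeds the minimum by
    # dchi2 = 2.7, i.e. the 90 per cent confidence bound.
    chi2 = np.array([chi2_of_flux(f) for f in flux_grid])
    above = np.nonzero(chi2 >= chi2.min() + dchi2)[0]
    return flux_grid[above[0]] if above.size else None

rng = np.random.default_rng(2)
wave = np.linspace(5280.0, 5330.0, 200)
err = 0.05
data = rng.normal(0.0, err, wave.size)        # continuum-subtracted, no line
shape = np.exp(-0.5 * ((wave - 5303.0) / 3.0) ** 2) \
    / (np.sqrt(2.0 * np.pi) * 3.0)            # unit-flux Gaussian profile

def chi2_of_flux(f):
    return np.sum(((data - f * shape) / err) ** 2)

print(upper_limit(np.linspace(0.0, 2.0, 2001), chi2_of_flux))
\end{verbatim}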
We report a 6.3$\sigma$ detection of excess flux at the expected wavelength of
[Fe {\small X}]$\lambda$6374 emission in a region 15'' away from the
nucleus (box 1, Fig. \ref{maps}). The quoted significance of this feature is from
a fit to the SSP model continuum subtracted spectrum. This method of continuum subtraction
gave the least significant result. We do not see
any evidence of this emission in a region of the galaxy where there is similar low
temperature X-ray gas nearer to the core (box 2). Fig. \ref{box1and2} shows a comparison of the spectra
in these two regions. The velocity width of the cool ($10^{4}$~K) gas near the core is
larger than that of the gas farther away in all cases except the region around the
[O {\small I}]$\lambda$6363 object emission. Here the box 1 spectrum shows an excess of flux
towards the red wing of the oxygen line; this is not seen in the box 2 spectrum or in
the line profile of the [O {\small I}]$\lambda$6300 line in box 1.
Fig. \ref{sky} shows the sky spectrum. This spectrum is formed from regions at the edge of
the field of view where we have no detectable object emission. The regions where we would
expect object [O {\small I}]$\lambda$6300, [O {\small I}]$\lambda$6363 and
[Fe {\small X}]$\lambda$6374 emission are indicated. There is no obvious sky emission in
the region around the [O {\small I}]$\lambda$6363 and [Fe {\small X}]$\lambda$6374
lines; as the excess flux is a very broad feature and is not
seen everywhere in the spectra, it is unlikely to be confusion due to sky emission.
Fitting the excess flux
requires a redshift different from that of the optical emission line nebulae: the line
in box 1 is blueshifted with respect to the optical nebulosity. The line of
sight velocity difference between the redshift of the H$\alpha$ nebulae and the best fits
to the excess flux is $\sim$~230\hbox{$\km\s^{-1}\,$}. This is comparable to the spread of the
line of sight velocities in the optical nebulosity itself.
The results from the three methods of continuum subtraction agree within
the error (see Fig. \ref{FeXfits}). In box 1, subtraction of
the continuum using a region where no significant excess flux is
observed gives, after subtraction of the [O {\small I}]$\lambda$6363 line, an integrated
flux of $1.8\times10^{-16} \pm 2\times10^{-17}$\hbox{$\erg\cm^{-2}\s^{-1}$}; subtraction of
a smoothed continuum gives an integrated flux in the line of
$1.6\times10^{-16} \pm 2\times10^{-17}$\hbox{$\erg\cm^{-2}\s^{-1}$}. In both these cases the error is
determined from the Poissonian error in the spectrum, which is similar
to the one sigma deviation after continuum subtraction of a nearby emission-free region
of the spectrum. Finally,
SSP model fitting gives
$1.5\times10^{-16} \pm 2.4\times10^{-17}$\hbox{$\erg\cm^{-2}\s^{-1}$}, including both the
Poissonian error and the additional error in the continuum fit.
Fig. \ref{TB2_fit}
shows, for comparison, the same region with the template [O {\small I}]$\lambda$6363
line subtracted. There is no significant excess emission seen to the red-wing of
the [O {\small I}]$\lambda$6363 line.
In regions containing strong stellar continuum features where the
background spectrum is highly structured ($\lambda<6000~\mathrm{\text{\AA}}$)
the error
after continuum subtraction is larger than the simple Poissonian error.
The noise features in these regions have
a width similar to the emission feature we are looking for ($10-20~\mathrm{\text{\AA}}$, see Fig. \ref{blue}). We estimate
the noise in these regions in two ways. First we calculate the one sigma
deviation from zero in an emission free region of the spectrum (after continuum subtraction)
on the scale of the variations in the continuum. Second we fit a Gaussian
of fixed width equal to the variations in the continuum. We then step the Gaussian
over the emission line free region pixel by pixel. Due to the fluctuations in
the noise the Gaussian fit will sometimes have a positive normalisation
and sometimes a negative one. The distribution of the Gaussian area values
in this region provides a second mechanism for estimating the one sigma
uncertainty. The errors determined in both fashions were consistent with each other.
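A sketch of the second of these estimates (stepping a fixed-width Gaussian across a line-free region of the continuum-subtracted spectrum and taking the scatter of the fitted areas) might look as follows; the spectrum, width and pixel scale are synthetic:
\begin{verbatim}
import numpy as np

def stepped_gaussian_sigma(wave, resid, width, centres):
    # Least-squares area of a fixed-width Gaussian at each trial
    # centre; the scatter of the areas estimates the one sigma
    # flux uncertainty on the scale of the continuum variations.
    areas = []
    for c in centres:
        g = np.exp(-0.5 * ((wave - c) / width) ** 2) \
            / (np.sqrt(2.0 * np.pi) * width)
        areas.append(np.dot(resid, g) / np.dot(g, g))
    return np.std(areas)

rng = np.random.default_rng(3)
wave = np.arange(5200.0, 5400.0, 0.6)
resid = rng.normal(0.0, 0.05, wave.size)   # line-free residual spectrum
print(stepped_gaussian_sigma(wave, resid, width=6.0, centres=wave[30:-30]))
\end{verbatim}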
Fig. \ref{all} shows
the upper limits overplotted on the continuum subtracted spectrum for the
[Fe {\small XIV}]$\lambda5303$, [Ca {\small XV}]$\lambda5445$,
[Ca {\small XV}]$\lambda5694$, and [Ni {\small XV}]$\lambda6702$ lines.
\begin{table*}
\centering
\begin{tabular}[h]{|c|c|c|c|c|c|c|}
\hline
\hline
Line & \multicolumn{3}{c}{Luminosity (\hbox{$\erg\s^{-1}\,$})} & \multicolumn{3}{c}{Model (\hbox{$\erg\s^{-1}\,$})} \\
& Box 1.$^{a}$ ($10^{37}$) & Box 2.$^{a}$ ($10^{37}$) & 20 arcsec$^{2}$.$^{b}$ ($10^{37}$)
& $Z=Z_{\odot}$ ($10^{37}$) & $Z=0.5Z_{\odot}$ ($10^{37}$) & $Z=Z_{\odot}$ ($10^{37}$) \\
\hline
$[\mathrm{Fe}$~{\scriptsize XIV}$]\lambda5303$ &$<$4.5&$<$4.0&$<$11.5&3.1&3.1&10.3\\
$[\mathrm{Ca}$~{\scriptsize XV}$]\lambda5445$ &$<$1.0&$<$0.9&$<$9.1&1.4&1.2&4.7\\
$[\mathrm{Ca}$~{\scriptsize XV}$]\lambda5694$ &$<$3.8&$<$1.4&$<$3.2&2.1&1.8&7.0\\
$[\mathrm{Fe}$~{\scriptsize X}$]\lambda6374$ &3.5$\pm$0.56&$<$7.0&25.8$\pm$8.3&4.1&4.2&13.7\\
$[\mathrm{Ni}$~{\scriptsize XV}$]\lambda6702$ &$<$1.1&$<$0.2&$<$1.7&0.2&0.2&0.6\\
\hline
\end{tabular}
\caption{The 90 per cent ($\Delta\chi^{2}=2.7$) upper limits on the luminosity
and detections of [Fe {\small X}]$\lambda$6374 emission
with one sigma errors in the regions specified. The luminosities quoted as detections
used the continuum subtraction technique which gave the lowest integrated flux; in all
cases this was the SSP model fit. Where the results are upper limits and
not clear detections, the continuum subtraction method yielding the largest
integrated flux was taken.
The approximate model luminosities are from \protect \cite{sarazin1991} and scale
simply with the cooling rate. Here they are given for a cooling rate of 3
(Columns 5 and 6) and 10
$\mathrm{M}_{\odot}~\mathrm{yr}^{-1}$ (Column 7). These cooling rates are determined by spectral
fitting of the X-ray spectrum \protect \citep{sanders2008b}.
The sizes of the regions in square arc seconds (\rm\thinspace kpc) are $^{a}$ 3.4
arcsec$^{2}$ (0.7 \rm\thinspace kpc$^{2}$) and $^{b}$ 20.1 arcsec$^{2}$ (4.2 \rm\thinspace kpc$^{2}$). These
limits have been corrected for galactic extinction but not for the intrinsic
reddening of NGC 4696. The upper limits on [Ca {\small XV}]$\lambda$5694
imply this is deficient by a factor of 2 in the inner 20 arcseconds compared with
the luminosity predicted from the X-ray cooling rate within this region.}
\label{results}
\end{table*}
\begin{table*}
\centering
\begin{tabular}[h]{|c|c|c|c|}
\hline
\hline
Line & & Flux (\hbox{$\erg\cm^{-2}\s^{-1}$}) & \\
& Box 1.$^{a}$ & Box 2.$^{a}$ & All pixels.$^{b}$ \\
\hline
$[\mathrm{Fe~XIV}]\lambda5303$ &$<1.9\times10^{-16}$&$<1.7\times10^{-16}$&$<4.9\times10^{-16}$ \\
$[\mathrm{Ca~XV}]\lambda5445$ &$<4.4\times10^{-17}$&$<3.7\times10^{-17}$&$<3.9\times10^{-16}$ \\
$[\mathrm{Ca~XV}]\lambda5694$ &$<1.6\times10^{-16}$&$<5.9\times10^{-17}$&$<1.4\times10^{-16}$ \\
$[\mathrm{Fe~X}]\lambda6374$ &$1.5\times10^{-16}\pm2.4\times10^{-17}$&$<3.0\times10^{-16}$&$1.1\times10^{-15}\pm3.5\times10^{-16}$ \\
$[\mathrm{Ni~XV}]\lambda6702$ &$<4.6\times10^{-17}$&$<6.6\times10^{-18}$&$<6.0\times10^{-17}$ \\
\hline
\end{tabular}
\caption{Same as Column 1-4 of table \ref{results} in units of flux.}
\label{results2}
\end{table*}
\subsection{Spatial distribution of the hot gas}
The detection of [Fe {\small X}]$\lambda$6374 emission in box 1 but not
in box 2 implies there may be some `clumpiness' to the spatial distribution of
the intermediate temperature gas. A small change in temperature
equates to a vast change in emissivity (see Ferland et al. in prep), so this clumpy appearance may be indicative
of small variations in temperature of the gas across the field of view.
Using IFS observations we can trace the morphology of the regions where there appears
to be an excess of flux towards the red wing of the [O {\small I}]$\lambda$6363
line. Fig. \ref{flux} shows the spatial distribution of this excess flux across the
field of view, overlaid with the contours of radio emission (see
\citealt{taylor2007}).
We binned the spectra in 5 by 5 regions; this size of binning was chosen as a compromise
between spatial resolution, signal to noise and goodness of fit to the stellar continuum.
We then fit each region with SSP models as described
in section \ref{sec:subtraction}. The [O {\small I}]$\lambda$6363 object emission line was
then fit and removed and the remaining `counts' between $6363-6380~\mathrm{\text{\AA}}$
were summed (no emission line shape was assumed here) to create the map of relative
flux, seen in the bottom panel of Fig. \ref{flux}. As an illustration we show, in
Fig. \ref{bins}, spectra from bin A and B, the regions of highest excess flux, and
spectra from bin C and D, where little excess flux is seen.
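A sketch of the map construction is given below, using a toy $(5\times5)$-binned residual cube; in the real case the cube holds the continuum- and [O {\small I}]-subtracted data:
\begin{verbatim}
import numpy as np

def residual_flux_map(cube, wave, lo=6363.0, hi=6380.0):
    # Sum the residual counts in [lo, hi] for every spatial bin
    # of an (ny, nx, nwave) cube; no line shape is assumed.
    sel = (wave >= lo) & (wave <= hi)
    return cube[:, :, sel].sum(axis=2)

rng = np.random.default_rng(4)
wave = np.arange(6300.0, 6450.0, 0.6)
cube = rng.normal(0.0, 0.05, (5, 5, wave.size))   # toy binned residuals
print(residual_flux_map(cube, wave))
\end{verbatim}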
\subsection{Velocity of the hot gas}
Assuming a Gaussian profile for the [Fe {\small X}]$\lambda$6374 emission line detected in
box 1 we can fit the velocity width of the detected broad excess flux. We do this
for each of the methods of continuum subtraction, as described above. The FWHM velocity width of
the feature determined from the different continuum subtraction methods is 300 to 650~\hbox{$\km\s^{-1}\,$}, from box 1 (the
velocity width of the [O {\small I}]$\lambda$6363 is $\sim200$~\hbox{$\km\s^{-1}\,$}). It
should be noted that the predicted integrated line profile of the coronal line emission
is model dependent \citep{sarazin1991}; however, our data are not deep enough to
allow for an investigation of the line profile. The thermal line
width of the coronal line emission is typically only $20-30$~\hbox{$\km\s^{-1}\,$}, so the broad width
of this feature is likely due to turbulent motions of the hot gas.
\cite{sarazin1991} show that for a gas of 2 million K with maximal turbulent broadening
(velocities limited by the sound speed in the hot ambient medium) predicted velocity
widths at FWHM are approximately 1700~\hbox{$\km\s^{-1}\,$}. The motions in the gas we observe are
therefore highly subsonic.
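As a back-of-envelope check on the quoted thermal width (standard thermal Doppler broadening with textbook constants, not a value taken from the references), for iron ions at $T\sim10^{6}$~K,
\[
\mathrm{FWHM}_{\mathrm{th}} = \sqrt{\frac{8\ln 2\,kT}{m_{\mathrm{Fe}}}}
\approx 2.35\left(\frac{1.38\times10^{-16}~\mathrm{erg\,K^{-1}}\times10^{6}~\mathrm{K}}
{56\times1.66\times10^{-24}~\mathrm{g}}\right)^{1/2}
\approx 29~\hbox{$\km\s^{-1}\,$},
\]
in line with the $20-30$~\hbox{$\km\s^{-1}\,$}\ quoted above.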
\section{Discussion}
We have conducted a deep search for five species of optical coronal
line emission, specifically [Fe {\small XIV}]$\lambda5303$,
[Ca {\small XV}]$\lambda5445$, [Ca {\small XV}]$\lambda5694$,
[Fe {\small X}]$\lambda6374$ and [Ni {\small XV}]$\lambda6702$.
These lines probe gas at temperatures of 2, 5, 5, 1.3 and 2.5 million K
respectively.
We report four upper limits
and a 6.3$\sigma$ detection of
[Fe {\small X}]$\lambda$6374 emission. This detected feature is broader than the
H$\alpha$ nebulosity at 10$^{4}$ K, having a FWHM velocity width of about 300$-$650~\hbox{$\km\s^{-1}\,$}
and is blueshifted with respect to this cooler emission.
[Fe {\small X}]$\lambda$6374 is emitted from gas at one million K, the lowest
temperature of the coronal lines in our wavelength range.
Assuming that coronal line emission is a product of the low level of
residual cooling out of the hot gas observed in the central galaxies
of galaxy clusters we can determine the rate of cooling from the flux of
these lines. Upper limits for the detection of [Fe {\small XIV}]$\lambda$5303 and
[Ni {\small XV}]$\lambda6702$ are consistent with the X-ray cooling rate,
in the absence of heating, of $\sim$10~\hbox{$\Msun\yr^{-1}$}\ \citep{sanders2008b} above
0.8~\rm\thinspace keV, over
a region 20 arcseconds in diameter. However in the case of
[Fe {\small XIV}]$\lambda$5303 we stress the difficulties involved in
the subtraction of the stellar continuum which introduces a larger
error into the detection limit (see section \ref{sec:subtraction} and Fig.
\ref{blue}).
The rate of cooling implied by the detection of [Fe {\small X}]$\lambda$6374
emission is $\sim$~20\hbox{$\Msun\yr^{-1}$}\ in the central 20 arcsec$^2$,
twice that inferred from X-ray observations and from the upper limits of
the other coronal lines (see table \ref{results}). If this emission is solely due to the
cooling of the ICM in this cluster we would expect similar or
greater detections from the [Fe {\small XIV}]$\lambda$5303 and [Ca {\small XV}]
lines. An alternative for the iron emission is that it originates in gas which
was cooler and has been heated.
There are many possible sources of heating, including shocks, photoionisation from
central active galactic nuclei (AGN), thermal conduction, turbulent mixing or
heating associated with the radio bubbles. We caution that recent changes in the atomic parameters
indicate that the emissivity of [Fe {\small X}]$\lambda$6374 is larger than
previously thought. This would imply a slightly lower cooling rate than calculated here
and will be investigated in further work by Ferland et al. (in prep).
C {\small IV} 1549$\mathrm{\text{\AA}}$
emission has been discovered by \cite{sparks2009} in the outer regions of
M87, co-located with its H$\alpha$ filaments. This probes the slightly
lower temperatures of $\sim$10$^{5}$~K. O {\small VI} 1032, 1035 $\mathrm{\text{\AA}}$ emission
($\sim$10$^{5.5}$~K) has also been seen in a small sample of cool core clusters
\citep{oegerle2001, bregman2006}. \cite{bregman2006} note that in Abell 1795
the cooling rate implied by the 10$^{5.5}$~K gas is larger than that implied by the
X-ray emission, as is the case here. They suggest that non-steady cooling
of material may be the cause of this conflict.
Most AGN show emission from forbidden, high-ionisation lines, the
so-called extended coronal line regions (CLRs) \citep{mullaney2009, mazzalay2010}.
This emission is believed to be from a region outside of the broad-line
region and inside the narrow-line region and lines are often observed to be
blueshifted by 500 or more \hbox{$\km\s^{-1}\,$}. \cite{mazzalay2010} find that photoionisation
from the central AGN is the likely major ionisation mechanism for these
regions. Their results also confirm the observation that higher-ionisation
lines are emitted from a more compact area while the lower-ionisation lines
can be found further into the narrow-line region.
For a typical Seyfert
galaxy with an ionising luminosity $L_{\mathrm{ion}}=10^{43.5}$~\hbox{$\erg\s^{-1}\,$},
\cite{ferguson1997} find that the region containing [Fe {\small X}]$\lambda$6374
emission should be restricted to the inner 20~\rm\thinspace pc.
The size of the CLR scales with the ionising luminosity of the central source as
$L_{\mathrm{ion}}^{1/2}$ \citep{ferguson1997}. Box 1, where we find significant
[Fe {\small X}]$\lambda$6374 emission, is located $\sim$15'' from the centre
of the radio source, a distance of $\sim$3~\rm\thinspace kpc. Scaling up from the values of
\citet{ferguson1997}, a region of size $\sim$3~\rm\thinspace kpc\ implies a huge ionising
luminosity of 10$^{47}$~\hbox{$\erg\s^{-1}\,$}. \cite{taylor2006} have found the nucleus of
NGC 4696 to be very faint. Their upper limit on the luminosity from X-ray
observations between $0.1-10$~\rm\thinspace keV\ is $\sim$10$^{40}$~\hbox{$\erg\s^{-1}\,$}. The spatial
extent over which we find the coronal line emission leads us to rule out the
possibility of photoionisation by the central AGN. It is possible, looking at
the residual flux over the spectral region where we would expect [Fe {\small X}]$\lambda$6374 emission,
that the spatial distribution of the 1 million K gas
(Fig. \ref{flux}) traces that of the 10$^{4}$~K filaments
and large dust lane; however, this is a tentative result and would need confirmation
with deeper observations.
Another explanation for the excess flux, such as emission from
a merging clump of gas or another filament, is unlikely, as any extra component
of [O {\small I}]$\lambda$6363 emission should also be present in the
[O {\small I}]$\lambda$6300 emission and no red wing is observed on this line.
One explanation for the blueshift could be that the radio source is interacting
with and driving the plasma away, possibly entraining cooler gas with it.
Fig. \ref{ha_nII} shows a two component fit to the [N {\small II}] and H$\alpha$
emission in the box 1 spectrum. There is evidence for a blueshifted component
to these lines and the best-fitting Gaussians imply a blueshifted velocity of
$\sim$140\hbox{$\km\s^{-1}\,$}. This is less than the observed blueshift of the
[Fe {\small X}]$\lambda$6374 line ($\sim$230\hbox{$\km\s^{-1}\,$}) so is inconclusive.
\cite{sanders2008b} have used deep {\it XMM-Newton}\ Reflection Grating Spectrometer
(RGS) observations to show that the centre of the Centaurus cluster
contains X-ray emitting gas down to at least 4 million K through
detections of the Fe {\small XVII} line and limits on the O {\small VII} emission.
The metal abundances are very high in the inner 30~\rm\thinspace kpc\ of this
cluster \citep{sanders2006}. The iron abundance of the
hot gas is $\sim$2 times solar and the calcium
abundance $\sim$3 times solar. The {\it Chandra}\ spectra show evidence of an
off-centre abundance peak, at $\sim$20~\rm\thinspace kpc. The best traced elements
of Fe and Si then show a decline in abundance in the nucleus.
At these high abundance values we expect to detect, in our optical spectra,
lines of [Ca {\small XV}], probing gas of 5 million K, detected already
by its X-ray Fe {\small XVII} emission. Contrary to this expectation
we do not detect any significant calcium emission. Upper limits
in most cases suggest the rate of radiative cooling is less than that inferred from
the X-ray spectra of hotter gas and provide a more stringent constraint than the other
coronal emission lines.
We explore four possible explanations for this result: inaccurate continuum
subtraction; a lower cooling rate; a low calcium abundance; and
calcium deficiency in the intermediate temperature gas. We will deal
with each of these points in turn.
The continuum subtraction has been attempted in a variety of ways (see section
\ref{sec:subtraction}) and in all cases produces similar results. Where
these results were upper limits and not clear detections the largest value was
taken and is given in table \ref{results}. Where the stellar spectra exhibit
many features the uncertainties are large and the detection of broad line emission is
very difficult. However, in the absence of a template spectrum in which we
are sure there is no emission from intermediate temperature gas, and of observations
with similar abundance ratios across the field of view, this is the best we
can do.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{radio_scan.eps}
\includegraphics[width=0.5\textwidth]{spat_dist3.eps}
\caption{The spatial distribution of the 1 million K gas. Above: The [N {\small II}]$\lambda$6583
emission, overlaid with VLA 5 \rm\thinspace GHz\ radio contours
(\citealt{taylor2007}). The white box indicates the region
where we have our deepest observations. Below: This region binned on a 5 by 5 fibre
basis showing the residual flux at the position of the [Fe {\small X}]$\lambda$6374 emission across
the field of view. The brightest bins are regions of highest flux. The spectra from
bins A-D are shown in Fig. \ref{bins}. For comparison box 1 and 2 are marked in red. \label{flux}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{bin1234.eps}
\caption{A comparison of the spectra from four of the regions marked on Fig. \ref{flux}. The
spectra have been continuum subtracted and the [O {\small I}]$\lambda$6363 object
emission line and the two sky lines to the left of the oxygen line have been fitted and removed.
The bottom panel shows the residuals between the two spectra. \label{bins}}
\end{center}
\end{figure}
The cooling rates are derived for cooling without heating and so may be lower
than stated; they may also vary across the temperature range probed. Investigations
of the quantity of molecular gas and dust and of recent star formation in these
objects may help constrain the quantity of cooling, to very low temperatures, that we
expect. A much lower cooling rate may explain the lack of [Fe {\small XIV}]$\lambda$5303
and [Ni {\small XV}]$\lambda$6702 emission we see; however, the calcium lines should
originate from a temperature range that has been directly observed from
measurements of the Fe {\small XVII} line ratios, so the lack of this emission
is not easily understood.
The Centaurus cluster has very high metallicities which peak at
a radius of 20~\rm\thinspace kpc, then decline towards the nucleus \citep{sanders2006}. This enhancement and
decline in the metallicities are reproduced using {\it XMM-Newton}\ and {\it Chandra}\ CCD
observations as well as {\it XMM-Newton}\ RGS observations. An off-centre abundance peak with a depression
in abundance towards the centre of clusters and groups is often observed in
objects with cool cores (see for example \citealt{degrandi2001, rasmussen2007} and
\citealt{degrandi2009}).
There are a number of biases and uncertainties in abundance measurements from
spectral modelling. Inaccuracies in modelling the temperature structure of galaxy
clusters can introduce an `Fe bias' \citep{buote2000}. Here the measured Fe
abundance is lower than the actual value due to fitting a multiphase gas with
single or only a couple of temperature components. The opposite effect,
an `inverse Fe bias', is also seen in cool core clusters when a single temperature
model is fitted to a multiphase plasma with temperatures between $2-4$~\rm\thinspace keV\
\citep{rasia2008}. In the case of Centaurus the high spectral resolution of
{\it XMM-Newton}\ RGS allows the temperature to be constrained with line strength ratios giving
a more robust check on the temperature components of the models \citep{sanders2008b}.
Sedimentation in the centre of galaxy clusters \citep{fabian1977b}
can cause the metal abundances to rise, and could be
reversed by the effects of thermal diffusion \citep{shtykovskiy2010}.
Neglecting the effects of resonance scattering also underestimates the abundances
of metals in clusters. This effect is at most 10 per cent,
so cannot fully explain the central abundance dips observed in galaxy clusters \citep{sanders2006b}. The drop in
abundances in the Centaurus cluster may be explained with a complex model
involving three temperature components and additional absorption; however,
the errors on the innermost radial bins become sufficiently large that it is
impossible to tell if the drop is real or not \citep{fabian2005b}.
A major contributor to a central abundance drop must be depletion onto dust.
The central few \rm\thinspace kpc\ of the hot gas will be dominated by stellar mass loss, in which most
metals are bound in dust grains. They will slowly be introduced into the hot phase
by sputtering, at a rate dependent on grain size \citep{draine1979, barlow1978}. Much iron,
however, could be injected into the hot phase through SNIa.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{ha_nII_decomposistion.eps}
\caption{A two component fit to the H$\alpha$ and [N {\small II}] emission from box 1.
There is evidence for a slightly blueshifted component to these lines with a blueshifted
velocity of $\sim$140\hbox{$\km\s^{-1}\,$}. This is smaller than the velocity shift seen in the
[Fe {\small X}]$\lambda$6374 emission line of $\sim$230\hbox{$\km\s^{-1}\,$}. \label{ha_nII}}
\end{center}
\end{figure}
Calcium
is one of the most depleted of all refractory elements in the presence of dust
\citep{field1974, spitzer1975, cowie1986}, with depletions of 10$^{-4}$
relative to solar abundance typical in dense clouds. NGC 4696
hosts a huge dust lane which almost completely encircles the core and
spirals out to the north east, tracing the morphology of the H$\alpha$ filaments.
The X-ray absorption column density is also highest in the same
region \citep{crawford2005}. Moreover, it has a large quantity of infrared-emitting dust in its nucleus
\citep{goudfrooij1995, kaneda2007}. \cite{kaneda2007} show from
Spitzer Multiband Imaging Photometer (MIPS) 24~$\mu$m
surface brightness profiles that the dust emission increases steeply in the inner
30~arcseconds (6.3~\rm\thinspace kpc), a region slightly larger than our field of view
and where we observe the metallicity drop.
The cumulative gas mass obtained from
the X-ray emitting hydrogen density profile of \cite{graham2006} in the inner 30~arcseconds or
6.3~\rm\thinspace kpc\ (10~arcseconds, 2.1~\rm\thinspace kpc) is $9.4\times10^{8}$~\hbox{$\rm\thinspace M_{\odot}$}\
($9.2\times10^{7}$~\hbox{$\rm\thinspace M_{\odot}$}). Assuming a dust to gas ratio similar to our
Galaxy \citep{crawford2005, edge2010} we get a total dust mass of $9.4\times10^{6}$~\hbox{$\rm\thinspace M_{\odot}$}\
($9.2\times10^{5}$~\hbox{$\rm\thinspace M_{\odot}$}). The dust mass estimated by \cite{goudfrooij1995}
using the Infrared Astronomical Satellite (IRAS) is $4.7\times10^{6}$~\hbox{$\rm\thinspace M_{\odot}$}\
and the Spitzer MIPS result from \cite{kaneda2007} is $4.2\times10^{5}$~\hbox{$\rm\thinspace M_{\odot}$}. These
values are considered to be lower limits to the dust mass as the instruments
are insensitive to very cold dust and imply that the inner interstellar medium
in NGC 4696 is highly deficient in refractory elements.
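For reference, the conversion above corresponds to a Galactic dust-to-gas mass ratio of roughly $1/100$ (the ratio implicit in the numbers quoted), so that, for the inner 30~arcseconds,
\[
M_{\mathrm{dust}} \approx 10^{-2}\times9.4\times10^{8}~\hbox{$\rm\thinspace M_{\odot}$}
= 9.4\times10^{6}~\hbox{$\rm\thinspace M_{\odot}$}.
\]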
A number of cool core clusters have been found to be deficient in calcium in the warm (10$^{4}$~K) emission
line nebulae, where [Ca {\small II}] and [Ca {\small V}] emission lines would be expected
\citep{ferland1993, donahue1993}. Our
spectra of NGC 4696 are consistent with this since they
show no evidence for [Ca {\small II}] or [Ca {\small V}] emission in the inner regions
of NGC 4696.
These results indicate that NGC 4696 is deficient in gas-phase calcium at temperatures
below $\sim$5 million K. The lack of this warm and intermediate temperature gas
phase in the inner 20 arcseconds implies the calcium in dust has never been part of
the hot ICM and was probably introduced in dusty stellar mass-loss.
The high calcium abundance in the hot gas, discovered by \cite{sanders2006}, has its
greatest contribution from the hot 4~\rm\thinspace keV\ emission (the contribution from the best
fit to the lower temperature component, 0.5~\rm\thinspace keV, is negligible). This is consistent
with the dust being sputtered by the hotter gas at large radii ($\geq$few~\rm\thinspace kpc), where
there are also no shielding `cold' filaments.
\section{Conclusion}
We report the detection of [Fe {\small X}]$\lambda$6374 coronal line emission in
NGC 4696. This emission probes temperatures of 1 million K and is the first detection
of intermediate temperature gas in this object. We fail to detect emission
from coronal lines of higher temperature gas including those of [Ca {\small XV}]
which probe gas of 5 million K and which we had expected to detect due to the
high abundance of calcium in the hot X-ray emitting gas.
We conclude that calcium is likely to be depleted in the dusty central regions
of NGC 4696. This is consistent with our apparent lack of [Ca {\small V}] and [Ca {\small II}]
ions which probe lower temperature gas and with the negligible contribution to the
abundance in the hot gas of the lower temperature 0.5~\rm\thinspace keV\ calcium lines. The dust in the central
region of the galaxy is likely due to stellar mass-loss and has survived as dust grains due to the
shelter of surrounding cooler gas.
The abundance of calcium is higher in the outer 4~\rm\thinspace keV\ ICM since dust is sputtered
there, where it is less protected by the cooler
surroundings and the cold filaments. Deeper, high resolution X-ray observations
which better constrain the central abundances of Fe, Si, Ca and Ne would help to
distinguish between the processes that contribute to the metal abundance.
There is now strong evidence to show that the central galaxies in many cool core
clusters are playing host to large quantities of dust and cool gas (see for example
\citealt{mcnamara1992, edge2010}). \cite{donahue1993} have also shown that
the [Ca {\small II}] doublet is much weaker than expected in a sample of BCGs.
They conclude similarly that calcium is most likely depleted into dust grains in their
sample.
The cooling rate inferred from the [Fe {\small X}]$\lambda$6374 emission is large
(20~\hbox{$\Msun\yr^{-1}$}\ in a spatial region of 20 arcsec$^{2}$) compared with that determined from other lines in the optical and
X-ray spectrum. This and the apparent lack of [Fe {\small XIV}]$\lambda$5303
and [Ni {\small XV}]$\lambda$6702 emission rule out a steady cooling flow from the
10$^7$~K gas in this object. Some gas may however be cooling non-radiatively for
example by mixing with the colder gas. The strength of the [Fe {\small X}] emission
suggests that the million K gas is being heated rather than condensing out of the hot ISM.
\section{Acknowledgements}
REAC acknowledges STFC for financial support. ACF thanks the Royal Society. GJF gratefully acknowledges
support by NSF (0607028 and 0908877) and NASA (07-ATFP07-0124). REAC would also like to thank
Rob Sharp for allowing use of his IFU IDL routines and Ryan Cooke, Paul Hewett and Ben Johnson for
help and valuable discussions.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet
Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics
and Space Administration.
The \hbox{\sc STARLIGHT}\ project is supported by the Brazilian agencies CNPq, CAPES and
FAPESP and by the France-Brazil CAPES/Cofecub program.
The data published in this paper have been reduced using VIPGI, designed by the VIMOS Consortium
and developed by INAF Milano.
The figures in this paper were produced using {\sc Veusz}\footnote[2]{http://home.gna.org/veusz/}.
\bibliographystyle{mnras}
\section{Introduction}
Economic data often contain large clusters such as countries, regions, villages, or firms. Units within these clusters can be expected to influence one another or are influenced by the same political, environmental, sociological, or technical shocks. Several analytical and computer-intensive procedures such as the bootstrap are available to account for the presence of data clusters. These procedures generally achieve consistency by letting the number of clusters go to infinity. Numerical evidence by \citet{bertrandetal2004}, \citet{mackinnonwebb2014}, and others in the context of mean regression suggests that this type of asymptotic approximation often causes substantial size distortions when the number of clusters is small or the clusters are heterogeneous. True null hypotheses are rejected far too often in both situations. \citet{hagemann2017} shows that this phenomenon is also present in quantile regression. In this paper, I develop a generic method for inference on the entire quantile or regression quantile process in the presence of a finite number of large and arbitrarily heterogeneous clusters. The method, which I refer to as \emph{cluster-randomized Kolmogorov-Smirnov} (\emph{CRK}) test, asymptotically controls size by generating Kolmogorov-Smirnov statistics that exhibit enough distributional symmetry at the cluster level such that randomization tests \citep{fisher1935, canayetal2014} can be applied. The CRK test is not limited to the pure quantile regression setting and can be used in distributional difference-in-differences estimation \citep{callawayli2019} and related situations where quantile treatment effects are identified by between-cluster comparisons. The CRK test is free of user-chosen parameters, powerful against fixed and root-$n$ local alternatives, and performs well at conventional significance levels with as few as twelve clusters if parameters are identified between clusters. If parameters are identified within clusters, then even five clusters are sufficient for inference.
Quantile regression (QR), introduced by \citet{koenkerbassett1978}, is an important empirical tool because it can quantify the effect of a set of covariates on the entire conditional outcome distribution. An issue with QR in the presence of clustering is that estimates normalized by their asymptotic covariance kernel have standard normal marginal limit distributions but are no longer pivotal for any choice of weight matrix \citep{hagemann2017}. Cluster-robust tests about the QR coefficient function therefore have asymptotic distributions that cannot be tabulated for inference about
ranges of quantiles. Even if only individual quantiles are of interest, consistent covariance matrix estimation in large clusters is challenging. It requires knowledge of an explicit ordering of the dependence structure within each cluster combined with a kernel and bandwidth choice to give distant observations less weight. Because time has a natural order, this weighting is easily done for time-dependent data but ordering data within states or villages may be difficult or impossible. The common empirical strategy of simply assuming that the clusters are small and numerous enough to satisfy a central limit theorem circumvents these issues but can lead to substantial size distortions with as few as 20 clusters \citep{hagemann2017}. This remains true if a cluster-robust version of the bootstrap is used. Distortions can be especially severe if clusters differ greatly in their size and dependence structure.
I show that the CRK test is robust to each of these concerns: It performs well even when the number of clusters is small, the dependence varies from cluster to cluster, and the cluster sizes are heterogeneous. The reason for this robustness is that the CRK test does not rely on clustered covariance matrices to rescale the estimates. I instead use randomization inference to generate random critical values that automatically scale to the data. There are no kernels, bandwidths, or spatio-temporal orderings of the data to choose. The test achieves consistency with a finite number of large but heterogeneous clusters under interpretable high-level conditions. Despite being based on randomization inference, the CRK test can perform standard (non-sharp) inference on entire quantile or regression quantile processes. Randomization is performed with a fixed set of estimates and does not require repeated estimation to obtain its critical values.
The randomization method underlying the CRK test was first used in the cluster context by \citet{canayetal2014} as a way to perform inference on a finite-dimensional parameter with Student $t$ and Wald statistics in least squares regression. They do not consider inference on quantile functions or Kolmogorov-Smirnov statistics. Here, I considerably extend the scope of their method under explicit regularity conditions to allow for inference on the entire QR process and related objects. The proofs below are fundamentally different from those of \citeauthor{canayetal2014}\ to account for the infinite-dimensional setting and do not rely on the Skorokhod almost-sure representation theorem. A practical issue with their method is that they require treated clusters to be matched ex-ante with an equal number of control clusters. Each match corresponds to a separate test and two researchers working with the same data can reach different conclusions based on which matches they choose. If there is not an equal number of treated and control clusters, then some clusters have to be combined or dropped in an ad-hoc manner. The CRK test sidesteps these issues completely and explicitly merges all potential tests into a single, uniquely determined test decision using results of \citet{ruschendorf1982}.
Cluster-robust inference in linear regression models has a long history; recent surveys include \citet{cameronmiller2014} and \citet{mackinnonnielsenwebb2022}.
\citet{chenetal2003}, \citet{wanghe2007}, \citet{wang2009}, \citet{parentesantossilva2013}, and \citet{hagemann2017} provide bootstrap and analytical methods for cluster-robust inference in QR models. \citet{yoongalvao2020} discuss the situation where clusters arise from correlation of individual units over time. All of these papers require the number of clusters to go to infinity for consistency. The CRK test differs from these papers because it is based on randomization inference and is consistent with a finite number of clusters.
Several papers show that pointwise inference with a fixed number of clusters is possible under a variety of conditions. \citet{ibragimovmueller2010, ibragimovmueller2016} use special properties of the Student $t$ statistic to perform inference on scale mixtures of normal random variables. \citet{besteretal2014} use standard cluster-robust covariance matrix estimators but adjust critical values under homogeneity assumptions on the clusters. \citet{canayetal2018} show that certain cluster-robust versions of the wild bootstrap can be valid under strong homogeneity assumptions with a fixed number of clusters. \citet{hagemann2019b} adjusts permutation inference for arbitrary heterogeneity at the cluster level but his bounds only apply to finite-dimensional objects. All of these methods can be used for inference at a single quantile but not for simultaneous inference across ranges of quantiles. In contrast, the CRK test provides uniformly valid inference on the entire quantile process even if clusters are arbitrarily heterogeneous.
The remainder of the paper is organized as follows: Section \ref{s:sym} establishes new results on randomization inference on Gaussian processes. Section \ref{s:asym} uses these results to show consistency of the CRK test and gives specific examples where the test applies, including quantile difference-in-differences. Section~\ref{s:mc} illustrates the finite sample behavior of the test in Monte Carlo experiments and an empirical example using Project STAR data. The appendix contains proofs.
I use the following notation: $1\{\cdot\}$ is the indicator function, cardinality of a set $A$ is $|A|$, the smallest integer larger than $a$ is $\lceil a \rceil$, and the largest integer smaller than $a$ is $\lfloor a \rfloor$. The minimum of $a$ and $b$ is denoted by $a \wedge b$.
Limits are as $n\to\infty$ unless noted otherwise. Convergence in distribution under the parameter $\delta$ is denoted by $\wto{\delta}$.
\section{Randomization inference on Gaussian processes}\label{s:sym}
In this section I study the size of randomization tests when the data come from heterogeneous Gaussian processes. I then analyze asymptotic size when a limiting experiment is characterized by such processes. The next section applies these generic results to the quantile setting.
I first introduce some notation for randomization tests and Gaussian processes that I will use throughout the paper. Define $\mathcal{G} = \{1, -1\}^q$ as the $q$-dimensional product of $\{1, -1\}$ and, for $g = (g_1,\dots,g_q)\in \mathcal{G}$, define $g\mapsto gx$ as the direct product $gx = (g_1x_1,\dots, g_qx_q)$ of $g$ and $x\in\mathbb{R}^q$. Let $u\mapsto X_j(u)$, $1\leq j\leq q$, be independent mean-zero Gaussian processes with $u\in\mathcal{U}$, where $\mathcal{U}$ is a compact subset of $(0,1)$. A stochastic process is Gaussian if and only if $(X_j(u_1), \dots, X_j(u_m))$ is multivariate normal for any finite collection of indices $u_1,\dots,u_m\in\mathcal{U}$. Symmetry about zero implies that $(X_j(u_1), \dots, X_j(u_m))$ and $-(X_j(u_1), \dots, X_j(u_m))$ are identically distributed. Because this is true for every finite collection of indices $u_1,\dots,u_m\in\mathcal{U}$, $X_j$ and $-X_j$ have the same (finite-dimensional) distributions. Independence and symmetry together imply that $u\mapsto X(u) = (X_1, \dots, X_q)(u)$ and $u\mapsto gX(u)$ have the same distribution for every $g\in\mathcal{G}$ as long as $X$ has mean zero. The quantile and quantile-like processes discussed in the next section have this property under the null hypothesis. Deviations from the null cause non-zero means and therefore also asymmetry in $X$. The goal of this section is to develop a test of the null hypothesis of symmetry about zero,
\begin{equation}\label{eq:invar}
H_0\colon X(u) \sim gX(u), \qquad \text{all $g\in\mathcal{G}$, all $u\in\mathcal{U}$}.
\end{equation}
To test this hypothesis, I use the Kolmogorov-Smirnov-type statistic \begin{equation}\label{eq:crk} T(X) = \sup_{u \in \mathcal{U}}\Biggl(\frac{1}{q}\sum_{j=1}^q X_j(u)\Biggr).
\end{equation}
This statistic is large if symmetry is violated because the mean of the $X_j$ is positive. I focus on one-sided tests to the right for simplicity but this is not restrictive. To test whether the mean is negative, simply use $-X$ instead of $X$ in the definition of $T$. These test statistics can be combined for two-sided tests. I explain this in detail at the end of Section \ref{s:asym}.
Randomization inference uses distributional invariance to generate null distributions and critical values. In the present case, $X$ is distributionally invariant to all transformations $g$ contained in $\mathcal{G}$ because $X$ is symmetric. Let $T^{(1)}(X, \mathcal{G} ) \leq $ $T^{(2)}(X, \mathcal{G} ) \leq \dots \leq T^{(|\mathcal{G}|)}(X, \mathcal{G} )$ be the $|\mathcal{G}| = 2^q$ ordered values of $T(g X)$ across $g\in\mathcal{G} $ and let \begin{equation}\label{eq:critval}T^{1-\alpha}(X,\mathcal{G} ) := T^{(\lceil(1-\alpha) |\mathcal{G} |\rceil)}(X, \mathcal{G} )\end{equation}
be the $1-\alpha$ quantile of these values. The randomization test function is then
\begin{equation}\label{eq:crktest}
\varphi_{\alpha}(X, \mathcal{G} ) = 1\{T(X) > T^{1-\alpha}(X,\mathcal{G} )\}.
\end{equation}
If $\mathcal{U}$ is a finite set, distributional invariance under $H_0$ immediately implies $\ev \varphi_{\alpha}(X, \mathcal{G} ) = \ev \varphi_{\alpha}(gX, \mathcal{G} ).$ By an argument due to \citet{hoeffding1952}, the test function must satisfy $|\mathcal{G} |\alpha \geq \sum_{g\in\mathcal{G} } \varphi_{\alpha}(gX, \mathcal{G} )$ and, after taking expectations on both sides, equality of the distributions yields $|\mathcal{G} |\alpha \geq \ev \sum_{g\in\mathcal{G} } \varphi_{\alpha}(gX, \mathcal{G} ) = \sum_{g\in\mathcal{G} } \ev \varphi_{\alpha}(gX, \mathcal{G} ) = |\mathcal{G} |\ev \varphi_{\alpha}(X, \mathcal{G} )$. This implies $\ev \varphi_{\alpha}(X, \mathcal{G} ) \leq \alpha$, which makes $T^{1-\alpha}(X,\mathcal{G} )$ an $\alpha$-level critical value.
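The procedure is simple to implement. The following Python sketch (illustrative only; the function and variable names are my own, and the statistic is evaluated on a finite grid of points in $\mathcal{U}$) enumerates all $2^q$ sign changes and compares $T(X)$ with the randomization critical value \eqref{eq:critval}:
\begin{verbatim}
import itertools
import numpy as np

def crk_test(X, alpha=0.05):
    # X: (q, m) array; row j holds X_j evaluated on a grid of m
    # points in U. Returns (reject, T(X), critical value).
    q = X.shape[0]
    signs = np.array(list(itertools.product([1.0, -1.0], repeat=q)))
    stats = np.max(signs @ X / q, axis=1)        # T(gX) for every g in G
    k = int(np.ceil((1.0 - alpha) * len(stats))) # ceil((1-alpha)|G|)
    crit = np.sort(stats)[k - 1]                 # order statistic T^{(k)}
    T = np.max(X.mean(axis=0))                   # T(X), i.e. g = (1,...,1)
    return T > crit, T, crit

rng = np.random.default_rng(5)
q, m = 12, 99                                    # twelve clusters
X = rng.normal(0.0, 1.0, (q, m)).cumsum(axis=1) / np.sqrt(m)
print(crk_test(X, alpha=0.05))                   # mean-zero paths: H0 holds
\end{verbatim}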
If $\mathcal{U}$ is not finite, this argument does not immediately go through because \eqref{eq:crk} is a statement about possibly uncountably many $u \in \mathcal{U}$ but I have only established equivalence of the finite-dimensional distributions. However, as the following theorem shows, the conclusion that the test controls size holds nonetheless. The proof of the theorem extends \citeauthor{hoeffding1952}'s proof to stochastic processes with smooth sample paths by showing that \eqref{eq:invar} implies equality of the distributions of $T(gX)_{g\in\mathcal{G}}$ and $T(g\tilde{g}^{-1}X)_{g\in\mathcal{G}}$ for every $\tilde{g}\in\mathcal{G}$. (Here $g^{-1} = -g$ is the inverse of $g$.) I prove that this is enough for \citeauthor{hoeffding1952}'s argument to go through as long as at least one of the processes has positive variance at every $u$.
\begin{theorem}\label{th:hoeffding}
Let $X_1, \dots, X_q$ be independent mean-zero Gaussian processes with continuous sample paths indexed by the compact set $\mathcal{U}\subset (0,1)$ and let $u\mapsto X(u) := (X_1,\dots, X_q)(u)$. If there is a $j\in\{1,\dots, q \}$ such that $\prob(X_j(u) = 0) = 0$ for all $u\in\mathcal{U}$, then $\ev \varphi_{\alpha}(X, \mathcal{G} )\leq \alpha$.
\end{theorem}
If $X$ is only an approximation in the sense that $X_n\leadsto X$ as a process in $\ell^\infty(\mathcal{U})^q$, the space of bounded maps from $\mathcal{U}$ to $\mathbb{R}^q$, then the conclusions of the theorem still hold as long as the non-degeneracy conditions are strengthened. Here and in the following I tacitly assume that a process is indexed by a compact $\mathcal{U}\subset (0,1)$.
\begin{theorem}\label{th:asysize} If $X_n \leadsto X = (X_1,\dots, X_q),$ where the $X_1, \dots, X_q$ are independent mean-zero Gaussian processes with continuous sample paths that satisfy $\prob(X_j(u) = 0) = 0$ and $\prob(X_j(u) = -X_j(u'))=0$ for all $u,u'\in \mathcal{U}$ and $1\leq j\leq q$, then $\ev \varphi_{\alpha}(X_n, \mathcal{G} ) \to \ev \varphi_{\alpha}(X , \mathcal{G} )$.
\end{theorem}
\begin{remarks}
(i)~For the non-degeneracy assumption $\prob(X_j(u) = -X_j(u'))=0$ to fail, a Gaussian process with uniformly continuous sample paths has to traverse, with certainty, from $X_j(u)$ to $X_j(u') = -X_j(u)$ while maintaining a positive variance along the entire path. The process would have to have identical variances at time $u$ and $u'$ but be perfectly negatively correlated at those times, which is impossible for Brownian bridges and related processes that typically arise in a quantile context. Still, such Gaussian processes exist and have to be ruled out.
(ii)~The main difficulty of the proof of Theorem \ref{th:asysize} is that the critical value $T^{1-\alpha}(X_n, \mathcal{G})$ does not settle down in the limit and is highly dependent on $T(X)$. The assumptions of Theorem~\ref{th:asysize} rule out degeneracies in the limit process that could lead to ties in the order statistics of $\{T(gX) : g\in\mathcal{G} \}$. This would put probability mass on the boundary of the set $\{T(X) > T^{1-\alpha}(X, \mathcal{G})\}$ and prevent application of the portmanteau lemma. \citet{canayetal2014} use a delicate construction based on Skorokhod's representation theorem to account for the randomness in the limit. While these results could be extended from vectors to processes, I instead give a direct proof that I can also use to analyze the behavior of the test under both local and global alternatives when I discuss quantile processes in the next section.\xqed{\square}
\end{remarks}
\section{Inference on quantile processes with a finite number of clusters}\label{s:asym}
This section gives high-level conditions under which asymptotically valid inference on quantile processes and related objects can be performed even if the underlying data come from a fixed number of heterogeneous clusters.
\subsection{Inference when parameters are identified within clusters}\label{ss:within}
Suppose data from $q$ large clusters (e.g., counties, regions, schools, firms, or stretches of time) are available. Throughout the paper, the number of clusters $q$ remains fixed and does not grow with the number of observations~$n$. Observations are independent across clusters but dependent within clusters. Data from each cluster $1\leq j\leq q$ separately identify a quantile or quantile-like scalar function $\delta : \mathcal{U}\to \mathbb{R}$.
The $\delta$ can be estimated by $\hat{\delta}_j$ using data from only cluster~$j$ such that a total of $q$ separate estimates $(\hat{\delta}_1, \dots, \hat{\delta}_q) =: \hat{\delta}$ of $u \mapsto \delta(u)$ are available. The goal is to use randomization inference on a centered and scaled version of $\hat{\delta}$ to develop tests of the null hypothesis
\begin{equation}\label{eq:h0}
H_0\colon \delta(u) = \delta_0(u), \quad\text{all } u\in\mathcal{U},
\end{equation}
for some known function $\delta_0 : \mathcal{U} \to \mathbb{R}$. The following two examples describe simple but empirically relevant situations that fit this framework.
\begin{example}[Regression quantiles]\label{ex:qr}
Suppose an outcome $Y_{i,j}$ of individual $i$ in cluster $j$ can be represented as $Y_{i,j} = X_{i,j}\delta(U_{i,j}) + Z_{i,j}'\beta_j(U_{i,j})$, where $u\mapsto X_{i,j}\delta(u) + Z_{i,j}'\beta_j(u)$ is strictly increasing in $u$ and $U_{i,j}$ is standard uniform conditional on covariates $(X_{i,j}, Z_{i,j})$. Here $X_{i,j}$ is the scalar covariate of interest and the $Z_{i,j}$ are additional controls. Monotonicity implies that the $u$-th conditional quantile of $Y_{i,j}$ is $X_{i,j}\delta(u) + Z_{i,j}'\beta_j(u)$ and linear QR as in \citet{koenkerbassett1978} can provide estimates $(\hat{\delta}_j,\hat{\beta}_j)$ of $(\delta, \beta_j)$ for each cluster. Testing \eqref{eq:h0} with $\delta_0 \equiv 0$ tests whether $Y_{i,j}$ and $X_{i,j}$ are associated at any quantile after controlling for $Z_{i,j}$.
Several related models fit the framework of this example: (i)~The $\beta_j$ can be constant across clusters. This does not impact the null hypothesis or the computation of the $\hat{\delta}_j$. (ii)~The $\delta$ can vary by cluster in the QR model $Y_{i,j} = X_{i,j}\delta_j(U_{i,j}) + Z_{i,j}'\beta(U_{i,j})$ under the alternative. This has no impact on the computation of the $\hat{\delta}_j$ and the null hypothesis simply becomes $H_0 \colon \delta_1 = \dots = \delta_q = \delta_0$. Identical $\delta_j$ are required only under the null hypothesis. (iii)~If $\beta_j\equiv 0$ and $X_{i,j}\equiv 1$, then $u \mapsto \hat{\delta}(u)$ reduces to the $u$-th unconditional empirical quantile of $Y_{i,j}$. The null \eqref{eq:h0} can then be used to test whether $\delta$ has a specific functional form, e.g., a standard normal quantile function. \xqed{\square}
\end{example}
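As an illustration of this example, the per-cluster QR fits can be collected into the $(q\times m)$ array expected by the randomization test sketched in Section \ref{s:sym} (toy data under the null; \texttt{statsmodels} is used here for the QR fits purely for concreteness):
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def cluster_qr_coefs(ys, Xs, u_grid, col=1):
    # Separate linear QR in each cluster; collect the coefficient
    # on the covariate of interest (column `col`) over u_grid.
    return np.array([[sm.QuantReg(y, X).fit(q=u).params[col]
                      for u in u_grid] for y, X in zip(ys, Xs)])

rng = np.random.default_rng(6)
u_grid = np.linspace(0.2, 0.8, 13)
ys, Xs = [], []
for _ in range(12):                        # q = 12 clusters
    x = rng.normal(size=300)
    Xs.append(np.column_stack([np.ones(300), x]))
    ys.append(1.0 + rng.normal(size=300))  # delta = 0: H0 is true
delta_hat = cluster_qr_coefs(ys, Xs, u_grid)   # (q, m) array
# sqrt(n) * (delta_hat - 0) can be passed to the crk_test sketch above.
\end{verbatim}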
\begin{example}[Quantile treatment effects]\label{ex:qte} Consider predetermined pairs $\{(j, j+q) : 1\leq j\leq q \}$ of $2q$ groups. Suppose the first $q$ groups received treatment, indicated by $D_{j} = 1\{j\leq q\}$, and the remaining groups did not. Groups here could be manufacturing plants or villages. Treatment could be management consulting or introduction of a new technology. Denote treatment and control potential outcomes by $Y_{j}(1)\sim F_{Y(1)}$ and $Y_{j}(0)\sim F_{Y(0)}$, respectively. The observed outcome is $Y_{j} = D_{j}Y_{j}(1) + (1-D_{j})Y_{j}(0)$. For each group $j$, the experimenter observes identically distributed but potentially highly dependent copies $Y_{i,j}$ of $Y_{j}$ representing workers $i$ within group $j$. View each pair $(j, j+q)$ for $1\leq j\leq q$ as a cluster and define the quantile treatment effect (QTE) as\[ u\mapsto \delta(u) = F^{-1}_{Y(1)}(u) - F^{-1}_{Y(0)}(u). \] This QTE can be estimated as the difference of the empirical quantiles \[ u\mapsto \hat{\delta}_j(u) = \hat{F}^{-1}_{Y_{j}}(u) - \hat{F}^{-1}_{Y_{j+q}}(u) \] or, alternatively, as the coefficient on $D_j$ in a QR of $Y_{i,j}$ on a constant and $D_{j}$ using data only from cluster $j$. The situation where $\delta$ varies with $j$ is again included in the analysis as long as the null hypothesis is $\delta_1 = \dots = \delta_q = \delta_0$. Estimation remains unchanged. I discuss the more complex scenario where the counterfactual $F_{Y(0)}$ has to be identified through difference-in-differences methods in Example \ref{ex:qdid} ahead.
\xqed{\square}
\end{example}
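Similarly for this example, the per-pair estimates $\hat{\delta}_j$ are just differences of empirical quantiles (toy i.i.d.\ data below; in applications the observations within a group may be highly dependent):
\begin{verbatim}
import numpy as np

def qte_estimates(treated, control, u_grid):
    # delta_hat_j(u): difference of empirical quantiles of treated
    # group j and its paired control group j + q, on u_grid.
    return np.stack([np.quantile(yt, u_grid) - np.quantile(yc, u_grid)
                     for yt, yc in zip(treated, control)])

rng = np.random.default_rng(7)
u_grid = np.linspace(0.1, 0.9, 33)
treated = [rng.normal(0.2, 1.0, 400) for _ in range(6)]  # q = 6 pairs
control = [rng.normal(0.0, 1.0, 400) for _ in range(6)]
delta_hat = qte_estimates(treated, control, u_grid)      # (q, m) array
# sqrt(n) * (delta_hat - delta_0) can be passed to the crk_test sketch above.
\end{verbatim}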
The estimator $\hat{\delta}$ is neither limited to the estimators discussed in the preceding two examples nor does it need to have a special functional form. However, I assume that it can be approximated by a Gaussian process as in Theorem~\ref{th:asysize}. Let $1_q$ be a $q$-vector of ones.
\begin{assumption}\label{as:weakc}
The random function $\hat{\delta} \colon \mathcal{U}\to \mathbb{R}^q$ satisfies
\begin{equation}\label{eq:wc}
X_n := \{\sqrt{n}(\hat{\delta}-\delta 1_q)(u) : u\in \mathcal{U}\} \wto{\delta} X = (X_1,\dots, X_q),
\end{equation}
where the $X_1, \dots, X_q$ are independent mean-zero Gaussian processes with continuous sample paths, $\prob(X_j(u) = 0) = 0$ and $\prob(X_j(u) = -X_j(u'))=0$ for all $u,u'\in \mathcal{U}$ and $1\leq j\leq q$.
\end{assumption}
\noindent Examples of $X_n$ that can satisfy this assumption include unconditional quantile functions, coefficient functions in quantile regressions, quantile treatment effects, and other quantile-like objects. \citet{machkouriaetal2013} present invariance principles and moment bounds that can be used to establish the convergence condition \eqref{eq:wc} under explicit weak dependence conditions.
I now connect the results from Section \ref{s:sym} about heterogeneous Gaussian processes to tests about $\delta$ under Assumption \ref{as:weakc}. The key property is that if $H_0$ in \eqref{eq:h0} does not hold, then $\sqrt{n}(\hat{\delta}-\delta_0 1_q) = X_n + \sqrt{n}(\delta - \delta_0)1_q$. The $X_n$ converges to a symmetric process but $\sqrt{n}(\delta-\delta_0)(u)$ grows without bound for some $u$, which makes the distribution of $\sqrt{n}(\hat{\delta}-\delta_0 1_q)$ highly asymmetric. Testing for symmetry using randomization inference is therefore informative about the hypothesis that $\delta=\delta_0$. I refer to a test that uses $\hat{\delta} - \delta_0 1_q$ in place of $X$ in test function \eqref{eq:crktest} as the \emph{cluster-randomized Kolmogorov-Smirnov} (\emph{CRK}) test. From a practical perspective, the function $\delta_0$ is almost always $\delta_0 \equiv 0$. This tests the null of no effect at any quantile but more general hypotheses can be considered.
The test function $x\mapsto\varphi_{\alpha}(x, \mathcal{G} )$ is invariant to scaling of $x$ by positive constants. If $H_0\colon \delta = \delta_0$ is true, then the CRK test satisfies \[T(\hat{\delta}-\delta_0 1_q) > T^{1-\alpha}(\hat{\delta}-\delta_0 1_q,\mathcal{G} )\] if and only if $T(X_n) > T^{1-\alpha}(X_n,\mathcal{G} )$. That the CRK test is an asymptotic $\alpha$-level test
is then an immediate consequence of Theorems~\ref{th:hoeffding} and \ref{th:asysize}.
\begin{theorem}[Size]\label{th:size}
Suppose Assumption \ref{as:weakc} holds. If $H_0\colon \delta = \delta_0$ is true, then $\lim_{n\to\infty} \ev \varphi_{\alpha}(\hat{\delta} - \delta_0 1_q, \mathcal{G} ) \leq \alpha.$
\end{theorem}
\begin{remarks}
(i)~The canonical limit of quantile and regression quantile processes such as those in Examples~\ref{ex:qr} and \ref{ex:qte} is a scaled version of a $q$-dimensional Brownian bridge. That process easily satisfies the non-standard condition $\prob(X_j(u) = -X_j(u'))=0$ imposed by Assumption \ref{as:weakc}.
(ii)~The inequality in the theorem becomes an equality if $(1-\alpha)2^q$ is an integer. In that case, the test in the limit experiment is ``similar,'' i.e., it has rejection probability exactly equal to $\alpha$ for all Gaussian processes that satisfy Assumption \ref{as:weakc}. The CRK test can therefore be asymptotically similar in some situations. If desired, the test decision can be randomized to make the CRK test similar in the limit for all choices of $\alpha$.
\xqed{\square}
\end{remarks}
To analyze the power of the CRK test, I consider fixed alternatives $\delta(u) = \delta_0(u) + \lambda(u)$ with a positive function $u\mapsto \lambda(u)$, and local alternatives $\delta(u) = \delta_0(u) + \lambda(u)/\sqrt{n}$ converging to the maintained null hypothesis $H_0\colon \delta = \delta_0$. In the local case, $\delta_0$ is fixed but $\delta$ now depends on $n$ and the convergence \eqref{eq:wc} is under the sequence of functions $\delta = \delta_0 + \lambda/\sqrt{n}$. As the following results show, the CRK test has power against both types of alternatives.
\begin{theorem}[Global and local power]\label{th:power}
Suppose Assumption \ref{as:weakc} holds and $\alpha \geq 1/2^{q} $. If $H_1\colon \delta = \delta_0 + \lambda$ is true with $\lambda \colon \mathcal{U} \to [0,\infty)$ continuous and $\sup_{u\in\mathcal{U}} \lambda(u) > 0$, then $\lim_{n\to\infty}\ev \varphi_{\alpha}(\hat{\delta}-\delta_0 1_q, \mathcal{G} ) = 1$. If $H_1\colon \delta = \delta_0 + \lambda/\sqrt{n}$ is true with $\sup_{u\in\mathcal{U}} \lambda(u) > \ev \sup_{u\in\mathcal{U}} X_j(u)$, $1\leq j\leq q$, then \[ \lim_{n\to\infty }\ev \varphi_{\alpha}(\hat{\delta}-\delta_0 1_q, \mathcal{G} )\geq \prod_{j=1}^q \Bigl( 1 - e^{-[\sup\lambda(u) - \ev \sup X_j(u)]^2/2 \sup \ev X^2_j(u)}\Bigr) > 0, \] where the suprema in the exponent are over $u\in\mathcal{U}$.
\end{theorem}
\begin{remarks}
(i)~The lower bound used for the local power result comes from the Borell-Tsirelson-Ibragimov-Sudakov (Borell-TIS) inequality \citep[see, e.g.,][p.\ 50]{adlertaylor2007}. For large $q$, the bound is relatively crude but for small $q$, the only crude part is the assumption that $\delta$ is moderately large when compared to $X$. This is reflected in the condition that $\sup_{u\in\mathcal{U}} \lambda(u) > \ev \sup_{u\in\mathcal{U}} X_j(u)$ instead of $\sup_{u\in\mathcal{U}} \lambda(u) > 0$. The local power bound can be made arbitrarily close to $1$ by choosing $\sup_{u\in\mathcal{U}} \lambda(u)$ large enough; a numerical illustration follows these remarks.
(ii)~ If $(1-\alpha)|\mathcal{G} | > |\mathcal{G} |-1$, the power of the test is identically zero. In that case $T^{1-\alpha}(X,\mathcal{G}) = \max_{g\in\mathcal{G}}T(gX)$ and $T(X) > T^{1-\alpha}(X,\mathcal{G})$ becomes impossible because $T(X)$ is contained in $\{T(gX) : g\in\mathcal{G} \}$. I therefore focus on the case $(1-\alpha)|\mathcal{G} |\leq |\mathcal{G} |-1$, which is equivalent to $\alpha \geq 1/2^q$.
(iii)~The test also has power against alternatives where $\lambda$ varies with the cluster index $j$ and at least some of the $\lambda_j$ are large. However, a precise statement without additional conditions on the relative sizes of the $\lambda_j$ is involved. I do not pursue this here to prevent notational clutter.
\xqed{\square}
\end{remarks}
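To make the local power bound of Theorem~\ref{th:power} concrete, suppose (as an illustration only) that each $X_j$ is a standard Brownian bridge on $\mathcal{U} = (0,1)$ rather than the scaled bridge arising in Examples~\ref{ex:qr} and \ref{ex:qte}. Then $\ev \sup_{u\in\mathcal{U}} X_j(u) = \sqrt{\pi/8} \approx .63$ and $\sup_{u\in\mathcal{U}} \ev X_j^2(u) = 1/4$, so with $q = 5$ clusters and $\sup_{u\in\mathcal{U}} \lambda(u) = 2$ the bound evaluates to $\bigl(1 - e^{-(2 - \sqrt{\pi/8})^2/(1/2)}\bigr)^5 \approx .89$.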
\subsection{Inference when parameters are identified across clusters} \label{ss:between}
In applications, the treatment effect is often not identified from within a cluster but by comparisons across two clusters. This is the case, for example, if treatment is assigned at random at the cluster level or if identification comes from comparing changes in one cluster to changes in another cluster in a quasi-experimental context. In this situation, each individual pairing of a treated cluster $j$ with a control cluster $k$ is generally informative about the treatment effect of interest $\delta$ and each $(j,k)$ pair gives rise to an estimate $\hat{\delta}_{j,k}$ of $\delta$ that could be used in a CRK-type test. The following example illustrates this for difference-in-differences estimation of quantile treatment effects.
\begin{example}[Quantile difference in differences]\label{ex:qdid} Let $\Delta Y_t(0) = Y_t(0) - Y_{t-1}(0)$ be time differences of untreated outcomes. Periods $t \in \{0,-1 \}$ are pre-intervention periods and $t=1$ is the post-intervention period; $Y_1(1)$ is a treated potential outcome and $Y_t$ are observed outcomes.
Denote by $F_{Y\mid D=d}$ the distribution of a variable $Y$ conditional on the treatment indicator taking on the value $d\in\{0,1\}$. \citet{callawayli2019} show that the distribution $F_{Y_1(0)\mid D =1 }(y)$ of the untreated potential outcome of a treated observation at time $t=1$ can be identified as
\begin{equation}\label{eq:qttcounterfactual}
\prob\Bigl( F^{-1}_{\Delta Y_1\mid D=0} \bigl(F_{\Delta Y_{0} \mid D = 1}(\Delta Y_{0})\bigr) + F^{-1}_{Y_{0}\mid D=0} \bigl(F_{Y_{-1} \mid D = 1}(Y_{-1})\bigr) \leq y\mid D = 1\Bigr)
\end{equation} as long as a distributional version of the standard parallel trends assumption and some additional stability and smoothness conditions hold. This identifies the quantile treatment on the treated (QTT) effect \[u\mapsto \delta(u) = F^{-1}_{Y_1(1)\mid D=1}(u) - F^{-1}_{Y_1(0)\mid D=1}(u),\] where $F^{-1}_{Y_1(1)\mid D=1}(u)$ can be estimated by the sample quantile $\hat{F}^{-1}_{Y_1\mid D=1}(u)$. To estimate the counterfactual quantile, \citeauthor{callawayli2019}
replace $\prob$ and every $F$ in \eqref{eq:qttcounterfactual} with sample equivalents. This yields the estimated QTT
\begin{equation}\label{eq:qteest}
u\mapsto \hat{F}^{-1}_{Y_1\mid D=1}(u) - \hat{F}^{-1}_{Y_1(0)\mid D=1}(u).
\end{equation}
\citeauthor{callawayli2019} show that $\sqrt{n}(\hat{F}^{-1}_{Y_1\mid D=1} - \hat{F}^{-1}_{Y_1(0)\mid D=1} - \delta)$ converges to a well-behaved Gaussian process under mild regularity conditions.
Suppose that data come from $q_1$ states that received treatment and $q_0$ states that did not. View a single state over time as a cluster. Then two clusters are enough to compute \eqref{eq:qteest}: $\hat{F}^{-1}_{Y_1\mid D=1}$ can be computed from a treated cluster $j$ and $\hat{F}^{-1}_{Y_1(0)\mid D=1}$ can be computed from $j$ and an untreated cluster $k$. Denote by $\hat{\delta}_{j,k}$ the QTT estimated in this fashion using only data from clusters $j$ and $k$. Each $(j,k)$ pair provides a valid estimate of $\delta$ and each $\hat{\delta}_{j,k}$ could potentially be used in a CRK-type test of the null hypothesis $H_0\colon \delta = \delta_0$.
\xqed{\square}
\end{example}
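For concreteness, the following sketch implements the plug-in construction in \eqref{eq:qttcounterfactual} and \eqref{eq:qteest} with simple empirical distribution and quantile functions. The helper names are hypothetical and the sketch ignores the regularity conditions of \citet{callawayli2019}.
\begin{verbatim}
import numpy as np

def ecdf(sample):
    s = np.sort(np.asarray(sample))
    return lambda y: np.searchsorted(s, y, side="right") / s.size

def quantile(sample, u):
    s = np.sort(np.asarray(sample))
    i = np.clip(np.ceil(np.asarray(u) * s.size).astype(int) - 1,
                0, s.size - 1)
    return s[i]

def qtt_hat(u, y1_treat, dy1_ctrl, dy0_treat, y0_ctrl, ym1_treat):
    # Counterfactual draws of Y_1(0) given D = 1, following the display above.
    cf = (quantile(dy1_ctrl, ecdf(dy0_treat)(dy0_treat))
          + quantile(y0_ctrl, ecdf(ym1_treat)(ym1_treat)))
    return quantile(y1_treat, u) - quantile(cf, u)
\end{verbatim}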
I again assume that centered and scaled $\hat{\delta}_{j,k}$ converge in distribution to non-degenerate Gaussian processes with smooth sample paths as in Assumption~\ref{as:weakc}. I only adjust this condition for the fact that estimates are constructed from pairwise combinations of clusters. Let $q_1$ be the number of treated clusters and let $q_0$ be the number of control clusters.
\begin{assumption}\label{as:weakc2} The process $\{\sqrt{n}(\hat{\delta}_{j,k} - \delta)(u) : u\in \mathcal{U}\}$ converges, jointly in $j$ and $k$, in distribution to mean-zero Gaussian processes $X_{j,k}$ with continuous sample paths that satisfy $\prob(X_{j, k}(u) = 0) = 0$ and $\prob(X_{j,k}(u) = -X_{j,k}(u'))=0$ for all $u,u'\in \mathcal{U}$, $1\leq j\leq q_1$, and $1\leq k\leq q_0$. If both $j\neq j'$ and $k\neq k'$, then $X_{j,k}$ and $X_{j',k'}$ are independent.
\end{assumption}
A na\"ive test of $H_0\colon \delta \equiv \delta_0$ would now take $X_{n,j,k} := \sqrt{n}(\hat{\delta}_{j,k} - \delta_{0})$ and generate randomization distributions from $\{X_{n,j,k} : 1\leq j\leq q_1, 1\leq k\leq q_0\}$ via sign changes. However, $X_{n,j,k}$ and $X_{n,j,k'}$ are dependent for any choice of $j,k,k'$ because $j$ is used twice. This remains true even in large samples and if the data from all $q_1+q_0$ groups are independent. Dependence causes problems because $(X_{n,j,k}, X_{n,j,k'})$ and $(X_{n,j,k}, -X_{n,j,k'})$ generally do not have the same joint distribution even when $n\to\infty$. Invariance under transformations with $g$ therefore fails. This issue can be avoided if one works with a subset of $\{X_{n,j,k} : 1\leq j\leq q_1, 1\leq k\leq q_0\}$ that uses each $j$ and $k$ only once. While this solves the dependence issue, it introduces another problem: each of the $q_1$ treatment groups now has to be paired with exactly one of the $q_0$ control groups. Two researchers working with the same data and methodology could therefore arrive at different conclusions because they chose different pairings. To address this problem, I now develop a method that maintains invariance under sign changes but avoids any decisions on the part of the researcher.
I first introduce some notation. If $q_1\leq q_0$, there are $q_0 \times (q_0-1) \times \dots \times (q_0-q_1+1)$ ways of choosing $q_1$ ordered elements out of $(1,\dots, q_0)$. Identify each such choice with an $h$ and denote the collection of all $h$ by $\mathcal{H}$. The ordering within $\mathcal{H}$ will not affect the test decision. For each $h\in\mathcal{H}$, denote by
\begin{equation} \label{eq:hq1leqq0}
\hat{\delta}_{[h]} = (\hat{\delta}_{1,h(1)}, \hat{\delta}_{2,h(2)}, \dots, \hat{\delta}_{q_1,h(q_1)}), \qquad q_1\leq q_0,
\end{equation}
the vector that matches the subset of control groups associated with the label $h = (h(1),\dots, h(q_1))$ to the (unpermuted) treated groups. If there are more treated than control groups such that $q_1 > q_0$, permute treated groups instead and take $h$ as enumerating ways of choosing $q_0$ elements out of $(1,\dots, q_1)$ to define
\begin{equation} \label{eq:hq1geqq0}
\hat{\delta}_{[h]} = (\hat{\delta}_{h(1), 1}, \hat{\delta}_{h(2), 2}, \dots, \hat{\delta}_{h(q_0), q_0}), \qquad q_1 > q_0.
\end{equation}
By construction, the entries of $\hat{\delta}_{[h]}$ are independent of one another but $\hat{\delta}_{[h]}$ and $\hat{\delta}_{[h']}$ for $h,h'\in\mathcal{H}$ are potentially highly dependent.
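In code, the collection $\mathcal{H}$ and the vectors \eqref{eq:hq1leqq0} can be generated as follows (a sketch for $q_1 \leq q_0$; for $q_1 > q_0$ permute the treated indices instead, as in \eqref{eq:hq1geqq0}).
\begin{verbatim}
from itertools import permutations
import numpy as np

def pairings(q1, q0):
    # All ordered choices of q1 controls out of q0: |H| = q0!/(q0 - q1)!.
    return list(permutations(range(q0), q1))

def delta_h(delta_pairs, h):
    # delta_pairs[j][k]: estimate over the u-grid from treated cluster j
    # and control cluster k; rows of the result are independent.
    return np.stack([delta_pairs[j][h[j]] for j in range(len(h))])
\end{verbatim}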
To address the issue that there are multiple ways of combining clusters, I use an adjustment based on the randomization \emph{p}-value
\begin{equation}\label{eq:pval}
p(X, \mathcal{G}) = \inf\{ p \in (0,1) : T(X) > T^{p}(X,\mathcal{G}) \} = \frac{1}{|\mathcal{G}|}\sum_{g\in\mathcal{G}}1\{ T(gX) \geq T(X) \}.
\end{equation}
Testing with this $p$-value is equivalent to a test with a critical value because $T(X) > T^{1-\alpha}(X,\mathcal{G})$ if and only if $p(X, \mathcal{G}) \leq \alpha$. The multiple comparisons adjustment is based on an inequality of \citet{ruschendorf1982}. It states that arbitrary, possibly dependent variables $U_h$ indexed by $h\in\mathcal{H}$ with the property that $\prob(U_h\leq u)\leq u$ for every $u\in [0,1]$ satisfy
\begin{equation}\label{eq:rueschendorf}
\prob\Biggl(\frac{2}{|\mathcal{H}|}\sum_{h\in\mathcal{H}} U_h \leq u\Biggr) \leq u, \qquad \text{every $u\in [0,1]$}.
\end{equation}
This specific form of the inequality is given in \citet{vovk2012}. Here the indexing set $\mathcal{H}$ is arbitrary and does not need to be related to permutations. The only condition is that $H = |\mathcal{H}| \geq 2$. The randomization $p$-value $p(\hat{\delta}_{[h]} - \delta_0 1_{q_1 \wedge q_0}, \mathcal{G})$ for testing whether the treatment effect of interest equals $\delta_0$ can be expected to behave like the $U_h$ in \eqref{eq:rueschendorf} in a large enough sample. Combining $p$-values of the CRK test to reject the null if
\begin{equation}\label{eq:avgpval}
\frac{2}{H}\sum_{h\in\mathcal{H}} p(\hat{\delta}_{[h]} - \delta_0 1_{q_1 \wedge q_0}, \mathcal{G})
\end{equation}
does not exceed $\alpha$ should then asymptotically control size. The following theorem confirms that this is indeed true.
\begin{theorem}[Size with combined \emph{p}-values]\label{th:size2}
Suppose Assumption \ref{as:weakc2} holds. If $H_0\colon \delta = \delta_0$, then \[\limsup_{n\to\infty} \prob\Biggl( \frac{2}{H}\sum_{h\in\mathcal{H}} p(\hat{\delta}_{[h]} - \delta_0 1_{q_1 \wedge q_0}, \mathcal{G}) \leq \alpha\Biggr) \leq \alpha.\]
\end{theorem}
\begin{remarks}
(i)~The theorem can be improved slightly if $\alpha |\mathcal{G}| H/2$ is not an integer. In that case, the limit superior in the theorem is a proper limit that equals $\prob( (2/H)\sum_{h\in\mathcal{H}} p(X_{[h]},\mathcal{G}) \leq \alpha ),$ where $X_{[h]}$ is the weak limit of $\sqrt{n}(\hat{\delta}_{[h]} - \delta_0 1_{q_1 \wedge q_0})$.
This is because the sum in the preceding display can vary discontinuously at certain values. The limit inferior is $\prob( (2/H)\sum_{h\in\mathcal{H}} p(X_{[h]},\mathcal{G}) < \alpha ).$
(ii)~Results of \citet{diciccio2romano} suggest that other ways of combining $p$-values such as the median $p$-value instead of an average $p$-value are likely to be applicable here as well. However, the proof of the theorem given here relies crucially on the properties of the \citeauthor{ruschendorf1982} inequality. \xqed{\square}
\end{remarks}
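In code, the randomization $p$-value \eqref{eq:pval} and the doubled-average decision rule \eqref{eq:avgpval} take the following form; the sketch reuses \texttt{T} and \texttt{delta\_h} from the earlier sketches and inherits their placeholder assumptions.
\begin{verbatim}
import numpy as np

def rand_pvalue(X, G):
    # Eq. (pval): share of sign changes g with T(gX) >= T(X).
    TX = T(X)
    return np.mean([T(np.asarray(g)[:, None] * X) >= TX for g in G])

def combined_test(delta_pairs, H, G, delta0=0.0, alpha=0.05):
    # Reject if twice the average randomization p-value is at most alpha.
    p = [rand_pvalue(delta_h(delta_pairs, h) - delta0, G) for h in H]
    return 2.0 * np.mean(p) <= alpha
\end{verbatim}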
The price paid for not matching treated and control clusters before the analysis is a loss of power. When $p$-values are averaged, \citeauthor{ruschendorf1982}'s inequality essentially decreases $\alpha$ to $\alpha/2$ to control size. \citet{meng1993} shows that the constant $2$ cannot be improved. Still, as I establish below, the test has power against global and local alternatives if $\alpha > 1/2^{q_1 \wedge q_0-1}$, which is slightly stronger than what is needed in Theorem \ref{th:power}. Compared to Theorem~\ref{th:power}, I also do not state an explicit bound for the local power analysis because applying the Borell-TIS inequality to the averaged $p$-values directly yields only relatively crude results. I instead show that if the alternatives $\lambda/\sqrt{n}$ converging to the null hypothesis are scaled up by a constant $c$, the test can detect these alternatives in the limit experiment with arbitrary accuracy if $c$ is large enough, that is, if first $n\to\infty$ and then $c\to\infty$.
\begin{theorem}[Global and local power with combined \emph{p}-values]\label{th:power2}
Suppose Assumption~\ref{as:weakc2} holds and $\alpha > 1/2^{q_1 \wedge q_0-1} $. If $H_1\colon \delta = \delta_0 + \lambda$ with $\lambda \colon \mathcal{U} \to [0,\infty)$ continuous and $\sup_{u\in\mathcal{U}} \lambda(u) > 0$, then $\lim_{n\to\infty} \prob( (2/H)\sum_{h\in\mathcal{H}} p(\hat{\delta}_{[h]} - \delta_0 1_{q_1 \wedge q_0}, \mathcal{G}) \leq \alpha) = 1$. If $H_1\colon \delta = \delta_0 + c \lambda/\sqrt{n}$, then \[ \lim_{c\to\infty} \liminf_{n\to\infty } \prob\Biggl( \frac{2}{H}\sum_{h\in\mathcal{H}} p(\hat{\delta}_{[h]} - \delta_0 1_{q_1 \wedge q_0}, \mathcal{G}) \leq \alpha\Biggr) = 1. \]
\end{theorem}
\subsection{Implementation} I now turn to some practical aspects of the CRK test. I discuss (i)~what to do if $\mathcal{G}$ is large, (ii)~what to do if $\mathcal{H}$ is large, and (iii)~how to implement the test with a step-by-step guide.
First, $\mathcal{G}$ can be prohibitively large if the number of clusters is large. If computing the entire randomization distribution is too costly, then $\mathcal{G}$ can be approximated by a random sample $\mathcal{G}_m$ consisting of $m$ draws from $\mathcal{G}$ with replacement. This is often referred to as ``stochastic approximation.'' The theorems presented in Sections \ref{ss:within} and \ref{ss:between} continue to hold if $\mathcal{G}_m$ is used in place of $\mathcal{G}$ as long as a limit superior or inferior as $m\to \infty$ is applied before $n\to\infty$. The order of limits is not restrictive because, in a given sample of size $n$, the number of draws $m$ can always be made as large as computationally feasible. Under stochastic approximation, the statement in Theorem~\ref{th:size} becomes $\lim_{n\to\infty}\limsup_{m\to\infty} \ev \varphi_{\alpha}(\hat{\delta} - \delta_0 1_q, \mathcal{G}_m ) \leq \alpha$, whereas statements about power use a limit inferior. Limit superior and inferior are needed here because of potential discontinuities but can be replaced by regular limits for most values of $\alpha$. Theorems \ref{th:size}, \ref{th:size2}, and \ref{th:power2} hold without additional conditions but the conditions of Theorem~\ref{th:power} have to be strengthened marginally to avoid a discontinuity at $\alpha = 1/2^q$.
\begin{proposition}\label{pr:stochastic}
Suppose $\mathcal{G}_m$ consists of $m$ iid draws from $\mathcal{G}$. If every instance of $\mathcal{G}$ is replaced by $\mathcal{G}_m$, then
\begin{enumerate}[\upshape (i)]
\item Theorem~\ref{th:size} holds if $\lim_{n\to\infty}$ is replaced by $\lim_{n\to\infty}\limsup_{m\to\infty}$,
\item Theorem~\ref{th:power} holds if every $\lim_{n\to\infty}$ is replaced by $\lim_{n\to\infty}\liminf_{m\to\infty}$ and $\alpha > 1/2^q$,
\item Theorem~\ref{th:size2} holds if $\limsup_{n\to\infty}$ is replaced by $\limsup_{n\to\infty}\limsup_{m\to\infty}$,
\item Theorem~\ref{th:power2} holds if $\lim_{n\to\infty}$ is replaced by $\lim_{n\to\infty}\liminf_{m\to\infty}$ and $\liminf_{n\to\infty}$ is replaced by $\liminf_{n\to\infty}\liminf_{m\to\infty}$.
\end{enumerate}
If $\alpha \not\in \{ j/|\mathcal{G}| : 1\leq j\leq |\mathcal{G}| \}$, then $\liminf_{m\to\infty}$ and $\limsup_{m\to\infty}$ can be replaced by $\lim_{m\to\infty}$ in {\upshape (i)-(iv)}.
\end{proposition}
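Drawing the stochastic approximation $\mathcal{G}_m$ is straightforward; the sketch below samples $m$ sign vectors with replacement.
\begin{verbatim}
import numpy as np

def sample_signs(q, m, seed=0):
    # m iid draws (with replacement) from G = {-1, 1}^q.
    rng = np.random.default_rng(seed)
    return rng.choice((-1.0, 1.0), size=(m, q))
\end{verbatim}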
Second, the number of elements of $\mathcal{H}$ can similarly be large if the number of clusters is large or if there is a large discrepancy between the number of treated and the number of control clusters. In that case one can again work with a random subset $\mathcal{I}$ of $\mathcal{H}$. The crucial difference to the preceding result is that both Theorems~\ref{th:size2} and \ref{th:power2} continue to hold even if $\mathcal{I}$ consists of only a finite number of random draws. In fact, the result goes through for any $\mathcal{I}$ as long as $\mathcal{I}$ is independent of the data.
\begin{proposition}\label{pr:randomrueschendorf}
Let $\mathcal{I}$ with $|\mathcal{I}|\geq 2$ be a fixed or random subset of $\mathcal{H}$ independent of the data. Then Theorems \ref{th:size2} and \ref{th:power2} continue to hold if $\mathcal{H}$ is replaced by $\mathcal{I}$.
\end{proposition}
Finally, the following two algorithms outline and summarize how to apply the CRK test in practice. By Theorems \ref{th:size} and \ref{th:size2}, the procedures provide an asymptotically $\alpha$-level test in the presence of a finite number of large clusters that are arbitrarily heterogeneous. They are free of nuisance parameters and do not require any decisions on the part of the researcher. By Theorems \ref{th:power} and \ref{th:power2}, the tests are able to detect fixed and $1/\sqrt{n}$-local alternatives. The first algorithm describes the CRK test when the parameters are identified within clusters. The second algorithm describes the between-cluster case, which is needed for distributional difference in differences. The tests can be two-sided or one-sided in either direction.
\begin{algorithm}[CRK test for parameters identified within clusters]\label{al:within}\item[]
\begin{enumerate}
\item Compute for each $j = 1,\dots, q$ and using only data from cluster $j$ an estimate $\hat{\delta}_{j}$ of a parameter of interest $\delta$. (See Examples \ref{ex:qr} and \ref{ex:qte}.) Define $\hat{\delta} = (\hat{\delta}_{1},\dots, \hat{\delta}_{q})$.
\item Compute $\mathcal{G}$, the set of all vectors of length $q$ with entries $1$ or $-1$, or replace $\mathcal{G}$ with a large random sample $\mathcal{G}_m$ from $\mathcal{G}$ in the following.
\item\label{al:practice4} Reject the null hypothesis $H_0\colon \delta(u) = \delta_0(u)$ for all $u$ (e.g., $\delta_0\equiv 0$ tests for no effect of treatment) against
\begin{enumerate}
\item $\delta(u) > \delta_0(u)$ for some $u$ if $T(\hat{\delta}-\delta_0 1_q) > T^{1-\alpha}(\hat{\delta}-\delta_0 1_q,\mathcal{G})$ for a test with asymptotic level $\alpha$,
\item $\delta(u) < \delta_0(u)$ for some $u$ if $T(\hat{\delta}-\delta_0 1_q) < T^{\alpha}(\hat{\delta}-\delta_0 1_q,\mathcal{G})$ for a test with asymptotic level $\alpha$,
\item $\delta(u) \neq \delta_0(u)$ for some $u$ if (a) or (b) are true for a test with asymptotic level $2\alpha$,
\end{enumerate}
where $T$ is defined in \eqref{eq:crk} and $T^{1-\alpha}(\cdot, \mathcal{G})$ is the $\lceil (1-\alpha)|\mathcal{G}|\rceil$-th smallest value of the randomization distribution of $T$, defined in \eqref{eq:critval}.
\end{enumerate}
\end{algorithm}
\begin{algorithm}[CRK test for parameters identified between clusters]\label{al:between}\item[]
\begin{enumerate}
\item Compute $\mathcal{H}$, as defined above \eqref{eq:hq1leqq0}, or replace $\mathcal{H}$ with a large subset $\mathcal{I}$ in the following.
\item Compute $\mathcal{G}$, the set of all vectors of length $\min\{q_1,q_0\}$ with entries $1$ or $-1$, or replace $\mathcal{G}$ with a large random sample $\mathcal{G}_m$ from $\mathcal{G}$ in the following.
\item For each $h$, compute $\hat{\delta}_{[h]}$ from \eqref{eq:hq1leqq0} if $q_1\leq q_0$ or from \eqref{eq:hq1geqq0} if $q_1 > q_0$. (See Example~\ref{ex:qdid}.) Use \eqref{eq:pval} and \eqref{eq:avgpval} to evaluate whether \begin{equation}\label{eq:avgpval2}
\frac{2}{|\mathcal{H}|}\sum_{h\in\mathcal{H}} p(\hat{\delta}_{[h]} - \delta_0 1_{\min\{q_1,q_0\}}, \mathcal{G}) \leq \alpha.
\end{equation}
\item Reject the null hypothesis $H_0\colon \delta(u) = \delta_0(u)$ for all $u$ (e.g., $\delta_0\equiv 0$ tests for no effect of treatment) against
\begin{enumerate}
\item $\delta(u) > \delta_0(u)$ for some $u$ if \eqref{eq:avgpval2} is true for a test with asymptotic level $\alpha$,
\item $\delta(u) < \delta_0(u)$ for some $u$ if \eqref{eq:avgpval2} is true when $\hat{\delta}_{[h]} - \delta_0 1_{\min\{q_1,q_0\}}$ is replaced by $-(\hat{\delta}_{[h]} - \delta_0 1_{\min\{q_1,q_0\}})$ for a test with asymptotic level $\alpha$,
\item $\delta(u) \neq \delta_0(u)$ for some $u$ if (a) or (b) are true for a test with asymptotic level $2\alpha$.
\end{enumerate}
\end{enumerate}
\end{algorithm}
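Putting the sketches together, a hypothetical end-to-end run of Algorithm~\ref{al:between} could look as follows, where \texttt{delta\_pairs} is assumed to hold the pairwise estimates $\hat{\delta}_{j,k}$.
\begin{verbatim}
q1, q0, alpha = 6, 7, 0.05
H = pairings(q1, q0)                   # or a random subset I with |I| >= 2
G = sample_signs(min(q1, q0), m=1000)  # stochastic approximation G_m
reject = combined_test(delta_pairs, H, G, delta0=0.0, alpha=alpha)
\end{verbatim}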
In some contexts, Algorithm \ref{al:within} can be used even if the parameter of interest is identified by comparisons between treated and untreated clusters. For this to work, the researcher has to merge each treated cluster with an untreated cluster into a single cluster to recover within-cluster identification. If the number of treated clusters and control clusters is equal, then every treated cluster can be matched with a control cluster according to some rule. If the number of clusters is not equal, then two or more clusters can be merged to force an equal number of treated and control clusters. The merged clusters can then be reinterpreted as clusters and Algorithm~\ref{al:within} can be applied to these new clusters. While this comes with a large number of decisions, it is a valid method for inference if these decisions are made before the data are analyzed. For example, when estimating quantile treatment effects, a pre-analysis plan can be put in place that prescribes how clusters that received treatment will be merged with clusters that did not receive treatment. This reduces the problem to the one described in Example \ref{ex:qte}.
The next section investigates the finite sample performance of Algorithms~\ref{al:within} and \ref{al:between} in several situations.
\section{Numerical results}\label{s:mc}
This section presents several Monte Carlo experiments to investigate the small-sample properties of the CRK test in comparison to other methods of inference. I discuss significance tests on quantile regression coefficient functions (Example~\ref{ex:qr2}), inference in experiments when parameters are identified between clusters (Example~\ref{ex:qte2}), and estimation of QTEs in Project STAR (Example~\ref{ex:star}). I test one-sided hypotheses to the right but the results apply more broadly.
\begin{example}[Regression quantiles, cont.] \label{ex:qr2} In this example, I adapt an experiment of \citet{hagemann2017} and use the data generating process (DGP)
\begin{align*}
Y_{i,j,k} = U_{i,j,k} + U_{i,j,k}Z_{i,j,k},
\end{align*}
where $U_{i,j,k} = \sqrt{\varrho}V_{j,k} + \sqrt{1-\varrho}W_{i,j,k}$ with $\varrho \in [0, 1)$; $V_{j,k}$ and $W_{i,j,k}$ are standard normal, independent of one another, and independent across indices. This ensures that the $U_{i,j,k}$ are standard normal and, for a given $j,k$, any pair $U_{i,j,k}$ and $U_{i',j,k}$ has correlation $\varrho$. The $Z_{i,j,k}$ satisfy $Z_{i,j,k} = X^2_{i,j,k}/3$ with $X_{i,j,k}$ standard normal independent of $U_{i,j,k}$ to ensure that the $U_{i,j,k} Z_{i,j,k}$ have mean zero and variance one. Both $X_{i,j,k}$ and $U_{i,j,k}$ are independent across $j$ and $k$, and $X_{i,j,k}$ is also independent across $i$. I discard information on $k$ after data generation and drop the $k$ subscripts in the following because they are not assumed to be known. This induces a dependence structure where each cluster $j = 1,\dots, q$ consists of several (unknown) neighborhoods $k = 1,\dots,K$ where observations are dependent if they come from the same $k$ but are independent otherwise. If $K\to\infty$ and the size of the neighborhoods is fixed or grows slowly with $K$, then this dependence structure is compatible with Assumptions~\ref{as:weakc} and \ref{as:weakc2} because it generates the weak dependence needed for central limit theory. In the experiments ahead, I set $K$ to either 10 or 20 and draw the size of each neighborhood from the uniform distribution on $\{5,6,\dots,15\}$.
The DGP in the preceding display corresponds to the QR model
\begin{align}\label{eq:mcqr}
Q(u\mid X_{i,j}, Z_{i,j}) = \beta_0(u) + \beta_1(u)X_{i,j} + \beta_2(u)Z_{i,j}
\end{align}
with $\beta_1(u)\equiv 0$ and $\beta_0(u) = \Phi^{-1}(u) = \beta_2(u)$, where $\Phi$ is the standard normal distribution function: because $U_{i,j}$ is standard normal, independent of $(X_{i,j}, Z_{i,j})$, and $Y_{i,j} = U_{i,j}(1 + Z_{i,j})$ with $1 + Z_{i,j} > 0$, the conditional $u$-quantile of $Y_{i,j}$ is $\Phi^{-1}(u)(1 + Z_{i,j})$.
For the CRK test, I estimated \eqref{eq:mcqr} separately for each cluster, obtained $q$ estimates of $\beta_1$ and applied Algorithm \ref{al:within} with 1,000 new draws from $\mathcal{G}$ for each Monte Carlo replication.
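For replication, a single cluster from the DGP above can be drawn as in the following sketch (the function name and the random number generator interface are illustrative).
\begin{verbatim}
import numpy as np

def draw_cluster(rng, K=10, rho=0.5):
    # One cluster: K neighborhoods with Uniform{5,...,15} members each.
    y, x, z = [], [], []
    for _ in range(K):
        n_k = rng.integers(5, 16)
        v = rng.standard_normal()                      # neighborhood shock
        w = rng.standard_normal(n_k)
        u = np.sqrt(rho) * v + np.sqrt(1.0 - rho) * w  # within-corr = rho
        xk = rng.standard_normal(n_k)
        zk = xk ** 2 / 3.0                             # Z = X^2 / 3
        y.append(u + u * zk); x.append(xk); z.append(zk)
    return tuple(np.concatenate(a) for a in (y, x, z))
\end{verbatim}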
I compare the CRK test to inference with the wild gradient bootstrap of \citet{hagemann2017}, a version of the bootstrap that perturbs the gradient of the QR objective function in a computationally efficient way while accounting for cluster dependence. It requires the number of clusters $q\to\infty$ for consistency. The wild gradient bootstrap is the default option for cluster-robust inference in the \texttt{quantreg} package in \texttt{R}. I use the package default settings with Mammen bootstrap weights and 200 bootstrap simulations. I do uniform inference with sup-Wald statistics as outlined in Algorithm~3.4 of \citet{hagemann2017} with critical values and standard errors computed from the wild gradient bootstrap. \citet{hagemann2017} documents excellent performance of the wild gradient bootstrap even with challenging DGPs as long as there are more than 20 clusters. However, \citet{hagemann2017} notes that size distortions can occur when fewer than 20 clusters are present. I focus on this situation in the following.
\begin{figure}
\centering
\resizebox{\textwidth}{!}{
\input{ex41-fig-size.tex}
}
\caption{Rejection frequencies in Example \ref{ex:qr2} of a true null $H_0\colon \beta_1(u) = 0$ for all $u$ as a function of the number of clusters for the bootstrap (left) and the CRK test (right) with (i)~$K=10$ neighborhoods per cluster with intra-neighborhood correlation $\varrho = .5$ (solid lines), (ii)~$K=20$ with $\varrho = .5$ (long-dashed), and (iii)~$K=10$ with $\varrho = .1$ (dotted). Short-dashed line equals nominal level $.05$.}\label{f:ex41-fig-size}
\end{figure}
Figure~\ref{f:ex41-fig-size} shows the rejection frequencies of a true null hypothesis $H_0\colon \beta_1(u) = 0$ for all $u$ as a function of the number of clusters $q\in \{5,6,\dots, 20\}$ for the wild gradient bootstrap (left) and the CRK test (right) at the 5\% level (short-dashed line). The figure shows rejection frequencies in 5,000 Monte Carlo replications for each horizontal coordinate with (i)~$K=10$ neighborhoods per cluster with intra-neighborhood correlation $\varrho = .5$ (solid lines), (ii)~$K=20$ with $\varrho = .5$ (long-dashed), and (iii)~$K=10$ with $\varrho = .1$ (dotted). Both methods were faced with the same data and I estimated $\beta_1$ at $u = .1, .2, \dots, .9$ for both methods. As can be seen, the wild gradient bootstrap over-rejected mildly with 20 clusters but over-rejected substantially for smaller numbers of clusters. It exceeded a 10\% rejection rate if only 12 clusters were available. With 5 clusters, the wild gradient bootstrap falsely discovered an effect in up to 20.9\% of all cases ($K=10, \varrho = .1$). In contrast, the CRK test rejected at or slightly below nominal level for all $q$ and all configurations of $K$ and $\varrho$.
I also experimented with a large number of alternative DGPs under the null. I considered (not shown) larger neighborhoods, different values of $\varrho$, different spatial dependence structures such as (spatial) autoregressive models, and different distributions for $X_{i,j,k}$. However, I found that these changes had little qualitative impact on the results described in the preceding paragraph or in \citet{hagemann2017}. The wild gradient bootstrap generally performed very well but experienced size distortions with fewer than 20 clusters. The CRK test rejected at or slightly below nominal level in all situations I investigated.
I now turn to the behavior of the test under the alternative. I repeated the experiment but now tested the incorrect null hypothesis $H_0\colon \beta_2(u) = 0$ for all $u\in\mathcal{U}$. Figure~\ref{f:ex41-fig-pow} shows the rejection frequencies of this null against the alternative $H_1\colon \beta_2(u) > 0$ for some $u\in\mathcal{U}$, where $\mathcal{U}$ was either $(0, 1)$ (black) or $(.5, 1)$ (grey). The null hypothesis is false in both situations but the case where $\mathcal{U} = (0, 1)$ is more challenging because $\beta_2(u) < 0$ for all $u < .5$ so that estimates below the median provide evidence in the direction away from the alternative. I again considered (i)~$K=10$ neighborhoods per cluster with intra-neighborhood correlation $\varrho = .5$ (solid lines), (ii)~$K=20$ with $\varrho = .5$ (long-dashed), and (iii)~$K=10$ with $\varrho = .1$ (dotted). As could be expected, the bootstrap rejected a large fraction of null hypotheses mostly because it was unable to control the size of the test. However, it had high power when the number of clusters was above 20 and the size distortions disappeared (not shown). The CRK test had high power while maintaining size control even when the number of clusters was below 20. For example, at $q=12$ it detected a deviation from the null between 22.5\% ($K=10, \varrho=.5, \mathcal{U} = (0,1)$) and 84.26\% ($K=20, \varrho=.5, \mathcal{U} = (.5,1)$) of all cases. More generally, additional clusters, lower intra-cluster dependence, and additional neighborhoods per cluster increased the power of the CRK test. \xqed{\square}
\begin{figure}
\centering
\resizebox{\textwidth}{!}{
\input{ex41-fig-pow.tex}
}
\caption{Rejection frequencies in Example \ref{ex:qr2} of false nulls $H_0\colon \beta_2(u) = 0$ for $u > .5$ (grey) and $H_0\colon \beta_2(u) = 0$ for all $u$ (black) as a function of the number of clusters for the bootstrap (left) and the CRK test (right) with (i)~$K=10$ neighborhoods per cluster with intra-neighborhood correlation $\varrho = .5$ (solid lines), (ii)~$K=20$ with $\varrho = .5$ (long-dashed), and (iii)~$K=10$ with $\varrho = .1$ (dotted).
}
\label{f:ex41-fig-pow}
\end{figure}
\end{example}
\begin{example}[Quantile treatment effects, cont.]\label{ex:qte2}
For this experiment, I reuse the setup of Example \ref{ex:qr2} but replace the variable $X_{i,j,k}$ with a cluster-level treatment indicator $D_j$ that equals one if cluster $j$ received treatment and equals zero otherwise. I randomly assign $q_1 = \lfloor q/2 \rfloor$ clusters to treatment and $q_0 = \lceil q/2 \rceil$ to control. The coefficient of interest is $\delta$ in
\begin{align*}
Q(u\mid D_{j}, Z_{i,j}) = \beta_0(u) + \delta(u)D_j + \beta_2(u)Z_{i,j}.
\end{align*}
I do not assume that pairings are predetermined and therefore use the adjusted $p$-values of the CRK test from Algorithm \ref{al:between}. For each Monte Carlo replication, I drew a collection $\mathcal{I}$ with $|\mathcal{I}| = 50$ from $\mathcal{H}$ without replacement. The CRK test with unknown cluster pairings requires $\alpha = .05 > 1/2^{q_1\wedge q_0-1}$ to have power, which is satisfied here as long as $q\geq 12$. I therefore restrict $q$ to be between 12 and 20. All other parameters of the experiment are exactly as in Example \ref{ex:qr2}.
\begin{figure}
\centering
\resizebox{\textwidth}{!}{
\input{ex42-fig.tex}
}
\caption{Rejection frequencies in Example \ref{ex:qte2} of a true null (left) $H_0\colon \delta(u) = 0$ for all $u$ and false nulls (right) $H_0\colon \delta(u) = 0$ for $u > .5$ (grey) and $H_0\colon \delta(u) = 0$ for all $u$ (black) as a function of the number of clusters for the CRK test when cluster pairings are not known with (i)~$K=10$ neighborhoods per cluster with intra-neighborhood correlation $\varrho = .5$ (solid lines), (ii)~$K=20$ with $\varrho = .5$ (long-dashed), and (iii)~$K=10$ with $\varrho = .1$ (dotted).
}
\label{f:ex42-fig}
\end{figure}
The left panel of Figure~\ref{f:ex42-fig} shows the rejection frequencies of a true null hypothesis $H_0\colon \delta(u) = 0$ for all $u$ in 5,000 Monte Carlo experiments per horizontal coordinate as $q$ increases. I again considered (i)~$K=10$ neighborhoods per cluster with intra-neighborhood correlation $\varrho = .5$ (solid lines), (ii)~$K=20$ with $\varrho = .5$ (long-dashed), and (iii)~$K=10$ with $\varrho = .1$ (dotted). As can be seen, adjusting the CRK test for unknown cluster pairings results in a markedly more conservative test relative to the unadjusted test from Figure~\ref{f:ex41-fig-size}. However, as the right panel of Figure~\ref{f:ex42-fig} shows, this did not translate into poor power under the alternative. When I repeated the experiment with $\delta(u)\equiv .5$, the CRK test with identification across clusters had no problem detecting that neither $H_0\colon \delta(u) = 0$ for all $u\in (0,1)$ (black) nor $H_0\colon \delta(u) = 0$ for $u > .5$ (grey) was true. Compared to Example \ref{ex:qr2}, the test rejects slightly more often against the alternative with $\mathcal{U} = (0,1)$ because now every $u$ provides evidence against the null.
A noteworthy feature of the right panel of Figure~\ref{f:ex42-fig} is the ``zig-zag'' pattern in the rejection frequencies. The reason for this pattern is the treatment assignment mechanism. If $q=12$, then $q_1 = 6$ clusters receive treatment and $q_0 = 6$ do not. If $q=13$, then again $6 = \lfloor 13/2\rfloor$ clusters receive treatment but now $7 = \lceil 13/2 \rceil$ do not. Algorithm~\ref{al:between} uses a large number of potential pairings of treated and control clusters for inference but effectively reduces the number of clusters to $\min\{q_1,q_0\}$. In this experiment, inference with $6+7$ clusters is therefore effectively the same as inference with $6+6$ clusters, which explains the similar performance of the test at $q$ and $q-1$ when $q$ is odd.\xqed{\square}
\end{example}
\begin{example}[Placebo interventions in Project STAR]\label{ex:star}
In this example, I revisit a challenging placebo exercise of \citet[][Experiment~5.1]{hagemann2017} in data from the first year of the Tennessee \emph{Student/Teacher Achievement Ratio} experiment, known as Project STAR. Details about the data can be found in \citetalias{wordetal1990} and \citet{graham2008}. I only provide a brief summary.
In 1985, incoming kindergarten students in 79 project schools were randomly assigned to small classes (13-17 students) or regular-size classes (22-25 students) with or without a teacher's aide.
Each of the project schools was required to have at least one of each kindergarten class type. The outcome is standardized student performance on the \emph{Stanford Achievement Test} (SAT) in mathematics and reading administered at the end of the school year. The raw test scores are standardized as in \citet{krueger1999}. He finds across several mean regression models that students in small classes perform about five percentage points better on average than students in regular classrooms. (Assigning teacher's aides had no effect uniformly across specifications and I do not consider such classes in the following.) \citet{hagemann2017} documents similar effects in quantile regressions but finds that the effects are smaller for students near the bottom and top of the conditional outcome distribution and larger near the center of the distribution. For example, in the model
\begin{align}\label{eq:star1}
Q_{Y_{i,j}}(u\mid X_{i,j}) = \beta_0(u) + \delta(u) \mathit{small}_{i,j} + \beta_2(u)^T Z_{i,j}
\end{align}
where the treatment dummy $\mathit{small}$ indicates whether the student was assigned to a small class and $Z$ contains school dummies, the effect of being in a small class relative to a regular class varies between 2.78 percentage points at the 10th percentile and 7.23 percentage points at the 60th percentile.
For the placebo experiment, I removed all small classes from the sample and only kept the 16 schools that had two regular-size classes without aide. In each of these 16 schools, I then randomly assigned one of the regular-size classes the treatment indicator $\mathit{small}=1$. This mimics the random assignment of class sizes within schools in the original sample, even though in this case no student actually attended a small class. I applied the CRK test as in Algorithm~\ref{al:within} by running 16 separate quantile regressions, one for each school, on a constant and $\mathit{small}$ to get 16 separate estimates of $\delta$. The fixed effects as in \eqref{eq:star1} are not needed here because the constant can vary freely by cluster in these quantile regressions. This also means that I am effectively clustering at the school level because I am comparing, within each school, classes with $\mathit{small} = 1$ to classes with $\mathit{small} = 0$. There is only one such comparison per school. (If multiple small classes per school were available, then Algorithm \ref{al:between} could be used instead.) For the wild gradient bootstrap, I reran the QR in \eqref{eq:star1} in the placebo data but clustered at the classroom level as in \citet{hagemann2017}. For both methods, I tested at the 5\% level the correct null hypothesis that $H_0\colon \delta(u) = 0$ jointly at $u \in \{.1, .2, \dots, .9 \}$ against the alternative that $\delta$ is positive.
The rejection frequencies in the `size' column of Table \ref{t:placebo} show the outcome of repeating the placebo assignment 1,000 times. As can be seen, the CRK test provided a nearly exact test but the bootstrap over-rejected somewhat. The over-rejection for the bootstrap here was documented by \citet{hagemann2017} and can be attributed to the placebo sample being very small with about 69 students per school and the effect of interest being identified off of comparisons within these 16 schools.
\begin{table}\caption{Rejection frequencies of $H_0\colon \delta(u) = 0$ for all $u$ in placebo interventions in Project STAR for the CRK test and the wild gradient bootstrap at 5\% level}\label{t:placebo}
{\centering
\scalebox{1}{
\begin{tabular}{lp{0cm}cp{0cm}cccccc}
\hline
& & size & &\multicolumn{6}{c}{power} \\ \cline{3-3}\cline{5-10}
& &$\delta = 0$ & &$\delta = 2$ & $\delta = 3$ &$\delta = 4$ & $\delta = 5$ & $\delta = 6$ & $\delta = 7$\\ \hline
CRK test & &.043 & &.122 & .161 &.212 & .318 & .379 & .478 \\
Bootstrap & &.091 & &.233 & .316 &.428 & .580 & .691 & .814 \\ \hline
\end{tabular}
}}
\end{table}
I also investigated power by increasing the percentile scores of all students in the randomly drawn small classes of the placebo experiment by $\delta \in \{2,3,4,5,6 ,7\}$ percentage points. These increases are comparable in magnitude to, or smaller than, the estimated quantile treatment effects in the actual sample. Then I tested the incorrect hypothesis $H_0\colon \delta(u) = 0$ for all $u$ with the same experimental setup as before. The results are shown in the `power' columns of Table \ref{t:placebo}. As can be seen, the CRK test was able to reliably detect effects for moderate deviations from the null hypothesis. The wild gradient bootstrap rejected more often, but this was likely caused by its tendency to over-reject in this data set.\xqed{\square}
\end{example}
\section{Conclusion}
I introduce a generic method for inference on quantile and regression quantile processes in the presence of a finite number of large and arbitrarily heterogeneous clusters. The method asymptotically controls size by generating statistics that exhibit enough distributional symmetry such that randomization tests can be applied. This randomization test can even be asymptotically similar in empirically relevant situations. The test does not require ex-ante matching of clusters, is free of user-chosen parameters, and performs well at conventional significance levels with as few as five clusters. The main focus of the paper is inference on quantile treatment effects and quantile difference in differences but the method applies more broadly. Numerical examples and an empirical application are provided.
\bibliographystyle{chicago}
Let $M$ be a $d$-dimensional differentiable manifold, $\pi: \mathcal F(M) \to M$ the frame bundle and $(g(t))_{t \in [0,T]}$
a family of Riemannian metrics on $M$ depending smoothly on $t$ such that $(M, g(t))$ is geodesically complete for all $t \in [0,T]$.
Let $(e_i)_{i = 1}^d$ be the standard basis of $\mathbb R^d$.
For each $t \in [0,T]$ let $(H_i(t))_{i=1}^d$ be the associated $g(t)$-horizontal vector fields on $\mathcal F(M)$
(i.e.~$H_i(t,u)$ is the $g(t)$-horizontal lift of $ue_i$), and let $(V_{\alpha, \beta})_{\alpha, \beta = 1}^d$
be the canonical vertical vector fields.
Let $(W_t)_{t \geq 0}$ be a standard $\mathbb R^d$-valued Brownian motion.
In this situation Arnaudon, Coulibaly and Thalmaier~\cite{act, coulibaly} defined horizontal Brownian motion
on $\mathcal F(M)$ as the solution of the following Stratonovich SDE:
\[
dU_t = \sum_{i=1}^d H_i(t, U_t) \circ dW_t^i - \frac{1}{2} \sum_{\alpha, \beta = 1}^d \frac{\partial g}{\partial t}(t, U_te_\alpha, U_t e_\beta) V_{\alpha \beta}(U_t) dt.
\]
They showed that if $U_0 \in \mathcal O_{g(0)}(M)$, then $U_t \in \mathcal O_{g(t)}(M)$ for all $t \in [0,T]$.
$g(t)$-Brownian motion on $M$ is then defined as $X_t := \pi U_t$.
We denote the law of $g(t)$-Brownian motion on $M$ started at $x$ by $P^x$, and expectation with respect to that measure by $E^x$.
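As a consistency check (this observation is not needed in the sequel): if the metric is constant in time, the vertical drift term vanishes and the SDE reduces to
\[
dU_t = \sum_{i=1}^d H_i(U_t) \circ dW_t^i,
\]
the classical Eells--Elworthy--Malliavin construction of horizontal Brownian motion, so that $X_t = \pi U_t$ is an ordinary $g$-Brownian motion.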
\section{Main result}
The main result of this paper is the following theorem:
\begin{theorem}\label{mainresult}
If the family of metrics evolves under backwards super Ricci flow, i.e.
\begin{equation} \label{backwards}
\frac{\partial g}{\partial t} \leq \Ric,
\end{equation}
then Brownian motion on $M$ cannot explode up to time $T$.
In particular this result holds for backwards Ricci flow $\frac{\partial g}{\partial t} = \Ric$.
\end{theorem}
Recent work (see Section~\ref{related}) has shown that backwards Ricci flow tends to compensate the effects of Ricci curvature on the behaviour of heat flow and Brownian motion. Our result is therefore quite natural, because a lower bound on the Ricci curvature yields the non-explosion property in the fixed-metric case.
\begin{remark}\hfill
\begin{enumerate}
\item
In Section~\ref{nonsymmetric} we will give an extension of Theorem~\ref{mainresult} including the case of non-symmetric diffusion processes.
\item
For the question of explosion or non-explosion of Brownian motion on a manifold equipped with a fixed Riemannian metric
see e.g.~\cite{grigoryan}, \cite[Section~7.8]{hackenbrochthalmaier} or \cite[Section~4.2]{hsu}.
\end{enumerate}
\end{remark}
As an important tool we prove the following It\^o formula for the radial process $\rho(t, X_t)$,
where $\rho(t,x)$ denotes the distance with respect to $g(t)$ between $x$ and a fixed reference point $o$:
\begin{theorem}\label{itoradial}
There exists a non-decreasing continuous process $L$ which increases only when $X_t \in \Cut_{g(t)}(o)$ such that
\begin{equation} \label{eq:Itoradial}
\rho(t, X_t) = \rho(0, X_0) + \int_0^t \left[ \frac{1}{2} \Delta_{g(s)} \rho + \frac{\partial \rho}{\partial s} \right] (s, X_s) ds
+ \sum_{i = 1}^d \int_0^t (U_s e_i) \rho(s,X_s) d W_s^i - L_t.
\end{equation}
\end{theorem}
\begin{remark} \hfill
\begin{enumerate}
\item The usual It\^o formula fails to apply because the distance function is not smooth at the cut-locus.
A priori it is even not clear that $\rho(t,X_t)$ is a semimartingale.
\item In the case of a fixed Riemannian metric Theorem~\ref{itoradial} was proved by Kendall~\cite{kendall}
(see also \cite[Theorem~7.254]{hackenbrochthalmaier} or \cite[Theorem~3.5.1]{hsu}).
The idea of our proof is based on Kendall's original one.
\end{enumerate}
\end{remark}
\section{Remarks concerning related work} \label{related}
McCann and Topping~\cite{mccanntopping}
(see also Topping~\cite{topping} and Lott~\cite{lott})
showed contraction in the Wasserstein metric
for the heat equation under backwards Ricci flow on a compact manifold.
More precisely, they showed that the following are equivalent:
\begin{enumerate}
\item \label{super}
$g$ evolves under backwards super Ricci flow, i.e.~$\frac{\partial g}{\partial t} \leq \Ric$.
\item \label{contraction}
Whenever $u$ and $v$ are two non-negative unit-mass solutions of the heat equation
\[
\frac{\partial u}{\partial t} = \frac{1}{2} \Delta_{g(t)} u - \left( \frac{1}{2} \tr \frac{\partial g}{\partial t} \right) u
\]
(the term $\left( \frac{1}{2} \tr \frac{\partial g}{\partial t} \right) u$ comes from the change in time of the volume element),
the function $t \mapsto W_2(t, u(t, \cdot) \vol_{g(t)}, v(t, \cdot) \vol_{g(t)})$ is non-increasing. Here
\[
W_2(t, \mu, \nu) := \left( \inf_{\pi} \int_{M \times M} d_{g(t)} (x,y)^2 \pi(dx, dy) \right)^{1/2}
\]
is the $L^2$-Wasserstein distance of two probability measures $\mu$ and $\nu$ on $M$. (The infimum is over all probability measures $\pi$
on $M \times M$ whose marginals are $\mu$ and $\nu$.)
\end{enumerate}
This means that backwards super Ricci flow is characterised by this contraction property for solutions of the heat equation. Moreover, in recent work by Topping~\cite{topping} and Lott~\cite{lott} (see also Brendle~\cite{brendle}), the heat equation and the theory of optimal transport are used efficiently to derive several monotonicity results, including a new proof of the monotonicity of Perelman's reduced volume. These facts indicate that studying the heat equation in conjunction with backwards Ricci flow and the theory of optimal transport is an effective route to a deeper understanding of Ricci flow.
The non-explosion property of Brownian motion is one of the first problems we face when we consider the heat equation on a noncompact manifold. Our result shows that it is always satisfied for the heat equation under backwards Ricci flow, which should be quite helpful for the study of Ricci flow on noncompact manifolds by means of the heat equation.
In fact, our result enables us to remove the non-explosion assumption in recent work by Arnaudon, Coulibaly and Thalmaier~\cite[Section~4]{act2}. They extend McCann and Topping's implication \ref{super} $\Rightarrow$ \ref{contraction} to the case of a noncompact manifold. In addition, they sharpen the monotonicity of the $L^2$-Wasserstein distance to a pathwise contraction in the following sense: there is a coupling $( \bar{X}_t^{(1)} , \bar{X}^{(2)}_t)_{t \ge 0}$ of two Brownian motions starting from $x, y \in M$ respectively such that $t \mapsto d_{g(t)} ( \bar{X}^{(1)}_t , \bar{X}^{(2)}_t )$ is non-increasing almost surely. Taking expectations recovers the monotonicity of the $L^2$-Wasserstein distance. The sharpness of their pathwise contraction appears useful for the study of optimal transport with cost functions more general than the squared distance, e.g.~the $\mathcal{L}$-optimal transportation studied in the above-mentioned papers \cite{topping,lott,brendle}. As a consequence of our result, such problems can be considered without assuming compactness of the underlying space.
\section{Proof of Theorem~\ref{itoradial}: It\^o's formula for the radial process}
Since it suffices to prove Theorem~\ref{itoradial} before the exit time of $X$ from an arbitrarily large relatively compact open subset of $M$,
we may assume that $M$ is compact and that therefore its injectivity radius
\[
i_M := \inf \{ d_{g(t)}(x, y) \, | \, t \in [0,T], y \in \Cut_{g(t)}(x) \}
\]
is strictly positive and that we have a uniform bound for the sectional curvature $\Sect_{g(t)}$:
\[
|\Sect_{g(t)} | \leq K^2 \quad \mbox{for all } t \in [0,T].
\]
We first state It\^o's formula for smooth functions:
\begin{lemma}\label{ito2}
Let $f$ be a smooth function on $[0,T] \times M$. Then
\begin{eqnarray*}
d f(t, X_t)
& = & \frac{\partial f}{\partial t}(t, X_t) dt + \frac{1}{2} \Delta_{g(t)} f(t,X_t) dt + \sum_{i = 1}^d (U_t e_i) f(t,X_t) d W_t^i.
\end{eqnarray*}
\end{lemma}
\begin{proof}
It\^o's formula applied to a smooth function ${\tilde f}$ on $[0,T] \times \mathcal F(M)$ gives
\begin{eqnarray}
d {\tilde f}(t, U_t) & = & \frac{\partial {\tilde f}}{\partial t}(t, U_t) dt + \sum_{i = 1}^d H_i(t) {\tilde f}(t,U_t) d W_t^i + \frac{1}{2} \sum_{i=1}^d H_i(t)^2 {\tilde f}(t,U_t) dt \nonumber\\
&& {} - \frac{1}{2} \sum_{\alpha, \beta = 1}^d \frac{\partial g}{\partial t}(t, U_t e_\alpha, U_t e_\beta) V_{\alpha \beta} {\tilde f} (t, U_t) dt. \label{ito1}
\end{eqnarray}
Now let ${\tilde f}(t,u) := f(t,\pi u)$. By definition of $H_i(t)$, $H_i(t) {\tilde f}(t,u) = (u e_i) f(t,\pi u)$.
Moreover, it is well known (see e.g.~\cite[Proposition~3.1.2]{hsu}) that $\sum_{i=1}^d H_i(t)^2 {\tilde f}(t,u) = \Delta_{g(t)} f(t,\pi u)$.
Finally, since ${\tilde f}$ is constant in the vertical direction, the last term in \eqref{ito1} vanishes, so that the claim follows.
\end{proof}
\begin{lemma}\label{density}
Let $G(x, \tau, y, t)$ ($x,y \in M$, $0 \leq t < \tau \leq T$) be the fundamental solution of the equation
$\frac{\partial u}{\partial t} = \frac{1}{2} \Delta_{g(t)}u$ (see \cite{guenther} for existence).
Then for all $\tau \in (0,T]$ and all $x \in M$ the law of $X_\tau$ under $P^x$ is absolutely continuous with respect to the volume measure
(note that this property does not depend on the choice of the Riemannian metric), and its density with respect to the
$g(0)$-volume measure is given by $y \mapsto G(x, \tau, y, 0)$.
\end{lemma}
\begin{proof}
Fix $\varphi \in \mathcal C^2(M)$, and let $u$ be the solution of the initial value problem
\[
\left\{ \begin{array}{lll}
\frac{\partial u}{\partial t} & = & \frac{1}{2} \Delta_{g(t)}u\\
u(0, \cdot) & = & \varphi.
\end{array} \right.
\]
Then by Corollary~2.2 in \cite{guenther},
\[
u(\tau, x) = \int_M G(x, \tau, y, 0) \varphi(y) d vol_{g(0)}(y).
\]
Now apply It\^o's formula to $X$ and the function $(t,x) \mapsto u(\tau-t,x)$ to obtain
\begin{eqnarray*}
u(0, X_\tau) & = & u(\tau, X_0) - \int_0^\tau \frac{\partial u}{\partial t}(\tau-t, X_t) dt + \frac{1}{2} \int_0^\tau \Delta_{g(t)} u(\tau-t,X_t) dt + \mbox{martingale}\\
& = & u(\tau, X_0) + \mbox{martingale,}
\end{eqnarray*}
so that
\[
E^x \left[ \varphi(X_\tau) \right] = E^x \left[ u(0, X_\tau) \right] = E^x \left[ u(\tau, X_0) \right] = u(\tau, x) = \int_M G(x, \tau, y, 0) \varphi(y) d vol_{g(0)}(y).
\]
Since $\varphi$ is arbitrary the claim is proved.
\end{proof}
\begin{lemma} \label{cutlocus}
$
\{
t \in [0,T]
\, | \,
X_t \in \Cut_{g(t)} (o)
\}
$
has Lebesgue measure zero almost surely.
\end{lemma}
\begin{proof}
Since by Lemma~\ref{density} for each $t \in (0,T]$ and any starting point $x \in M$, the law of $X_t$ under $P^x$ is absolutely continuous
with respect to the $g(t)$-Riemannian volume measure, and since moreover the cut-locus $\Cut_{g(t)}(o)$ has $g(t)$-volume zero
(see e.g.~\cite[Theorem~7.253]{hackenbrochthalmaier} or \cite[Proposition~3.1]{chavel}), we have
\[
E^x \left[ \int_0^T 1_{\{X_t \in \Cut_{g(t)}(o)\}} dt \right] = \int_0^T P^x \left[ X_t \in \Cut_{g(t)} (o) \right] dt = 0,
\]
so that almost surely $\int_0^T 1_{\{X_t \in \Cut_{g(t)}(o)\}} dt = 0$.
\end{proof}
We now apply Lemma~\ref{ito2} to the process $\rho(t, X_t)$ away from its singularities.
As long as $X_t$ stays away from $o$ and the $g(t)$-cut-locus of $o$,
\begin{equation} \label{eq:Ito_preliminary}
d \rho(t, X_t) = d \beta_t + \frac{1}{2} \left[ \Delta_{g(t)} \rho + 2 \frac{\partial \rho}{\partial t} \right] (t, X_t) dt ,
\end{equation}
where $\beta_t$ is the martingale term given by
\[
\beta_t := \sum_{i = 1}^d \int_0^t H_i(s) {\tilde \rho}(s,U_s) d W_s^i .
\]
As we will observe in Lemma~\ref{negligible}, the singularity of $\rho(t,x)$ at $o$ is negligible.
The quadratic variation $\langle \beta \rangle_t$ of $\beta_t$
is computed as follows:
\begin{eqnarray*}
\langle \beta \rangle_t & = & \sum_{i = 1}^d \int_0^t \left[ H_i(s) {\tilde \rho}(s,U_s) \right]^2 ds\\
& = & \sum_{i = 1}^d \int_0^t \left[ (U_s e_i) \rho(s,X_s) \right]^2 ds\\
& = & \int_0^t |\nabla_{g(s)} \rho(s,X_s)|^2 ds\\
& = & t.
\end{eqnarray*}
Thus $\beta_t$ is a standard one-dimensional Brownian motion.
\begin{lemma}[Lemma~5 and Remark~6 in \cite{mccanntopping}]\label{mct}
The function $(t,x) \mapsto \rho(t,x)$ is smooth
whenever $x \notin \{o\} \cup \Cut_{g(t)}(o)$, and
\[
\frac{\partial \rho}{\partial t}(t,x)
=
\frac{1}{2} \int_0^{\rho (t, x)}
\frac{\partial g}{\partial t} (\dot{\gamma}(s), \dot{\gamma}(s))
ds,
\]
where $\gamma: [0, \rho (t,x)] \to M$ is the unique minimizing unit-speed $g(t)$-geodesic joining $o$ to $x$.
\end{lemma}
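The way \eqref{backwards} enters can already be sketched here (a heuristic anticipating the proof of Theorem~\ref{mainresult}): under backwards super Ricci flow, Lemma~\ref{mct} gives, off the cut-locus,
\[
\frac{\partial \rho}{\partial t}(t,x) \leq \frac{1}{2} \int_0^{\rho(t,x)} \Ric(\dot{\gamma}(s), \dot{\gamma}(s))\, ds,
\]
while the standard index form estimate
\[
\Delta_{g(t)} \rho(t,x) \leq \int_0^{\rho(t,x)} \Bigl( (d-1)\,\phi'(s)^2 - \phi(s)^2\, \Ric(\dot{\gamma}(s), \dot{\gamma}(s)) \Bigr)\, ds,
\]
valid for every piecewise smooth $\phi$ with $\phi(0) = 0$ and $\phi(\rho(t,x)) = 1$, contains the same Ricci integral with the opposite sign. Choosing $\phi \equiv 1$ outside a fixed neighbourhood of $o$, the potentially unbounded Ricci contributions in the drift $\frac{1}{2} \Delta_{g(t)} \rho + \frac{\partial \rho}{\partial t}$ cancel, leaving a bound that depends only on the geometry near $o$.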
Let $\Cut_{\mathrm{ST}}$ be the space-time cut-locus defined by
\[
\Cut_{\mathrm{ST}}
:=
\{
(t,x,y) \in [0,T] \times M \times M
\, | \,
(x,y) \in \Cut_{g(t)}
\}.
\]
It is shown in \cite{mccanntopping} that $\Cut_{\mathrm{ST}}$ is a closed subset in $[0,T] \times M \times M$.
Though they assumed $M$ to be compact, extension to the noncompact case is straightforward.
Since $[0,T] \times \{ o \} \times \{ o \}$ is a compact subset in $[0,T] \times M \times M$ and it is away from $\Cut_{\mathrm{ST}}$,
we can take $r_1 >0$ so that
\begin{equation}\label{r1}
d_{g(t)} ( o , \Cut_{g(t)} (o) ) > r_1
\end{equation}
holds for all $t \in [0,T]$. Thus we can use \eqref{eq:Ito_preliminary} while $X_t$ is in a small neighbourhood of $o$, up to the time when $X_t$ hits $o$.
Since $g(t)$ is smooth, Lemma~\ref{mct} and \eqref{eq:Ito_preliminary} together with the Laplacian comparison theorem imply the following by a standard argument:
\begin{lemma} \label{negligible}
With probability one, $X_t$ never hits $o$.
\end{lemma}
For $x,y \in M$, let
\[
\overline{d}(x,y) := \sup_{t \in [0,T]} d_{g(t)}(x,y).
\]
We consider $[0,T] \times M \times M$ equipped with the distance function $\mathbf{d}((s,x_1,x_2), (t,y_1,y_2)) := \max\{ |t-s|, \overline{d}(x_1,y_1), \overline{d}(x_2,y_2) \}$.
By Lemma~\ref{mct} and the compactness of $M$, there exists a constant $C_1 >0$ such that
\begin{equation} \label{eq:Lipschitz}
| d_{g(t)} (x,y) - d_{g(t')} (x,y) | \le C_1 |t - t'|
\end{equation}
holds for any $t , t' \in [0,T]$ and $x,y \in M$. We now define a set $A$ by
\[
A
:=
\left\{
( t , x , y ) \in [0,T] \times M \times M
\, \left| \,
\begin{array}{l}
d_{g(t)} ( o , x ) \ge 2 i_M / 3 ,
d_{g(t)} (o,y) = i_M / 3
\mbox{ and }
\\
d_{g(t)} (x,y) = d_{g(t)} (o,x) - d_{g(t)} (o,y)
\end{array}
\right.
\right\}.
\]
Note that $A$ is closed and hence compact since $d_{g(t)} (x,y)$ is continuous as a function of $(t,x,y)$.
Note that, for $(t,x,y) \in A$, $y$ is on a minimal $g(t)$-geodesic joining $o$ and $x$.
In particular, symmetry of the cut-locus implies that $A \cap \Cut_{\mathrm{ST}} = \emptyset$. Thus we have
\[
\delta_1 := \mathbf{d}( A , \Cut_{ST} ) \wedge \frac{i_M}{3(C_1+1)} > 0.
\]
We define the function $V: \mathbb R_+ \to \mathbb R_+$ by
\[
V(r) := \frac{d-1}{2} K \coth \left( K \cdot r \wedge \frac{i_M}{3} \right) + 2 C_1.
\]
The Laplacian comparison theorem implies that, for all $( t , x , y ) \notin \Cut_{\mathrm{ST}}$,
$
| ( \Delta_{g(t)} d_{g(t)} ( y , \cdot ) ) ( x ) |
\leq
(d-1) K \coth( K d_{g(t)} ( x , y ) )
$
and hence Lemma~\ref{mct} implies
\begin{equation} \label{eq:dominant}
\left|
\frac{1}{2} ( \Delta_{g(t)} d_{g(t)} (y , \cdot ) ) (x)
+
\frac{\partial }{\partial t} d_{g(t)} (y,x)
\right|
\leq
V ( d_{g(t)} ( x , y ) ).
\end{equation}
\begin{lemma}\label{fund}
Let
$(t_0, x_0) \in \Cut_{g(t_0)}(o)$ and $\delta \in (0, \delta_1 )$.
Let $X$ be a $g(t)$-Brownian motion starting at $x_0$ at time $t_0$. Let
$\tilde{T} := T \wedge ( t_0 + \delta ) \wedge \inf \{ t \geq t_0 \, | \, d_{g(t)} (x_0, X_t) = \delta \}$.
Then
\[
E \left[ \rho(t \wedge \tilde{T}, X_{t \wedge \tilde{T}}) - \rho(t_0, x_0) - \int_{t_0}^{t \wedge \tilde{T}} V ( \rho ( s , X_s ) ) ds \right] \leq 0.
\]
\end{lemma}
\begin{proof}
We construct a point $\tilde{o} \in M$ as follows: we choose a minimizing unit-speed $g(t_0)$-geodesic $\gamma$ from $o$ to $x_0$ and define $\tilde{o} := \gamma(i_M/3)$.
Then by construction $( t_0 , x_0 , \tilde{o} ) \in A$. Moreover
for all $t \in [t_0, \tilde{T}]$ we have $\mathbf{d}((t_0, x_0, \tilde{o}), (t,X_t,\tilde{o})) < \delta_1$ and therefore $X_t \notin \Cut_{g(t)}(\tilde{o})$. Let
\[
\rho^+(t,x) := d_{g(t)} ( o, \tilde{o} ) + d_{g(t)}(\tilde{o}, x).
\]
Since $\tilde{o}$ lies on a minimizing $g(t_0)$-geodesic from $o$ to $x_0$, we have $\rho^+(t_0,x_0) = \rho(t_0,x_0)$.
Moreover, by the triangle inequality, $\rho^+(t,x) \geq \rho(t,x)$ for all $(t,x)$. On $[t_0, \tilde{T}]$,
\begin{eqnarray}
\nonumber
d_{g(t)}( \tilde{o} , X_t ) & \geq & d_{g(t)} ( \tilde{o} , x_0 ) - d_{g(t)} ( x_0 , X_t )\\
\nonumber & \geq & d_{g(t_0)} ( \tilde{o} , x_0 ) - (1 + C_1) \delta\\
\label{eq:distant} & \geq & \frac{i_M}{3}
\end{eqnarray}
holds. By \eqref{eq:dominant} and Lemma~\ref{mct},
\[
\left(
\frac12 \Delta_{g(t)} \rho^+
+
\frac{\partial \rho^+}{\partial t}
\right)
(t,x)
\le
V ( \rho^+ (t,x) )
\]
holds if $(t,x,\tilde{o}) \notin \Cut_{\mathrm{ST}}$. Note that $V ( \rho^+ (t,X_t) ) = V ( \rho (t,X_t) )$ holds
for all $t \in [t_0, \tilde{T}]$ since we can show $\rho( t , X_t ) \geq i_M / 3$ in a similar way as in \eqref{eq:distant}. Therefore
\begin{align*}
\rho(t \wedge \tilde{T}, X_{t \wedge \tilde{T}})
&
- \rho(t_0, X_{t_0})
- \int_{t_0}^{t \wedge \tilde{T}} V( \rho ( s , X_s ) ) ds
\\
& =
\rho(t \wedge \tilde{T}, X_{t \wedge \tilde{T}})
- \rho^+(t_0, X_{t_0})
- \int_{t_0}^{t \wedge \tilde{T}} V ( \rho^+ ( s , X_s ) ) ds
\\
& \leq
\rho^+(t \wedge \tilde{T}, X_{t \wedge \tilde{T}})
- \rho^+(t_0, X_{t_0})
- \int_{t_0}^{t \wedge \tilde{T}}
\left(
\frac{1}{2} \Delta_{g(s)} \rho^+
+
\frac{\partial \rho^+}{\partial s}
\right)
(s,X_s)
ds.
\end{align*}
Since $\rho^+$ is smooth at $(t,X_t)$ for $t \in [t_0, \tilde{T}]$,
It\^o's formula shows that the last expression is a martingale vanishing at $t = t_0$.
Hence the claim follows.
\end{proof}
For $\delta \in (0 , \delta_1)$,
we define sequences of stopping times $(S_n^\delta)_{n \in \mathbb N}$
and $(T_n^\delta)_{n \in \mathbb N_0}$ by
\begin{eqnarray*}
T_0^\delta
& := &
0,
\\
S_n^\delta
& := &
T \wedge
\inf \{
t \geq T_{n-1}^\delta
\, | \,
X_t \in \Cut_{g(t)}(o)
\},
\\
T_n^\delta
& := &
T \wedge
(S_n^\delta + \delta) \wedge
\inf \{
t \geq S_n^\delta
\, | \,
d_{g(t)} (X_{S_n^\delta}, X_t) = \delta
\}.
\end{eqnarray*}
Note that these are well-defined
because $\Cut_{\mathrm{ST}}$ and
$\{ (t,x) \, | \, d_{g(t)} ( y , x ) = \delta \}$, where $y \in M$,
are closed.
\begin{proposition}\label{sup}
The process
\[
\rho(t, X_t) - \rho(0, X_0) - \int_0^t V( \rho ( s , X_s ) ) ds
\]
is a supermartingale.
\end{proposition}
\begin{corollary}
The process $\rho(t,X_t)$ is a semimartingale.
\end{corollary}
\begin{proof}[Proof of Proposition~\ref{sup}]
Thanks to the strong Markov property of Brownian motion it suffices to show that for all
deterministic starting points $(t_0, x_0) \in [0,T] \times M$ and all $t \in [t_0, T]$
\[
E \left[ \rho(t, X_t) - \rho(t_0, X_{t_0}) - \int_{t_0}^t V(\rho(s,X_s)) ds \right] \leq 0.
\]
To show this we first observe that thanks to Lemma~\ref{ito2}, \eqref{eq:dominant} and Lemma~\ref{fund} for all $n \in \mathbb N$
\[
E \left[ \rho(t \wedge S_n^\delta, X_{t \wedge S_n^\delta}) - \rho(t \wedge T_{n-1}^\delta, X_{t \wedge T_{n-1}^\delta})
- \int_{t \wedge T_{n-1}^\delta}^{t \wedge S_n^\delta} V(\rho(s,X_s)) ds \, \Big| \, \mathcal F_{T_{n-1}^\delta} \right] \leq 0
\]
and
\[
E \left[ \rho(t \wedge T_n^\delta, X_{t \wedge T_n^\delta}) - \rho(t \wedge S_n^\delta, X_{t \wedge S_n^\delta})
- \int_{t \wedge S_n^\delta}^{t \wedge T_n^\delta} V(\rho(s,X_s)) ds \, \Big| \, \mathcal F_{S_n^\delta} \right] \leq 0.
\]
It remains to show that $T_n^\delta \to T$ as $n \to \infty$.
If $\lim_{n \to \infty } T_n^\delta =: T_\infty < T$ occurs,
then $T_n^\delta - S_n^\delta$ converges to 0 as $n \to \infty$.
In addition,
$d_{g(T_n^\delta)} ( X_{S_n^\delta} , X_{T_n^\delta} ) = \delta$ must hold
for infinitely many $n \in \mathbb{N}$.
Take $N \in \mathbb{N}$ so large that $C_1 ( T_\infty - T_n^\delta ) < \delta / 2$
for all $n \ge N$. Then \eqref{eq:Lipschitz} yields
$
d_{g(T_\infty)} ( X_{S_n^\delta} , X_{T_n^\delta} ) \ge \delta / 2
$
for infinitely many $n \ge N$.
But this contradicts the fact
that $t \mapsto X_t$ is uniformly continuous on $[0,T]$.
Hence $T_n^\delta \to T$ as $n \to \infty$.
\end{proof}
\begin{lemma} \label{null}
$
\lim_{\delta \to 0}
\sum_{n=1}^\infty | T_n^\delta - S_n^\delta |
= 0
$
almost surely.
\end{lemma}
\begin{proof}
For $\delta > 0$,
let us define random subsets $E_\delta$ and $E$
of $[0, T]$ by
\begin{align*}
E_\delta
& : =
\{
t \in [ 0 , T ]
\, | \,
\mbox{there exists $t' \in [0,T]$ satisfying }
|t - t'| \le \delta
\mbox{ and }
( t' , X_{t'} , o ) \in \Cut_{\mathrm{ST}}
\} ,
\\
E
& : =
\{
t \in [ 0 , T ]
\, | \,
( t , X_t , o ) \in \Cut_{\mathrm{ST}}
\} .
\end{align*}
Since
the map $t \mapsto ( t , X_t , o )$ is continuous
and
$\Cut_{\mathrm{ST}}$ is closed,
$E$ is closed and hence $E = \cap_{ \delta > 0 } E_\delta$ holds.
By the definition of $S_n^\delta$ and $T_n^\delta$,
we have
\[
E
\subset
\bigcup_{n=1}^\infty [ S_n^\delta , T_n^\delta ]
\subset
E_\delta
\]
and hence the monotone convergence theorem implies
\[
\lim_{\delta \to 0}
\sum_{n=1}^\infty | T_n^\delta - S_n^\delta |
\le
\lim_{\delta \to 0}
\int_0^T 1_{E_\delta} (t) dt
=
\int_0^T 1_E (t) dt
= 0
\]
almost surely,
where the last equality follows
from Corollary~\ref{cutlocus}.
\end{proof}
\begin{lemma} \label{martingale}
The martingale part of $\rho(t,X_t)$ is
\[
\sum_{i=1}^d \int_0^t (U_s e_i) \rho(s,X_s) d W_s^i.
\]
\end{lemma}
\begin{proof}
By the martingale representation theorem there exists an $\mathbb R^d$-valued process $\eta$ such that
the martingale part of $\rho(t,X_t)$ equals $\int_0^t \eta_s dW_s$. Let
\[
N_t := \int_0^t \eta_s dW_s - \sum_{i = 1}^d \int_0^t (U_s e_i) \rho(s,X_s) d W_s^i.
\]
Using the stopping times $S_n^\delta$ and $T_n^\delta$, the quadratic variation $\langle N \rangle_T$ of $N$ is expressed as follows:
\begin{equation} \label{eq:quadratic}
\langle N \rangle_T
=
\sum_{i=1}^d
\sum_{n=1}^\infty
\left(
\int_{T_{n-1}^\delta \wedge T}^{S_n^\delta \wedge T}
| \eta_t^i - ( U_t e_i ) \rho ( t, X_t ) |^2
dt
+
\int_{S_n^\delta \wedge T}^{T_n^\delta \wedge T}
| \eta_t^i - ( U_t e_i ) \rho ( t, X_t ) |^2
dt
\right) .
\end{equation}
Since $X_t \notin \Cut_{g(t)} (o)$ if $t \in (T_{n-1}^\delta , S_n^\delta )$,
It\^o's formula \eqref{eq:Ito_preliminary} yields
\[
\int_{T_{n-1}^\delta \wedge T}^{S_n^\delta \wedge T}
| \eta_t^i - ( U_t e_i ) \rho ( t, X_t ) |^2
dt
= 0
\]
for $n \in \mathbb{N}$ and $i =1 , \cdots , d$. For the second term on the right-hand side of \eqref{eq:quadratic} we have
\[
\sum_{n=1}^\infty
\int_{S_n^\delta \wedge T}^{T_n^\delta \wedge T}
| \eta_t^i - ( U_t e_i ) \rho ( t, X_t ) |^2
dt
\le
2
\int_{\bigcup_{n=1}^\infty [ S_n^\delta , T_n^\delta ]}
\left(
| \eta_t |^2
+
1
\right)
dt .
\]
Since $\eta_t$ is locally square-integrable on $[0,T]$ almost surely, Lemma~\ref{null} yields $\langle N \rangle_T =0$ and the conclusion follows.
\end{proof}
We can now conclude the proof of Theorem~\ref{itoradial}:
Set
$
I_\delta
: =
\bigcup_{n=1}^\infty
[ S_n^\delta , T_n^\delta ]
$.
Define $L_t^\delta$ by
\begin{align*}
L_t^\delta
& :=
- \rho(t,X_t) + \rho(0,X_0)
+
\sum_{i=1}^d \int_0^t (U_s e_i) \rho(s,X_s) d W_s^i
\\
& \qquad
+
\int_{[0,t] \setminus I_\delta}
\left[
\frac{1}{2} \Delta_{g(s)} \rho
+
\frac{\partial \rho}{\partial s}
\right]
(s,X_s)
ds
+
\int_{[0,t] \cap I_\delta}
V ( \rho ( s, X_s ) )
ds
.
\end{align*}
By Proposition~\ref{sup}, Lemma~\ref{martingale} and It\^o's formula~\eqref{eq:Ito_preliminary} on
$[0,T] \setminus I_\delta$,
$L^\delta_t$ is non-decreasing in $t$;
moreover, it can increase
only when $t \in I_\delta$.
Then we have
\begin{align}
\nonumber
\rho(t, X_t) & - \rho(0, X_0) - \sum_{i = 1}^d \int_0^t (U_s e_i) \rho(s,X_s) d W_s^i
- \int_0^t \left[ \frac{1}{2} \Delta_{g(s)} \rho + \frac{\partial \rho}{\partial s} \right] (s, X_s) ds + L_t^\delta\\
\label{eq:error}
& = - \int_{[0,t] \cap I_\delta} \left[ \frac{1}{2} \Delta_{g(s)} \rho + \frac{\partial \rho}{\partial s} \right] (s,X_s) ds
+ \int_{[0,t] \cap I_\delta} V ( \rho ( s, X_s ) ) ds.
\end{align}
Since \eqref{eq:dominant} yields
\begin{align*}
\left|
\int_{[0,t] \cap I_\delta}
V ( \rho ( s, X_s ) )
ds
-
\int_{[0,t] \cap I_\delta}
\left[
\frac{1}{2} \Delta_{g(s)} \rho
+
\frac{\partial \rho}{\partial s}
\right]
(s,X_s)
ds
\right|
\le
2 \int_{I_\delta} V ( \rho ( s, X_s ) ) ds
\end{align*}
and $V ( \rho (s ,X_s) )$ is bounded on $I_\delta$,
Lemma~\ref{null} yields that
the right-hand side of \eqref{eq:error} converges to 0
as $\delta \to 0$.
Thus $L_t := \lim_{\delta \downarrow 0} L^\delta_t$ exists
for all $t \in [0,T]$ almost surely
and hence \eqref{eq:Itoradial} holds.
We can easily deduce the fact that
$L_t$ can increase only when $X_t \in \Cut_{g(t)} (o)$
from the corresponding property for $L^\delta_t$. \hfill $\Box$
\section{Proof of Theorem~\ref{mainresult}: Non-explosion of Brownian motion}
We define $k_1 \ge 1$ and $\bar{F} \: : \: [0,\infty) \to \mathbb R$ by
\begin{align*}
k_1 & := \inf \left\{ k \ge 1 \, | \, \Ric_{g(t)}(x) \geq -(d-1)k^2
\mbox{
for $t \in [0,T]$
and $x \in M$
with $\overline{d}(o,x) \le r_1$}
\right\},
\\
\bar{F} (s)
& :=
k_1 \coth ( k_1 \cdot s \wedge r_1 ) + k_1 \cdot s \wedge r_1,
\end{align*}
where $r_1$ is defined in \eqref{r1}.
Theorem~\ref{mainresult} follows immediately from the following estimate of the drift part of \eqref{eq:Itoradial}:
\begin{proposition} \label{bound}
Suppose \eqref{backwards}.
Then,
for all $(t,x) \in [0,T] \times M$ with $(t,x,o) \notin \Cut_{\mathrm{ST}}$,
\[
\Delta_{g(t)} \rho (t,x) + 2 \frac{\partial \rho}{\partial t} (t,x) \leq \bar{F} ( \rho(t,x) ).
\]
\end{proposition}
To prove this proposition it suffices to show the following lemma:
\begin{lemma} \label{bound_geodesic}
Suppose \eqref{backwards}.
Fix $t \in [0,T]$ and a minimizing unit-speed $g(t)$-geodesic $\gamma: [0,b] \to M$ with $\gamma(0) = o$.
Then there exists a non-increasing function $F \: : \: (0, b) \to \mathbb R$ satisfying $F(s) \le \bar{F} (s)$ and
\[
\Delta_{g(t)} \rho(t, \gamma (s)) + 2 \frac{\partial \rho}{\partial t} (t, \gamma(s)) \leq F(s)
\]
for all $s \in (0,b)$.
\end{lemma}
\begin{proof}
Let $( X_i )_{i=1}^d$ be orthonormal parallel fields along $\gamma$ with $X_1 = \dot{\gamma}$. Fix $r \in (0, b)$, and let
$J_i$ be the Jacobi field along $\gamma |_{[0,r]}$ with $J_i(0) = 0$ and $J_i(r) = X_i(r)$.
Then it is well known
(see \cite{chavel} for example)
that
\begin{align*}
( \Delta_{g(t)} d_{g(t)} (\gamma (0), \cdot ) ) (\gamma(r))
& = \sum_{i=2}^d I(J_i, J_i),
\end{align*}
where the index form $I$ for smooth vector fields $Y,Z$ along $\gamma |_{[0,r]}$ is defined by
\begin{align*}
I ( Y , Z )
& : =
\int_0^r
\left(
\langle \dot{Y}(s) , \dot{Z}(s) \rangle_{g(t)}
-
\langle
R_{g(t)} ( Y (s), \dot{\gamma}(s) ) \dot{\gamma}(s), Z(s)
\rangle_{g(t)}
\right)
ds.
\end{align*}
Let $G: [0,b] \to \mathbb R$ be the solution to the initial value problem
\[
\begin{cases}
\displaystyle
G''(s)
=
-
\frac{
\Ric_{g(t)} (\dot{\gamma}(s) , \dot{\gamma}(s))
}
{d-1}
G(s),
\\
\displaystyle
G(0) = 0, G'(0) = 1 .
\end{cases}
\]
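In the model case of a constant bound $\Ric_{g(t)} ( \dot{\gamma} , \dot{\gamma} ) = - (d-1) k^2$ along $\gamma$, the solution is explicit:
\[
G(s) = \frac{\sinh ( k s )}{k} \, ,
\qquad
\frac{G'(s)}{G(s)} = k \coth ( k s ) \, ,
\]
which is precisely the comparison profile invoked at the end of this proof.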
Then we have
\begin{align}
\nonumber
\sum_{i=2}^d I(G X_i, G X_i)
& =
\sum_{i=2}^d
\int_0^r
\left[
\left| G'(s) X_i(s) \right|^2
-
\langle
R( G(s) X_i(s), \dot{\gamma}(s) ) \dot{\gamma}(s), G(s) X_i(s)
\rangle
\right]
ds
\\
\nonumber
& =
\int_0^r
\left[
(d-1) G'(s)^2
- G(s)^2 \Ric(\dot{\gamma}(s), \dot{\gamma}(s))
\right]
ds
\\
\nonumber
& =
(d-1) \int_0^r \left[ G'(s)^2 + G(s) G''(s) \right] ds
\\
\label{eq:index1}
& =
(d-1) G(r) G'(r).
\end{align}
Since
$\gamma (0)$ has no conjugate point along $\gamma$
on $[0,r]$,
the left hand side of \eqref{eq:index1} must be strictly positive
(see Theorem~2.10 in \cite{chavel}).
It follows that $G(r) > 0$ for all $r \in (0,b)$.
Now let $Y_i(s) := \frac{G (s)}{G (r)} X_i(s)$.
Note that $Y_i$ has the same boundary values as $J_i$.
Therefore, by the index lemma,
\begin{equation*}
( \Delta_{g(t)} d_{g(t)} ( \gamma (0), \cdot ) ) ( \gamma (r) )
\leq
\sum_{i=2}^d I(Y_i, Y_i)
=
\frac{(d-1) G'(r)}{G(r)}.
\end{equation*}
Hence Lemma~\ref{mct} and \eqref{backwards} yield
\begin{eqnarray*}
(\Delta_{g(t)} \rho + 2 \frac{\partial \rho}{\partial t})(t, \gamma(r)) & \leq & \frac{(d-1) G'(r)}{G(r)} + \int_0^r \Ric(\dot{\gamma}(s), \dot{\gamma}(s)) ds\\
& = & (d-1) \left[ \frac{G'(r)}{G(r)} - \int_0^r \frac{G''(s)}{G(s)} ds \right]\\
& =: & F(r).
\end{eqnarray*}
Since
\begin{equation*}
F'(r)
=
(d-1) \left[ \frac{G(r) G''(r) - G'(r)^2}{G(r)^2} - \frac{G''(r)}{G(r)} \right]
=
- \frac{(d-1) G'(r)^2}{G(r)^2} < 0 ,
\end{equation*}
$F$ is decreasing.
In particular, we have $F (r) \le F ( r \wedge r_1 )$.
A usual comparison argument implies
$G' ( r \wedge r_1 ) / G ( r \wedge r_1 )
\le
k_1 \coth ( k_1 \cdot r \wedge r_1 )$
and hence the conclusion follows.
\end{proof}
\section{Generalization to non-symmetric diffusion}\label{nonsymmetric}
We generalize the previous results to a more general setting,
including the case of non-symmetric diffusions.
Let $X_t$ be a time-dependent diffusion whose generator is
$\Delta_{g(t)} / 2 + Z(t)$,
where $Z(t)$ is a time-dependent vector field on $M$
which is smooth on $[0,T] \times M$.
Even in this case,
Theorem~\ref{itoradial} still holds by replacing
$\Delta_{g(t)} / 2$ with $\Delta_{g(t)} / 2 + Z (t)$.
In what follows, we briefly mention the proof.
Except for Lemma~\ref{density} and Lemma~\ref{cutlocus},
an extension of each assertion is straightforward.
For Lemma~\ref{cutlocus}, some difficulties
come from the fact that the result corresponding
to Lemma~\ref{density},
especially the existence of a fundamental solution,
is not yet known
for non-symmetric diffusions.
But, for our purpose, it suffices to show the following:
\begin{lemma}
Suppose that $M$ is compact.
Then $P^x [ X_t \in \Cut_{g(t)} (o) ] =0$.
\end{lemma}
\begin{proof}
Let $\hat{Z}(t)$ be a differential 1-form
corresponding to $Z(t)$
by duality with respect to $g(t)$.
Let $M^Z_t$ be the martingale part of
the stochastic line integral of
$\hat{Z} (t)$ along $X_t$.
Note that there is a constant $c > 0$
such that
$\langle M^Z \rangle_t \le c t$ holds
since $M$ is compact.
Let us define a probability measure $\tilde{P}^x$
on the same probability space as $P^x$
by
$
\tilde{P}^x [ A ]
: =
E^x
[
\exp (- M^Z_t - \langle M^Z \rangle_t / 2 )
1_A
]
$.
By the Girsanov formula, the law of $X_t$
under $\tilde{P}^x$ coincides with
that of $g(t)$-Brownian motion at time $t$.
The Schwarz inequality yields
\begin{align*}
P^x [ X_t \in \Cut_{g(t)} (o) ]
& \le
E^x
\left[
\mathrm{e}^{-M^Z_t - \langle M^Z \rangle_t / 2}
1_{\{ X_t \in \Cut_{g(t)} (o) \} }
\right]^{1/2}
E^x
\left[
\mathrm{e}^{M^Z_t + \langle M^Z \rangle_t / 2}
\right]^{1/2}
\\
& \le
\tilde{P}^x
\left[
X_t \in \Cut_{g(t)} (o)
\right]^{1/2}
\mathrm{e}^{ct/2}.
\end{align*}
Hence the conclusion follows from Lemma~\ref{density}.
\end{proof}
To state an extension of Theorem~\ref{mainresult},
define a tensor field $( \nabla Z(t) )^\flat$ by
\[
( \nabla Z (t) )^\flat (X,Y)
:=
\frac12
\left(
\langle
\nabla_X Z (t) , Y
\rangle_{g(t)}
+
\langle
\nabla_Y Z (t) , X
\rangle_{g(t)}
\right)
,
\]
where $\nabla$ is the Levi-Civita connection with respect to $g(t)$.
\begin{assump} \label{non-symmetric}
There exists a locally bounded measurable function $b$ on $[0,\infty)$ such that
\begin{enumerate}
\item
$
( \nabla Z (t,x) )^\flat + \partial_t g (t,x)
\le
\Ric_{g(t)}(x) + b ( \rho ( t , x ) ) g(t,x)
$
for all $t \in (0,T)$ and all $x \in M$.
\item
The 1-dimensional diffusion process $y_t$ given by
$
d y_t
=
d \beta_t
+
\left(
\bar{F} ( y_t )
+
\int_0^{y_t} b ( s ) ds
\right)
dt
$
does not explode. (This is the case if and only if
\[
\int_1^\infty \exp \left[ -2 \int_1^y \mathbf{b}(z) dz \right] \left\{ \int_1^y \exp \left[ 2 \int_1^z \mathbf{b}(\xi) d \xi \right] dz \right\} dy = \infty,
\]
where $\mathbf{b}(y) := \bar{F}(y) + \int_0^{y} b(s) ds$,
see e.g.~\cite[Theorem~6.50]{hackenbrochthalmaier} or \cite[Theorem~VI.3.2]{ikedawatanabe}; a numerical sketch is given below.)
\end{enumerate}
\end{assump}
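The following is a minimal numerical sketch of how the non-explosion test in item 2 can be probed in practice; the choices of $k_1$, $r_1$, $b$ and the truncation of the outer integral are illustrative assumptions, not part of the statement.
\begin{verbatim}
# Sketch: Feller-type non-explosion test for the comparison diffusion
#   dy = d(beta) + bb(y) dt,  bb(y) = Fbar(y) + int_0^y b(s) ds.
# All parameter choices below are illustrative assumptions.
import numpy as np
from scipy.integrate import quad

k1, r1 = 1.0, 1.0

def Fbar(s):
    u = min(s, r1)
    return k1 / np.tanh(k1 * u) + k1 * u

def b(s):                       # assumed bound from item 1
    return 0.5

def bb(y):                      # total drift of the comparison diffusion
    return Fbar(y) + quad(b, 0.0, y)[0]

def expo(y):                    # 2 * int_1^y bb(z) dz
    return 2.0 * quad(bb, 1.0, y)[0]

def integrand(y):               # Feller test integrand at level y
    inner = quad(lambda z: np.exp(expo(z)), 1.0, y)[0]
    return np.exp(-expo(y)) * inner

# Non-explosion holds iff the integral over [1, infinity) diverges;
# here we only watch its growth on a truncated range.
for Y in (3.0, 6.0, 12.0):
    print(Y, quad(integrand, 1.0, Y, limit=200)[0])
\end{verbatim}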
Once we obtain the following, non-explosion of $X_t$
follows in the same way as above
by the comparison argument.
\begin{lemma}
Suppose that Assumption~\ref{non-symmetric} holds.
Fix $t \in [0,T]$ and a minimizing unit-speed $g(t)$-geodesic $\gamma: [0,b] \to M$ with $\gamma(0) = o$.
Then there exists a constant $C_Z > 0$ depending only on $\{ Z(t) \}_{t \in [0,T]}$ and $\gamma(0)$ such that
\[
( ( \Delta_{g(t)} + Z(t) ) d_{g(t)} ( \gamma (0), \cdot ) ) ( \gamma (s) )
+
2 \frac{\partial }{\partial t} d_{g(t)} (\gamma(0), \gamma(s))
\leq
C_Z + F(s)
+
\int_0^s b ( d_{g(t)} ( \gamma (0) , \gamma (u) ) ) du
\]
for all $s \in (0,b)$, where $F$ is the same function as in Lemma~\ref{bound_geodesic}.
\end{lemma}
\begin{proof}
By a direct calculation,
$
( \nabla Z (t) )^\flat
( \dot{\gamma}(s) , \dot{\gamma}(s) )
=
\partial_s
\langle Z(t) , \dot{\gamma} (s) \rangle_{g(t)} ( \gamma (s) )
$.
Hence we obtain
\begin{align*}
( Z(t) d_{g(t)} ( \gamma (0) , \cdot ) ) ( \gamma (r) )
& =
\langle Z(t) , \dot{\gamma} (r) \rangle_{g(t)} ( \gamma (r) )
\\
& =
\langle Z(t) , \dot{\gamma} (0) \rangle_{g(t)} ( \gamma (0) )
+
\int_0^r
( \nabla Z (t) )^\flat ( \dot{\gamma} (s) , \dot{\gamma}(s) )
ds .
\end{align*}
Then, by setting
$
C_Z
:=
\sup_{s \in [0,T]}
\sup
\{
\langle Z(s) , X \rangle_{g(s)} ( \gamma (0) )
\, | \,
X \in T_{\gamma (0)} M ,
\| X \|_{g(s)} = 1
\}
$,
the conclusion follows in a similar way as in the proof of Lemma~\ref{bound_geodesic}.
\end{proof}
\section{INTRODUCTION}
\label{sec:intro}
The integral quantum Hall effect in a planar electron
system is a beautiful manifestation of topological
effects in physics. Whenever the Fermi level is in
an energy gap, the quantized Hall conductance is known
to be a topological invariant at the microscopic level.
\cite{tknn,mkann,ntw,avron1,avron2,ish,edge_yh,chern_yh}
For example, in a periodic system the Hall conductance
for a filled band is the Chern number over the magnetic
Brillouin zone which is topologically a
torus, \cite{tknn,mkann} usually called the TKNN invariant
named after its discoverers. For a lattice system with edges,
the Hall conductance can be written as another topological
invariant for the complex energy surface which is normally
a higher-genus Riemann surface. \cite{edge_yh} (There is a
close relationship between this invariant and edge
states \cite{chern_yh}, which we are going to exploit
in this paper.)
Thus the quantization of Hall conductance is expected to
be stable against weak disorder and perturbations unless
some topological change, e.g. gap closing, occurs.
\cite{gap_k,gap,wz}
When the gap is reopened, usually there is a discrete change
in the Hall conductance. Such a Hall plateau transition
due to gap collapse is an example of the so-called quantum phase
transitions, which has recently attracted theoretical
attention. \cite{ww,cfw,zhang}
For definiteness, let us consider the model of
non-interacting electrons on a square lattice, with both
nearest-neighbor and next-nearest-neighbor hoppings, in a
uniform magnetic field with rational flux, $\phi=p/q$, per
plaquette. In general, the one-electron spectrum is split
into bands. If the Fermi level is in a gap between
two bands, the Hall conductance is quantized. At
some special values for hopping parameters, the gap may
close, leading to a Hall plateau transition. \cite{gap}
One approach to study such transitions is to use
effective field theory. The basic idea is that in the
neighborhood of gap closing points, energy-momentum
dispersion is generally linear, corresponding to the
appearance of massless Dirac fermion(s). Thus the
large-distance behavior of the system near the
transition can be described by an effective field
theory with Dirac fermions, in which the gap closing
and reopening are represented by sign change in fermion
masses. (A similar situation is the anyon Mott transition
near the fermion point, which has been studied in this
way in ref. \cite{cfw}.) Usually one is tempted to
include into the effective theory only the massless
fermions that appear when the gap closes; it is these
fermions whose Dirac mass may change sign across the
transition. Indeed, it has been generally shown that
counting only these massless Dirac fermions at the
transition point will give us the correct value for
the discrete jump in Hall conductance across the
transition. \cite{oshikawa} Also only the massless Dirac
fermions contribute to the critical exponents for the
transition. \cite{cfw} However, as we will see, this does
not guarantee that the Hall conductance before and after
the transition can be correctly reproduced. In other
words, the topological nature of the Hall conductance
dictates a matching condition for the large-distance
effective field theory to be consistent with the
microscopic lattice formulation.
To see the necessity of such consistency conditions,
we note that the contribution of a planar massive
Dirac fermion to Hall conductance is known to be plus
or minus one-half in units of $e^2/h$, with the sign
depending on the sign of Dirac mass. Therefore, the
change in Hall conductance is always an integer, when
a number of fermion masses change sign. But, if an odd
number of Dirac masses change sign across the transition,
which is possible as we will see below in a numerical
example, the Hall conductance either before or after
the transition is half-odd-integer, in conflict with the
integer quantization as required by the microscopic
topological arguments. Thus a certain number
of massive Dirac fermions, whose masses do not
change sign across the plateau transition, must
be present in the effective field theory. We will
call such fermions spectators, since really
they do not participate in the phase transition
because they remain massive at the transition point.
However, they are there in the effective theory,
not only ensuring the Hall conductance to be integral,
but also, adding to the non-triviality of the problem,
ensuring that the integral value of the Hall
conductance before or after the plateau transition
is the {\it correct one} as dictated by the microscopic
topological invariant. So the number of species of
these massive ``spectators'' can not be arbitrary; it
needs to be determined from first principles.
Previously the necessity of the existence of massive
``spectator'' fermions has been noticed in Refs.
\cite{cfw} and \cite{haldane} in some simple and
specific models for phase transitions to quantum Hall
states. Here we will discuss a general prescription
and a concrete procedure for determining all the
massive spectators in more complicated
models with arbitrary rational flux per plaquette.
The paper is organized as follows. First in the next
section, we briefly review the (2+1)-dimensional
Dirac fermions, the associated zero mode and
resulting contribution, $\pm (1/2) e^2/h$ per species,
to Hall conductance, providing arguments for the
necessity of having massive spectators in the theory.
In Sec. III, we propose a general prescription for
determining the number of massive spectators, i.e.
counting the number of ``vortices'' in the magnetic
Brillouin zone for the highest filled band
near the transition. A concrete procedure in
which we find vortices by edge states
is introduced. To exemplify, we use a tight binding
hamiltonian with nearest-neighbor (NN)
and next-nearest-neighbor (NNN)
hoppings to illustrate our procedure. Sec. IV is
devoted to exhibiting numerical results in this model,
showing how our procedure works when gap closing occurs
near a certain critical value of the
NNN hopping parameter. The existence of
massive spectators is clearly demonstrated in the
example. Finally in Sec. V we summarize our
results and discuss the relationship of our Hall-conductance
matching condition to precedents in the literature
on matching topological property for theories at
small and large distances.
\section{Dirac fermions in (2+1)-Dimensions}
First let us briefly review the Hall conductance for
a system of planar Dirac fermions in the effective
field theory, with ``relativistic'' energy-momentum
relation, which becomes linear for the massless case.
This quantity can be extracted from a simple
calculation of the ``vacuum-polarization'' diagram
for the photon propagator with a fermion loop. This
one-loop diagram is superficially divergent, a certain
regularization is needed. Then, the one-loop integral
becomes finite even before removing the cut-off, with
the result \cite{djt,nsw}
\begin{equation}
\sigma_{xy}=-{\frac 1 2}\, \sum_{\alpha}\,
{\rm sgn}\, (m_\alpha ).
\label{eq:dirac_hall}
\end{equation}
Here $m_\alpha$ is the Dirac mass for species $\alpha$.
It has been shown that if the fermions remain massive,
there is no correction to the one-loop value of
$\sigma_{xy}$ from higher orders. \cite{ch,ssw} Other
derivations without computing diagrams can be found
in references \cite{ns,redlich,jackiw}. Here we present an argument
\cite{haldane,jackiw} which does not involve any diagrams
or regularization explicitly.
The hamiltonian for a planar ``relativistic''
fermion of species $\alpha$ in a static,
uniform, perpendicular magnetic field is given by
\begin{equation}
H_\alpha = \Pi_\alpha^1{\bf \sigma}^2+\Pi_\alpha^2
{\bf \sigma}^1 +m_\alpha{\bf\sigma}^3,
\label{eq:dirac_ham}
\end{equation}
where $\Pi_\alpha^1$ and $\Pi_\alpha^2$ are spatial components
of covariant derivative, satisfying $[\Pi_\alpha^1,\Pi_\alpha^2]=
ie\hbar B$, $B$ is the magnetic field strength, and
${\bf \sigma}^{(1,2,3)}$ are Pauli matrices.
The spectrum for a single species at $B=0$ is symmetric
with respect to zero and is given by
\begin{equation}
\epsilon_\alpha^{\pm}(k) =\pm \sqrt{(\hbar k)^2 +m_\alpha^2},
\label{eq:spec_withoutB}
\end{equation}
which is gapless when $m_\alpha=0$, and is unbounded both
from below and from above. When $B\neq 0$, relativistic Landau
levels are given by
\begin{eqnarray}
\epsilon_{\alpha,n}^{\pm}&=&\pm \sqrt{n \hbar |e B| +m_\alpha^2},
\,\,\,\, ( n\ge 1)\, \\
\epsilon_{\alpha,0}&=& m_\alpha \, {\rm sgn} ( e B ).
\label{eq:spec_withB}
\end{eqnarray}
Note that there is a zero-mode (\ref{eq:spec_withB})
labeled by $n=0$, which
depends on the sign of fermion mass. For the vacuum state,
whether this mode is filled or not depends on the sign of
mass: filled if $m_\alpha<0$, and not if $m_\alpha>0$. When
$m_\alpha=0$, there is a charge conjugation symmetry for this
species.
In this massless case, the vacuum is doubly degenerate, with the
zero-mode filled or not filled. In the continuum field theory,
this ambiguity can not be resolved. It has been argued
\cite{jackiw} that a fermion-number fractionalization
occurs if one assumes charge conjugation symmetry:
the fermion number of the doubly degenerate
vacuum is either $+ 1/2$ or $ -1/2$ depending on
whether the zero mode is filled or empty.
Similarly for the Hall conductance, we may argue that
since the contribution of the filled zero-mode is always
unity (in units of $e^2/h$), charge conjugation
symmetry requires that the vacuum with or without
the zero-mode filled should have fractional contribution
$\pm (1/2) e^2/h$, respectively. Now let us add an
infinitesimal mass to the fermion to remove the vacuum
degeneracy. Obviously for the vacuum state, the zero-mode
is filled if $m_\alpha <0$, and empty if $m_\alpha > 0$.
Thus, we obtain the value of $\sigma_{xy}$ to be $\mp
(1/2) e^2/h$, depending on the sign of mass.
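The following is a minimal numerical sketch of the spectral asymmetry behind this argument; the values of $\hbar |eB|$, ${\rm sgn}(eB)$ and $m_\alpha$ are arbitrary assumptions, and only the sign structure of (\ref{eq:spec_withB}) matters.
\begin{verbatim}
# Sketch: relativistic Landau levels.  The paired levels
# +/- sqrt(n hbar|eB| + m^2) are insensitive to sgn(m); only the
# unpaired n = 0 mode, e0 = m sgn(eB), moves through zero with m.
import numpy as np

hbar_eB, sgn_eB = 1.0, +1           # assumed field strength and sign
for m in (+0.3, -0.3):
    paired = sorted(round(float(s * np.sqrt(n * hbar_eB + m**2)), 3)
                    for n in range(1, 4) for s in (+1, -1))
    e0 = m * sgn_eB                 # the unpaired zero mode
    print(f"m = {m:+.1f}: e0 = {e0:+.2f} "
          f"(filled in the vacuum: {e0 < 0}), paired: {paired}")
# Flipping sgn(m) flips only whether the zero mode is filled, the
# origin of the -(1/2) sgn(m) contribution to sigma_xy.
\end{verbatim}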
When there are several species of fermions, the Hall
conductance is given by a sum of their individual
contributions, as in Eq.(\ref{eq:dirac_hall}).
Therefore, if the mass of a species changes sign across a
plateau transition, it gives rise to a change in Hall
conductance by unity. So to account for the change in Hall
conductance, one only needs to look for massless fermions
at the transition.
However, it is easy to see that if the number of massless
fermions is odd at the transition, their total contribution to
Hall conductance before or after the transition is a half-odd
integer. If this gives the total Hall conductance, it is in
conflict with integral quantization of Hall conductance
implied by the TKNN topological invariant.
One may wonder whether perhaps this never happens, similar to
the no-go theorem of Nielsen and Ninomiya \cite{nogo} in
$3+1$-dimensions, which asserts that in the
continuum limit of a lattice theory the number of massless
chiral fermions must be even. If a similar theorem could hold
also in the $2+1$-dimensional case, perhaps the number of
massless Dirac fermions must be even. Unfortunately, it is not
the case: No such no-go theorem exists in $2+1$-dimensions.
Moreover, in Sec. IV we will show a numerical example, in which
gap closing occurs clearly at three isolated points in the
magnetic Brillouin zone, corresponding to three
(no more and no less) massless fermions.
Thus if the low energy spectrum of the system
consists of unpaired (massless) fermions, there
must be massive fermions present in the ``high-energy''
sector of the theory. These massive fermions remain
massive during the transition and, therefore, do not
participate in the Hall plateau transition,
in the sense that their contribution to Hall
conductance does not change across the transition.
Therefore they are ``spectators'' of the transition.
But their existence restores the integral quantization
of Hall conductance before or after the transition.
\cite{cfw,haldane} In the next section we will discuss
the general principle, as well as a practical procedure,
to determine the number of species of the ``hidden''
massive fermion spectators from microscopic information.
\section{ Vorticities and edge states in the NNN model }
\label{sec:gap_and_dirac}
For definiteness, let us consider a tight-binding
hamiltonian on a square lattice with both NN and NNN
hoppings. In the Landau gauge, the hamiltonian is given by
\begin{eqnarray}
H = & & -t \sum_{m,n} (
c_{m+1,n}^\dagger
c_{m,n}
+
c_{m,n+1}^\dagger e^{i 2\pi \phi m}
c_{m,n} ) \nonumber \\
& & -t_d \sum_{m,n}
c_{m+1,n+1}^\dagger e^{i 2\pi \phi( m + 1 /2 )}
c_{m,n}
-t_d' \sum_{m,n}
c_{m,n+1}^\dagger e^{i 2\pi \phi ( m + 1 /2 )}
c_{m+1,n}
+ H.c.
\label{eq:hmlt}
\end{eqnarray}
where $c_{m,n}$ is the annihilation operator for a
lattice fermion at site $(m,n)$. We assume that
the magnetic flux per plaquette $\phi$ is rational,
{\it i.e.\/}, $\phi=p/q$ with mutually prime integers
$p$ and $q$.
The Hall conductance in this case is a bulk quantity
which is described by the TKNN topological integer
\cite{tknn}.
It is written as the Chern number of a $U(1)$
fibre bundle over the magnetic Brillouin zone as
\begin{equation}
\sigma^{j,\ bulk}_{xy} =- {\frac {e^2} h}
\frac 1 {2\pi i}
\int\int_{T^2_{MBZ}}{dk_x dk_y} \ [ \ {\bf \nabla}_k \times
{\bf A}_u^j({\bf k})\ ]_z,
\label{eq:chern}
\end{equation}
\begin{equation}
{\bf A}_u^j({\bf k}) =
\langle u^j({\bf k}) |{\bf \nabla}_k |u^j({\bf k})\rangle
=\sum_{m=1}^q
{u^j_m}^*({\bf k}){\bf \nabla}_k u^j_m({\bf k}),
\end{equation}
where $u^j_m$ is a normalized Bloch function of the $j$-th
energy band, with $m$ $(1\leq m \leq q)$ labeling its
components. (See references for precise definitions.
\cite{mkann,chern_yh})
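For illustration, the Chern number (\ref{eq:chern}) can be evaluated numerically by summing Berry phases over the plaquettes of a discretized Brillouin zone. In the following minimal sketch the two-band Bloch hamiltonian $h({\bf k})$ is an arbitrary placeholder, not the NNN hamiltonian (\ref{eq:hmlt}); it is only meant to exhibit the method.
\begin{verbatim}
# Sketch: Chern number of a filled band from the Berry phase per
# plaquette.  The two-band h(k) below is an illustrative placeholder.
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)

def h(kx, ky):                      # placeholder Bloch Hamiltonian
    return (np.sin(kx) * sx + np.sin(ky) * sy
            + (1.0 - np.cos(kx) - np.cos(ky)) * sz)

N = 60
ks = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
u = np.empty((N, N, 2), complex)    # lowest-band eigenvector on a grid
for i, kx in enumerate(ks):
    for j, ky in enumerate(ks):
        u[i, j] = np.linalg.eigh(h(kx, ky))[1][:, 0]

total = 0.0                         # sum of plaquette Berry phases
for i in range(N):
    for j in range(N):
        ip, jp = (i + 1) % N, (j + 1) % N
        prod = (np.vdot(u[i, j], u[ip, j]) *
                np.vdot(u[ip, j], u[ip, jp]) *
                np.vdot(u[ip, jp], u[i, jp]) *
                np.vdot(u[i, jp], u[i, j]))
        total += np.angle(prod)
print("Chern number:", int(round(total / (2 * np.pi))))
\end{verbatim}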
An important observation here is that
the magnetic Brillouin zone $T^2_{MBZ}$ is topologically a
torus rather than a rectangle. Since the torus does not
have a boundary, application of Stokes' theorem
to Eq.(\ref{eq:chern}) would give $\sigma_{xy}=0$
if ${\bf A}_u^j({\bf k}) $ is well defined
on the entire torus $T^2_{MBZ}$.
A possible non-zero value of $\sigma_{xy}$
is a consequence of a non-trivial
topology of ${\bf A}_u^j({\bf k}) $.
In order to better understand the
relevant topology, let us examine the
effect of a phase transformation
of the wavefunction
\begin{equation}
|u^j({\bf k})\rangle ' =
e^{i f({\bf k })} |u^j({\bf k})\rangle,
\label{eq:gt1}
\end{equation}
where $f({\bf k })$ is an arbitrary smooth
function of $\bf k$ over the Brillouin zone.
The corresponding gauge transformation for
${\bf A}_u^j({\bf k}) $ is
\begin{equation}
{\bf A}_u^j({\bf k})' ={\bf A}_u^j({\bf k})
+ i \nabla _k f({\bf k }),
\label{eq:gt2}
\end{equation}
which clearly leaves $\sigma_{xy}$ invariant.
The non-trivial topology arises when the phase
of the wavefunction can not be determined uniquely
and smoothly in the entire Brillouin zone.
The gauge transformation defined above implies
that the overall phase of the wavefunction can be
chosen arbitrarily. It can be determined, for example,
by demanding the $q$-th component of
$|u^j({\bf k})\rangle$ to be real.
However, this is not enough to fix the
phase over the Brillouin zone, when $u^j_q({\bf k})$
vanishes at some points. Let us denote these
zeros by $k^{*}_{s}$ with $s=1,\cdots,N $,
and define small regions around the zeros
by $ R_s^\epsilon = \{ \, { \bf k} \in T^2_{MBZ} \, | \,
| {\bf k}-{\bf k}^*_s|< \epsilon \, \}$, $\epsilon>0$,
where $u^j_q({\bf k}_s^*)=0$.
We may adopt a different phase convention in
$R_s^\epsilon $ so that another component, say,
$u^j_1({\bf k})$, is real.
(We denote it by $|v^j({\bf k })\rangle$).
Then the overall phase is uniquely determined
on the entire Brillouin zone $T^2_{MBZ}$.
At the boundaries, $\partial R_s^\epsilon$, we have
a phase mismatch
\begin{equation}
|v^j({\bf k})\rangle =
e^{i f({\bf k })} |u^j({\bf k })\rangle.
\end{equation}
By using the above formulas for gauge
transformation, Eqs. (\ref{eq:gt1}),
(\ref{eq:gt2}), we have
\begin{eqnarray}
\sigma^j_{xy}
&= &
- \sum_{s=1}^{N} n_s\, , \\
n_s&=& \frac 1 {2\pi }
\oint_{\partial R_s^\epsilon}
{\bf \nabla}f ({\bf k}) \cdot d {\bf k} .
\label{eq:oint}
\end{eqnarray}
Here the $n_s$ must be integers since the state
vectors must fit together exactly when we complete
full revolutions around each $R_s^\epsilon$.
This implies that the zeros of a certain component
of the Bloch function define vortices in the
Brillouin zone, whose integral vorticities
contribute to the Hall conductance.
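As a toy illustration of (\ref{eq:oint}), the vorticity can be read off numerically as the phase winding around a zero; the function $w({\bf k})$ below is an assumed stand-in for $u^j_q({\bf k})$ near one of its zeros, not a Bloch function of the model.
\begin{verbatim}
# Sketch: vorticity n_s as the phase winding of a component around
# its zero; w(k) is an illustrative stand-in for u_q near k*_s.
import numpy as np

kx0, ky0, eps = 0.3, 0.4, 0.1       # assumed zero and loop radius
w = lambda kx, ky: (kx - kx0) + 1j * (ky - ky0)

th = np.linspace(0.0, 2 * np.pi, 400)
loop = w(kx0 + eps * np.cos(th), ky0 + eps * np.sin(th))
n_s = np.sum(np.angle(loop[1:] / loop[:-1])) / (2 * np.pi)
print(int(round(n_s)))              # prints 1: a unit vortex
\end{verbatim}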
While the phase of the wavefunction depends
on phase convention (gauge choice),
the total vorticity is a gauge invariant quantity.
In this way, in principle, counting the total
vorticity of the $U(1)$ phase of the Bloch wavefunction
gives the bulk Hall conductance. But this would
need the knowledge of (explicit or numerical)
wavefunctions in the whole Brillouin zone.
A more practical prescription to obtain
the vortices of the bulk states is proposed
in Refs. \cite{edge_yh} and \cite{chern_yh},
which is summarized as follows. This method
exploits a relationship between two closely
related systems. We use periodic boundary
conditions in $y$-direction. But in
$x$-direction, we consider two possible
boundary conditions separately: the periodic
and fixed ones. Using the periodic boundary
conditions in both directions
and taking the infinite size limit, one can
obtain a bulk system (without edges).
By using the fixed boundary conditions
in $x$-direction, one can consider a
cylindrical system with edges in
that direction. In this case,
by a Fourier transformation in
$y$-direction, we obtain a
one-dimensional tight-binding equation
with parameter $k_y$,
\begin{eqnarray}
- ( t & + & t_d e^{i\Lambda(m)}
+ t_d' e^{-i\Lambda(m)}) \Psi_{m+1}(k_y)
- 2 t \cos (-k_y + 2\pi \phi m) \Psi_{m}(k_y)
\nonumber \\
&-& ( t + t_d'e^{i\Lambda(m-1)}
+ t_d e^{-i\Lambda(m-1)})\Psi_{m-1}(k_y)
= E(k_y) \Psi_{m}(k_y),
\label{eq:1dhmlt}
\end{eqnarray}
where
\begin{equation}
\Lambda(m)= -k_y+2 \pi \phi m +\pi\phi,
\ \ \ \Lambda(m+q)=\Lambda(m) .
\label{eq:lambda}
\end{equation}
This difference equation can be written as
\begin{equation}
\left( \begin{array}{c}
\Psi_{m+1}\\
\Psi_{m}
\end{array}
\right)
=
{M}_m(E,k_y)
\left( \begin{array}{c}
\Psi_{m}\\
\Psi_{m-1} \end{array}
\right),
\label{eq:tr}
\end{equation}
where $M_{m}$ is the following $2\times 2$ transfer matrix
\begin{eqnarray}
M_m(E,k_y) & = &
\left( \begin{array}{c c}
\frac
{-E - 2t \cos (-k_y+2\pi\phi m) }
{t + t_d e^{i\Lambda(m)} + t_d' e^{-i\Lambda(m)} }
& -{ \frac
{t + t_d' e^{i\Lambda(m-1)} + t_d e^{-i\Lambda(m-1)}}
{t + t_d e^{i\Lambda(m)} + t_d' e^{-i\Lambda(m)}}
} \\
1 & 0
\end{array}
\right).
\label{eq:mtramat}
\end{eqnarray}
Then the spectrum is completely governed
by a product of $q$ transfer matrices,
$M(E,k_y)$, given by
\begin{equation}
M(E,k_y) =
\prod_{m=1}^q M_m(E,k_y), \ \ \
(| \det M(E,k_y) | =1).
\label{eq:trmat}
\end{equation}
In the following, for simplicity, we assume
$t=1$, $t_d'=t_d$ and $L_x$ to be a multiple of $q$.
The boundary conditions are
$\Psi_0=\Psi_{L_x}=0$
and we set $\Psi_1=1$
to fix the normalization.
The one-dimensional system (\ref{eq:1dhmlt})
has a period $q$. Thus as $L_x\to\infty$,
part of the spectrum converges to $q$ energy
bands which are determined by the condition
\begin{equation}
| {\rm Tr} M(E,k_y) | \le 2.
\label{eq:cnd_band}
\end{equation}
This corresponds to the on-shell condition in
usual scattering theory.
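A minimal numerical sketch of this band condition, built directly from (\ref{eq:lambda}), (\ref{eq:mtramat}) and (\ref{eq:cnd_band}), is the following; the values of $p$, $q$ and $t_d$ are illustrative.
\begin{verbatim}
# Sketch: bulk bands from |Tr M(E,ky)| <= 2, with t = 1, t_d' = t_d,
# so that all transfer matrices are real.  Parameters illustrative.
import numpy as np

p, q = 1, 3
phi = p / q
t, td = 1.0, 0.2679

def Lam(m, ky):
    return -ky + 2 * np.pi * phi * m + np.pi * phi

def M(E, ky):                       # ordered product M_q ... M_1
    out = np.eye(2)
    for m in range(1, q + 1):
        den = t + 2 * td * np.cos(Lam(m, ky))
        num = t + 2 * td * np.cos(Lam(m - 1, ky))
        out = np.array(
            [[(-E - 2 * t * np.cos(-ky + 2 * np.pi * phi * m)) / den,
              -num / den],
             [1.0, 0.0]]) @ out
    return out

ky = 0.7
Es = np.linspace(-4.0, 4.0, 2001)
n_band = sum(abs(np.trace(M(E, ky))) <= 2.0 for E in Es)
print(n_band, "of", len(Es), "sampled energies satisfy |Tr M| <= 2")
\end{verbatim}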
Further, for the cylindrical system,
there are additional eigenenergies,
$\mu_j$ ($j=1,\cdots,q-1$), satisfying
\begin{equation}
M_{21}(\mu_j, k_y)=0,
\end{equation}
so that $\Psi_{L_{x}}=0$. For each $\mu_{j}$,
the condition (\ref{eq:cnd_band}) is not satisfied.
On the other hand, the corresponding state
is localized near either $x\approx 0$ or
$x\approx L_x$, depending on the value of
$|M_{22}(\mu_j,k_y)|$:
\begin{eqnarray}
|M_{22}(\mu_j, k_y)| > 1 &:
\ \ {\rm localized\ at\ the \ left\
edge} \nonumber \\
|M_{22}(\mu_j, k_y)| < 1 &:
\ \ {\rm localized\ at\ the \ right\ edge}.
\label{eq:edgwhich}
\end{eqnarray}
So this is an edge state, whose energy lies in
the $j$-th gap. Note that for such a state,
the above equation implies
\begin{equation}
\Psi_{q}(\mu_j (k_{y}),\, k_y)= 0\, .
\label{eq:edge_cond}
\end{equation}
On the other hand, if $|M_{22}(\mu_j(k_{y}), k_y)|=1$,
the condition (\ref{eq:cnd_band})
is satisfied, implying that the edge state
is in touch with an energy band edge. Then the
above equation (\ref{eq:edge_cond}) tells us that
the corresponding $k_{y}$, together with $k_{x}
=0$ or $2\pi/q$ depending on which (top or bottom)
edge it is of the energy band, gives a zero
of the Bloch function $\Psi_{q}$ in the Brillouin zone.
In this way, each time the edge state becomes degenerate
with a bulk state at a band edge,
we find a zero of the $q$-th component of the
Bloch wave function for that band, which
gives a desired $U(1)$ vortex in Eq. (\ref{eq:chern}).
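The same transfer-matrix data also locates the edge states numerically. The following sketch finds the zeros of $M_{21}(E,k_y)$ on an energy grid and classifies each by $|M_{22}|$ as in (\ref{eq:edgwhich}); the parameters are again illustrative.
\begin{verbatim}
# Sketch: edge-state energies mu_j(ky) as the zeros of M_21(E,ky),
# classified by |M_22|.  Same illustrative parameters as above.
import numpy as np

p, q, t, td = 1, 3, 1.0, 0.2679
phi = p / q

def Lam(m, ky):
    return -ky + 2 * np.pi * phi * m + np.pi * phi

def M(E, ky):
    out = np.eye(2)
    for m in range(1, q + 1):
        den = t + 2 * td * np.cos(Lam(m, ky))
        num = t + 2 * td * np.cos(Lam(m - 1, ky))
        out = np.array(
            [[(-E - 2 * t * np.cos(-ky + 2 * np.pi * phi * m)) / den,
              -num / den],
             [1.0, 0.0]]) @ out
    return out

Es = np.linspace(-4.0, 4.0, 4001)
for ky in np.linspace(0.0, 2 * np.pi, 7):
    m21 = np.array([M(E, ky)[1, 0] for E in Es])
    roots = Es[:-1][np.sign(m21[:-1]) != np.sign(m21[1:])]
    for mu in roots:                # q - 1 roots, one per gap
        side = "left" if abs(M(mu, ky)[1, 1]) > 1.0 else "right"
        print(f"ky = {ky:.2f}: mu = {mu:+.3f}, {side} edge")
\end{verbatim}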
How do we determine the vorticity associated with
each vortex found in this way?
A useful and practical rule is proved in Ref.
\cite{chern_yh}: When the degeneracy occurs
at the top of a band and the edge state
jumps from the right edge to the left
(or the opposite way from the left edge to the right)
with increasing $k_y$, the vorticity is $+1$ (or
$-1$). Similarly, when the degeneracy occurs
at the bottom of a band, the vorticity is just
opposite to the previous case. In Fig~\ref{f:ex},
a typical example of the degeneracy between
an energy band and edge state is shown.
The energy of an edge state that is localized
at the right edge is drawn by a solid line and
the one at the left edge by a dotted line.
Since there is one and only one edge state in
the $j$-th gap, we may trace its energy
$\mu_{j} (k_{y})$ as $k_{y}$ varies from $0$
to $2\pi$, and define a winding number (around
the $j$-th gap) associated with the edge state
by adding all the vorticities at its
degeneracy points. (Geometrically indeed
this number can be realized \cite{chern_yh} as
the winding number of the loop traced
by $\mu_{j} (k_{y})$ around the $j$-th hole
on the complex energy Riemann surface for
Eq. (\ref{eq:1dhmlt}).) It is amusing that
this winding number is exactly the contribution
of the $j$-th band to the Hall conductance.
In this way, a direct relationship between the
bulk Hall conductance and the behavior of the edge
state is established. We will exploit this
relation to find the vorticities of the bulk Bloch
states. As will be discussed in the next section,
our rule is simply {\em to assign a Dirac fermion
to each vortex associated with the edge state} in
the gap relevant to the plateau transition.
\section{Dirac fermions and spectators on the lattice}
In the model with only the NN hopping,
it is known that there are zero modes
when $q$ is even \cite{gap_k,gap,wz}.
There are $q$
zero modes and the dispersion is linear
in momentum near these zero modes.
As far as the low energy physics is concerned,
when the Fermi energy is near zero,
we may treat the system as that of Dirac fermions.
The energy dispersion for the $\phi=1/4$ case is shown
in Fig~\ref{f:Dirac}(a).
There are four energy bands, and
the two of them near $E=0$
are degenerate at isolated points, where the linear dispersion
is clearly observed.
Dirac fermions also appear in the NNN
model \cite{gap,oshikawa}.
For example, in Fig~\ref{f:Dirac}(b),
the spectrum for $\phi=1/3$ and $t_d=t_{Dirac}$,
$t_{Dirac}\approx 0.2679t$ is shown.
In this case, there are three energy bands
and the two higher energy bands are
degenerate at three momenta, corresponding
to three massless Dirac fermions.
In general, there are $q$ gap closing points
in the NNN model with $t_d=t_{Dirac}$.
Deviation of $t_d$ from $t_{Dirac}$ introduces
a mass term in the energy dispersion.
The formula (\ref{eq:dirac_hall}) tells us
that one Dirac fermion carries the Hall
conductance $\pm 1/2 (e^2/h)$ depending on
the sign of its mass. For an even number
of Dirac fermions (as shown
in Fig~\ref{f:Dirac}(a)),
Eq.(\ref{eq:dirac_hall}) always gives
an integer $\sigma_{xy}$.
On the other hand, if $q$ is odd,
according to Eq.(\ref{eq:dirac_hall}),
these $q$ Dirac fermions associated
with the gap closing points will give
a half-integer value for any
combination of signs of the mass terms.
This would break the integral quantization
of the Hall conductance of the non-interacting
fermions, if no massive Dirac fermions were
introduced. Indeed, as will be shown below,
for example in the case of
Fig~\ref{f:Dirac}(b), there is in addition
a massive Dirac fermion, corresponding to
a vortex not passing through any of the
gap closing points. Though it is a ``spectator''
for this Hall transition, it helps
restore the integral Hall conductance.
In Fig.~\ref{f:1o3},
the energy bands, edge states, and the corresponding
vortices of the NN model are shown,
where the white circles denote $+1$ vorticities and
the black ones denote $-1$ vorticities.
In Figs.~\ref{f:edges1o3},
the same quantities are shown
for the NNN model for several $t_d$'s.
One can see how the second energy gap
closes and then opens again.
The energy dispersions near the gap closing
are shown in Fig.~\ref{f:edges1o3}(d)
for $t_d=0.267945t$ and in Fig.~\ref{f:edges1o3}(e)
for $t_d=0.267955t$,
where there are three almost linear
dispersions in the second energy gap.
The exact linear dispersion is realized
at $t_d=t_{Dirac}\approx 0.26795t$.
Thus the Dirac fermions are massive in the cases of
Figs.~\ref{f:edges1o3} (d) and (e).
Further by comparing Fig.~\ref{f:edges1o3}(c)
and Fig.~\ref{f:edges1o3}(d) \cite{gap},
one can see that the sign of the mass is
reversed after the gap reopening.
Near the gap closing (appearance of the
massless Dirac fermions)
at the three different momentum points,
there are three vortices as shown in
Figs.~\ref{f:edges1o3}.
The vortices move between the lower
band and the higher band during the process.
This explains the change in the Hall conductance.
What is crucial to our point is that there is
an additional vortex near the $k_y=\pi$ point,
where there is a large energy gap. Though
the shape of the energy dispersion
is quite different from that of a Dirac fermion,
this does not prevent us from including the effect
of this vortex as a Dirac fermion with a
large mass term in the effective field theory,
since its only role in the theory for the Hall
transition is to reproduce the correct
value of the Hall conductance\cite{comm}.
This is the hidden ``spectator'' which restores
the integral quantization of Hall conductance.
In short, it is the existence of the vortices
rather than their energy dispersion that
defines all relevant Dirac fermions appearing
in the effective theory, including massive
spectators.
Note that when one changes the parameters
of the model, the Dirac fermions associated
with vortices in the magnetic Brillouin
zone are always created in pairs. For example,
in Figs.~\ref{f:edges1o3}(a) and (b),
between $t_d=0.24$ and $t_d=0.25003$,
a pair of Dirac fermions
is created near $k_y=\pi$.
Two fermions move toward the gap closing points
at $k_y=2\pi/3$ and $k_y=4\pi/3$ and there
they go across the energy gap when the gap
closes (Figs.~\ref{f:edges1o3}(a)-(g)). In
summary, for the effective theory of this
quantum Hall transition, we should have
three massless Dirac fermions and one massive
spectator with positive mass.
\section{ Summary and Discussion}
We have shown that in general one needs to
add massive fermions (spectators)
into the effective field theory in order to
get the correct integral Hall conductance.
The number of spectators is not arbitrary, since the
Hall conductance is dictated by the microscopic
TKNN topological invariant. Therefore,
one needs to consider the ``high-energy'' sector
or the microscopic details of the system,
in order that the Hall conductance
in the low-energy effective field theory
at large distances should be the same
as in the high energy microscopic
model at small distances.
This matching condition, required by topological
considerations, establishes a connection between
theories of the same system at large and small
distances. In the models with noninteracting
electrons on a planar lattice, we have
given the principles and a
practical procedure for determining the number of
massive species of fermion spectators from microscopic
information. Numerical examples are shown for the
validity of our prescription and for the
implementation of our practical procedure.
To conclude this paper, we would like to point out the
relationship of our present problem to its precedents
in the literature on topological investigations in
quantum field theory. From the point of view of the
latter, the Hall effect in a planar system (or in 2+1
dimensions) can be expressed by a Chern-Simons
effective action, obtained by integrating out the
electrons in the system, for the external
electromagnetic field:
\begin{equation}
S_{eff} (A)= {\sigma_{xy}\over 2} \int d^3x \,
\epsilon^{\mu\nu\lambda} A_{\mu} F_{\nu\lambda},
\label{eq:CS}
\end{equation}
whose variation with respect to $A_{\mu}$ gives us
the (Hall) current in the system induced by the
external field. (Here $\mu,\nu,\lambda=0,1,2$.)
There is a well-known formal relationship \cite{ns,zwz}
between this action and the famous axial anomaly equation
in 3+1 dimensions:
\begin{equation}
\partial_{\mu} J_{5}^{\mu} = \frac{1}{16\pi^{2}}
\epsilon^{\mu\nu\lambda\sigma} F_{\mu\nu}
F_{\lambda\sigma}.
\label{eq:anomaly}
\end{equation}
Here non-conservation of the axial current,
$J_{5}^{\mu}$, is expressed in terms of
the gauge field that couples to the vector
current. One observes that the right side of
Eq. (\ref{eq:CS}) can be viewed,
up to numerical factors, as the surface term
of an integral of the right side of Eq. (\ref{eq:anomaly}).
It is not surprising to find similarities between
our present problem and some known examples related to
the axial anomaly in 3+1 dimensions.
The first example we would like to mention
is 't Hooft's anomaly-matching condition. \cite{tHooft}
In 3+1 dimensions, chiral fermions give rise to
the axial anomaly, which is known to be of topological origin.
If this anomaly exists at the fundamental
constituent level, 't Hooft has argued
that it must survive at large distances even in theories
in which the massless fermions are confined (like quarks)
and do not show up at large distances. This requirement,
i.e. the axial anomaly in the effective theory at large
distances should match that in the fundamental theory at
small distances, imposes constraints on the spectrum of
the effective theory at large distances. In certain cases,
massive ``spectators'' may be needed to satisfy this
condition. In our problem, what plays the role of
the axial anomaly is the Hall conductance in 2+1 dimensions.
The lattice theory we start with is the microscopic theory
and the effective field theory is the one at large distances.
In a certain sense, the Nielsen-Ninomiya ``no-go'' theorem
for unpaired massless chiral fermions in the continuum
limit of a lattice theory constitutes another counterpart
of our problem in 3+1 dimensions. The basic argument for
this ``no-go'' theorem is again the necessity of
matching, in the continuum limit, the vanishing axial
anomaly in the lattice theory.
Therefore, our present problem can be viewed as a
2+1 dimensional version of the above two problems
for matching topological properties for theories of
the same system at small and large distances.
\section{Acknowledgments}
Y.H., M.K., and Y.S.W. were supported in part by Grant-in-Aid
from the Ministry of Education, Science and Culture
of Japan.
Y.S.W. thanks the Institute for
Solid State Physics, University of Tokyo for warm hospitality
during his visit.
His work was supported in part by US NSF grant No. PHY-9308458.
\section{Introduction}
Dark Energy is the dominant component of our
Universe; if future observations establish that its equation of state differs from that of a Cosmological Constant
contribution, then
we have a case for modifying GR at large
distances, and massive gravity can be a compelling candidate. Great effort has been devoted to extending at the
nonlinear level~\cite{Gabadadze:2010, Hassan:2011vm} the seminal work of
Fierz and Pauli (FP)~\cite{Fierz:1939ix}, and recently a Boulware-Deser (BD) ghost-free
theory was found~\cite{Gabadadze:2011, HR}.
Unfortunately, cosmological solutions of the ghost free dRGT theory are
rather problematic: spatially flat homogeneous
Friedmann-Robertson-Walker (FRW) solutions simply do not
exist~\cite{DAmico}, and even allowing for open FRW solutions~\cite{open},
strong coupling~\cite{tasinato} and ghostlike
instabilities~\cite{defelice-prl, defelice} develop. In addition the
cutoff of the theory is rather low~\cite{AGS}, namely $\Lambda_3=\left(m^2 \,
M_{Pl} \right)^{1/3}$. For a recent review see \cite{de-rham-long,defelice-rev}.
A possible way out is to give up Lorentz invariance and require only rotational
invariance~\cite{Rubakov,dub,usweak}. Within the general class of
theories which propagate five DoF found in~\cite{uscan,uslong}, in
the Lorentz breaking case most of the theories have a much safer
cutoff $\Lambda_2 =(m \, M_{Pl})^{1/2}\gg \Lambda_3$ and also avoid all
of the phenomenological difficulties mentioned above, including the
troubles with cosmology~\cite{cosmogen}.
Another option is to promote the nondynamical metric entering in the
construction of massive gravity theory to a dynamical
one~\cite{DAM1,PRLus}, entering the realm of bigravity originally introduced by Isham, Salam
and Strathdee~\cite{Isham}.
In the bigravity formulation FRW homogeneous solutions
do exist~\cite{uscosm,hasscosm,russ}; however, cosmological perturbations, for modes inside the horizon,
start to grow too early and too fast when compared with GR; as a
result, the linear regime becomes problematic already during the
radiation/matter era~\cite{uspert}.
The reason for such peculiar behaviour of the scalar perturbations could
be {\it naively} traced back to the FRW background solution,
which is controlled by the parameter $\xi$ (the ratio of the
conformal factors of the two metrics), and to the absence of matter
coupled to the second metric, whose pressure could support gravitational perturbations inside the horizon.
In the presence of only ordinary matter, coupled with the first metric,
{\it only} small values of the parameter $\xi$ give an acceptable early
time cosmology. The introduction of the second matter component
provides other consistent background solutions where the values of
$\xi$ can also be of order~1 and, at the same time, provides the necessary pressure support to infalling perturbations.
So in this paper we will extend our previous analysis to the case where an additional matter sector
is minimally coupled to the second metric.
Though we do not consider the problem, the second matter sector
could also be relevant for dark matter~\cite{Yuk1,Yuk2}.
The outline of the paper is the following: in section \ref{bi} we
review the bigravity formulation of massive gravity and the extension
to the case where a second matter sector is present; in section
\ref{frwsec} we study FRW solutions and cosmological perturbations are
analysed in section \ref{pert-sect}.
\section{Massive Gravity and Bigravity}
\label{bi}
Any modification of GR that turns a massless graviton into a massive
one calls for additional DoF.
Basically, GR is
deformed by a scalar function $V$ built from the physical metric $g$ that
couples with matter and a second metric $\tilde g$.
Besides phenomenological issues, dealing
with a nondynamical metric is rather awkward; in this context it is
natural to promote the second metric to a fully dynamical field, see for instance~\cite{myproc}.
Thanks to $\tilde g$, it is possible to build nontrivial
diffeomorphism invariant terms without derivatives of the metric.
Expanding the metric around a fiducial background, such terms
lead precisely to the mass term for the graviton.
Consider the action
\begin{equation}
S=
\int d^4 x \left\{ \sqrt{\tilde g} \, \left[ \kappa \; M_{pl}^2\; \tilde R + L_{\tilde{\text{M}}} \right] +\sqrt{g} \;\left[
M_{pl}^2 \;\left( R
-2 \, m^2 \, V \right) + L_{\text{M}} \right] \right\},
\label{act}
\end{equation}
where $R$ and $\tilde R$ are the corresponding Ricci scalars and the deforming
potential $V$ is a scalar function of the tensor $X^\mu_\nu = {g}^{\mu
\alpha} {\tilde g}_{\alpha \nu}$. Ordinary matter is minimally coupled
with $g$ and is described by $L_{\text{M}}$. In order to try to cope with the
instabilities found in~\cite{uspert} we shall introduce a second
matter sector that couples minimally with $\tilde g$ and is
described by $ L_{\tilde{\text{M}}} $.
The constant $\kappa$ controls
the relative size of the strength of gravitational interactions in the
two sectors, while $m$ sets the scale of the graviton mass. In
particular, in the limit $\kappa \to \infty$, the second metric gets
frozen to a prescribed background value. Removing the second matter
sector, one recovers the previously studied bigravity theories; see for instance~\cite{ussphe}.
The modified Einstein equations can be written as\footnote{When not
specified, indices of tensors related with $g$($\tilde g$) are raised/lowered with
$g(\tilde g)$}
\begin{gather}
\label{eqm1}
\,{E}^\mu_\nu + Q_1{}^\m_\n = \frac{1}{2\;M_{pl}^2 }\, {T}^\mu_\nu \,, \\
\label{eqm2}
\kappa \, {\tilde E}^\mu_\nu + Q_2{}^\m_\n = \frac{1}{2\;M_{pl}^2 }\, {\tilde T}^\mu_\nu \;;
\end{gather}
where we have defined $Q_1$ and $Q_2$ as effective energy-momentum tensors induced by the
interaction term.
The ghost free
potential~\cite{Gabadadze:2011, Hassan:2011vm}\footnote{A very similar potential
having the same form but with $X$ instead of $X^{1/2}$ was
considered in~\cite{usex}.} $V$ is a special scalar function
of $Y^\mu_\nu=(\sqrt{X})^\mu_\nu$ given by
\begin{equation}
\begin{split}
& V=\sum_{n=0}^4 \, a_n\, V_n \,,\qquad \tau_n
= \text{Tr}(Y^n); \\
&V_0=1\,\qquad
V_1=\tau_1\,,\qquad
V_2=\tau_1^2-\tau_2\,,\qquad
V_3=\tau_1^3-3\,\tau_1\,\tau_2+2\,\tau_3\,,\\[1ex]
&V_4=\tau_1^4-6\,\tau_1^2\,\tau_2+8\,\tau_1\,\tau_3+3\,\tau_2^2-6\,
\tau_4\, .
\end{split}
\label{eq:genpot}
\end{equation}
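Up to the normalization $V_n = n! \, e_n$, these $V_n$ are the elementary symmetric polynomials $e_n$ of the eigenvalues of $Y$. The following minimal sketch checks this numerically, with a random matrix as an illustrative stand-in for $Y$.
\begin{verbatim}
# Sketch: check V_n = n! e_n(Y), with e_n the elementary symmetric
# polynomials of the eigenvalues of Y.  The random Y below is an
# illustrative stand-in for X^{1/2}.
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=(4, 4))
tau = [np.trace(np.linalg.matrix_power(Y, n)).real for n in range(5)]

V2 = tau[1]**2 - tau[2]
V3 = tau[1]**3 - 3*tau[1]*tau[2] + 2*tau[3]
V4 = (tau[1]**4 - 6*tau[1]**2*tau[2] + 8*tau[1]*tau[3]
      + 3*tau[2]**2 - 6*tau[4])

# np.poly gives the characteristic polynomial [1, -e1, e2, -e3, e4].
c = np.poly(np.linalg.eigvals(Y)).real
print(np.isclose(V2, 2*c[2]), np.isclose(V3, -6*c[3]),
      np.isclose(V4, 24*c[4]))      # -> True True True
\end{verbatim}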
In \cite{HRBI} it was shown that in the bimetric
formulation the potential $V$ is BD ghost free. We have that
\begin{eqnarray}
\label{eq:q1}
Q_1{}_\nu^\mu &=& { m^2}\, \left[ \; V\; \delta^\mu_\nu \, - \, (V'\;Y)^\mu_\nu \right]\\[1ex]
\label{eq:q2}
Q_2{}_\nu^\mu &=& m^2\, q^{-1/2} \, \; (V'\;Y)^\mu_\nu ,
\end{eqnarray}
where $(V^\prime)^\mu_\nu = \partial V / \partial Y_\mu^\nu$ and $q =\det
X=\det(\tilde g)/\det(g)$.
The canonical analysis~\cite{HRBI, GF} shows that in general 7 DoF propagate; around a
Minkowski background, 5 can be associated to a massive spin two
graviton and the remaining 2 to a massless spin two graviton.
We consider only the case where each matter sector is minimally
coupled with only its own metric field.
Allowing the second metric to couple also with
standard matter would result in a violation of the equivalence
principle; indeed, it is not possible to locally put both
metrics in a Minkowski form.
\section{FRW Solutions in Massive Bigravity}
\label{frwsec}
Let us consider FRW background solutions in massive bigravity of the form
\begin{equation}
\begin{split}
ds^2 &= a^2(\tau) \left(- d \tau^2 + dr^2 + r^2 \, d \Omega^2 \right) =
{\bar g}_{1 \, \mu \nu} dx^\mu dx^\nu \,, \\
\tilde{ds}^2 &= \omega^2(\tau) \left[- c^2(\tau) \, d \tau^2 + dr^2+ r^2 \, d
\Omega^2 \right] =
{\bar g}_{2 \, \mu \nu} dx^\mu dx^\nu\, .
\label{frw}
\end{split}
\end{equation}
It is convenient to define the standard Hubble
parameters for the two metrics and the ratio between the two scale factors
\begin{equation}
{\cal H} =\frac{d a}{d\tau} \frac{1}{a} \equiv \frac{a'}{a}
\, ,
\qquad {\cal H}_\o \equiv \frac{\o'}{\o}=\frac{\xi'}{\xi}+{\cal H}
\, , \qquad \xi \equiv\frac{\o}{a} \, ;
\label{hubble}
\end{equation}
where $'$ always denotes differentiation with respect to the conformal
time $\tau$.
Solutions fall in two branches depending on how the covariant
conservation of $Q_{1/2}$, enforced by the Bianchi
identities, is realized. It turns out that the physically interesting
case~\cite{uscosm,uspert} is when, as a consequence of the
conservation of $Q_{1/2}$, we have that
\begin{equation}
c =
\frac{{\cal H}_\o}{{\cal H}} \,,\;\;\;\;\;\; \xi'=(c-1)\;{\cal H}\;\xi
\;\;\;\; {\rm with}\;\;\; c>0 \, .
\label{bnew}
\end{equation}
We will not discuss the other branch of solutions, where $\xi$ is constant,
the effect of the gravity modification amounts to a cosmological
constant, and perturbations are strongly coupled~\cite{uspert}, as expected.
The expansion rate follows from the equation
\begin{equation}
\frac{3 \,\mathcal{H}^2}{a^2}= 8 \pi G \, \rho_1 + m^2\; \left(6\, a_3 \,
\xi ^3+6 \,a_2 \, \xi^2+3\, a_1 \, \xi +a_0 \right) \, .
\label{hh}
\end{equation}
The presence of the second metric is equivalent, for the first sector, to a gravitational
fluid with energy density $\rho_g$ given by
\begin{equation}
\rho_g =\frac{m^2 \left[ 6 \, \xi^2 \left(a_3 \, \xi
+a_2\right)+ 3 \, a_1 \, \xi+a_0\right]}{8 \pi G} \, \, ;
\end{equation}
with an equation of state $p_g=w_g \, \rho_g$ of the form
\begin{equation}
w_g =-1- \frac{\left(6 \, a_3\, \xi ^2+4 \, a_2 \, \xi +a_1\right)
\xi'}{\mathcal{H} \;\left[ 6 \, \xi^2 \, \left(a_3\, \xi
+a_2\right)+ 3 \, a_1 \, \xi+a_0\right]} \, .
\end{equation}
The conservation of energy-momentum tensor for the two fluids leads to
\begin{eqnarray}
\label{mat12}
\rho_1'+3\;{\cal H}\;(\rho_1+p_1)=0,\qquad \rho_2'+3\;{\cal H}_\o\;(\rho_2+p_2)=0,
\end{eqnarray}
thus for $p_i = w_i \, \rho_i$ we have $\rho_1 =
\rho_{1}^{\text{in}} \, a^{-3(w_1+1)}$ and $ \rho_2 = \rho_{2}^{\text{in}} \,
\o^{-3(w_2+1)}$.
Finally, using (\ref{bnew}) in the time-time component of the Einstein equations for
the second metric we get that the ratio $\xi$ of the two scale parameters satisfies the following algebraic equation
\begin{equation}
\xi ^2 \left(\frac{8\; a_4}{\kappa }-2\; a_2\right)+\xi
\left(\frac{6 \; a_3}{\kappa }-a_1\right)+\frac{a_1}{3 \; \kappa \,
\xi }+\frac{2 \; a_2}{\kappa }-2\; a_3 \, \xi ^3-\frac{a_0}{3}
=\frac{8 \pi G}{3 \, m^2}\left( \rho_1 - \frac{\xi^2 \rho_2}{\kappa} \right) \, .
\label{conII}
\end{equation}
The analysis is identical when the {\it same} spatial
curvature $k_c$ is introduced in (\ref{frw}) for both metrics\footnote{The spatial
curvatures must be equal for consistency~\cite{uscosm}.}.
The presence of the second matter sector opens the possibility for a
behaviour of $\xi$ different from the one found in \cite{uscosm}.
We assume that the mass scale $m$ is related to the present
cosmological constant as $\; m^2\,M_{pl}^2\propto \Lambda$ and that the
equations of state for matter sectors one and two are such that $w_{1,\,2}>
-1$. The assumption on the scale $m$ is natural if massive gravity
is relevant for the present acceleration of the Universe
\footnote{We do not consider here the case \cite{DeFelix} $m^2\;M^2_{pl}
\gg\rho_1$, where the scale of $m$ is not related to the present
acceleration of the Universe.}.
In order not to spoil early cosmology (say, from before nucleosynthesis until after the decoupling time), the contribution
proportional to $m$ in (\ref{hh}) has to kick in only at small
redshift ($z\sim 10$), when ``dark energy'' starts to dominate the expansion rate. This
is the case when
\begin{equation}
\frac{3 \mathcal{H}^2}{a^2}\simeq 8 \pi G \, \rho_1\quad {\rm
implying }\quad m^2\; \sum_{i=0}^3\;(a_i\;\xi^i) \ll 8 \pi G \,
\rho_1 \, ,
\label{early}
\end{equation}
or equivalently
\begin{equation}
\label{first}
\frac{\Lambda}{\rho_1} \, \sum_{i=0}^3 a_i \, \xi^i \ll1 \, .
\end{equation}
Now, for most of the history of our Universe (matter and radiation periods) $\rho_1 \gg \Lambda$, thus
(\ref{first}) is naturally satisfied unless $\xi$ evolves to values of $\sim \rho_1/\Lambda$.
As a result, in such a regime, imposing eq.(\ref{first})
on eq.(\ref{conII}) requires that at leading order
\begin{equation} \label{chi}
8\,\pi\,G\,(\rho_1\,\kappa-\rho_2\,\xi^2) \simeq
\begin{cases}
{\Large \frac{a_1\, m^2}{\xi }} & \qquad \text{when } \xi \ll 1\\[.3cm]
\quad 0 & \qquad \text{when } \xi \sim 1
\end{cases} \,\,\, .
\end{equation}
In the absence of a second matter sector, the solution $\xi \sim 1$ could not exist.
Of course, when (\ref{early}) holds, the dynamics of $a$ is not affected by $\xi$; on the contrary, the impact on ${\cal H}_\omega$
can be relevant, see (\ref{hubble}).
According to eq.(\ref{chi}), the following regimes for the background
value of $\xi$ emerge
\begin{itemize}
\item[({\bf A})] When $\xi^2\;\rho_2\gg \rho_1\gg \L$ \footnote{Notice that,
being $\xi^2\gg \frac{\L}{\rho_2}\simeq \xi^3$, then $\xi\ll1$ and so we are in the region where $\rho_2\gg\rho_1\gg\L$.}
\begin{equation}
\xi\simeq -\left( \frac{a_1\,m^2 }{8\,\pi\,G\;\rho_2
}\right)^{1/3}\propto \frac{\L^{1/3}}{\rho_2^{1/3}}\ll\frac{\L}{\rho_1}\ll 1,\qquad
\qquad \text{with }
c \simeq -\frac{1}{w_2} \, .
\end{equation}
The above expression can be rewritten also in the form
\begin{equation}
\xi=\left(-\frac{8\,\pi\,G\,\rho_2^{\text{in}}}{a_1\;m^2}\right)^{\frac{1}{3\,w_2}}\;a^{-\frac{1+w_2}{w_2}}
\, ,
\end{equation}
where the explicit time dependence of $\xi$ is shown. The above
expressions are valid when $w_2 <0$. Clearly, we have that $c >0$ and
we also need $a_1 <0$ so that $\xi$ is
real and positive. Since $\xi \ll1$, (\ref{early}) is
satisfied. Requiring $w_2 <0$ is rather exotic; nevertheless, as will be
shown in section \ref{caseA}, it does not help to avoid instabilities.
\item[({\bf B})] When $\rho_1\gg\xi^2\;\rho_2$ and at any time
$\rho_1\gg \L$.
This case was considered in~\cite{uspert} when a
single matter sector was present. Clearly (\ref{early}) is easily
satisfied. The value for $\xi$ is of the form
\begin{equation}
\xi\simeq \;\frac{a_1\,m^2}{8\,\pi\,G\;\rho_1\;\kappa}\propto
\frac{\L}{\rho_1}\ll1, \qquad \qquad
\text{with } \; c\simeq (4+3\,w_1) \, .
\end{equation}
Self-consistency requires that
\begin{equation}
\rho_2\ll \frac{\rho_1^3}{\L^2} \, .
\end{equation}
\item[({\bf C})] When $ \rho_1 \simeq \xi^2 \; \rho_2 \gg
\Lambda$
\begin{equation}
\xi\simeq \left(\kappa\; \frac{\rho_1}{\rho_2} \right)^{1/2}
= \,
\xi_{in}\;a^{\frac{3\,(w_1-w_2)}{1+3\,w_2}}
,\qquad \qquad \text{with }
c\simeq \frac{1+3 \;w_1}{1+3\; w_2} \, ;
\end{equation}
where we used the solutions of eq.(\ref{mat12}) and
$\xi_{in}= (\k\,\rho_1^{\text{in}}/\rho_2^{\text{in}})^{-1/(1+3\,w_2)}$ defines the initial time conditions in terms of the initial density ratio.
In such a regime $\rho_2\propto
a^{-3\frac{(1+w_2)\,(1+3\,w_1)}{1+3\,w_2}}$, thus only when
$w_2>-\frac{1}{3}$ does the matter density in the second sector decrease with time, while
$\xi$ can grow or decay depending on the sign of $(w_1-w_2)$.
When $w_2 > w_1$, going back in time, $\xi$
grows; nevertheless, condition (\ref{first}) is still satisfied if $w_1\ge0$.
The validity region of such an approximate solution is in the range
\begin{equation}\label{cc}
\frac{\L}{\rho_1}\ll \xi\ll \left(\frac{\rho_1}{\L}\right)^{n} \; ,
\end{equation}
where the power $n$ can be $1/3,\,1/2 $ or $1$ depending on the $a_i$ values, see \cite{uscosm} for details.
When $w_1 > w_2$ and $\xi$
decreases going back in time, the above lower bound holds for $w_2 > -1/(4+ 3 w_1)$.
\item[({\bf D})] When $\rho_2=0$, the
case of very large $\xi$ is also possible, with
\begin{equation}
\xi \propto \left(\frac{ \rho_1}{\L}\right)^n \, ,
\end{equation}
which gives $c<0$. The power $n$ is the same as in eq.(\ref{cc}), see \cite{uscosm}.
Thus, not only is (\ref{early}) violated, but $c$ is also
negative. Starting from a negative $c$, in order to get to a quasi dS
phase, where $c \sim 1$, one has to cross $c=0$ where $\tilde g$ is
singular.\footnote{This point was overlooked in~\cite{hasscosm, Berg, kov}. We only
consider FRW-like backgrounds where $c>0$.}
\end{itemize}
\medskip
\noindent Finally, looking at the validity of our
approximation, we found that the explored range of the $\xi$ values can be divided into the following
disjoint regions
\begin{equation}
\xi_{(\bf A)}\ll\; \xi_{(\bf B)}\sim\frac{\L}{\rho_1} \;
\ll\xi_{(\bf C)}\ll \;
\xi_{(\bf D)}\sim \left(\frac{ \rho_1}{\L}\right)^n \, ,
\end{equation}
which cover the whole range of $\xi $; except for $({\bf D})$, all cases
are compatible with eq.(\ref{early}), i.e. an early time standard
FRW universe.
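The regimes above can also be explored numerically. In the following toy Python sketch (ours: the values of the $a_i$ and $\kappa$, the choice $\rho_2=0$ and the normalization $8\pi G=m=1$ are all illustrative), eq.(\ref{conII}) is multiplied through by $3\kappa\xi$ and the smallest positive root is tracked as $\rho_1$ grows; the product $\xi\,\rho_1$ saturates at $a_1/\kappa$, reproducing the regime ({\bf B}) scaling $\xi\propto \L/\rho_1$:
\begin{verbatim}
# Toy root-solve of eq. (conII), units 8*pi*G = m = 1 (illustrative).
import numpy as np

a0, a1, a2, a3, a4 = 1.0, 1.0, 0.5, 0.2, 0.1
kap, rho2 = 1.0, 0.0            # kappa of eq. (conII); no second matter

def xi_of_rho1(rho1):
    # eq. (conII) times 3*kap*xi -> quartic polynomial in xi
    p = [-6*kap*a3, 24*a4 - 6*kap*a2 + rho2,
         18*a3 - 3*kap*a1, 6*a2 - kap*a0 - kap*rho1, a1]
    r = np.roots(p)
    r = r[np.abs(r.imag) < 1e-9].real
    return min(r[r > 0])        # smallest positive branch

for rho1 in (1e2, 1e4, 1e6):
    xi = xi_of_rho1(rho1)
    print(f"rho1 = {rho1:.0e}  xi = {xi:.3e}  xi*rho1 = {xi*rho1:.3f}")
\end{verbatim}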
\section{Perturbed FRW Universe}
\label{pert-sect}
Perturbations around the solution (\ref{frw}) can be studied along the
same lines of~\cite{uspert}. We focus here on the scalar sector; in
the vector and tensor ones, the results are very similar to the case
with only $\rho_1$ and they can be found in~\cite{uspert}.
In the scalar sector we have 8 fields and two independent gauge
transformations, as a result we can form 6 independent gauge invariant
scalar combinations $ \Psi_1, \, \Psi_2, \, \Phi_1, \, \Phi_2, \,
{\cal E}, \, {\cal B}_1$ for the metric perturbations. For matter
we have the gauge invariant density perturbations $\delta
\rho_{1/2 \, \text{gi}}$ and the scalar part of velocity perturbations $\delta
u_{s \, 1/2}$. The various definitions can be found in Appendix \ref{pert-app}, where the full set of equations is also given.
The fields ${\cal B}_1$
and $\Psi_{1/2}$ are non dynamical and can be expressed in
terms of ${\cal E}$ and $\Phi_{1/2}$, in particular
\begin{equation}
\Psi_1 +\Phi _1= m^2 \, a^2 \, f_1 \, \mathcal{E} \, , \qquad \qquad
\Psi_2
+\Phi _2 =-\frac{m^2 \, a^2 \, f_1 \, \mathcal{E}
}{
\kappa \;c\; \xi^2 }\, ;
\label{psi}
\end{equation}
where $f_{1/2}$ are defined in eq.(\ref{fdef}).
The fields ${\cal E}$ and $\Phi_{1/2}$ satisfy three second order equations; thus 3 scalar DoF propagate.
The condition (\ref{early}) guarantees only that the background solution
closely follows GR cosmology with standard matter (sector 1) until the present
epoch. Of course we need more than that: we need to be sure that
perturbations, in particular the ones related to the new degrees of
freedom, do not start growing too early. Indeed, that is precisely what
happens when only ordinary matter is present: at early times, the mode $\Phi_2$
inside the horizon grows exponentially, though $\Phi_1$ and $\delta=
\delta \rho_{1 \, \text{gi}}/\rho_1$ are the same as in GR. As a result we have
to face a very early breakdown of perturbation theory. Apparently,
this point was not taken into account when fitting the
parameters $a_i$ and $m$ against observations~\cite{amendola}.
Basically, in the presence of the
aforementioned instabilities, structure formation will be completely
different. Thus, a preliminary necessary condition is to get rid of
exponential instabilities, irrespective of their tachyonic or ghost
nature.
In what follows we will show that even the presence of a second matter
sector does not help to avoid this kind of instability.
\subsection{Structure of the evolution equations}
The equations are rather complicated; however, at early
times we can expand using the small parameter $\epsilon = m \;
{\cal H}^{-1}\sim (\Lambda/\rho_1)^{1/2}$.
Formally this is equivalent to expanding the perturbation
equations for small $m$. We stress that in the $m \to 0$ limit
there is no guarantee of recovering GR, as discussed in detail in~\cite{uspert}.
In all cases ({\bf A}), ({\bf B}) and ({\bf C}), $\Phi_1$, at leading order in
$\epsilon$, satisfies the following equation
\begin{equation}
\Phi _1'' +\frac{6\left(w_1+1\right)}{(1+3 \, w_1)\;\tau}\Phi _1'+k^2\; w_1\;
\Phi _1=0 \, ;
\label{eq1}
\end{equation}
that coincides with the one in GR. In the radiation epoch,
sub-horizon modes oscillate, damped by a factor $a^2$, while super
horizon modes are frozen and $\Phi_1=$ constant. In a matter
dominated Universe $\Phi_1$ is always constant. Thus, at leading
order in $\epsilon$, the dynamics of $\Phi_2$ and $\mathcal{E}$ is described
by a system of coupled second order ODEs of the form
\begin{equation}
\phi'' + {\cal D} \, \phi' + {\cal M} \, \phi + z_1 \, \Phi_1 + z_2 \,
\Phi_1' \, =0 \, , \qquad \phi = \begin{pmatrix} \Phi_2 \\ \mathcal{E}_N \equiv \mathcal{E}/\tau^2 \end{pmatrix} \, .
\label{dyn}
\end{equation}
where ${\cal D}$ and ${\cal M}$ are suitable 2$\times$2 matrices and
$z_{1/2}$ are functions of $\tau$ and $k$. We have also conveniently introduced the
dimensionless field $\mathcal{E}_N= {\cal E}/\tau^2$. Thus, once $\Phi_1$ is found from
(\ref{eq1}), it enters in (\ref{dyn}) as a source term. As shown in
appendix \ref{details}, the eqs (\ref{dyn}) correspond, for sub horizon and super horizon modes, to a coupled system of
Bessel-like equations.
It turns out that for cases ({\bf A}) and ({\bf C}), the system (\ref{dyn}) further simplifies
because the dynamics of $\Phi_2$ decouples from the one of $\mathcal{E}_N$ and
stability can be established simply studying the mass term. For the case
({\bf B}), on the contrary, one has to do a more involved analysis.
\subsection{Case ({\bf A})}
\label{caseA}
One has to be careful in the expansion; indeed here one
can expand for small $m$ only if $w_2< -1/3$. The result, this
time, is that the equation for $\Phi_2$ is also decoupled. In particular, we have that
\begin{equation}
\Phi _2'' +
\frac{6 \left(1-|w_2|\right)
}{|w_2|\, \left(3\, w_1+1\right)\, \tau} \, \Phi _2' -
\left(\frac{k^2}{|w_2| }+\frac{|w_2| \,\left( 3 \,w_1+4\right)-4}{ w_2^2 \,\tau^2\,\left(3 \,w_1+1\right)^2}
\right) \;\Phi _2 =0 \, .
\end{equation}
The above equation can be easily solved in terms of Bessel
functions. However, it is clear that the solution has an exponentially
growing mode. Indeed, inside the horizon $x=k \,\tau \gg 1$, the mass term is simply proportional to $-1/|w_2|$ and is
negative. The solution reads
\begin{equation}
\Phi_2 = \left(x |w_2|\right)^{2-\frac{3}{2 |w_2|}} \left[\alpha_1 \,
J_\nu \left( \frac{-i \, x}{\sqrt{|w_2|}}\right) + \alpha_2 \,
Y_\nu\left( \frac{-i \, x}{\sqrt{|w_2|}} \right)
\right] \, , \qquad \nu =\frac{\sqrt{4 w_2 \left(4
w_2+1\right)+5}}{2 |w_2|} \, .
\end{equation}
Clearly $\Phi_2$ grows like $e^{x/\sqrt{|w_2|}}$.
The same instability is present also for the field $\mathcal{E}$ whose mass term, inside the horizon, gets the value
$\frac{k^2\left( w_1 -1 \right)}{|w_2| \left(3w_1+1\right)}$\,.
As a result, exponential instabilities are always present in both $\Phi_2$ and $\mathcal{E}$, and the background ({\bf A})
is pathological. The behaviour of super horizon modes is similar to
the case ({\bf B}), discussed below.
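The instability can also be exhibited by direct numerical integration of the $\Phi_2$ equation above. The following Python sketch (an illustration of ours; the choices $w_1=1/3$, $w_2=-1/2$, $k=1$ and the integration window are arbitrary) fits the growth rate of $\log|\Phi_2|$ deep inside the horizon and recovers $k/\sqrt{|w_2|}$, up to a small correction coming from the power-law prefactor:
\begin{verbatim}
# Integrate the sub horizon Phi_2 equation of case (A) and measure
# the growth rate of log|Phi_2| (illustrative parameters).
import numpy as np
from scipy.integrate import solve_ivp

w1, aw2, k = 1/3, 0.5, 1.0      # aw2 = |w_2|, i.e. w_2 = -1/2

def rhs(tau, y):
    phi, dphi = y
    damp = 6*(1 - aw2)/(aw2*(3*w1 + 1)*tau)
    mass = k**2/aw2 + (aw2*(3*w1 + 4) - 4)/(aw2**2*(3*w1 + 1)**2*tau**2)
    return [dphi, -damp*dphi + mass*phi]  # Phi'' + damp Phi' - mass Phi = 0

sol = solve_ivp(rhs, (50.0, 150.0), [1.0, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
t = np.linspace(100.0, 150.0, 200)
slope = np.polyfit(t, np.log(np.abs(sol.sol(t)[0])), 1)[0]
print(f"fitted rate = {slope:.3f}   k/sqrt(|w2|) = {k/np.sqrt(aw2):.3f}")
\end{verbatim}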
\subsection{Case ({\bf B})}
As for the case ({\bf A}), the expansion for small $m$ is a bit tricky: indeed, $\xi\simeq \;\frac{a_1\, m^2}{8\,\pi\,G\;\rho_1\;\kappa} $
goes to zero when $m \to 0$ and all quantities must be expanded to next
to leading order.
In this case, as shown in appendix~\ref{details}, the equations for
$\Phi_2$ and $\mathcal{E}$ stay coupled. The only way to decouple them is to
work with a fourth order equation for one of the two fields. Taking for
simplicity $w_1=1/3$, we get for \underline{\it sub horizon modes}
\begin{equation}
\begin{split}
&{\cal E}_N{}^{(4)} +\frac{5
\left(3 w_2+5\right)}{\tau} \, {\cal E}_N^{(3)}
+k^2\;\left(25\;w_2-\frac{5}{3}\right)\; {\cal E}_N''
+k^2\;\frac{25\left( 9\;w_2-1\right) }{\tau} \, {\cal E}_N'
-k^4\;\frac{125 \, w_2}{3}\;{\cal E}_N=0 \, .
\end{split}
\label{high1}
\end{equation}
and for $w_1=0$
\begin{eqnarray}
\mathcal{E}_N^{(4)} &+&\frac{8\, \left(3\,
w_2+5\right) \mathcal{E}_N^{(3)}}{\tau}+k^2 \left(16\, w_2-1\right) \mathcal{E}_N^{(2)}
+\frac{8\, k^2 \,\left(29\, w_2-3\right)
}{\tau}\, \mathcal{E}_N'-k^4\,16 \, w_2 \,\mathcal{E}_N + \frac{k^2
(20\, w_2+3)}{\tau} \Phi_1' \nonumber\\
&+& \frac{8 \, k^4 \, w_2}{3} \, \Phi_1=0 \, .
\label{high2}
\end{eqnarray}
Even before attempting to solve (\ref{high1}) and (\ref{high2}), one
sees that an exponential instability is expected. Indeed, for ${\cal D}$
and ${\cal M}$ in (\ref{dyn}) we have that
\begin{equation}
\begin{split}
&\text{Det}({\cal D})=\frac{24 \left(3 w_1+4\right){}^2 \left(w_2+1\right)}{\tau^2
\, \left(3 w_1+1\right)^2} \, , \qquad \quad \; \; \; \; \; \text{Tr}({\cal D})=\frac{2 (4 + 3
w_1) (5 + 3 w_2)}{\tau(1 + 3 w_1)} \, , \\
&\text{Det}({\cal M})= -k^4 \left(2 w_1+1\right) \left(3
w_1+4\right)^2 w_2\, , \qquad \text{Tr}({\cal M})=k^2 \left[ (4 + 3
w_1)^2 w_2 - 2 w_1-1 \right] \,.
\end{split}
\end{equation}
Thus, while ${\cal D}$ is positive definite, ${\cal M}$ has at least a
negative eigenvalue; in particular the eigenvalues of ${\cal M}$ are
given by
\begin{equation}
\lambda_1 = - k^2 \, (2 w_1 +1) \, , \qquad \qquad \lambda_2 =k^2
\left(3 w_1+4\right)^2 w_2 \, .
\end{equation}
Clearly, the fact that $\lambda_1 <0$ will lead to an
exponential growth of sub horizon modes. It should be stressed that $\lambda_1$ does not depend
on $w_2$ and precisely coincides with the negative mass term of $\mathcal{E}_N$
found in the case where a single matter sector was
present~\cite{uspert}\footnote{For reference, when $\rho_2 = 0$, $\Phi_2$
has a tachyonic mass equal to $\lambda_1$ and $\mathcal{E}_N= -\frac{2}{3}
\Phi_2$, for $w_1=1/3$.}.
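As a quick symbolic cross-check of ours, one may verify that the quoted eigenvalues reproduce the trace and the determinant of ${\cal M}$ given above:
\begin{verbatim}
import sympy as sp
w1, w2, k = sp.symbols('w1 w2 k')
l1 = -k**2*(2*w1 + 1)
l2 = k**2*(3*w1 + 4)**2*w2
TrM = k**2*((4 + 3*w1)**2*w2 - 2*w1 - 1)
DetM = -k**4*(2*w1 + 1)*(3*w1 + 4)**2*w2
assert sp.simplify(l1 + l2 - TrM) == 0
assert sp.simplify(l1*l2 - DetM) == 0
\end{verbatim}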
The numerical solution of (\ref{high1}) and (\ref{high2}) confirms
that there is no value of $w_2$ for which ${\cal E}_N$ does not grow
exponentially. It is evident that the sub horizon instability cannot be
avoided.
For \underline{\it super horizon modes} we can give directly the full solution
\begin{eqnarray}
\mathcal{E}= \bar{\mathcal{E}}_1 \,\tau^{-15 w_2-1}+\bar{\mathcal{E}}_2\,\tau^{-\frac{9}{2}-\frac{\sqrt{21}}{2}}
+\bar{\mathcal{E}}_3\,\tau^{\frac{1}{2} \left(\sqrt{21}-9\right)}
+\frac{\bar{\mathcal{E}}_4}{\tau^7}-\frac{32
\,\tau^2\, \bar \Phi _1}{37}\quad{\rm for}
\quad w_1=\frac{1}{3}\\
\mathcal{E}= \bar{\mathcal{E}}_1\, \tau^{-24 \,w_2-4}+\bar{\mathcal{E}}_2\,\tau^{-\frac{15}{2}-\frac{\sqrt{33}}{2}}
+\bar{ \mathcal{E}}_3\,\tau^{\frac{1}{2} \,\left(\sqrt{33}-15\right)}
+\frac{\bar{\mathcal{E}}_4}{\tau^{13}}-\frac{21 \,\tau^2\, \bar \Phi_1}{82}\quad{\rm for}\quad w_1=0
\end{eqnarray}
where the $ \bar{\mathcal{E}}_i$ are the values of $\mathcal{E}$ at some initial time and $\bar
\Phi_{1}$ is the frozen value of $\Phi_1$. Notice that in
particular for $w_1=\frac{1}{3}$ the non-decaying modes of the three perturbations are
\begin{equation}
\Phi_1= \bar \Phi_1 \, \qquad \Phi_2= \frac{39}{37} \, \bar \Phi_1 \,
, \qquad \mathcal{E} = - \frac{32}{37} \,\tau^2 \, \bar \Phi_1 \; .
\end{equation}
Actually, in the metric perturbations
$\mathcal{E}$ enters through the combination $k^2
\, \mathcal{E} \propto (k\;\tau)^2 \ll1$; as a result it stays very small and there are no
consequences for the validity of the perturbative expansion.
In addition, from (\ref{psi}) we get that
\begin{equation}
\Psi_1 + \Phi_1 \approx 0 \, , \qquad \qquad \Psi_2 + \Phi_2 \approx
\frac{96}{185} \, \bar \Phi_1 \, .
\end{equation}
Thus, perturbations in sector one, relevant for our matter,
are indistinguishable from GR at early times. In the second sector the two
Bardeen potentials are not equal, even if the source is a perfect fluid.
\subsection{Case ({\bf C})}
As shown in Appendix \ref{details}, also in this case the dynamics of $\Phi_2$ is decoupled and its
equation is similar to the one in case ({\bf A}).
Inside the horizon, simply by looking at the time dependent mass terms, we find that they are positive, avoiding
instabilities, when (see appendix \ref{details})
\begin{equation}
w_{1/2} >0 \, , \qquad \qquad 3\, w_1+1-\frac{4 \,f_1}{f_2}>0 \, ,
\label{condC}
\end{equation}
where $f_{1,\,2}$, see eq.(\ref{fdef}), are $\tau$ dependent. Notice that when $f_1=f_2$ the above condition cannot be
satisfied if $0 \leq w_1 <1$. Actually, we have $f_1=f_2$ when $c=1$ and/or $a_2=a_3=0$ (as in the simplest bigravity model of \cite{amendola}),
and also when $w_1=w_2$.
Now depending on whether $w_1>w_2$ or $w_2>w_1$, $\xi$ dynamically becomes very small or very
large in the early universe, being
\begin{equation}
\xi (\tau ) = \xi_{in} \; a^{\frac{3(w_1-w_2)}{(1+3 w_2)}} \, .
\end{equation}
In particular
\begin{equation}
3 \,w_1+1 -\frac{4 f_1}{f_2} =
\begin{cases}\frac{3 \,\left(3\, w_1+1\right) \left(w_2-1\right)}{3\,
w_2+1} <0 & {\rm for}\quad \xi\to \infty \\[.2cm]
3 \,(w_1-1) <0& {\rm for}\quad \xi \to 0
\end{cases} \, .
\end{equation}
Thus, (\ref{condC}) cannot be satisfied at early times. When $\xi \to
0$ or $\xi \to \infty$, the mass term of $\mathcal{E}$ becomes time independent
and negative definite, leading to an exponential instability.
We conclude therefore that also in the case ({\bf C}) the
instability cannot be avoided if $w_2 <1$. As far as super
horizon modes are concerned, the discussion is similar to the case ({\bf B}).
\section{Conclusions}
\label{con}
We studied in detail the dynamics of scalar perturbations in massive bigravity. Besides
its theoretical interest, massive gravity could be an interesting alternative
to dark energy.
On general grounds, the ghost free massive gravity theories can be classified
according to the global symmetries of the potential $V$ in the unitary gauge~\cite{uslong}.
The ones characterized by Lorentz invariance on flat space have a number of issues
once a homogeneous FRW background is implemented.
In the bigravity formulation, with a \underline{single matter sector}, things get better and FRW cosmological solutions indeed exist~\cite{russ,uscosm,hasscosm}.
However, cosmological perturbations are different from the ones in GR.
Already during radiation domination, sub horizon scalar perturbations tend to
grow exponentially~\cite{uspert}.
The manifestation of such instabilities is rather peculiar.
In sector one, composed of ordinary matter and the metric $g$, the
perturbations are very close to the ones of GR.
The instability manifests as an exponential sub horizon growth of the field $\mathcal{E}$ and of the
second scalar mode $\Phi_2$, one of the Bardeen potentials of $\tilde
g$, which quickly invalidates the use of perturbation theory at very early times.
This is very different from GR where
perturbations become large (power law growth) only when the universe
is non relativistic.
The emergence of an instability only in the perturbations of the second metric
suggests that its origin may reside in the matter content asymmetry of
the two sectors, since only the physical metric is coupled to
matter. Indeed, the only acceptable background solutions have
a ratio $\xi=\omega/a$ of the metrics' scale factors such that $\xi \ll 1$.
Adding a \underline{second matter sector} sourcing the second metric opens up the
possibility (case ({\bf C})) of having a more symmetric background with
$\xi \sim 1$, and one may hope for the exponential instability to be absent.
Unfortunately, we have shown that this is not the case.
Though the pressure provided by the second matter sector stabilizes $\Phi_2$, whose dynamics becomes similar to GR, the sub horizon
instability persists for $\mathcal{E}$, which represents a purely gravitational extra
scalar field.
We managed to analyze the perturbations in the whole range of $\xi$
compatible with the early Universe evolution (matter and radiation).
The cases ({\bf A}) and ({\bf B}) represent regions of very small $\xi$
where only one matter sector dominates, as in the case with a single
matter sector, and both $\mathcal{E}$ and $\Phi_2$ grow exponentially inside the horizon.
When $\rho_1 \gg \rho_2$, the value of the tachyonic mass responsible
for that instability does not depend on $w_2$ and actually
coincides with the one found in the case where $\rho_2=0$~\cite{uspert}.
In region ({\bf C}) both the matter sectors are important.
While the Bardeen potentials
$\Phi_{1,\,2}$ are stable, the purely scalar gravitational
field $\mathcal{E}= E_1-E_2$ (see Appendix \ref{pert-app}), which involves both
metrics, has early time instabilities.
Finally, the region ({\bf D}), characterized by very large values of
$\xi$, spoils early time standard FRW cosmology already at the level of the background.
Spanning the whole range of $\xi$
compatible with a standard early time cosmology, when $m^2 M_{pl}^2$ is
of the order of the present cosmological constant, the bottom line is that
massive bigravity has an intrinsic exponential instability.
Looking at the behaviour of the matter contrast, which is the same as in GR, one may speculate that some sort of cosmological Vainshtein~\cite{vain} mechanism could take place,
though here the trouble is with perturbations and not with the background.
Even if that happens, the deal is rather pricey: perturbation theory will
fail at both Solar System and cosmological scales.
\vskip 1cm
\noindent
{\Large \bf Acknowledgements}
\vskip .5cm
\noindent
M.C. thanks A. Emir G\"umr\"uk\c c\"uo\u glu for useful discussions
and the {\it Fondazione Angelo Della Riccia} for financial support.
L.P. thanks the Cosmology and Astroparticle Physics
Group of the {\it University of Geneva} for hospitality and support.
D.C. thanks { Negramaro} for their {\it Senza fiato} inspiring song.
\begin{appendix}
\section{Perturbed Geometry}
\label{pert-app}
Let us now consider the perturbations of the FRW background (\ref{frw})
\begin{equation}
g_{\mu \nu} = \, \bar g_{1 \, \mu \nu} +a^2 \, {h_1}_{\mu
\nu} \, , \qquad {\tilde g}_{\mu \nu} = \bar g_{2 \, \mu \nu} +
\o^2 \, h_{2 \, \mu \nu} \, ,
\end{equation}
parametrized as follows
\begin{equation}
\begin{split}
& {h}_{1 \, 00} \equiv - 2 A_1 \, , \qquad {h}_{2 \, 00} \equiv - 2 c^2 \, A_2\\
& {h}_{1/2 \, 0 i} \equiv {\cal C}_{1/2 \,i} - \partial_i B_{1/2} \, , \qquad
\partial^i {\cal V}_{1/2 \, i}= \partial^i {\cal C}_{1/2 \, i} = \partial^j {h^{TT}}_{1/2 \, ij} =
\delta^{ij} {h^{TT}}_{1/2 \, ij} =0 \, ,\\
& h_{1/2 \, ij} \equiv {h^{TT}}_{1/2 \, ij} + \partial_i {\cal V}_{1/2 \, j} + \partial_j {\cal
V}_{1/2\, i} + 2 \partial_i \partial_j E_{1/2} + 2 \, \delta_{ij} \, F_{1/2} \,.
\end{split}
\end{equation}
Spatial indices are raised/lowered using the spatial flat metric.
In the scalar sector we can form 6 independent gauge invariant
scalar combinations that we choose to be
\begin{equation}
\begin{split}
& \Psi_1= A_1 - {\cal H} \, \Xi_1 - \Xi_1^\prime \qquad \Psi_2= A_2 + c^{-2} \left(\frac{c'}{c} - {\cal H}_\o \right) \, \Xi_2 -
\frac{\Xi_2^\prime}{c^2} \\
&\Phi_1 = F_1- {\cal H} \, \Xi_1 \, , \qquad \Phi_2 = F_2 - {\cal
H}_\o \, \frac{\Xi_2}{c^2} \, , \\
&{\cal E} = E_1 - E_2 \, , \qquad {\cal B}_1 = B_2 - c^2
B_1 +(1-c^2) \, E_1' \, ,
\end{split}
\label{sgib}
\end{equation}
where $\Xi_{1/2} = B_{1/2} + E_{1/2}^\prime$. In the matter sectors, we define the following gauge
invariant perturbed pressure and density
\begin{equation}
\begin{split}
&\delta \rho_{1_{gi}} = \delta \rho_1 - \Xi_1 \, \rho_1' \, , \qquad \delta
p_{1_{gi}} = \delta p_1 - \Xi_1 \, p_1' \; ; \\
&\delta \rho_{2_{gi}} = \delta \rho_2 - \frac{\Xi_2}{c^2} \, \rho_2' \, , \qquad \delta
p_{2_{gi}} = \delta p_2 - \frac{\Xi_2}{c^2} \, p_2' \, .
\end{split}
\end{equation}
The scalar part $v$ of the perturbed 4-velocity $u^\mu$ is defined as
\begin{equation}
\begin{split}
&u_{1/2}^\mu = {\bar u}_{1/2}^\mu + \delta u^\mu \, , \qquad u_1^\mu\; u_1^\nu\; g_{\mu
\nu} = -1 \, , \quad u_2^\mu \;u_2^\nu\; \tilde g_{\mu
\nu} = -1 \, , \quad \delta u_{1/2}^0 = - a^{-1} \, A_{1/2} \, ; \\
& \delta u_{1/2 \, i} = a \;\left( \partial_i v_{1/2} - \partial_i B_{1/2} \right) \, .
\end{split}
\end{equation}
The corresponding gauge invariant quantities are defined as
\begin{equation}
u_{1/2 \,s} = v_{1/2} + E_{1/2}' \, .
\end{equation}
The conservation of the EMT leads to a set of differential
relations. For the sector 1 we have
\begin{eqnarray}
\delta \rho_{1_{ gi}}'=(1+w_1) \left[\rho_1 \,\left( k^2 \, u_{1 \, s}-3 \, \Phi_1'
\right) - 3 \, {\cal H } \, \delta \rho_{1_{ gi}} \right] \, ,\\
u_{1 \, s}'= (3 w_1-1) \, u_{1 \, s} \, {\cal H} - \frac{w_1}{(1+w_1)} \, \frac{\delta
\rho_{1_{gi}}}{\rho_1} - \Psi_ 1\, .
\end{eqnarray}
For the sector 2
\begin{equation}
\begin{split}
& \delta \rho_{2_{gi}}'=(1+w_2) \left[\rho_2 \,\left( k^2 \, u_{2 \, s}-3 \, \Phi_2'
\right) - 3 \, {\cal H}_\o \, \delta \rho_{2_{gi}} \right] \,, \\
& u_{2 \, s}'= u_{2 \, s} \, \left[(3 w_2-1) \, {\cal H}_\o + \frac{c'}{c}\right]- c^2 \left[\frac{w_2}{(1+w_2)} \, \frac{\delta
\rho_{2_{gi}}}{\rho_2} + \Psi_ 2 \right] \, .
\end{split}
\end{equation}
The perturbed Einstein equations for the first metric read
\begin{eqnarray}
&&2 \Delta \Phi_1+6 {\cal H} \left(\Psi _1 \mathcal{H}-\Phi
_1'\right) +a^2 m^2 \, f_2 (3 {\cal F}_1-\Delta {\cal
E}) =-8 \pi a^2 \,G \,\delta \rho_{1 _{gi}} \, ; \label{1tt}\\[.3cm]
&&\partial_i \left[2 \Psi _1 \mathcal{H}-2 \Phi _1' +\frac{a^2 m^2 \,
\mathcal{B}_1 \, f_2 }{(c+1)} + 8 \pi\, G \,a^2\, (p_1+\rho_1 )\,
u_{1 \,s}\right]
=0 \, ; \label{1ts}\\[.3cm]
&& \left(\partial_i \partial_j - \delta_{ij} \Delta \right)
\left( a^2 \,f_1\, m^2 \mathcal{E} - \Phi_1- \Psi_1\right) + \delta_{ij}
\left[m^2 \, a^2 \,(2\, f_1\, \mathcal{F}_1 +f_2\, \mathcal{A}_1 )+2
\Psi_1 \,\left(\mathcal{H}^2+2 \,\mathcal{H}'\right) \right. \nonumber \label{1ss}\\[.2cm]
&&\left. -2 \,\Phi_1''-2\, \mathcal{H} \left(2\, \Phi _1'-\Psi _1'\right)\right]
= 8 \pi G \, a^2 \, \delta_{ij} \,
\delta p_{1_{gi}} \, ,
\end{eqnarray}
where
\begin{equation}
f_1 =\xi \, \left[2 \,\xi \,\left(3\, a_3 \, c \, \xi +a_2
\, (c+1)\right)+a_1\right] \, , \qquad f_2 =\xi \, \left(6 \,a_3 \,\xi ^2+4
\, a_2\, \xi +a_1\right) \, .
\label{fdef}
\end{equation}
For the metric $\tilde g$
\begin{eqnarray}
&&
2\,c^2 \Delta \Phi _2+6
\mathcal{H}_{\omega } \left(\Psi _2 \mathcal{H}_{\omega }-\Phi _2'\right) +\frac{m^2 a^2 f_2}{\kappa\,\xi^2}\, c^2
\, \left(\Delta \mathcal{E}-3 \,\mathcal{F}_2\right)=-\frac{8 \pi \, G}{\kappa} \, a^2 \, c^2 \, \xi^2 \,\delta \rho_{2_{gi}} \, ; \label{2tt}\\[.3cm]
&& \partial_i \left[ 2\, c \,
\left(\Psi _2 \mathcal{H}_{\omega }-\Phi _2'\right)-\frac{m^2 \, a^2 \, f_2}{\kappa\,\xi^2\,(1+c)} \, \mathcal{B}_2
+ \frac{8 \pi\, G}{\kappa} \, c\, a^2 \, \xi^2 \, (p_2+\rho_2 )\, u_{2 \, s} \right] =0 \, ;
\label{2ts}\\[.3cm]
&&
-c \, \left(\partial_i \partial_j - \delta_{ij} \Delta \right) \left[
\frac{a^2 \, f_1 \, m^2}{\kappa\,\xi^2} \, \mathcal{E}+ \, c \, \left(\Phi_2+\Psi
_2\right)\right]+\delta_{ij}\left[ \frac{m^2 \,a^2}{\kappa\,\xi^2} (2\, c\, f_1 \, \mathcal{F}_2+ f_2 \, \mathcal{A}_2 ) +\right.
\nonumber\\[.2cm]
&&
\left.
2\left( \mathcal{H}_{\omega }^2+2\,\mathcal{H}_{\omega }'-2\,\frac{c'}{c}\, \mathcal{H}_{\omega }\right) \Psi _2-
2 \Phi_2''+2\left(\frac{c'}{c}-2\,\mathcal{H}_{\omega }\right)\, \Phi
_2'+2\,\mathcal{H}_{\omega }\,\Psi _2' \right]=\frac{8 \pi \, G}{\kappa} \, a^2 \, c^2 \, \xi^2 \, \delta_{ij} \,
\delta p_{2_{gi}} \,.\,\,\,\,\,\,\,\,\,\,\, \label{2ss}
\end{eqnarray}
The gauge invariant fields ${\cal F}_{1/2}$ can be expressed in terms
of $\Phi_{1/2}$, ${\cal B}_1$ and $\mathcal{E}$
by using
\begin{eqnarray}
&& {\cal H}_\o \, {\cal F}_2 - {\cal H} \, {\cal F}_1 = ( {\cal H} -
{\cal H}_\o ) (\Phi_1 - \Phi_2) \, ; \\[.2cm]
&& c^2( {\cal F}_2+ {\cal F}_1) =( {\cal B}_1 - {\cal E}) ( {\cal H} +{\cal H}_\o)
-2 c^2 (\Phi_1 - \Phi_2) \, .
\end{eqnarray}
We often use the Fourier transform of
perturbations with respect to $x^i$; the corresponding
3-momentum is $k^i$ and $k^2 = k^i k_i$. To keep the notation as
simple as possible we suppress the symbol of the Fourier transform.
\section{Evolution of Perturbations}
\label{details}
In this Appendix we give the equations that govern the evolution of
the perturbations. We are interested in two regimes: sub horizon modes with $k \;\tau
\gg 1$ and super horizon ones for which $k \;\tau\ll 1$.
\subsection{Case ({\bf B})}
Remember that in this case we have $\xi\simeq \;\frac{a_1\,m^2}{8\,\pi\,G\;\rho_1\;\kappa}\ll1$ and that the leading contribution to the
evolution equation of $\Phi_1$ is the same as in GR.
For \underline{\it sub horizon modes}, $\Phi_2$ and $\mathcal{E}$ satisfy a system of coupled
equations
\begin{equation}
\begin{split}
&
\Phi _2''+\frac{6 \left[9 \left(w_2+1\right) w_1^2+3 \left(5 w_2+7\right)
w_1+4 w_2+14\right]}{\tau \left(3 w_1+4\right) \left(3
w_1+1\right)} \, \Phi_2'
+ k^2 \left[\left(3 w_1+1\right) \left(3 w_1+4\right) w_2-3 w_1-2
\right] \, \Phi _2\\
&+ k^2 (3 w_1-1) \, \Phi_1+
\frac{18 w_1-6}{\tau \left(3 w_1+1\right)} \, \Phi_1' \,+\frac{12 \left[9 \left(3 w_2+1\right) w_1^2+15 w_1 \left(3
w_2+1\right)+2 \left(6 w_2+5\right)\right]}{\tau^3 \left(3
w_1+1\right){}^3 \left(3 w_1+4\right)} \, \mathcal{E}' +\\
& k^2\,\frac{6 \left(3 w_1+1\right) \left(3 w_1+4\right) w_2-6
\left(w_1+1\right)}{\tau^2\, \left(3 w_1+1\right)^2} \,\mathcal{E}
=0 \; ; \\[.3cm]
& {\cal E}''+\frac{6 \left(9 w_2 w_1+9 w_1+12 w_2+10\right)}{\tau \left(9
w_1^2+15 w_1+4\right)} \mathcal{E}' +k^2 \left[w_1+3 \,\left(3\, w_1+4\right) \,w_2+1\right]\,
\mathcal{E}\\ & -\frac{1}{6}\, k^2\, \tau^2 \left(3 w_1-1\right) \left(3 w_1+1\right)^2 \,
\Phi_1 +\tau(1 - 9 w_1^2) \, \Phi_1' +\frac{1}{6}\, k^2\, \tau^2 \left(3 w_1+1\right)^2 \left(12 w_2+w_1 \left(9
w_2+3\right)+2\right) \, \Phi_2 \\
&+ \frac{\tau \left(3 w_1+1\right) \left[12 w_2+w_1 \left(9
w_2+3\right)+2\right]}{3\, w_1+4}\, \Phi_2' =0 \; ;
\end{split}
\end{equation}
For \underline{\it super horizon modes},
the coupled equations for $\Phi_2$ and $\mathcal{E}$ are given by
\begin{equation}
\begin{split}
&\Phi _2'' +\frac{6 \left(3
w_1+4\right) \left(w_2+1\right) }{\tau \left(3
w_1+1\right)} \, \Phi _2' +\frac{6
\left(15 w_1+17\right) \left[\left(3 w_1+4\right)
w_2+1\right]}{\tau^2 \left(3 w_1+1\right){}^2}\, \Phi _2+\frac{24}{\tau^3 \left(3 w_1+1\right){}^3} \, \mathcal{E}'\\
&+\frac{48 \left[12 w_2+w_1 \left(9 w_2-3\right)+2\right]
}{\tau^4 (3 w_1+1)^4} \, \mathcal{E}-\frac{18
\left(3 w_1+5\right) \left(\left(3 w_1+4\right)
w_2+1\right)}{\tau^2 \left(3 w_1+1\right){}^2} \, \Phi _1-\frac{18 \left[(3 w_1+4)
w_2+1\right]}{\tau \left(3 w_1+1\right)} \, \Phi _1'=0 .
\end{split}
\end{equation}
\begin{equation}
\begin{split}
&\mathcal{E}''+\frac{2 \left(3 w_1+7\right) }{\tau
\left(3 w_1+1\right)}\mathcal{E}'
+\frac{36 \left(3 w_1^2+\left(6 w_2+5\right) w_1+8
w_2+2\right)}{\tau ^2 \left(3 w_1+1\right){}^3} \mathcal{E}
-\tau \left(36 w_2+3 w_1 \left(9 w_2-2\right)+1\right) \Phi _1'\\
&-\frac{9 \left(3
w_1+5\right) \left(4 w_2+w_1 \left(3 w_2-1\right)\right)}{3 w_1+1}\Phi _1+
\tau \left(3
\left(3 w_1+4\right) w_2+1\right) \Phi _2'+\\&
\frac{3
\left(9 w_1^3+9 \left(5 w_2+2\right) w_1^2+\left(111 w_2+14\right)
w_1+68 w_2+7\right)}{3 w_1+1}\Phi _2=0
\end{split}
\end{equation}
\subsection{Case ({\bf C})}
For this case $
\xi\simeq \left(\frac{\kappa\; \rho_{1}}{\rho_{2}}
\right)^{1/2} = \,\xi_{in} \;
a^{\frac{3\,(w_1-w_2)}{1+3\,w_2}}$.
At the leading order in the $\epsilon$ expansion, $\Phi_2$ satisfies
the following equation that is valid for any $k \;\tau$
\begin{equation}
\label{eq2}
\Phi _2'' +\frac{6 \left(w_2+1\right) }{\tau
\left(3 w_2+1\right)} \Phi _2'+k^2 \frac{ \, w_2 \left(3 \, w_1+1\right){}^2}{\left(3
w_2+1\right){}^2}\Phi _2=0 \, ;
\end{equation}
For $\mathcal{E}$, inside
the horizon, we get
\begin{equation}
\begin{split}
&
\mathcal{E}'' +\frac{2 \left[2 \, f_1 \left(3 w_2+1\right)+f_2
\left(1-9 \, w_1 w_2\right)\right]}{\tau \, f_2 \left(3 \,
w_1+1\right) \left(3 \, w_2+1\right)} \, \mathcal{E}'+\frac{k^2 \left[(3
w_1+1)f_2 -4f_1 \right]}{3 f_2
(3w_2+1)} \, \mathcal{E} + \\
& k^2\,\tau^2 \,\frac{\left(3 w_1+1\right){}^2 \left(f_2 \left(3
w_1+1\right)-2 f_1\right)}{6 f_2 \left(3
w_2+1\right)} \, \Phi_2+\tau \left(-\frac{2 f_1}{f_2}+3
w_1+1\right) \, \Phi_2'\\
&-\frac{\tau \left(3 w_1+1\right) \left[f_2(3
w_1+1)-2 f_1\right]}{f_2 (3w_2+1)} \, \Phi_1' -k^2\,\tau^2 \,\frac{\left(3 w_1+1\right){}^2 \left(f_2 \left(3
w_1+1\right)-2 f_1\right)}{6 f_2 \left(3
w_2+1\right)} \, \Phi_1=0
\end{split} \, .
\end{equation}
The quantities $f_1$ and $f_2$ are defined in
(\ref{fdef}). Imposing that all mass terms are positive we get
precisely condition (\ref{condC}).
Notice that for $w_2=w_1$ the equation for ${\cal E}$
reduces to
\begin{equation}
\mathcal{E}''-
\frac{6\left(w_1-1\right)}{\tau \, (3 w_1+1)}
\mathcal{E}'+\frac{k^2 \left(w_1-1\right)}{3 w_1+1} \mathcal{E}+{\cal
F}(\Phi_1,\, \Phi_2)=0 \; ,
\end{equation}
and the exponential instability is present.
\end{appendix}
\section{Introduction}\label{section:Introduction}
It is well-known that, for a spectrally negative L\'evy process $X$ \cite[Chapter~VII]{bertoin} \cite[Section~9.46]{sato}, fluctuation theory in terms of the two families of scale functions, $(W^{(q)})_{q\in [0,\infty)}$ and $(Z^{(q)})_{q\in[0,\infty)}$, has been developed \cite[Section~8.2]{kyprianou}. Of particular importance is the function $W:=W^{(0)}$, in terms of which the others may be defined, and which features in the solution of many important problems of applied probability \cite[Section 1.2]{kuznetsovkyprianourivero}. It is central to these applications to be able to evaluate scale functions for any spectrally negative L\'evy
process $X$.
The goal of the present paper is to define and analyse a very simple novel algorithm for computing $W$. Specifically, to compute $W(x)$ for some $x>0$, choose small $h>0$ such that $x/h$ is an integer. Then the approximation $W_h(x)$ to $W(x)$ is given by the recursion:
\begin{eqnarray}
\label{eq:LinRecursion}
W_h(y+h)=W_h(0)+\sum_{k=1}^{y/h+1}W_h(y+h-kh)\frac{\gamma_{-kh}}{\gamma_{h}},\qquad W_h(0)=(\gamma_hh)^{-1}
\end{eqnarray}
for $y=0,h,2h,\ldots,x-h$, where the coefficients $\gamma_h$ and $(\gamma_{-kh})_{k\geq 1}$ are expressible directly in terms of the L\'evy measure $\lambda$,
(possibly vanishing) Gaussian component $\sigma^2$ and drift $\mu$ of the L\'evy process $X$, as follows. Let:
\footnotesize
\begin{eqnarray*}
\tilde{\sigma}^2_h :=
\frac{1}{2h^2}\left(\sigma^2+\int_{[-h/2,0)}\!\!\!\!\!\!\!\!y^2\mathbbm{1}_{[-V,0)}(y)\lambda(dy)\right), \quad
\tilde{\mu}^h := \frac{1}{2h}
\left(\mu +h\sum_{k\in\mathbb{N}}k\lambda\left(\left[\left(-k-\frac{1}{2}\right)h,\left(-k+\frac{1}{2}\right)h\right)\cap[-V,0)\right)
\right),
\end{eqnarray*}
\normalsize
where $V$ equals $0$ or
$1$ according to whether $\lambda$ is finite or infinite, and the drift $\mu$ is relative to the cut-off function $\tilde{c}(y):=y\mathbbm{1}_{[-V,0)}(y)$ (see Eq.~\ref{eq:laplace_exponent} for the Laplace exponent of $X$); note that
$\tilde{\sigma}^2_h = \sigma^2/(2h^2)$
and
$\tilde{\mu}^h = \mu/(2h)$, when $V=0$. Then the coefficients in~\eqref{eq:LinRecursion}
are given by:
\begin{eqnarray}
\label{eq:Explicit_1}
\gamma_h:=\tilde{\sigma}^2_h+\mathbbm{1}_{(0,\infty)}(\sigma^2)\tilde{\mu}^h
+\mathbbm{1}_{\{0\}}(\sigma^2)2\tilde{\mu}^h,& & \quad
\gamma_{-h}:=\tilde{\sigma}^2_h-\mathbbm{1}_{(0,\infty)}(\sigma^2)\tilde{\mu}^h+\lambda(-\infty,-h/2]\\
\gamma_{-kh}:=\lambda(-\infty,-kh+h/2],& &\quad\text{where } k\geq 2.
\label{eq:Explicit_2}
\end{eqnarray}
Indeed, the algorithm just described is based on a purely probabilistic idea of weak approximation: for small positive $h$, $X$ is approximated by a random walk $X^h$ on a lattice with spacing $h$, skip-free to the right, and embedded into continuous time as a compound Poisson process (see Definition~\ref{def:USF_Levy_chain}). Then, in recursion~\eqref{eq:LinRecursion}, $W_h$ is the scale function associated to $X^h$ --- it plays for $X^h$ a r\^ole probabilistically analogous to the one $W$ plays for $X$. Thus $W_h$ is computed as an approximation to $W$ (see Corollary~\ref{proposition:calculating_scale_functions}).
When it comes to existing methods for the evaluation of $W$, note that analytically $W$ is characterized via its Laplace transform $\widehat{W}$, $\widehat{W}$ in turn being a certain rational function of the Laplace exponent $\psi$ of $X$. However, already $\psi$ need not be given directly in terms of elementary/special functions, and less often still is it possible to obtain closed-form expressions for $W$ itself. The user is then faced with a Laplace inversion algorithm \cite{CohenAM} \cite[Chapter~5]{kuznetsovkyprianourivero}, which (i) necessarily involves the evaluation of $\psi$, typically at complex values of its argument and requiring high-precision arithmetic due to numerical instabilities; (ii) says little about the dependence of the scale function on the L\'evy triplet of $X$ (recall that $\psi$ depends on a parametric complex integral of the L\'evy measure, making it hard to discern how a perturbation in the L\'evy measure influences the values taken by the scale function); and (iii) being a numerical approximation, fails \emph{a priori} to ensure that the computed values of the scale function are probabilistically meaningful (e.g. given an output of a numerical Laplace inversion, it is not necessary that the formulae for, say, exit probabilities, involving $W$, should yield values in the interval $[0,1]$).
By contrast, it follows from~\eqref{eq:LinRecursion} and the discussion following, that our proposed algorithm (i)~requires no evaluations of the Laplace exponent of $X$ and is numerically stable, as it operates in nonnegative real arithmetic \cite[Theorem~7]{panjer}; (ii) provides an explicit link between the deterministic semimartingale characteristics of $X$, in particular its L\'evy measure,
and the scale function $W$; and (iii) yields probabilistically consistent outputs. Further, the values of $W_h$ are so computed by a simple finite linear recursion and, as a by-product of the evaluation of $W_h(x)$, values $W_h(y)$ for all the grid-points $y=0,h,2h,\ldots,x-h,x$, are obtained (see Matlab code for the algorithm in~\cite{Scale_Function_Code}), which is useful in applications (see Section~\ref{section:numerical_illustrations} below).
Our main results will (I) show that $W_h$ converges to $W$ pointwise, and uniformly on the grid with spacing $h$ (if bounded away from $0$ and $+\infty$), for any spectrally negative L\'evy process,
and (II) establish sharp rates for this convergence under a mild assumption on the L\'evy measure.
Due to the explicit connection between the coefficients appearing in \eqref{eq:LinRecursion} and the L\'evy triplet of $X$, \eqref{eq:LinRecursion} also has the spirit of its one-dimensional It\^o diffusion analogue, wherein the computation of the scale function requires numerical evaluation of certain integrals of the coefficients of the SDE driving said diffusion (for the explicit formulae of the integrals see e.g.~\cite[Chapters~2 and~3]{BorodinSalminen}). Indeed, we express $W$ as a single limit, as $h\downarrow 0$, of nonnegative terms explicitly given in terms of the L\'evy triplet. This is more direct than the Laplace inversion of a rational transform of the Laplace exponent, and hence may be of purely theoretical significance (see Remark~\ref{remark:numerics:three} on how the scale functions are affected by a perturbation of the L\'evy measure, following directly from a transformed form of \eqref{eq:LinRecursion}).
Finally, note that an algorithm, completely analogous to~\eqref{eq:LinRecursion}, for the computation of the scale functions $W^{(q)}$, and also $Z^{(q)}$, $q\geq0$, follows from our results (see Corollary~\ref{proposition:calculating_scale_functions}, Eq.~\eqref{equation:recursion:Wq} and~\eqref{equation:recursion:Zq})
and presents no further difficulty for the analysis of convergence (see Theorem~\ref{theorem:rates_for_scale_functions} below). Indeed, our discretization allows naturally to approximate other quantities involving scale functions, which arise in application: the derivatives of $W^{(q)}$ by difference quotients of $W^{(q)}_h$; the integrals of a continuous (locally bounded) function against $dW^{(q)}$ by its integrals against $dW^{(q)}_h$; expressions of the form $\int_0^xF(y,W^{(q)}(y))dy$, where $F$ is continuous locally bounded, by the sums $\sum_{k=0}^{\lfloor x/h\rfloor -1}F(kh,W^{(q)}_h(kh))h$ etc. (See Section~\ref{section:numerical_illustrations} for examples.)
\subsection{Overview of main
results}
\label{subsec:Main_REsults_description}
The key idea leading to the algorithm
in~\eqref{eq:LinRecursion}
is best described by the following two steps:
(i) approximate the spectrally negative L\'evy process
$X$
by a
continuous-time Markov chain (CTMC)
$X^h$
with state space
$\mathbb{Z}_h:=\{hk\!\!:k\in\mathbb Z\}$
($h\in (0,h_\star)$ for some $h_\star>0$),
as described in Subsection~\ref{subsection:the_approximation};
(ii) find an algorithm for computing the scale functions
of the chain $X^h$.
The approximation in Subsection~\ref{subsection:the_approximation}
implies that
$X^h$
is a compound Poisson (CP) process, which is not spectrally negative.
However, since the corresponding jump chain of $X^h$ is a
skip-free to the right
$\mathbb{Z}_h$-valued random walk,
it is possible to introduce (right-continuous, nondecreasing) scale functions
$(W^{(q)}_h)_{q\geq 0}$ and $(Z^{(q)}_h)_{q\geq 0}$ (with measures
$dW^{(q)}_h$ and $dZ^{(q)}_h$ supported in $\mathbb{Z}_h$), in
analogy to the spectrally negative case.
Moreover, as described in Corollary~\ref{proposition:calculating_scale_functions},
a straightforward recursive algorithm is readily available for evaluating \emph{exactly} any function in the families
$(W^{(q)}_h)_{q\geq 0}$ and
$(Z^{(q)}_h)_{q\geq 0}$ at any point. More precisely, it emerges that for each $x\in \mathbb{Z}_h$, $W^{(q)}_h(x)$ (resp. $Z^{(q)}_h(x)$) obtains as a \emph{finite} linear combination of the preceding values $W^{(q)}_h(y)$ (resp. $Z^{(q)}_h(y)$) for $y\in \{0,h,\ldots,x-h\}$; with the starting value $W^{(q)}_h(0)$ (resp. $Z^{(q)}_h(0)$) being known explicitly. This is in spite of the fact that the state space of the L\'evy process $X^h$ is in fact the \emph{infinite} lattice $\mathbb{Z}_h$.
In order to precisely describe the rates of convergence of the
algorithm in~\eqref{eq:LinRecursion}, we introduce some notation. Fix $q\geq 0$ and define for $K,G$ bounded
subsets of $(0,\infty)$:
$$\Delta_W^K(h):=\sup_{x\in \mathbb{Z}_h\cap
K}\left\vert W^{(q)}_h(x-\delta^0h)-W^{(q)}(x)\right\vert\text{ and
}\Delta_Z^G(h):=\sup_{x\in \mathbb{Z}_h\cap G}\left\vert Z^{(q)}_h(x)-Z^{(q)}(x)\right\vert,$$
where $\delta^0$ equals $0$
if
$X$ has sample paths of finite
variation and
$1$ otherwise.
We further introduce:
$$\kappa(\delta):=\int_{[-1,-\delta)}\vert y\vert\lambda(dy),\qquad\text{for any $\delta\geq 0$.}$$
If the jump part of
$X$
has paths of infinite variation,
i.e. in the case the equality $\kappa(0)=\infty$ holds,
we assume
(throughout the paper we shall make it explicit when this assumption is in effect):
\begin{assumption}\label{assumption:salient}
There exists $\epsilon\in (1,2)$ with:
\begin{enumerate}[(1)]
\item\label{assumption:salient:one} $\limsup_{\delta\downarrow 0}\delta^{\epsilon}\lambda(-1,-\delta)<\infty$ and
\item\label{assumption:salient:two} $\liminf_{\delta\downarrow 0}\int_{[-\delta,0)}x^2\lambda(dx)/\delta^{2-\epsilon}>0$.
\end{enumerate}
\end{assumption}
Note that this is a fairly mild condition, fulfilled if e.g. $\lambda(-1,-\delta)$ ``behaves as'' $\delta^{-\epsilon}$, as $\delta\downarrow 0$; for a precise statement see Remark~\ref{remark:diffusion_positive:asymptotic}.
Here is now our main result:
\begin{theorem}\label{theorem:rates_for_scale_functions}
Let $K$ and $G$ be bounded subsets of $(0,\infty)$, $K$ bounded away from zero when $\sigma^2=0$. If $\kappa(0)=\infty$, suppose further that Assumption~\ref{assumption:salient} is fulfilled. Then the rates of convergence of the scale functions are summarized by the following table:
\begin{center}
\begin{tabular}{|c|c|}\hline
$\lambda(\mathbb{R})=0$ & $\Delta_W^K(h)=O(h^2)$ and $\Delta_Z^G(h)=O(h)$\\\hline
$0<\lambda(\mathbb{R})$ \& $\kappa(0)<\infty$ & $\Delta_W^K(h)+\Delta_Z^G(h)=O(h)$\\\hline
$\kappa(0)=\infty$ & $\Delta_W^K(h)+\Delta_Z^G(h)=O(h^{2-\epsilon})$\\\hline
\end{tabular}
\end{center}
Moreover, the rates so established are sharp in the sense that for each of the
three entries in the table above, examples of spectrally negative
L\'evy processes are constructed for which the rate of convergence is no better than
stipulated.
\end{theorem}
\begin{remark}
\noindent (1) The rates of convergence depend on the behaviour of the tail of the L\'evy measure at the origin; by contrast, the behaviour of Laplace inversion algorithms tends to be susceptible to the degree of smoothness of the scale function itself (for which see \cite{kyprianou:smoothness}) \cite{abate_unified}.\\
\noindent (2) More exhaustive and at times more general statements are to be found in
Propositions~\ref{proposition:convergence:BM+drift:Wq}-
\ref{proposition:convergence:diffusion_zero_infinite_variation}. In particular, the case $\sigma^2>0$ and $\kappa(0)=\infty$ does not require
Assumption~\ref{assumption:salient} to be fulfilled, although the statement of
the convergence rate is more succinct under its proviso.\\
\noindent (3) The proof of Theorem~\ref{theorem:rates_for_scale_functions} consists of studying the differences of the integral representations of the scale functions.
The integrands, however, decay only according to some power law, making the analysis much more involved than was the case in \cite{vidmarmijatovicsaul}, where the corresponding decay was exponential. In particular, one cannot, in the pure-jump case, directly apply the integral triangle inequality. The structure of the proof is explained in detail in Subsection~\ref{subsection:method_preliminary_etc}. \\
\noindent (4) Since scale functions often appear in applications (for which see Section~\ref{subsections:scales:applications}
below) in the form $W^{(q)}(x)/W^{(q)}(y)$ ($x,y>0$, $q\geq 0$), we note that the rates from Theorem~\ref{theorem:rates_for_scale_functions}
transfer directly to such quotients, essentially because
$W^{(q)}_h(y)\to W^{(q)}(y)\in (0,\infty)$, as $h\downarrow 0$, and since for all $h\in (0,h_\star)$, $\frac{1}{W^{(q)}(y)}-\frac{1}{W^{(q)}_h(y)}=\frac{W^{(q)}_h(y)-W^{(q)}(y)}{W^{(q)}(y)W^{(q)}_h(y)}$.\\
\noindent (5) For a result concerning the derivatives of $W^{(q)}$ see Subsection~\ref{subsection:functionals_of_scale_fncs_conv}.
\end{remark}
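For instance, the first row of the table can be checked against the closed form $W(x)=(1-e^{-2\mu x/\sigma^2})/\mu$, valid for Brownian motion with drift; in the sketch below (an illustration of ours, with arbitrary parameter values) note the grid shift $\delta^0=1$ appropriate to the infinite variation case, without which only a first order error would be observed:
\begin{verbatim}
# lambda = 0 (scheme 1, V = 0): only gamma_h and gamma_{-h} are nonzero.
import numpy as np

def W_h(x, h, sigma2, mu):
    n = int(round(x/h))
    g_p = sigma2/(2*h**2) + mu/(2*h)         # gamma_h
    g_m = sigma2/(2*h**2) - mu/(2*h)         # gamma_{-h}
    W = np.empty(n + 1)
    W[0] = 1.0/(g_p*h)
    for j in range(n):
        W[j + 1] = W[0] + W[j]*g_m/g_p       # only the k = 1 term survives
    return W

sigma2, mu, x = 1.0, 0.5, 1.0
W_exact = (1.0 - np.exp(-2*mu*x/sigma2))/mu
for h in (0.1, 0.05, 0.025):
    err = abs(W_h(x, h, sigma2, mu)[-2] - W_exact)   # W_h(x-h) vs W(x)
    print(f"h = {h:5.3f}   error = {err:.3e}")       # ~ O(h^2)
\end{verbatim}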
\subsection{Overview of the literature and of the applications of scale functions}\label{subsections:scales:applications}
For the general theory of spectrally negative L\'evy processes and their scale functions we refer to~\cite[Chapter~8]{kyprianou} and~\cite[Chapter~VII]{bertoin}, while an excellent account of available numerical methods for computing them can be found in~\cite[Chapter~5]{kuznetsovkyprianourivero}.
Examples, few, but important, of processes when the scale functions
\emph{can} be given analytically, appear e.g. in \cite{hubalek}; and in certain cases it is
possible to construct them \emph{indirectly} \cite[Chapter~4]{kuznetsovkyprianourivero} (i.e. not starting from the basic datum, which we consider here to be the characteristic triplet of
$X$). Finally, in the special case when $X$ is a positive drift minus a compound Poisson subordinator, we note that numerical schemes for (finite time) ruin/survival probabilities (expressible in terms of scale functions), based on discrete-time Markov chain approximations of one sort or another, have been proposed in the literature (see \cite{vylder,dickson,cardoso,dicksongray} and the references therein).
In terms of applications of scale functions in applied probability, there are numerous identities concerning
boundary crossing problems and related path decompositions in which scale
functions feature~\cite[p.~100]{kuznetsovkyprianourivero}. They do so either (a)~indirectly (usually as Laplace transforms of quantities which are ultimately of
interest), or even (b)~directly (then typically, but not always, as probabilities in the form of quotients $W(x)/W(y)$). For examples of the latter see the two-sided exit problem \cite[Chapter VII, Theorem 8]{bertoin}; ruin probabilities \cite[p. 217, Eq.~(8.15)]{kyprianou} and the Gerber-Shiu measure \cite[Section~5.4]{gerber_shiu} in the insurance/ruin theory context; laws of suprema of continuous-state branching processes \cite[Proposition~3.1]{bingham_branching}; L\'evy measures of limits of continuous-state branching processes with immigration (CBI processes) \cite[Eq.~(3.7)]{keller_mijatovic}; laws of branch lengths in population biology \cite[Eq.~(7)]{lambert}; the Shepp-Shiryaev optimal stopping problem (solved for the spectrally negative case in \cite[Theorem~2, Eq.~(30)]{avram_exitproblems}); \cite[Proposition~1]{loeffen} for an optimal dividend control problem. A further overview of these and other applications of scale functions (together with their derivatives and the integrals $Z^{(q)}$), e.g. in queuing theory and fragmentation processes, may be found in \cite[Section 1.2]{kuznetsovkyprianourivero}, see also the references therein. A suite of identities involving Laplace transforms of quantities pertaining to the reflected process of $X$ appears in \cite{mijatovicpistorius}.
\subsection{Organisation of the remainder of the paper}
Section~\ref{section:setting_and_notation} gives the setting and fixes general notation. Section~\ref{section:upwards_skip_free} introduces upwards skip-free L\'evy chains (they being the continuous-time analogues of random walks, which are skip-free to the right), describes their scale functions and how to compute them. In Section~\ref{section:Convergence_of_scale_functions} we demonstrate pointwise convergence of the approximating scale functions to those of the spectrally negative L\'evy process. Then Section~\ref{section:convergence_rates} establishes the rate at which this convergence transpires. Finally, Section~\ref{section:numerical_illustrations} provides some numerical illustrations and further discusses the computational side of the proposed algorithm. Appendices~\ref{appendix:technical_lemmas} and~\ref{appendix:some_asymptotic_properties_of_measures_on_R} contain the proofs of technical results from Subsection~\ref{subsection:auxiliary_technical_results}, while Appendix~\ref{appendix:further_examples} provides some additional numerical examples.
\section{Setting and general notation}\label{section:setting_and_notation}
Throughout this paper we let $X$ be a spectrally negative L\'evy process (i.e. $X$ has stationary independent increments, is c\`adl\`ag, $X_0=0$ a.s., the L\'evy measure $\lambda$ of $X$ is concentrated on $(-\infty,0)$ and $X$ does not have a.s. monotone paths). The Laplace exponent $\psi$ of $X$, defined via $\psi(\beta):=\log \mathsf E[e^{\beta X_1}]$ ($\beta\in\{\gamma\in \mathbb{C}:\Re\gamma\geq 0\}=:\overline{\mathbb{C}^{\rightarrow}}$), can be expressed as (see e.g. \cite[p. 188]{bertoin}):
\begin{equation}\label{eq:laplace_exponent}
\psi(\beta)=\frac{1}{2}\sigma^2\beta^2+\mu \beta+\int_{(-\infty,0)}\left(e^{\beta y}-\beta \tilde{c}(y)-1\right)\lambda(dy),\quad \beta\in \overline{\mathbb{C}^{\rightarrow}}.
\end{equation}
The L\'evy triplet of $X$ is thus given by $(\sigma^2,\lambda,\mu)_{\tilde{c}}$, $\tilde{c}:=\mathrm{id}_\mathbb{R}\mathbbm{1}_{[-V,0)}$ with $V$ equal to either $0$ or $1$, the former only if $\int_{[-1,0)}\vert x\vert\lambda(dx)<\infty$ (where $\mathrm{id}_\mathbb{R}$ is the identity on $\mathbb{R}$). Further, when the L\'evy measure satisfies $\int_{[-1,0)}\vert x\vert\lambda(dx)<\infty$, we may always express $\psi$ in the form $\psi(\beta)=\frac{1}{2}\sigma^2\beta^2+\mu_0 \beta+\int_{(-\infty,0)}\left(e^{\beta y}-1\right)\lambda(dy)$ for $\beta\in\overline{\mathbb{C}^{\rightarrow}}$. If in addition $\sigma^2=0$, then necessarily the drift $\mu_0$ must be strictly positive, $\mu_0>0$ \cite[p. 212]{kyprianou}.
\subsection{The approximation}\label{subsection:the_approximation}
We now recall from \cite{vidmarmijatovicsaul}, specializing to the spectrally negative setting, the spatial discretisation of $X$ by the family of CTMCs $(X^h)_{h\in (0,h_\star)}$ (where $h_\star\in (0,+\infty]$). This family weakly approximates $X$ as $h\downarrow 0$. As in \cite{vidmarmijatovicsaul} we will use two approximating schemes, scheme 1 and 2, according as $\sigma^2>0$ or $\sigma^2=0$. Recall that two different schemes are introduced since the case $\sigma^2>0$ allows for a better (i.e. a faster converging) discretization of the drift term, but the case $\sigma^2=0$ (in general) does not \cite[Paragraph~2.2.1]{vidmarmijatovicsaul}. Let also $V=0$, if $\lambda$ is finite and $V=1$, if $\lambda$ is infinite.
Notation-wise, define for $h>0$,
$c^h_{y}:=\lambda(A_y^h)$ with $A_y^h:=[y-h/2,y+h/2)$ ($y\in
\mathbb{Z}_h^{--}:=\mathbb{Z}_h\cap (-\infty,0)$); $A_0^h:=[-h/2,0)$;
$$c^h_0:=\int_{A_0^h}y^2\mathbbm{1}_{[-V,0)}(y)\lambda(dy)\quad\text{ and }\quad\mu^h:=\sum_{y\in
\mathbb{Z}_h^{--}}y\int_{A_y^h}\mathbbm{1}_{[-V,0)}(z)\lambda(dz).$$
We now specify the law of the approximating chain $X^h$ by insisting that (i) $X^h$ is a compound Poisson (CP)
process, with $X^h_0=0$ a.s., whose positive jumps do not exceed $h$ -- hence it admits a Laplace exponent $\psi^h(\beta):=\log\mathsf E[e^{\beta X^h_1}]$ ($\beta\in \overline{\mathbb{C}^{\rightarrow}}$) --; and (ii) by specifying $\psi^h$ under scheme 1, as:
\begin{equation}\label{eq:laplace_exponent:scheme1}
\psi^h(\beta)=(\mu-\mu^h) \frac{e^{\beta h}-e^{-\beta h}}{2h}+(\sigma^2+c_0^h)\frac{e^{\beta h}+e^{-\beta h}-2}{2h^2}+\sum_{y\in \mathbb{Z}_h^{--}}c_y^h\left(e^{\beta y}-1\right),
\end{equation}
and under scheme 2, as:
\begin{equation}
\psi^h(\beta)=(\mu-\mu^h)\frac{e^{\beta h}-1}{h} + c_0^h\frac{e^{\beta h}+e^{-\beta h}-2}{2h^2} +\sum_{y\in \mathbb{Z}_h^{--}}c_y^h\left(e^{\beta y}-1\right).
\label{eq:laplace_exponent:scheme2}
\end{equation}
This is consistent with the approximation of \cite{vidmarmijatovicsaul}: the above Laplace exponents follow from the forms of the characteristic exponents \cite[Eq.~(3.1)
and~(3.2)]{vidmarmijatovicsaul} via analytic
continuation, and to properly appreciate where the different terms appearing in \eqref{eq:laplace_exponent:scheme1}-\eqref{eq:laplace_exponent:scheme2} come from, we refer the reader to our paper \cite{vidmarmijatovicsaul}, especially Section~2.1 therein.
\begin{table}[!hbt]
\caption{The usage of schemes 1 and 2 and of $V$ depends on the nature of $\sigma^2$ and $\lambda$.
}\label{table:reference}
\begin{center}
\begin{tabular}{|c|c|c|}\hline
L\'evy measure/diffusion part& $\sigma^2>0$ & $\sigma^2=0$ \\\hline
$\lambda(\mathbb{R})<\infty$ & $V=0$, scheme 1 & $V=0$, scheme 2\\\hline
$\lambda(\mathbb{R})=\infty$ & $V=1$, scheme 1 & $V=1$, scheme 2\\\hline
\end{tabular}
\end{center}
\end{table}
Indeed, note that, starting directly from \cite[Eq.~(3.2)]{vidmarmijatovicsaul}, the term $(\mu-\mu^h)\frac{e^{\beta h}-1}{h}$ in \eqref{eq:laplace_exponent:scheme2} should actually read as: $$(\mu-\mu^h)\left(\frac{e^{\beta h}-1}{h}\mathbbm{1}_{[0,\infty)}(\mu-\mu^h)+\frac{1-e^{-\beta h}}{h}\mathbbm{1}_{(-\infty,0]}(\mu-\mu^h)\right).$$ However, when $X$ is a spectrally negative L\'evy process with $\sigma^2=0$, we have $\mu-\mu^h\geq 0$, at least for all sufficiently small $h$. Indeed, if $\int_{[-1,0)}\vert y\vert\lambda(dy)<\infty$, then $\mu_0>0$ and by dominated convergence $\mu-\mu^h\to\mu_0$ as $h\downarrow 0$. On the other hand, if $\int_{[-1,0)}\vert y\vert\lambda(dy)=\infty$, then we deduce by monotone convergence $-\mu^h\geq \frac{1}{2}\int_{ [-1,-h/2)}\vert y\vert\lambda(dy)\to\infty$ as $h\downarrow 0$. We shall assume throughout that $h_\star$ is already chosen small enough, so that $\mu-\mu^h\geq 0$ holds for all $h\in (0,h_\star)$.
In summary, then, $h_\star$ is chosen so small as to guarantee that, for all $h\in (0,h_\star)$: (i) $\mu-\mu^h\geq 0$ and (ii) $\psi^h$ is the Laplace exponent of some CP process $X^h$, which is also a CTMC with state space $\mathbb{Z}_h$ (note that in \cite[Proposition 3.9]{vidmarmijatovicsaul} it is shown $h_\star$ can indeed be so chosen, viz. point (ii)). Eq.~\eqref{eq:laplace_exponent:scheme1} and \eqref{eq:laplace_exponent:scheme2} then determine the \emph{weak} approximation $(X^h)_{h\in (0,h_\star)}$ precisely.
Finally, for $h\in (0,h_\star)$, let $\lambda^h$ denote the L\'evy measure of
$X^h$. In particular, $\psi^h(\beta)=\int\left(e^{\beta
y}-1\right)\lambda^h(dy)$, $\beta\in\overline{\mathbb{C}^{\rightarrow}}$, $h\in (0,h_\star)$, so that the jump intensities, equivalently the L\'evy measure, of $X^h$ can be read off directly from \eqref{eq:laplace_exponent:scheme1}-\eqref{eq:laplace_exponent:scheme2}. It will also be
convenient to define $\psi^0:=\psi$.
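For concreteness, the following Python sketch (ours and purely illustrative; all identifiers, the example triplet and the truncation level are our own choices) assembles the intensities $c^h_y$ and the Laplace exponent $\psi^h$ of \eqref{eq:laplace_exponent:scheme1} for the toy triplet $\sigma^2>0$, $\mu\in\mathbb{R}$ and $\lambda(dy)=e^y\,dy$ on $(-\infty,0)$ -- a finite measure, so that $V=0$ and $\mu^h=c_0^h=0$, while $\psi(\beta)=\frac{1}{2}\sigma^2\beta^2+\mu\beta-\beta/(1+\beta)$ in closed form:
\begin{verbatim}
import numpy as np

# Illustrative sketch: scheme 1 for sigma^2 > 0 and lambda(dy) = e^y dy
# on (-infinity, 0).  Since lambda is finite, V = 0 and mu^h = c_0^h = 0;
# the sum over Z_h^{--} is truncated, harmless as lambda decays exponentially.

sigma2, mu = 1.0, 0.5

def c_h(h, n_cells=4000):
    """Intensities c_y^h = lambda(A_y^h) at y = -h, -2h, ..."""
    ys = -h * np.arange(1, n_cells + 1)
    return ys, np.exp(ys) * (np.exp(h / 2) - np.exp(-h / 2))

def psi_h(beta, h):
    """Laplace exponent of X^h under scheme 1."""
    ys, c = c_h(h)
    out = mu * (np.exp(beta * h) - np.exp(-beta * h)) / (2 * h)       # drift
    out += sigma2 * (np.exp(beta * h) + np.exp(-beta * h) - 2) / (2 * h ** 2)
    return out + np.sum(c * (np.exp(beta * ys) - 1.0))                # jumps

def psi(beta):
    return 0.5 * sigma2 * beta ** 2 + mu * beta - beta / (1.0 + beta)

print(psi(2.0), psi_h(2.0, h=0.01))  # the two values should nearly coincide
\end{verbatim}
The quantities so computed are exactly the jump rates of the CTMC $X^h$, except at the two neighbours $\pm h$, which additionally carry the discretized drift and diffusion.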
\subsection{Connection with integro-differential equations}\label{subsection:connection_to_IDE}
An alternative form of \eqref{eq:LinRecursion} (as generalized to the case of arbitrary $q\geq 0$; see \eqref{equation:recursion:Wq_basic} of Proposition~\ref{proposition:calculating_scale_functions:basic}) is the analogue of the relation $(L-q)W^{(q)}=0$ in the spectrally negative case, the latter holding true under sufficient regularity conditions on $W^{(q)}$ (see e.g. \cite[Eq.~(12)]{biffis}). Here $L$ is the infinitesimal generator of $X$ \cite[p. 208, Theorem 31.5]{sato}: $$Lf(x)=\frac{\sigma^2}{2}f''(x)+\mu f'(x)+\int_{(-\infty,0)}\left(f(x+y)-f(x)-yf'(x)\mathbbm{1}_{[-V,0)}(y)\right)\lambda(dy)$$ ($f\in C^2_0(\mathbb{R})$, $x\in \mathbb{R}$). This suggests there might be a link between our probabilistic approximation and solutions to integro-differential equations.
Indeed, one can check, for each $q\geq 0$, by taking Laplace transforms (using \eqref{eq:laplace_exponent}, the expression for the Laplace transform of $W^{(q)}$ \cite[p. 100, Eq.~(4)]{kuznetsovkyprianourivero}, nondecreasingness of $W^{(q)}$, and the theorem of Fubini), that the function $W^{(q)}$ satisfies the following integro-differential equation (cf. \cite[Corollary IV.3.3]{asmussen} for the case $\sigma^2=0$, $\lambda(\mathbb{R})<+\infty$, and survival probabilities):
\footnotesize
\begin{equation}\label{eq:integro-differential}
\frac{1}{2}\sigma^2 \frac{dW^{(q)}}{dx}(x)=1-\mu W^{(q)}(x)+\int_0^\infty\left(W^{(q)}(x-y)(\lambda(-\infty,-y)+q)-W^{(q)}(x)\lambda[-V,-y)\mathbbm{1}_{(0,V]}(y)\right)dy
\end{equation}
\normalsize
(for the value of $W^{(q)}(0)$ see \cite[p. 127, Lemma 3.1]{kuznetsovkyprianourivero}). Note that in the last integral of \eqref{eq:integro-differential}, the two terms appearing in its integrand cannot be separated when $\kappa(0)=+\infty$.
Furthermore, \eqref{eq:LinRecursion} as generalized to arbitrary $q\geq 0$ (see Corollary~\ref{proposition:calculating_scale_functions}, Eq.~\eqref{equation:recursion:Wq}) can be rewritten, when $\sigma^2=0$, as (for $x\in \mathbb{Z}_h^{++}$):
\footnotesize
\begin{equation}\label{eq:integrodiff:scheme2}
\frac{1}{2}c_0^h\frac{W^{(q)}_h(x)-W^{(q)}_h(x-h)}{h}=1+(\mu^h-\mu)W^{(q)}_h(x)+h\sum_{k=1}^{x/h}W^{(q)}_h(x-kh)\left(\lambda(-\infty,-(k-1/2)h)+q\right),
\end{equation}
\normalsize
with $W^{(q)}_h(0)=\frac{1}{\frac{1}{2}c_0^h/h+\mu-\mu^h}$, and when $\sigma^2>0$, as (again for $x\in \mathbb{Z}_h^{++}$):
\footnotesize
\begin{eqnarray}
\nonumber \frac{1}{2}\left(\sigma^2+c_0^h\right)\frac{W^{(q)}_h(x)-W^{(q)}_h(x-h)}{h}&=&1+(\mu^h-\mu)\frac{W^{(q)}_h(x)+W^{(q)}_h(x-h)}{2}\\
&&+h\sum_{k=1}^{x/h}W^{(q)}_h(x-kh)\left(\lambda(-\infty,-(k-1/2)h)+q\right),
\label{eq:integrodiff:scheme1}
\end{eqnarray}
\normalsize
with $W^{(q)}_h(0)=\frac{2h}{\sigma^2+c_0^h+(\mu-\mu^h)h}$.
Thus \eqref{eq:integrodiff:scheme2} and \eqref{eq:integrodiff:scheme1} can be seen as (simple) approximation schemes for the integro-differential equation \eqref{eq:integro-differential}.\footnote{We are grateful to an anonymous referee for pointing out this connection, on an earlier draft of this paper.} However, from this viewpoint alone it would be very difficult indeed to ``guess'' the correct discretization, namely one which also yields meaningful generalized scale functions of the approximating chains; the latter is our starting point and precisely the aspect of our schemes that we wish to emphasize. Indeed, higher-order schemes for \eqref{eq:integro-differential}, if and when they exist, would (likely) no longer be connected with L\'evy chains.
\subsection{General notation}\label{subsection:further_general_notation}
With regard to miscellaneous notation, we let $\mathbb{R}_+$ (respectively $\mathbb{R}^+$) be the nonnegative (respectively strictly positive) real numbers; $\mathbb{Z}_+=\mathbb{N}_0$ (respectively $\mathbb{Z}^+$, $\mathbb{Z}_-$, $\mathbb{Z}^-$) the nonnegative, (respectively strictly positive, nonpositive, strictly negative) integers; $\mathbb{Z}_h^+$ (respectively $\mathbb{Z}_h^{++}$, $\mathbb{Z}_h^-$, $\mathbb{Z}_h^{--}$) the nonnegative (respectively strictly positive, nonpositive, strictly negative) elements of $\mathbb{Z}_h:=\{hk:k\in\mathbb Z\}$; $\mathbb{C}^{\leftarrow}:=\{z\in\mathbb{C}\!\!:\Re z<0\}$ (respectively $\mathbb{C}^{\rightarrow}:=\{z\in\mathbb{C}\!\!:\Re z>0\}$) with $\overline{\mathbb{C}^{\leftarrow}}$ (respectively $\overline{\mathbb{C}^{\rightarrow}}$) denoting the closure of this set (note that the arrow notation is suggestive of which halfplane is being considered). To a nondecreasing right-continuous function $F:\mathbb{R}\to\mathbb{R}$ a measure $dF$ may be associated in the Lebesgue-Stieltjes sense; stating that $F$ is of $M$-exponential order means $\vert F\vert(x)\leq C e^{M x}$ for all $x\geq 0$, for some $C<\infty$. On the other hand, a real-valued function $f$, defined on some $(a,\infty)$, $a\in \mathbb{R}$ is said to grow asymptotically $M$-exponentially, $M\in (0,\infty)$, if $\{\liminf_{x\to \infty}f(x)e^{-Mx},\limsup_{x\to\infty} f(x)e^{-Mx}\}\subset (0,\infty)$. Further, for functions $g\geq 0$ and $h>0$ defined on some right neighborhood of $0$, $g\sim h$ (resp. $g=O(h)$, $g=o(h)$) means $\lim_{0+}g/h\in (0,\infty)$ (resp. $\limsup_{0+}g/h<\infty$, $\lim_{0+}g/h=0$). Next, the Laplace transform of a measurable function $f:\mathbb{R}\to\mathbb{R}$ of $M$-exponential order, with $f\vert_{(-\infty,0)}$ constant (respectively measure $\mu$ on $\mathbb{R}$, concentrated on $[0,\infty)$) is denoted $\hat{f}$ (respectively $\hat{\mu}$): $\hat{f}(\beta)=\int_0^\infty e^{-\beta x}f(x)dx$ for $\Re\beta>M$ (respectively $\hat{\mu}(\beta)=\int_{[0,\infty)}e^{-\beta x}\mu(dx)$ for all $\beta\geq 0$ such that this integral is finite). For $\{x,y\}\subset\mathbb{R}$, $\lfloor x\rfloor:=\max\{k\in \mathbb Z:k\leq x\}$, $x\lor y=\max\{x,y\}$ and $x\land y=\min\{x,y\}$. A sequence $(h_n)_{n\in\mathbb{N}}$ of non-zero real numbers is said to be nested, if $h_n/h_{n+1}\in \mathbb{N}$ for all $n\in \mathbb{N}$. $\delta_x$ denotes the Dirac measure at $x\in \mathbb{R}$. Finally, increasing will mean strictly increasing; DCT stands for the Dominated Convergence Theorem; and we interpret $\pm a/0=\pm \infty$ for $a>0$.
\section{Upwards skip-free L\'evy chains and their scale functions}\label{section:upwards_skip_free}
In the sequel, we will require a fluctuation theory (and, in particular, a
theory of scale functions) for random walks, which are skip-free to the right,
once these have been embedded into continuous-time as CP processes (see next
definition and remark). Indeed, this theory has been developed in full detail
in \cite{vidmar:fluctuation_theory} and we recall here, for the reader's
convenience, the pertinent results.
\begin{definition}
\label{def:USF_Levy_chain}
A L\'evy process $Y$ with L\'evy measure $\nu$ is
said to be an \emph{upwards skip-free L\'evy chain}, if it is compound Poisson
and the L\'evy measure $\nu$ satisfies
$\mathrm{supp}(\nu)\subset
\mathbb{Z}_h$ and $\mathrm{supp}(\nu\vert_{\mathcal{B}((0,\infty))})=\{h\}$
for some $h>0$.
\end{definition}
\begin{remark}\label{remark:ours_are_usf}
For all $h\in (0,h_\star)$, $X^h$ is an upwards skip-free L\'evy chain.
\end{remark}
For the remainder of this section we let $Y$ be an upwards skip-free L\'evy chain with L\'evy measure $\nu$, such that $\nu(\{h\})>0$ ($h>0$).
The following is either clear or else can be found in \cite[Subsection 3.1]{vidmar:fluctuation_theory}:
\begin{enumerate}[(i)]
\item One can introduce the Laplace exponent $\varphi:\overline{\mathbb{C}^{\rightarrow}}\to\mathbb{C}$,
given by $\varphi(\beta):=\int_{\mathbb{R}}(e^{\beta x}-1)\nu(dx)$
($\beta\in \overline{\mathbb{C}^{\rightarrow}}$), for which: $\mathsf E[e^{\beta Y_t}]=\exp\{t\varphi(\beta)\}$
($\beta\in \overline{\mathbb{C}^{\rightarrow}}$, $t\geq 0)$. $\varphi$ is continuous in $\overline{\mathbb{C}^{\rightarrow}}$, analytic in
$\mathbb{C}^{\rightarrow}$, $\lim_{+\infty}\varphi\vert_{[0,\infty)}=+\infty$ with
$\varphi\vert_{[0,\infty)}$ strictly convex.
\item Let
$\Phi(0)\in [0,\infty)$ be the largest root of $\varphi$ on $[0,\infty)$.
Then $\varphi\vert_{[\Phi(0),\infty)}:[\Phi(0),\infty)\to [0,\infty)$ is an
increasing bijection and we let
$\Phi:=(\varphi\vert_{[\Phi(0),\infty)})^{-1}:[0,\infty)\to [\Phi(0),\infty)$
be its inverse; a small numerical sketch of how $\Phi$ may be computed follows this list.
\end{enumerate}
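With reference to (ii), $\Phi$ is in practice easily evaluated by bracketed root-finding on $\varphi$. The following sketch (ours; the measure $\nu=a\delta_1+b\delta_{-1}$, the bracket $[10^{-9},50]$ and all names are illustrative assumptions) uses SciPy's \texttt{brentq}:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Illustrative: nu = a*delta_1 + b*delta_{-1}, so that
# phi(beta) = a(e^beta - 1) + b(e^{-beta} - 1) is strictly convex on [0, inf).

a, b = 1.0, 2.0                      # here phi'(0) = a - b < 0

def phi(beta):
    return a * np.expm1(beta) + b * np.expm1(-beta)

# Phi(0) is the largest root of phi on [0, inf); by strict convexity and
# phi(0) = 0 it is the unique positive root when phi'(0) < 0, and 0 otherwise.
Phi0 = brentq(phi, 1e-9, 50.0) if a - b < 0 else 0.0

def Phi(q):
    """Inverse of phi restricted to [Phi(0), infinity)."""
    return Phi0 if q == 0.0 else brentq(lambda be: phi(be) - q, Phi0, 50.0)

print(Phi0, Phi(0.5), phi(Phi(0.5)))  # here Phi(0) = log 2; phi(Phi(q)) = q
\end{verbatim}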
We introduce in the next proposition two families of scale functions for $Y$,
which play analogous roles in the solution of exit problems, as they do in the
case of spectrally negative L\'evy processes, see \cite[Subsections
4.1--4.3]{vidmar:fluctuation_theory}:
\begin{proposition}[Scale functions]\label{proposition:scale_functions_for_USF}
There exists a family of functions $W^{(q)}:\mathbb{R}\to [0,\infty)$ and
$$Z^{(q)}(x)=1+q\int_0^{\lfloor x/h\rfloor h}W^{(q)}(y)dy,\quad
x\in\mathbb{R}$$ defined for each $q\geq 0$ such that for any $q\geq 0$, we
have $W^{(q)}(x)=0$ for $x<0$ and $W^{(q)}$ is characterised on $[0,\infty)$ as the
unique right-continuous and piecewise constant function of exponential order
whose Laplace transform satisfies: $$\widehat{W^{(q)}}(\beta)=\frac{e^{\beta
h}-1}{\beta h(\varphi(\beta)-q)}\text{ for }\beta>\Phi(q).$$
\end{proposition}
\begin{remark}\label{remark:scalefnsUSF}
\begin{enumerate}[(i)]
\item\label{remark:scalefnsUSF:one} The functions $W^{(q)}$ are nondecreasing and the corresponding measures $dW^{(q)}$ are supported in $\mathbb{Z}_h$ for each $q\geq 0$.
\item\label{remark:scalefnsUSF:three} The Laplace transform of the functions $Z^{(q)}$ is given by: $$\widehat{Z^{(q)}}(\beta)=\frac{1}{\beta}\left(1+\frac{q}{\varphi(\beta)-q}\right)\text{ for }\beta>\Phi(q),\, q\geq 0.$$
\item\label{remark:scalefnsUSF:four} For all $q\geq 0$: $W^{(q)}(0)=1/(h\nu(\{h\}))$.
\end{enumerate}
\end{remark}
Finally, the following proposition, whose corollary gives rise to a method for
calculating the values of the scale functions associated to $Y$, follows from
the strong Markov property of $Y$ (see~\cite[Subsection~4.4]{vidmar:fluctuation_theory}).
\begin{proposition}\label{proposition:calculating_scale_functions:basic}
Let $P$ be the transition matrix of the jump chain of the CTMC $Y$ and let $q\geq 0$, $\nu_q:=1+q/\nu(\mathbb{R})$. Assume (for ease of notation and without loss of generality) that $h=1$. Then, seen as vectors, $W^{(q)}:=(W^{(q)}(k))_{k\in\mathbb Z}$ and $Z^{(q)}:=(Z^{(q)}(k))_{k\in\mathbb Z}$ satisfy, entry-by-entry: $$(PW^{(q)})\vert_{\mathbb Z_+}=\nu_qW^{(q)}\vert_{\mathbb Z_+}\text{ and }(PZ^{(q)})\vert_{\mathbb Z_+}=\nu_qZ^{(q)}\vert_{\mathbb Z_+},$$ i.e. with $\gamma:=\nu(\mathbb R)$, $p:=\nu(\{1\})/\gamma$ and $q_n:=\nu(\{-n\})/\gamma$ ($n\in\mathbb{N}$), the recursive relations (for $n\in \mathbb{N}\cup \{0\}$):
\begin{equation}\label{equation:recursion:Wq_basic}
pW^{(q)}(n+1)=\left(1+\frac{q}{\gamma}\right)W^{(q)}(n)-\sum_{k=1}^nq_kW^{(q)}(n-k),
\end{equation}
and
\begin{equation}\label{equation:recursion:Zq:basic}
pZ^{(q)}(n+1)+\left(1-p-\!\!\sum_{k=1}^{n-1}q_k\right)=\left(1+\frac{q}{\gamma}\right)Z^{(q)}(n)-\!\!\sum_{k=1}^{n-1}q_kZ^{(q)}(n-k)
\end{equation}
hold true. Additionally $W^{(q)}\vert_{\mathbb{Z}^-}=0$ with $W^{(q)}(0)=1/\nu(\{1\})$, whereas $Z^{(q)}\vert_{\mathbb{Z}_-}=1$.
\end{proposition}
\begin{corollary}\label{proposition:calculating_scale_functions}
Continue to assume $h=1$. We have for all $n\in \mathbb{N}\cup \{0\}$:
\begin{equation}\label{equation:recursion:Wq}
W^{(q)}(n+1)=W^{(q)}(0)+\sum_{k=1} ^{n+1}W^{(q)}(n+1-k)\frac{q+\nu(-\infty,-k]}{\nu(\{1\})},\quad W^{(q)}(0)=1/\nu(\{1\}),
\end{equation}
and for $\widetilde{Z^{(q)}}:=Z^{(q)}-1$,
\begin{equation}\label{equation:recursion:Zq}
\widetilde{Z^{(q)}}(n+1)=(n+1)\frac{q}{\nu(\{1\})}+\sum_{k=1}^n\widetilde{Z^{(q)}}(n+1-k)\frac{q+\nu(-\infty,-k]}{\nu(\{1\})},\quad \widetilde{Z^{(q)}}(0)=0.
\end{equation}
\end{corollary}
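The recursions \eqref{equation:recursion:Wq} and \eqref{equation:recursion:Zq} are immediate to implement; the following Python sketch (ours, for illustration only; the example measure is arbitrary) computes $W^{(q)}$ and $Z^{(q)}$ on $\{0,\ldots,N\}$ from $\nu(\{1\})$ and the tails $\nu(-\infty,-k]$, at a cost of $O(N^2)$ arithmetic operations:
\begin{verbatim}
import numpy as np

# Direct transcription of (equation:recursion:Wq)-(equation:recursion:Zq), h = 1.
# Inputs: nu1 = nu({1}) and tail(k) = nu((-infinity, -k]).

def scale_functions(nu1, tail, q, N):
    """Return W^{(q)} and Z^{(q)} evaluated at 0, 1, ..., N."""
    W = np.zeros(N + 1)
    Zt = np.zeros(N + 1)                    # Zt := Z^{(q)} - 1, with Zt(0) = 0
    W[0] = 1.0 / nu1
    coef = np.array([(q + tail(k)) / nu1 for k in range(1, N + 1)])
    for n in range(N):
        # W(n+1) = W(0) + sum_{k=1}^{n+1} W(n+1-k) (q + nu(-inf,-k]) / nu({1})
        W[n + 1] = W[0] + np.dot(W[n::-1], coef[:n + 1])
        # Zt(n+1) = (n+1) q / nu({1})
        #           + sum_{k=1}^{n} Zt(n+1-k) (q + nu(-inf,-k]) / nu({1})
        Zt[n + 1] = (n + 1) * q / nu1 + np.dot(Zt[n:0:-1], coef[:n])
    return W, 1.0 + Zt

# Example: nu = delta_1 + 2*delta_{-1} and q = 1/2.
W, Z = scale_functions(1.0, lambda k: 2.0 if k == 1 else 0.0, 0.5, N=10)
print(W[:4])
print(Z[:4])
\end{verbatim}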
\begin{remark}\label{remark:numerics:three}
Whilst we have based our algorithm on Eq.~\eqref{equation:recursion:Wq} and~\eqref{equation:recursion:Zq}, nevertheless Eq.~\eqref{equation:recursion:Wq_basic} and \eqref{equation:recursion:Zq:basic} should not be discounted entirely. For example, they allow us
to make the following observation.
Recall $W_h$ is given by~\eqref{equation:recursion:Wq}, as applied to the process $Y=X^h/h$; equivalently, it may be obtained from \eqref{equation:recursion:Wq_basic}, again as applied to the process $Y=X^h/h$. Assume next (by scaling, without loss of generality) $V\leq x\in \mathbb{Z}_h$.
Suppose the L\'evy measure $\lambda$ of $X$ is modified below (possibly including) the level $-x$ in such a way that $\lambda(-\infty,-x]$ is preserved, whilst $ \mathbbm{1}_{(-x,0)}\cdot\lambda$, $\sigma^2$ and $\mu$ are kept fixed. Then \eqref{equation:recursion:Wq_basic}, as applied to $X^h/h$ for computing $W_h$ on the interval $[0,x]$, remains unaffected. This is because this recursion up to level $x$ depends solely on the probabilities of the jump-sizes with modulus at most $x-h$ and on the total L\'evy mass of the approximating L\'evy chain. The latter, however, do not change by said transformation of the L\'evy measure (see \eqref{eq:laplace_exponent:scheme1}-\eqref{eq:laplace_exponent:scheme2}). Consequently, since, as we shall see, $W_h$ converges to $W$, as $h\downarrow 0$, this also means that the scale function $W$ itself on the interval $[0,x]$ is invariant under such a transformation. Analogously for $W^{(q)}$ and $Z^{(q)}$.
\end{remark}
\section{Convergence of scale functions}\label{section:Convergence_of_scale_functions}
First we fix some notation. Pursuant to \cite[Subsections 8.1 \& 8.2]{kyprianou} (respectively Section~\ref{section:upwards_skip_free}) we associate henceforth with $X$ (respectively $X^h$) two families of scale functions $(W^{(q)})_{q\geq 0}$ and $(Z^{(q)})_{q\geq 0}$ (respectively $(W^{(q)}_h)_{q\geq 0}$ and $(Z^{(q)}_h)_{q\geq 0}$, $h\in (0,h_\star)$). Note that these functions are defined on the whole of $\mathbb{R}$, are nondecreasing, c\`adl\`ag, with $W^{(q)}(x)=W^{(q)}_h(x)=0$ and $Z^{(q)}(x)=Z^{(q)}_h(x)=1$ for $x\in (-\infty,0)$. We also let $\Phi(0)$ (respectively $\Phi^h(0)$) be the largest root of $\psi\vert_{[0,\infty)}$ (respectively $\psi^h\vert_{[0,\infty)}$) and denote by $\Phi$ (respectively $\Phi^h$) the inverse of $\psi\vert_{[\Phi(0),\infty)}$ (respectively $\psi^h\vert_{[\Phi^h(0),\infty)}$, $h\in (0,h_\star)$). As usual $W$ (resp. $W_h$) denotes $W^{(0)}$ (resp. $W^{(0)}_h$, $h\in (0,h_\star)$).
Next, for $q\geq 0$, recall the Laplace transforms of the functions $W^{(q)}$ and $Z^{(q)}$ \cite[p. 214, Theorem~8.1]{kyprianou} (for $\beta>\Phi(q)$): $\int_0^\infty e^{-\beta x}W^{(q)}(x)dx=1/(\psi(\beta)-q)$ and $\int_0^\infty e^{-\beta x}Z^{(q)}(x)dx=\frac{1}{\beta}\left(1+\frac{q}{\psi(\beta)-q}\right)$ (where the latter formula follows using e.g. integration by parts). The Laplace transforms of $W^{(q)}_h$ and $Z^{(q)}_h$, $h\in (0,h_\star)$, follow from Proposition~\ref{proposition:scale_functions_for_USF} and Remark~\ref{remark:scalefnsUSF}~\ref{remark:scalefnsUSF:three}.
\begin{proposition}[Pointwise convergence]\label{remark:pointwise_convergence}\label{proposition:scales:pointwise_convergence}
Suppose $\psi^h\to \psi$ and $\Phi^h\to\Phi$ pointwise as $h\downarrow 0$. Then, for each $q\geq 0$, $W^{(q)}_{h}\to W^{(q)}$ and $Z^{(q)}_{h}\to Z^{(q)}$ pointwise, as $h\downarrow 0$.
\end{proposition}
\begin{remark}\label{remark:convergencePhi}
We will see in \ref{delta1} of Subsection~\ref{subsection:the_difference} that, in fact, $\psi^h\to \psi$ locally uniformly in $[0,\infty)$ as $h\downarrow 0$, which implies that $\Phi^h\to\Phi$ pointwise as $h\downarrow 0$. In particular, given any $q\geq 0$, $\gamma>\Phi(q)$ implies $\gamma>\Phi^h(q)$ for all $h\in (0,h_0)$, for some $h_0>0$.
\end{remark}
\begin{proof}
Since $\Phi^h(q)\to \Phi(q)$ as $h\downarrow 0$, it follows via integration by parts ($\int_{[0,\infty)}e^{-\beta x}dF(x)=\beta\int_{(0,\infty)}e^{-\beta x}F(x)dx$ for any $\beta\geq 0$ and any nondecreasing right-continuous $F:\mathbb{R}\to\mathbb{R}$ vanishing on $(-\infty,0)$ \cite[Chapter 0, (4.5) Proposition]{revuzyor}) that, for some $h_0>0$, the Laplace transforms of $dW^{(q)}$, $dZ^{(q)}$, $(dW^{(q)}_h)_{h\in (0,h_0)}$ and $(dZ^{(q)}_h)_{h\in (0,h_0)}$, are defined (i.e. finite) on a common halfline. These measures are furthermore concentrated on $[0,\infty)$ and since $\psi^h\to\psi$ pointwise as $h\downarrow 0$, then $\widehat{dW^{(q)}_h}\to \widehat{dW^{(q)}}$ and $\widehat{dZ^{(q)}_h}\to \widehat{dZ^{(q)}}$ pointwise as $h\downarrow 0$. By \cite[p. 110, Theorem 8.5]{bhattacharya}, it follows that $dW^{(q)}_{h_n}\to dW^{(q)}$ and $dZ^{(q)}_{h_n}\to dZ^{(q)}$ vaguely as $n\to\infty$, for any sequence $(h_n)_{n\geq 1}\downarrow 0$. This implies that, as $h\downarrow 0$, $W^{(q)}_{h}\to W^{(q)}$ (respectively $Z^{(q)}_{h}\to Z^{(q)}$) pointwise at all points of continuity of $W^{(q)}$ (respectively $Z^{(q)}$). Now, the functions $Z^{(q)}$ are continuous everywhere, whereas $W^{(q)}$ is continuous on $\mathbb{R}\backslash \{0\}$ and has a jump at $0$, if and only if $X$ has sample paths of finite variation \cite[p. 222, Lemma 8.6]{kyprianou}. In the latter case, however, we necessarily have $\sigma^2=0$ and $\int_{[-1,0)}\vert y\vert\lambda(dy)<\infty$ \cite[p. 140, Theorem 21.9]{sato} and the jump size is $W^{(q)}(0)=1/\mu_0$ (see Section~\ref{section:setting_and_notation} for definition of $\mu_0$). By Remark~\ref{remark:scalefnsUSF}~\ref{remark:scalefnsUSF:four} and \eqref{eq:laplace_exponent:scheme2}, $W^{(q)}_h(0)=1/(h\lambda^h(\{h\}))=1/(\mu-\mu^h+c_0^h/h)$. The latter quotient, however, converges to $1/\mu_0$, as $h\downarrow 0$, by the DCT (since $\int_{[-1,0)}\vert y\vert\lambda(dy)<\infty$ in this case, and $c_0^h\leq (h/2)\int_{[-V\land (h/2),0)}\vert y\vert\lambda(dy)$).
\end{proof}
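This convergence is readily observed numerically. In the following sketch of ours, $X=\sigma B+\mu\,\mathrm{id}$ is a Brownian motion with drift $\mu>0$ and $q=0$, for which the scale function is available in closed form, $W(x)=(1-e^{-2\mu x/\sigma^2})/\mu$; under scheme 1 the chain $X^h$ jumps by $\pm h$ only, and comparing the Laplace transforms of Proposition~\ref{proposition:scale_functions_for_USF} shows $W_h(x)=h^{-1}W_Y(x/h)$ for the unit-lattice chain $Y=X^h/h$, whose scale function obeys a one-term instance of \eqref{equation:recursion:Wq}:
\begin{verbatim}
import numpy as np

# W_h -> W for X = sigma*B + mu*id (q = 0, mu > 0), with the closed form
# W(x) = (1 - exp(-2 mu x / sigma^2)) / mu.  Scheme 1 yields a +-h chain:
#   lam_plus  = sigma^2/(2h^2) + mu/(2h),  lam_minus = sigma^2/(2h^2) - mu/(2h).

sigma2, mu, x = 1.0, 0.5, 2.0
W_exact = (1.0 - np.exp(-2.0 * mu * x / sigma2)) / mu

for h in [0.2, 0.1, 0.05, 0.025]:
    lam_p = sigma2 / (2 * h ** 2) + mu / (2 * h)
    lam_m = sigma2 / (2 * h ** 2) - mu / (2 * h)
    n = int(round(x / h))
    W_Y = np.zeros(n + 1)
    W_Y[0] = 1.0 / lam_p
    for k in range(n):            # W_Y(k+1) = W_Y(0) + W_Y(k) * lam_m / lam_p
        W_Y[k + 1] = W_Y[0] + W_Y[k] * lam_m / lam_p
    # infinite variation paths: compare W(x) with W_h(x-h) = W_Y(x/h - 1) / h
    print(h, abs(W_exact - W_Y[n - 1] / h))
\end{verbatim}
The printed errors decrease roughly by a factor of four per halving of $h$, anticipating the quadratic order established in Proposition~\ref{proposition:convergence:BM+drift:Wq}, as well as the choice of the shifted comparison $W_h(\cdot-h)$ discussed in Paragraph~\ref{subsubsection:precise_statement} below.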
\section{Rates of convergence}\label{section:convergence_rates}
In this section we establish our main result, Theorem~\ref{theorem:rates_for_scale_functions} from the Introduction. Subsection~\ref{subsection:method_preliminary_etc} describes the general method of proof and establishes some preliminary observations and notation. Subsection~\ref{subsection:auxiliary_technical_results}
contains technical results, notationally and otherwise independent of the rest of the text, which are applied time and again in the sequel. Then Subsections~\ref{subsection:the_difference}--\ref{subsection:sclaes:diffusion0infvariation} establish a series of convergence results, which together imply Theorem~\ref{theorem:rates_for_scale_functions}. Finally, Subsection~\ref{subsection:functionals_of_scale_fncs_conv} contains a convergence result for the derivatives $W^{(q)\prime}$.
\subsection{Method of proof, preliminary observations and notation}\label{subsection:method_preliminary_etc}
The key step in the proof of Theorem~\ref{theorem:rates_for_scale_functions}
consists of a detailed analysis of the relevant differences arising in the
integral representations (see
Paragraph~\ref{subsubsection:integral_representations}) of the scale functions,
see Paragraph~\ref{subsubsection:precise_statement}.
A more detailed explanation of the method of proof
will be given in
Paragraph~\ref{subsubsection:method_for_obtaining_the_rates_of_convergence}.
\begin{remark}
With reference to Subsection~\ref{subsection:connection_to_IDE}, there is of course extensive literature on numerical solutions to integro-differential equations (IDE) of the relevant (Volterra) type (viz. Eq.~\eqref{eq:integro-differential}). This literature will, however, typically assume at least the continuity of the kernel appearing in the integral of the IDE, to even pose the problem, and obtain rates of convergence under additional smoothness conditions thereon (and on the solution of the IDE) \cite[Chapters~2 and~3]{brunner} \cite[Chapters~7 and~11]{linz}. In our case the kernel appearing in \eqref{eq:integro-differential} is of course not (necessarily) even continuous (let alone possessing higher degrees of smoothness). Further, discounting for a moment the continuity requirement on the kernel, which may appear technical, some relevant \emph{general} results on convergence do exist, e.g. \cite[p. 102, Theorem 7.2]{linz} for the case $\sigma^2=0$ \& $\lambda(\mathbb{R})<+\infty$, but are not really (directly) applicable, since one would need to \emph{a priori} establish (at least) a rate of convergence for the difference between the integral appearing in \eqref{eq:integro-differential} and its discretization (local consistency error; see \cite[p. 101, Eq.~(7.12)]{linz}). This does not appear possible in general without knowledge of the (sufficient) smoothness properties of the target function $W^{(q)}$ (the latter not always being clear; see \cite{kyprianou:smoothness}) and indeed, it would seem, those of the tail $(y \mapsto \lambda(-\infty,-y))$. Such an error analysis would be further complicated when $\sigma^2>0$ (respectively $\kappa(0)=\infty$), since then we are dealing with the discretization also of the derivative of $W^{(q)}$ (respectively the integral in \eqref{eq:integro-differential} cannot be split up as the difference of the integrals of each individual term of the integrand). It is then not very likely that looking at this problem from the integro-differential perspective alone would allow us to obtain rates of convergence, let alone sharp ones (at least not in general).
By contrast, the method for obtaining the sharp rates of convergence that we shall use, based on the integral representations of the scale functions and their approximations, will allow us to handle all the cases within a single framework.
\end{remark}
\subsubsection{Integral representations of scale
functions}\label{subsubsection:integral_representations}
\begin{proposition}\label{proposition:well-posedness}
Let $q\geq 0$. For all $\beta\in\mathbb{C}$ with $\Re\beta>\Phi(q)$, $\psi(\beta)-q\ne 0$ (respectively $\psi^h(\beta)-q\ne 0$) and one has $(\psi(\beta)-q)\widehat{W^{(q)}}(\beta)=1$ (respectively $\beta h(\psi^h(\beta)-q)\widehat{W^{(q)}_h}(\beta)=e^{\beta h}-1$), and $\beta \widehat{Z^{(q)}}(\beta)=1+\frac{q}{\psi(\beta)-q}$ (respectively $\beta \widehat{Z^{(q)}_h}(\beta)=1+\frac{q}{\psi^h(\beta)-q}$) for the scale functions of $X$ (respectively $X^h$, $h\in (0,h_\star)$).
\end{proposition}
\begin{proof}
The stipulated equalities extend from $\beta>\Phi(q)$ real, to complex $\beta$ with $\Re\beta>\Phi(q)$, via analytic continuation, using expressions for the Laplace transforms of the scale functions (the latter having been noted in Section~\ref{section:Convergence_of_scale_functions}). In particular, so extended, they then imply $\psi(\beta)-q\ne 0$ for the range of $\beta$ as given.
\end{proof}
\begin{corollary}[Integral representation of scale functions]\label{corollary:scale_fncs_laplace-transforms}
Let $q\geq 0$. For any $\gamma>\Phi(q)$, we have, for all $x>0$ (with $\beta:=\gamma+is$):
\begin{equation}\label{eq:integral_represantation_Wq}
W^{(q)}(x)=\frac{1}{2\pi}\lim_{T\to\infty}\int_{-T}^T\frac{e^{\beta x}}{\psi(\beta)-q}ds
\end{equation}
and
\begin{equation}
Z^{(q)}(x)=\frac{1}{2\pi}\lim_{T\to\infty}\int_{-T}^T\frac{e^{\beta x}}{\beta}\left(1+\frac{q}{\psi(\beta)-q}\right)ds.
\end{equation}
Likewise, for any $h\in (0,h_\star)$ and then any $\gamma>\Phi^h(q)$, we have, for all $x\in \mathbb{Z}_h^+$ (again with $\beta:=\gamma+is$):
\begin{equation}\label{eq:integral_represantation_Wqh}
W^{(q)}_h(x)=\frac{1}{2\pi}\int_{-\pi/h}^{\pi/h}\frac{e^{\beta (x+h)}}{\psi^h(\beta)-q}ds
\end{equation}
and
\begin{equation}
Z^{(q)}_h(x)=\frac{1}{2\pi}\int_{-\pi/h}^{\pi/h}\frac{e^{\beta x}}{\beta}\frac{\beta h}{1-e^{-\beta h}}\left(1+\frac{q}{\psi^h(\beta)-q}\right)ds.
\end{equation}
\end{corollary}
\begin{proof}
First note that $W^{(q)}$ and $Z^{(q)}$ (respectively $W^{(q)}_h$ and $Z^{(q)}_h$) are of $\gamma$-exponential order for all $\gamma>\Phi(q)$ (respectively $\gamma>\Phi^h(q)$, $h\in (0,h_\star)$). Then apply the inversion theorem for the Laplace transform \cite[Section 3.3]{davies} (respectively for the $Z$-transform \cite[p. 11]{jury}).
\end{proof}
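As a cross-check of these representations (again a numerical sketch of ours; all parameters are arbitrary choices), one may evaluate \eqref{eq:integral_represantation_Wqh} by direct quadrature for the $\pm h$ chain arising from a Brownian motion with drift, and compare with the recursion of Corollary~\ref{proposition:calculating_scale_functions}; since $q=0$ here, any $\gamma>\Phi^h(0)=0$ will do:
\begin{verbatim}
import numpy as np

# Trapezoidal quadrature of (eq:integral_represantation_Wqh) for the +-h chain
# of X = sigma*B + mu*id, compared with the value produced by the recursion.

sigma2, mu, h, x, gamma = 1.0, 0.5, 0.05, 1.0, 0.5
lam_p = sigma2 / (2 * h ** 2) + mu / (2 * h)
lam_m = sigma2 / (2 * h ** 2) - mu / (2 * h)

def psi_h(beta):
    return lam_p * (np.exp(beta * h) - 1.0) + lam_m * (np.exp(-beta * h) - 1.0)

s = np.linspace(-np.pi / h, np.pi / h, 200001)
vals = np.exp((gamma + 1j * s) * (x + h)) / psi_h(gamma + 1j * s)
ds = s[1] - s[0]
W_quad = ((vals.sum() - 0.5 * (vals[0] + vals[-1])) * ds).real / (2 * np.pi)

n = int(round(x / h))                 # the recursion, as in the sketch above
W_Y = np.zeros(n + 1)
W_Y[0] = 1.0 / lam_p
for k in range(n):
    W_Y[k + 1] = W_Y[0] + W_Y[k] * lam_m / lam_p
print(W_quad, W_Y[n] / h)             # the two numbers should agree
\end{verbatim}
(The integrand is smooth and the interval compact, so the quadrature error is negligible next to the quantities being compared.)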
\subsubsection{The differences $\Delta_W^{(q)}$ and $\Delta_Z^{(q)}$}\label{subsubsection:precise_statement}
For $x\geq 0$, let $T_x$ (resp. $T^h_x$) denote the first entrance time of $X$ (resp. $X^h$) to $[x,\infty)$, let $\underline{X}_t:=\inf \{X_s:s\in [0,t]\}$ (resp. $\underline{X^h}_t:=\inf \{X_s^h:s\in [0,t]\}$), $t\geq 0$, be the running infimum process and $\underline{X}_{\infty}:=\inf\{X_s:s\in [0,\infty)\}$ (resp. $\underline{X^h}_\infty:=\inf\{X_s^h:s\in [0,\infty)\}$) the overall infimum of $X$ (resp. $X^h$, $h\in (0,h_\star)$).
In the case of the spectrally negative process $X$, it follows from \cite[Theorem 8.1~(iii)]{kyprianou}, regularity of $0$ for $(0,\infty)$ \cite[p. 212]{kyprianou}, dominated convergence and continuity of $W^{(q)}\vert_{(0,\infty)}$ that, for $q\geq 0$ and $\{x,y\}\subset \mathbb{R}^+$:
\begin{equation}\label{eq:comparison_issue_X}
\mathsf E[e^{-q T_y}\mathbbm{1}(\underline{X}_{T_y}\geq -x)]=\frac{W^{(q)}(x)}{W^{(q)}(x+y)}\text{ and }\mathsf E[e^{-q T_y}\mathbbm{1}(\underline{X}_{T_y}>-x)]=\frac{W^{(q)}(x)}{W^{(q)}(x+y)}.
\end{equation}
On the other hand, we find in \cite[Theorem 4.6]{vidmar:fluctuation_theory} the direct analogues of these two formulae in the case of the approximating processes $X^h$, $h\in (0,h_\star)$, as being ($q\geq 0$, $\{x,y\}\subset \mathbb{Z}_h^{++}$):
\begin{equation}\label{eq:comparison_issue_Xh}
\mathsf E[e^{-q T^h_y}\mathbbm{1}(\underline{X^h}_{T_y^h}\geq -x)]=\frac{W^{(q)}_h(x)}{W^{(q)}_h(x+y)}\text{ and }\mathsf E[e^{-q T_y^h}\mathbbm{1}(\underline{X^h}_{T_y^h}>-x)]=\frac{W^{(q)}_h(x-h)}{W^{(q)}_h(x-h+y)}.
\end{equation}
We conclude by comparing \eqref{eq:comparison_issue_X} with \eqref{eq:comparison_issue_Xh} that there is \emph{no a priori probabilistic reason} to favour either $W^{(q)}_h$ or $W^{(q)}_h(\cdot-h)$ in the choice of which of these two quantities to compare to $W^{(q)}$. Nevertheless, this choice is not completely arbitrary:
\noindent (a) In view of \eqref{eq:integral_represantation_Wq} and \eqref{eq:integral_represantation_Wqh}, the quantity $W^{(q)}_h(\cdot-h)$ seems more favourable (cf. also the findings of Proposition~\ref{proposition:convergence:BM+drift:Wq}, especially when $q=\vert \mu\vert=0$). In addition, when $X$ has sample paths of infinite variation, a.s., $W^{(q)}(0)$ is equal to zero \cite[p. 33, Lemma 3.1]{kuznetsovkyprianourivero} and so is $W^{(q)}_h(-h)$, whereas $W^{(q)}_h(0)$ is always strictly positive ($h\in (0,h_\star)$).
\noindent (b) On the other hand, when $X$ has sample paths of finite variation, a.s., then $W^{(q)}(0)=1/\mu_0>0$ \cite[p. 33, Lemma 3.1]{kuznetsovkyprianourivero} and if in addition the L\'evy measure is finite, then in fact also $W^{(q)}_h(0)=1/\mu_0$ for all $h\in (0,h_\star)$.
\begin{remark}\label{remark:basic}
\begin{enumerate}[(i)]
\item\label{remark:basic:i} It follows from the above discussion that it is reasonable to approximate $W^{(q)}$ by $W^{(q)}_h(\cdot-h)$ (resp. $W^{(q)}_h$), when $X$ has sample paths of infinite (resp. finite) variation (a.s.). Indeed, in the Brownian motion with drift case, approximating $W^{(q)}$ by $W^{(q)}_h$, or even the average $(W^{(q)}_h+W^{(q)}_h(\cdot-h))/2$, rather than by $W^{(q)}_h(\cdot-h)$, would lower the order of convergence from quadratic to linear (see Proposition~\ref{proposition:convergence:BM+drift:Wq}).
\item When $q=0$ or $x=0$, $Z^{(q)}(x)=Z^{(q)}_h(x)=1$, $h\in (0,h_\star)$. Thus, when comparing these functions, we shall always assume $q\land x>0$, this being the only interesting case.
\end{enumerate}
\end{remark}
In view of Remark~\ref{remark:basic}~\ref{remark:basic:i} we define $\delta^0$ to be equal to $0$ or $1$ according as the sample paths of $X$ are of finite or infinite variation (a.s.). Fix $q\geq 0$. For $h\in (0,h_\star)$ we then define the differences:
\begin{equation}\label{eq:diffW}
\Delta_{W}^{(q)}(x,h):=W^{(q)}(x)-W^{(q)}_h(x-\delta^0h),\quad x\in \mathbb{Z}_h^{++}\cup \{\delta^0h\}
\end{equation}
and
\begin{equation}\label{eq:diffZ}
\Delta_Z^{(q)}(x,h):=Z^{(q)}(x)-Z^{(q)}_h(x),\quad x\in\mathbb{Z}_h^{++}.
\end{equation}
Fix further any $\gamma>\Phi(q)$. Let $h\in (0,h_\star)$ be such that also $\gamma>\Phi^h(q)$. Then Corollary~\ref{corollary:scale_fncs_laplace-transforms} implies, for any $x\in \mathbb{Z}_h^{++}\cup \{\delta^0h\}$ (we always let, here and in the sequel, $\beta:=\gamma+is$ to shorten notation):
\footnotesize
\begin{eqnarray}\label{eq:difference:Wqs} \nonumber
e^{-\gamma x}2\pi \Delta_{W}^{(q)}(x,h)&=&\underbrace{\lim_{T\to\infty}\int_{(-T,T)\backslash (-\pi/h,\pi/h)}e^{isx}\frac{ds}{\psi(\beta)-q}}_{\mytag{(a)}{W:a}}+\underbrace{\int_{[-\pi/h,\pi/h]}e^{isx}\left[\frac{\psi^h-\psi}{(\psi-q)(\psi^h-q)}\right](\beta)ds}_{\mytag{(b)}{W:b}}+\\
&&\underbrace{(1-\delta^0)\int_{[-\pi/h,\pi/h]}e^{isx}\left(1-e^{\beta h}\right)\frac{ds}{\psi^h(\beta)-q}}_{\mytag{(c)}{W:c}}
\end{eqnarray}
\normalsize
whereas for $x\in\mathbb{Z}_h^{++}$:
\footnotesize
\begin{eqnarray}\label{eq:difference:Zqs}
\nonumber e^{-\gamma x}2\pi\Delta_{Z}^{(q)}(x,h)&=&\underbrace{\lim_{T\to\infty}\!\int_{(-T,T)\backslash (-\pi/h,\pi/h)}\frac{e^{isx}}{\beta}\left(\frac{q}{\psi(\beta)-q}\right)ds}_{\mytag{(a)}{Z:a}}\\\nonumber
&+&\underbrace{\int_{[-\pi/h,\pi/h]}\frac{e^{isx}}{\beta}\left(1-\frac{\beta h}{1-e^{-\beta h}}\right)\left(\frac{q}{\psi^h(\beta)-q}\right)ds}_{\mytag{(b)}{Z:b}}\\
&+&\underbrace{q\int_{[-\pi/h,\pi/h]}\frac{e^{isx}}{\beta}\left[\frac{\psi^h-\psi}{(\psi^h-q)(\psi-q)}\right](\beta)ds}_{\mytag{(c)}{Z:c}}.
\end{eqnarray}
\normalsize
Note that in \eqref{eq:difference:Zqs} we have taken into account that the difference between the inverse Laplace and inverse $Z$ transform, for the function, which is identically equal to $1$, vanishes identically.
\begin{remark}\label{remark:approx_by_mean}
Notice that if we were to approximate $W^{(q)}(x)$ by the average $\frac{1}{2}(W^{(q)}_h(x)+W^{(q)}_h(x-h))$ instead of by $W^{(q)}_h(x-\delta^0h)$, and were to adapt accordingly the definition in \eqref{eq:diffW}, the resulting change to \eqref{eq:difference:Wqs} would be that term \ref{W:c}
would always be present, with $(1-\delta^0)$ replaced by $1/2$.
Now, when the sample paths are of finite variation (hence when $\delta^0=0$), none of the arguments would change, and the same theoretical rates of convergence would obtain. Indeed, as we will see in the proof of Proposition~\ref{proposition:Wqconvergence:trivial_diffusion_FV}, in this case \eqref{eq:difference:Wqs}\ref{W:c} admits an estimate that yields a linear order of convergence ($O(h)/x$), and terms of the same linear order arise from \ref{W:a} and \ref{W:b} as well.
However, when the sample paths are of infinite variation (hence when $\delta^0=1$), then in the estimate of the difference $\Delta_W^{(q)}$ we would have to \emph{add} to the error also an estimate of \eqref{eq:difference:Wqs}\ref{W:c} (with $1-\delta^0$ replaced by $1/2$ therein), which in general would then worsen the theoretical order of convergence. (Cf. also Remark~\ref{remark:basic}~\ref{remark:basic:i}.)
\end{remark}
\subsubsection{Method for obtaining the rates of convergence in \eqref{eq:difference:Wqs} and \eqref{eq:difference:Zqs}}\label{subsubsection:method_for_obtaining_the_rates_of_convergence}
Apart from the Brownian motion with drift case, which is treated explicitly, the method for obtaining the rates of convergence for the differences \eqref{eq:difference:Wqs} and \eqref{eq:difference:Zqs} is as follows (recall $\beta=\gamma+is$):
\begin{enumerate}
\item \label{method:scales:one}First we estimate $\vert \psi^h-\psi\vert (\beta)$ to control the numerators. In particular, we are able to conclude $\psi^h\to\psi$, uniformly in bounded subsets of $\overline{\mathbb{C}^{\rightarrow}}$. See Subsection~\ref{subsection:the_difference}.
\item \label{method:scales:two} Then we show $\vert\psi ^h-q\vert(\beta)$ is suitably bounded from below on $s\in (-\pi/h,\pi/h)$, uniformly in $h\in [0, h_0)$, for some $h_0>0$. This property, referred to as \emph{coercivity}, controls the denominators. See Subsection~\ref{subsection:coercivity}.
\item \label{method:scales:three} Finally, using \eqref{method:scales:one} and \eqref{method:scales:two}, one can estimate the integrals appearing in \eqref{eq:difference:Wqs} and \eqref{eq:difference:Zqs} either by a direct $\vert\int\cdot ds\vert\leq \int\vert\cdot\vert ds$ argument, or else by first applying a combination of integrations by parts (see \eqref{eq:general_scheme_per_partes} below) and Fubini's Theorem. In the latter case, the estimates of $\vert \frac{d(\psi(\beta)-\psi^h(\beta))}{ds}\vert$ and the growth in $s$, as $\vert s\vert\to\infty$, of $\frac{d(\psi^h-q)(\beta)}{ds}$, $h\in [0,h_\star)$, also become relevant, and we provide these in Subsection~\ref{subsection:_the_difference_of_derivatives}.
\end{enumerate}
\begin{remark}
\noindent (i) Note that the integral representation of the scale functions is crucial for our programme to yield results. The formulae \eqref{eq:difference:Wqs} and \eqref{eq:difference:Zqs} suffice to give a precise rate
locally uniformly in $x\in (0,\infty)$.
\noindent (ii) The integration by parts in \eqref{method:scales:three} is applied according to the general scheme ($f$ differentiable, $x>0$):
\begin{equation}\label{eq:general_scheme_per_partes}
\frac{1}{x}\frac{d}{ds}\left( e^{isx}f(s)\right)=ie^{isx}f(s)+\frac{1}{x}e^{isx}f'(s),
\end{equation}
and with the integral $\int e^{isx}f(s) ds$ (over a relevant domain) in mind. Then, upon integration against $ds$, the left-hand side and the second term on the right-hand side of \eqref{eq:general_scheme_per_partes} admit an estimate which could not be made for $\int e^{isx}f(s) ds$ directly, but in turn a factor of $1/x$ emerges, implying (as we will see) that the final bound is locally uniform in $(0,\infty)$ (in the estimates there is always also present a factor of $e^{\gamma x}$, which, from the perspective of the relative error, and in view of the growth properties of $W^{(q)}$ and $Z^{(q)}$ at $+\infty$ \cite[p. 129, Lemma~3.3]{kuznetsovkyprianourivero}, is perhaps not so bad). In fact, the convergence rate obtained via \eqref{method:scales:one}-\eqref{method:scales:three} is uniform in bounded subsets of $(0,\infty)$, if \eqref{method:scales:three} does not involve integration by parts.
\noindent (iii) Now, it is usually the case that the estimates from \eqref{method:scales:three} may be made by a direct application of the integral triangle inequality. We were not able to avoid integration by parts, however, in the case of the convergence for the functions $W^{(q)}$, $q\geq 0$, and even then only if $\sigma^2=0$ (see Subsections~\ref{subsection:scales:FV} and~\ref{subsection:scales:diffusion=0:infinite_variation}). Particularly delicate is the case when furthermore the sample paths of $X$ are of finite variation (a.s.). In the latter case a key lemma is Lemma~\ref{lemma:FV}, which itself depends crucially on the findings of Proposition~\ref{appendix:proposition:integrability:main}.
\noindent (iv) Even when $\sigma^2=0$, however, numerical experiments (see Section~\ref{section:numerical_illustrations} and Appendix~\ref{appendix:further_examples}) seem to suggest that, at least in some further subcases, one should be able to establish convergence for the functions $W^{(q)}$, $q\geq 0$, which is uniform in bounded (rather than just compact) subsets of $(0,\infty)$. This remains open for future research.
\end{remark}
\begin{remark}
Sharpness of the rates is obtained by constructing specific examples of L\'evy processes, for which convergence is no better than stipulated (cf. the statement of Theorem~\ref{theorem:rates_for_scale_functions}). The key observation here is the following principle of \emph{reduction by domination}:
\begin{quote}
Suppose we seek to prove that $f\geq 0$ converges to $0$ no faster than $g>0$, i.e. that $\limsup_{h\downarrow 0}f(h)/g(h)\geq C>0$ for some $C$. If one can show $f(h)\geq A(h)-B(h)$ and $B=o(g)$, then to show $\limsup_{h\downarrow 0}f(h)/g(h)\geq C$, it is sufficient to establish $\limsup_{h\downarrow 0}A(h)/g(h)\geq C$.
\end{quote}
(This principle was also applied in \cite{vidmarmijatovicsaul} to establish sharpness of the stated rates of convergence there.)
\end{remark}
We will use the basic, but very useful, principle of reduction by domination without explicit reference in the sequel.
\subsubsection{Further notation}
Notation-wise, we let (where $\delta\in [0,1]$):
$$\xi(\delta):=\int_{[-\delta,0)}u^2\lambda(du),\ \kappa(\delta):=\int_{[-1,-\delta)}\vert y\vert\lambda(dy),\ \zeta(\delta):=\delta\kappa(\delta)\text{ and }\gamma(\delta):=\delta^2\lambda([-1,-\delta))$$
and remark that, by the findings of \cite[Lemma 3.8]{vidmarmijatovicsaul}, $\gamma(\delta)+\zeta(\delta)+\xi(\delta)\to 0$ as $\delta\downarrow 0$.
Finally, note that, unless otherwise indicated, we consider henceforth as having fixed:
\begin{equation*}
X,\quad (X^h)_{h\in (0,h_\star)}, \quad q\geq 0\quad \text{and a}\quad \gamma>\Phi(q).
\end{equation*}
We insist that the dependence on $x$ of the error estimates will be kept explicit throughout, whereas the dependence on the L\'evy triplet, $q$ and $\gamma$ will be subsumed in the capital (or small) $O$ ($o$) notation. In particular, the notation $f(x,h)=g(x,h)+l(x)O(h)$, $x\in A$, means that $l(x)>0$ for $x\in A$ and: $$\sup_{x\in A}\vert (f(x,h)-g(x,h))/l(x)\vert=O(h)$$ and analogously when $O(h)$ is replaced by $o(h)$ etc. Further, we shall sometimes resort to the notation: $$A(s):=A^0(s):=\psi(\gamma+is)-q\ (\text{for }s\in \mathbb{R})\text{ and }A^h(s):=\psi^h(\gamma+is)-q\ (\text{for }s\in [-\pi/h,\pi/h]),$$ $h\in (0,h_\star)$, where reference to $q$ and $\gamma$ has been suppressed. We stress that in Subsections~\ref{subsection:the_difference}-\ref{subsection:functionals_of_scale_fncs_conv} we shall have throughout:
\begin{equation*}
\beta:=\gamma+is.
\end{equation*}
\noindent The remainder of our analysis in this section will proceed as follows. First we list in Subsection~\ref{subsection:auxiliary_technical_results}, for the reader's convenience, a number of auxiliary technical results.
Their proofs, which are independent of the analysis in Section~\ref{section:convergence_rates}, are relegated to Appendices~\ref{appendix:technical_lemmas} and~\ref{appendix:some_asymptotic_properties_of_measures_on_R}. Then Subsection~\ref{subsection:the_difference} estimates the absolute difference $\vert \psi^h-\psi\vert$, Subsection~\ref{subsection:_the_difference_of_derivatives} analyzes the derivatives $A^{h\prime}$ and the difference $\vert A^{h\prime}-A'\vert$, while in Subsection~\ref{subsection:coercivity} we prove suitable coercivity of $\vert \psi^h-q\vert$. Subsections~\ref{subsection:scales:BMdrift}-\ref{subsection:sclaes:diffusion0infvariation} deal with the various cases of convergence for the scale functions. Subsection~\ref{subsection:functionals_of_scale_fncs_conv} establishes a convergence result for the derivatives of $W^{(q)}$ in the case when $\sigma^2>0$.
\subsection{Auxiliary technical results}\label{subsection:auxiliary_technical_results}
[Apart from the notation of Subsection~\ref{subsection:further_general_notation}, the content of this subsection is notationally and otherwise independent of the remainder of the text. For proofs see Appendices~\ref{appendix:technical_lemmas} and~\ref{appendix:some_asymptotic_properties_of_measures_on_R}.]
\subsubsection{Some estimates and bounds}\label{subsection:some_technical_estimates_and_bounds}
\begin{lemma}\label{lemma:technical:estimates}
For every $\gamma_\star\in\mathbb{R}_+$ and $h_\star\in \mathbb{R}^+$, there is an $A_0\in (0,\infty)$ such that for all $\gamma\in [-\gamma_\star,\gamma_\star]$, $h\in (0,h_\star)$ and then all $s\in [-\pi/h,\pi/h]$ (with $\beta=\gamma+is$):
\begin{enumerate}[(i)]
\item\label{lemma:technical:estimates:i} $\left\vert\frac{1}{2h}(e^{\beta h}-e^{-\beta h})\right\vert\leq A_0\vert \beta\vert$.
\item\label{lemma:technical:estimates:ii} $\vert e^{\beta h}-1\vert\leq A_0\vert \beta \vert h$.
\item\label{lemma:technical:estimates:iii} $\left\vert \frac{1}{h}(e^{\beta h}-1)-\beta\right\vert\leq A_0h \vert\beta \vert^2$.
\item\label{lemma:technical:estimates:iv} $\left\vert\frac{1}{2h^2}(e^{\beta h}+e^{-\beta h}-2)\right\vert\leq A_0\vert \beta\vert^2$.
\item\label{lemma:technical:estimates:rough:I} $\left\vert\frac{1}{2h^2}(e^{\beta h}+e^{-\beta h}-2-(\beta h)^2)\right\vert\leq A_0 h^2\vert \beta\vert^4.$
\item \label{lemma:technical:estimates:rough:II} $\left\vert \frac{1}{2h}(e^{\beta h}-e^{-\beta h}-2\beta h)\right\vert\leq A_0 h^2\vert \beta\vert^3$.
\end{enumerate}
Further:
\begin{enumerate}[(a)]
\item \label{lemma:technical:coercivity:xx} For any $\xi \in [0,2\pi)$,
$\left(u\mapsto (1-\cos(u))/u^2\right)$ is bounded away from $0$ for $u\in [-\xi,\xi]$.
\item \label{lemma:technical:coercivity:a} For any $\xi\in [0,2\pi)$,
$\left(u\mapsto \frac{1}{u^2}\left(\cosh(u)-1\right)\right)$ is bounded away from $0$ for $\Im u\in [-\xi,\xi]$.
\item \label{lemma:technical:coercivity:b} For any $\xi\in [0,2\pi)$ and $L\in\mathbb{R}$,
$\left(u\mapsto \frac{1}{u}\left(e^u-1\right)\right)$ is bounded away from $0$ for $\Im u\in [-\xi,\xi]$ and $\Re u\geq L$.
\end{enumerate}
\end{lemma}
\begin{remark}
In \ref{lemma:technical:coercivity:xx}-\ref{lemma:technical:coercivity:b}, at $u=0$, the relevant limit (which exists) is taken, in order to make the mappings well-defined at this single point.
\end{remark}
\begin{lemma}\label{lemma:convergence:a_uniformly_bounded_family}
The family of functions $f_{\gamma_0}:[-\pi,\pi]\to\mathbb{R}$, defined by: $f_{\gamma_0}(s_0):=\left(1-\frac{\gamma_0+is_0}{1-e^{-\gamma_0-is_0}}\right)\frac{1}{\gamma_0+is_0}$, $s_0\in [-\pi,\pi]$, is uniformly bounded for $\gamma_0$ belonging to bounded subsets of $\mathbb{R}\backslash \{0\}$.
\end{lemma}
\begin{lemma}\label{lemma:fundamental:inequalities}
\begin{enumerate}[(i)]
\item\label{lemma:technical:3rdorder:i} For any $z\in \overline{\mathbb{C}^{\leftarrow}}$: $\vert e^z-1\vert\leq \vert z\vert$.
\item\label{lemma:technical:3rdorder:ii} There exists $C\in (0,\sqrt{5/2}]$, such that for any $z\in \overline{\mathbb{C}^{\leftarrow}}$: $\vert e^z-z-1\vert\leq C\vert z\vert^2$.
\end{enumerate}
\label{lemma:technical:3rdorder}
\end{lemma}
\begin{lemma}\label{lemma:technical:2ndorder}
\begin{enumerate}[(i)]
\item\label{lemma:technical:2ndorder:i} Let $\{x,y\}\subset \mathbb{R}_-$, $\vert x-y\vert\leq h/2\leq \vert y\vert$. Then for any $\alpha\in\overline{\mathbb{C}^{\rightarrow}}$ we have: $\left\vert e^{\alpha x}-\alpha x-(e^{\alpha y}-\alpha y)\right\vert\leq 2 h \vert\alpha\vert^2\vert y\vert$.
\item\label{lemma:technical:2ndorder:ii} There exists $C\in (0,\infty)$ such that whenever $\{x,y\}\subset \mathbb{R}_-$, $\alpha\in\overline{\mathbb{C}^{\rightarrow}}$ and $\vert x-y\vert\leq h/2\leq \vert y\vert$, we have: $\left\vert e^{\alpha x}-\alpha x-\frac{\alpha^2x^2}{2}-\left(e^{\alpha y}-\alpha y-\frac{\alpha^2y^2}{2}\right)\right\vert\leq C hy^2\vert \alpha\vert^3$.
\end{enumerate}
\end{lemma}
\subsubsection{Some asymptotic properties at $0$ of measures on $\mathbb{R}$}\label{subsection:some_asymptotic_properties_at_zero}
Let $\nu$ be a measure on $\mathbb{R}$.
\begin{proposition}\label{appendix:proposition:integrability:main}
If $\nu$ is compactly supported and locally finite in $\mathbb{R}\backslash \{0\}$, then: $\int\vert x\vert \nu(dx)<\infty$, precisely when $\int_1^\infty \frac{ds}{s^2}\int \nu(dx)(1-\cos(sx))<\infty.$
\end{proposition}
\begin{lemma}\label{lemma:fubini}
Let $r\geq 0$. Then:
\begin{enumerate}[(i)]
\item\label{fubini:one} $\int_{[0,r]} x \nu(dx)=\int_{[0,r]}\nu((t,r])dt$.
\item\label{fubini:two} $\int_{[-r,r]}\vert x\vert \nu(dx)=\int_{[0,r]}\nu([-r,r]\backslash [-t,t])dt$.
\item\label{fubini:three} $\int_{[-1,1]\backslash [-r,r]}\vert x\vert \nu(dx)=rg(r)+\int_{(r,1]}g(t)dt$ whenever $r\leq 1$ and with $g(r):=\nu([-1,1]\backslash [-r,r])$.
\item\label{fubini:four} $\int_{[0,r]}x^2\nu(dx)=2\int_{[0,r]}t\nu((t,r])dt$.
\item\label{fubini:five} $\int_{[-r,r]}x^2\nu(dx)=2\int_{[0,r]}t\nu([-r,r]\backslash [-t,t])dt$.
\end{enumerate}
\end{lemma}
\begin{proposition}\label{proposition:generally_on_measure_asymptotics}
Suppose $g(\delta):=\nu([-1,1]\backslash [-\delta,\delta])\sim 1/\delta^{1+\alpha}$ as $\delta\downarrow 0$, so that in particular $g(\delta)$ is finite for all $0<\delta\leq 1$ and necessarily $\alpha\geq -1$. Then:
\begin{enumerate}[(a)]
\item\label{asymptoptic:a} $\gamma(\delta):=\delta^2\nu([-1,1]\backslash [-\delta,\delta])\sim \delta^{1-\alpha}$ as $\delta\downarrow 0$.
\item\label{asymptoptic:b} $\int_{[-1,1]} x^2\nu(dx)<\infty$, iff $\alpha<1$. If $\alpha\in (-1,1)$, then $\int_{[-\delta,\delta]} x^2\nu(dx)\sim \delta^{1-\alpha}$ as $\delta\downarrow 0$.
\item\label{asymptoptic:c} $\nu$ is a L\'evy measure, iff $\nu(\mathbb{R}\backslash [-1,1])<\infty$, $\nu(\{0\})=0$ and $\alpha<1$.
\item\label{asymptoptic:d} $\int_{[-1,1]} \vert x\vert \nu(dx)=\infty$, iff $0\leq \alpha$.
\item\label{asymptoptic:e} Finally, as $\delta\downarrow 0$, if $\alpha>0$, $\int_{(\delta,1]}g(t)dt\sim \delta^{-\alpha}$ and if $\alpha=0$, then $\int_{(\delta,1]}g(t)dt\sim \vert \log\delta\vert$. In particular, $\zeta(\delta):=\delta \int_{[-1,1]\backslash [-\delta,\delta]}\vert x\vert \nu(dx)\sim \delta^{1-\alpha}$, when $\alpha>0$, respectively $\zeta(\delta)\sim \delta\vert \log\delta\vert$, when $\alpha=0$.
\end{enumerate}
\end{proposition}
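By way of a quick numerical sanity check of \ref{asymptoptic:a} and \ref{asymptoptic:b} (ours; the measure is an arbitrary example), take $\nu(dy)=\vert y\vert^{-(2+\alpha)}\mathbbm{1}_{[-1,1]\backslash \{0\}}(y)dy$ with $\alpha\in (0,1)$, for which $g$ and the truncated second moment admit closed forms:
\begin{verbatim}
# Sanity check of (a) and (b) for nu(dy) = |y|^{-(2+alpha)} dy on [-1,1]\{0}:
#   g(d) = nu([-1,1]\[-d,d]) = 2 (d^{-(1+alpha)} - 1) / (1 + alpha),
#   int_{[-d,d]} x^2 nu(dx)  = 2 d^{1-alpha} / (1 - alpha).

alpha = 0.5

def g(d):
    return 2.0 * (d ** (-(1.0 + alpha)) - 1.0) / (1.0 + alpha)

def second_moment(d):
    return 2.0 * d ** (1.0 - alpha) / (1.0 - alpha)

for d in [1e-1, 1e-2, 1e-3, 1e-4]:
    # both ratios should stabilise, confirming the order delta^{1-alpha}
    print(d, d ** 2 * g(d) / d ** (1.0 - alpha),
          second_moment(d) / d ** (1.0 - alpha))
\end{verbatim}
The first ratio tends to $2/(1+\alpha)$ and the second is identically $2/(1-\alpha)$, in line with \ref{asymptoptic:a} and \ref{asymptoptic:b}.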
\begin{proposition}\label{proposition:generally_on_measure_asymptotics_bis}
Define $\lambda(dx)=\mathbbm{1}_{(-1,1)}(x)\vert x\vert\nu(dx)$. Furthermore, let $\alpha\in (0,1)$, with $\limsup_{\delta\downarrow 0}\nu((-1,1)\backslash [-\delta,\delta])\delta^{1+\alpha}<\infty$. Then each of the quantities $\limsup_{\delta\downarrow 0}\lambda((-1,1)\backslash [-\delta,\delta])\delta^{\alpha}$, $\limsup_{\delta\downarrow 0}\int_{[-\delta,\delta]}x^2\nu(dx)\delta^{\alpha-1}$ and $\sup_{s\in \mathbb{R}\backslash \{0\}}\frac{1}{\vert s\vert^\alpha}\vert \int (e^{isy}-1)\lambda(dy)\vert$ is finite.
\end{proposition}
\subsection{Estimating the absolute difference $\vert \psi^h-\psi\vert$}\label{subsection:the_difference}
Recall that $\beta$ is defined to be $\gamma+is$ throughout. We establish in this subsection two key properties of the difference $\psi^h-\psi$:
\begin{enumerate}[label=($\Delta_\arabic{*}$),ref=($\Delta_\arabic{*}$)]
\item\label{delta1} $\psi^h\to \psi$ as $h\downarrow 0$, uniformly in bounded subsets of $\overline{\mathbb{C}^{\rightarrow}}$.
\item\label{delta2} There exists $A_0\in (0,\infty)$, such that for all $h\in (0,h_\star\land 2)$ and then all $s\in [-\pi/h,\pi/h]$, the following holds (see Table~\ref{table:reference} for values of the parameter $V$):
\begin{enumerate}[(i)]
\item When $\sigma^2>0$: $$\vert \psi^h-\psi\vert(\beta)\leq A_0\left[h^2\vert \beta\vert^4+h\xi(h/2)\vert \beta\vert^3+h\vert \beta \vert+V\zeta(h/2)\vert \beta\vert^2\right].$$ In particular, if in addition $\kappa(0)<\infty$, we have: $$\vert \psi^h-\psi\vert(\beta)\leq A_0[ h^2\vert \beta\vert^4+h\vert \beta\vert^2].$$ If, moreover, $\lambda(\mathbb R)<\infty$, then $$\vert \psi^h-\psi\vert(\beta)\leq A_0[h^2\vert \beta\vert^4+h\vert \beta\vert].$$
\item When $\sigma^2=0$: $$\vert \psi^h-\psi\vert(\beta)\leq A_0\left[h\xi(h/2)\vert \beta\vert^3+(h+\zeta(h/2))\vert \beta\vert^2\right].$$ If in addition $\kappa(0)<\infty$, then: $$\vert \psi^h-\psi\vert(\beta)\leq A_0h\vert \beta\vert^2.$$
\end{enumerate}
\end{enumerate}
\noindent \emph{Proof of \ref{delta1} and \ref{delta2}.}
Indeed, suppose $\sigma^2>0$ (respectively $\sigma^2=0$), so that we are working under scheme 1 (respectively scheme 2). We decompose, referring to \eqref{eq:laplace_exponent}, \eqref{eq:laplace_exponent:scheme1} and \eqref{eq:laplace_exponent:scheme2}, the difference $\psi^h-\psi$ into terms, which allow for straightforward estimates. To wit, for any $h_0\in (0,2]$ and $\rho_0>0$, there exists $A_0\in (0,\infty)$, such that for all $h\in (0,h_0)$ and then all $s\in [-\pi/h,\pi/h]$, as well as all $\rho\in [0,\rho_0]$ (with $\alpha=\rho+is$):
\begin{enumerate}[(1)]
\item\label{item:diffusion>0:first} $\left\vert \sigma^2\left(\frac{e^{\alpha h}+e^{-\alpha h}-2}{2h^2}-\frac{\alpha^2}{2}\right)\right\vert\leq A_0h^2\vert \alpha\vert^4$ by \ref{lemma:technical:estimates:rough:I} of Lemma~\ref{lemma:technical:estimates} (respectively this is void).
\item\label{item:diffusion>0:2} $\left\vert c_0^h\left(\frac{e^{\alpha h}+e^{-\alpha h}-2}{2h^2}-\frac{\alpha^2}{2}\right)\right\vert\leq A_0h^2\xi(h/2)\vert\alpha\vert^4$ by \ref{lemma:technical:estimates:rough:I} of Lemma~\ref{lemma:technical:estimates}.
\item\label{item:diffusion>0:3} By a direct Taylor expansion:
\begin{eqnarray*}
&\phantom{\leq}&\left\vert V\int_{[-h/2,0)}y^2\frac{\alpha^2}{2}\lambda(dy)-\int_{[-h/2,0)}\left(e^{\alpha y}-V\alpha y-1\right)\lambda(dy)\right\vert\\
&\leq& A_0\left(V\vert \alpha\vert^3h\xi(h/2)+(1-V)\vert \alpha\vert h\right).
\end{eqnarray*}
\item\label{item:diffusion>0:4} $\left\vert \mu\left(\frac{e^{\alpha h}-e^{-\alpha h}}{2h}-\alpha\right)\right\vert\leq A_0h^2\vert \alpha\vert^3$ by \ref{lemma:technical:estimates:rough:II} (respectively $\left\vert \mu\left(\frac{e^{\alpha h}-1}{h}-\alpha\right)\right\vert\leq A_0h\vert \alpha\vert^2$ by \ref{lemma:technical:estimates:iii} of Lemma~\ref{lemma:technical:estimates}).
\item\label{item:diffusion>0:5} $\left\vert \left(-\mu^h\right)\left(\frac{e^{\alpha h}-e^{-\alpha h}}{2h}-\alpha\right)\right\vert\leq A_0Vh^2\kappa(h/2)\vert \alpha\vert^3$ (respectively $\left\vert \left(-\mu^h\right)\left(\frac{e^{\alpha h}-1}{h}-\alpha\right)\right\vert\leq A_0Vh\kappa(h/2)\vert \alpha\vert^2$), by the same token, since in fact:
\begin{equation*}
-\mu^h =- \sum_{y\in\mathbb{Z}_h^{--}}y\int_{A_y^h}\mathbbm{1}_{[-V,0)}(z)\lambda(dz)\leq -2V\int_{[-1,-h/2)}z\lambda(dz)= 2V\kappa(h/2).
\end{equation*}
\item\label{item:diffusion>0:last} Finally:
\begin{eqnarray*}
&\phantom{\leq}&\left\vert\sum_{y\in\mathbb{Z}_h^{--}}c_y^h\left(e^{ \alpha y}-1\right)-\alpha \mu^h-\int_{(-\infty,-h/2]}\left(e^{\alpha z}-\alpha z\mathbbm{1}_{[-V,0)}(z)-1\right)\lambda(dz)\right\vert\\
&\leq& \left\vert\!\sum_{y\in \mathbb{Z}_h^{--}}\int_{A^h_y\cap (-\infty,-V)}\left(e^{\alpha y}-e^{\alpha z}\right)\lambda(dz)\right\vert+\left\vert\sum_{y\in \mathbb{Z}_h^{--}}\!\int_{A^h_y\cap [-V,0)}\left(e^{\alpha y}-e^{\alpha z}-V\alpha (y-z) \right)\lambda(dz)\right\vert\\
&\leq&A_0\left(\vert\alpha \vert h+V\vert \alpha\vert^2h\kappa(h/2)\right),
\end{eqnarray*}
by \ref{lemma:technical:estimates:ii} of Lemma~\ref{lemma:technical:estimates} (since $\vert e^{\alpha y}-e^{\alpha z}\vert\leq \vert 1-e^{\alpha (y-z)}\vert$) and \ref{lemma:technical:2ndorder:i} of Lemma~\ref{lemma:technical:2ndorder}.
\end{enumerate}
From the estimates \ref{item:diffusion>0:first}-\ref{item:diffusion>0:last}, \ref{delta1} follows, since any compact subset of $\overline{\mathbb{C}^{\rightarrow}}$ is contained in the rectangle $[0,\rho_0]\times [-\pi/h,\pi/h]$, for all $h\in (0,h_0)$, so long as $\rho_0$ is chosen large enough, and $h_0$ small enough. On the other hand \ref{delta2} follows by taking $h_0=h_\star\land 2$ and $\rho_0=\rho=\gamma$, so that $\alpha=\beta$.\qed
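To see the linear term in the bounds of \ref{delta2} manifest itself numerically (an illustration of ours, not a proof of sharpness), take $\sigma^2>0$, $\mu\in\mathbb{R}$ and $\lambda=\delta_{-1/3}$ (finite, so $V=0$ and scheme 1 applies): the drift and diffusion terms are discretized to second order, and the first-order contribution comes solely from binning the atom onto $\mathbb{Z}_h$:
\begin{verbatim}
import numpy as np

# |psi^h - psi| for sigma^2 > 0 and lambda = delta_{-1/3}:
# psi(beta) = sigma^2 beta^2/2 + mu beta + (e^{-beta/3} - 1), while psi^h
# bins the atom to the lattice point y with -1/3 in A_y^h = [y-h/2, y+h/2).

sigma2, mu, w, beta = 1.0, 0.5, -1.0 / 3.0, 2.0 + 1.0j

def psi(b):
    return 0.5 * sigma2 * b ** 2 + mu * b + (np.exp(b * w) - 1.0)

def psi_h(b, h):
    y = h * np.floor(w / h + 0.5)                 # lattice cell containing w
    out = np.exp(b * y) - 1.0                     # binned atom, c_y^h = 1
    out += mu * (np.exp(b * h) - np.exp(-b * h)) / (2 * h)
    out += sigma2 * (np.exp(b * h) + np.exp(-b * h) - 2) / (2 * h ** 2)
    return out

for h in [0.1, 0.05, 0.025, 0.0125]:
    print(h, abs(psi_h(beta, h) - psi(beta)) / h)  # bounded ratio: error O(h)
\end{verbatim}
The printed ratio settles near a positive constant, so the decay is genuinely linear here, consistent with the term $h\vert\beta\vert$ in the bound for finite $\lambda$.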
\begin{remark}
Pursuant to \ref{delta1} above and Remark~\ref{remark:convergencePhi}, we assume henceforth that $h_\star$ has already been chosen small enough, so that in addition $\gamma>\Phi^h(q)$ for all $h\in (0,h_\star)$.
\end{remark}
\subsection{Estimating the absolute difference $\vert A^{h\prime}-A'\vert$ and growth of $A^{h\prime}$ at infinity}\label{subsection:_the_difference_of_derivatives}
We establish here the following two properties pertaining to the derivatives $A'$ and $A^{h\prime}$, $h\in (0,h_\star)$:
\begin{enumerate}[label=($\Delta^{\prime}_\arabic{*}$),ref=($\Delta^{\prime}_\arabic{*}$)]
\item\label{delta1D} For any finite $h_0\in (0,h_\star]$, there exists an $A_0\in (0,\infty)$ such that for all $h\in [0,h_0)$ and then all $s\in (-\pi/h,\pi/h)$: $$\vert A^{h\prime}(s)\vert\leq A_0\vert \beta\vert^{\epsilon-1},$$ where $\epsilon=2$, if $\sigma^2>0$; $\epsilon=1$, if $\sigma^2=0$ and $\kappa(0)<\infty$; finally, if $\sigma^2=0$ and $\kappa(0)=\infty$, then $\epsilon$ must satisfy \ref{assumption:salient:one} of Assumption~\ref{assumption:salient} from the Introduction.
\item\label{delta2D} There is an $A_0\in (0,\infty)$, such that for all $h\in (0,h_\star\land 2)$ and then all $s\in [-\pi/h,\pi/h]$, the following holds:
\begin{enumerate}[(i)]
\item When $\sigma^2>0$: $$\vert A^{h\prime}(s)-A'(s)\vert\leq A_0(h^2\vert \beta\vert^3+h\xi(h/2)\vert \beta\vert^2+(h+\zeta(h/2))\vert \beta\vert).$$
\item When $\sigma^2=0$: $$\vert A'(s)-A^{h\prime}(s)\vert\leq A_0\left[h+\zeta(h/2)+\xi(h/2)\right]\vert \beta\vert.$$ If in addition $\kappa(0)<\infty$, then: $$\vert A'(s)-A^{h\prime}(s)\vert\leq A_0h\vert \beta\vert.$$
\end{enumerate}
\end{enumerate}
\noindent \emph{Proof of \ref{delta1D} and \ref{delta2D}.}
Indeed, we have, using differentiation under the integral sign, for $s\in \mathbb{R}$:
\begin{equation}\label{eq:FV:derivative_of_A}
A'(s)=i\sigma^2 \beta+i\mu+i\int_{(-\infty,0)}z\left(e^{\beta z}-\mathbbm{1}_{[-V,0)}(z)\right)\lambda(dz).
\end{equation}
Suppose now first that $\sigma^2>0$. Then, for $h\in (0,h_\star)$ and $s\in [-\pi/h,\pi/h]$:
\footnotesize
\begin{equation*}
A^{h\prime}(s)=i\mu\frac{e^{\beta h}+e^{-\beta h}}{2}-i\mu^h\left(\frac{e^{\beta h}+e^{-\beta h}}{2}-1\right)+(\sigma^2+c_0^h)i\frac{e^{\beta h}-e^{-\beta h}}{2h}+i\sum_{y\in \mathbb{Z}_h^{--}}\int_{A_y^h}y\left(e^{\beta y}-\mathbbm{1}_{[-V,0)}(z)\right)\lambda(dz).
\end{equation*}
\normalsize
From these expressions it follows readily, using \ref{lemma:technical:3rdorder:i} of Lemma~\ref{lemma:fundamental:inequalities}, \ref{lemma:technical:estimates:i} and \ref{lemma:technical:estimates:iv} of Lemma~\ref{lemma:technical:estimates}, $h\vert \mu^h\vert\leq 2h\kappa(h/2)\leq 4\int_{[-1,0)}y^2\lambda(dy)$ and $\vert y\vert \leq 2\vert z\vert$, $e^{\gamma y}\leq e^{\gamma(z+h/2)}$ for $z\in A_y^h$, $y\in \mathbb{Z}_h^{--}$, that $A'$ and $A^{h\prime}$ are both bounded by an affine function of $\vert s\vert$ on $s\in (-\pi/h,\pi/h)$, uniformly in $h\in (0,h_0)$ for any finite $h_0\in (0,h_\star]$.
On the other hand, when $\sigma^2=0$, we have for $h\in (0,h_\star)$ and then $s\in [-\pi/h,\pi/h]$:
\begin{eqnarray}\label{eq:FV:derivative_of_Ah}
A^{h\prime}(s)&=&\frac{ic_0^h}{2h}\left(e^{\beta h}-e^{-\beta h}\right)+i(\mu-\mu^h)e^{\beta h}+i\sum_{y\in \mathbb{Z}_h^{--}}c_y^hye^{\beta y}\\
&=&\frac{ic_0^h}{2h}\left(e^{\beta h}-e^{-\beta h}\right)+i\mu e^{\beta h}-i\mu^h(e^{\beta h}-1)+i\sum_{y\in \mathbb{Z}_h^{--}}\int_{A_y^h}y\left(e^{\beta y}-\mathbbm{1}_{[-V,0)}(z)\right)\lambda(dz).\nonumber
\end{eqnarray}
Now, if $\kappa(0)<\infty$, it follows readily from $c_0^h\leq h\kappa(0)$, $-\mu^h\leq 2\kappa(0)$ and $\int_{(-\infty,0)}\vert y\vert e^{\gamma y}\lambda(dy)<\infty$ that $A'$ and $A^{h\prime}$ are bounded, uniformly in $h\in (0,h_0)$ for any finite $h_0\in (0,h_\star]$. If, however, $\kappa(0)=\infty$ and then under Assumption~\ref{assumption:salient}, the desired conclusion of \ref{delta1D} follows from the estimates of Lemma~\ref{lemma:technical:estimates} and Proposition~\ref{proposition:generally_on_measure_asymptotics_bis} using \ref{assumption:salient:one} of Assumption~\ref{assumption:salient}.
Finally, from the above expressions for the derivatives $A^\prime$ and $A^{h\prime}$, \ref{delta2D} follows using Lemma~\ref{lemma:technical:estimates} and a decomposition similar to the one in Subsection~\ref{subsection:the_difference}, which allowed us to establish \ref{delta2}. For example, when $\sigma^2=0$, we have the following decomposition of $A^{h\prime}(s)-A'(s)$ into three summands, each of which is then easily estimated ($h\in (0,h_\star\land 2)$):
\begin{enumerate}[(1)]
\item $\sum_{y\in \mathbb{Z}_h^{--}}\int_{A_y^h}\left[y(e^{\beta y}-\mathbbm{1}_{[-1,0)}(z))-z(e^{\beta z}-\mathbbm{1}_{[-1,0)}(z))\right]\lambda(dz)$;
\item $i(\mu-\mu^h)(e^{\beta h}-1)$;
\item $ic_0^h\left[\frac{e^{\beta h}-e^{-\beta h}}{2h}\right]-i\int_{[-h/2,0)}y(e^{\beta y}-1)\lambda(dy)$.
\end{enumerate}
\qed
\begin{remark}\label{remark:A'forFV}
Note that if $\sigma^2=0$, $\kappa(0)<\infty$, then also $A'(s)=i\mu_0+i\int_{(-\infty,0)}ye^{\beta y}\lambda(dy)$, $s\in \mathbb{R}$.
\end{remark}
\subsection{Coercivity of $\vert\psi^h-q\vert$}\label{subsection:coercivity}
In this subsection we establish the following coercivity property:
\begin{enumerate}[label=(C),ref=(C)]
\item\label{(C)} There exists an $h_0\in (0,h_\star]$ and a $B_0\in (0,\infty)$, such that for all $h\in [0,h_0)$ and then all $s\in (-\pi/h,\pi/h)$, the following holds (recall $\psi^0=\psi$, $\beta=\gamma+is$): $$\vert \psi^h(\beta)-q\vert\geq B_0\vert \beta\vert^\epsilon,$$ where $\epsilon=2$, if $\sigma^2>0$; $\epsilon=1$, if $\sigma^2=0$ and $\kappa(0)<\infty$; finally, if $\sigma^2=0$ and $\kappa(0)=\infty$, then $\epsilon$ must satisfy \ref{assumption:salient:two} of Assumption~\ref{assumption:salient} from the Introduction.
\end{enumerate}
\noindent \emph{Proof of \ref{(C)}.}
(In the argument which follows, once again we refer the reader to expressions \eqref{eq:laplace_exponent}, \eqref{eq:laplace_exponent:scheme1} and \eqref{eq:laplace_exponent:scheme2}.)
Suppose first $\sigma^2>0$, so that we work under scheme 1. Consider $\psi(\beta)$. The diffusion term is certainly quadratic in $s$. The drift term (viewed as a function of $s$) is bounded by an affine function of $\vert s\vert$, and the L\'evy measure integral has subquadratic growth in $s$, as can be seen immediately by the DCT and Lemma~\ref{lemma:technical:3rdorder}:
\begin{equation}\label{equation:subquadratic}
\lim_{R\to\infty}\sup_{\alpha\in \overline{\mathbb{C}^{\rightarrow}},\vert \alpha\vert\geq R}\frac{1}{\vert \alpha\vert^2}\left\vert\int_{(-\infty,0)}\left(e^{\alpha y}-\alpha y\mathbbm{1}_{[-V,0)}(y)-1\right)\lambda(dy)\right\vert=0.
\end{equation}
In addition $(s\mapsto (\psi-q)(\beta))$ is bounded away from zero on bounded subsets of $\mathbb R$, by continuity and Proposition~\ref{proposition:well-posedness}. This establishes the claim for $h=0$.
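In symbols: writing $\beta=\gamma+is$, one has $\Re\left(\frac{1}{2}\sigma^2\beta^2\right)=\frac{1}{2}\sigma^2(\gamma^2-s^2)$, so that, combining the three observations above,
$$\vert\psi(\beta)-q\vert\geq \left\vert\Re(\psi(\beta)-q)\right\vert\geq \frac{1}{2}\sigma^2s^2-o(s^2),\quad\text{as }\vert s\vert\to\infty,$$
which, together with the boundedness away from zero on bounded sets, gives \ref{(C)} for $h=0$ with $\epsilon=2$.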
To establish coercivity for $\psi^h(\beta)-q$, $h>0$, we proceed as follows. First, by \ref{lemma:technical:estimates:i} of Lemma~\ref{lemma:technical:estimates}, for any finite $h_0\in (0,h_\star]$, there exists a $B_0\in (0,\infty)$, such that for all $h\in (0,h_0)$ and then all $s\in (-\pi/h,\pi/h)$: $$\left\vert\frac{1}{2h}(e^{\beta h}-e^{-\beta h})\right\vert\leq B_0\vert \beta\vert.$$ This controls the term involving $\mu$. Next, by \ref{lemma:technical:estimates:rough:II} of Lemma~\ref{lemma:technical:estimates}, again for any finite $h_0\in (0,h_\star]$, there are $\{A_1,A_2\}\subset (0,\infty)$, such that for all $h\in (0,h_0)$ and then all $s\in (-\pi/h,\pi/h)$:
\begin{equation*}
\left\vert - \mu^h\left(\frac{e^{\beta h}-e^{-\beta h}}{2h}-\beta\right)\right\vert\leq A_1h^2\vert \beta\vert^3\vert \mu^h\vert\leq A_2 \vert \beta\vert^2\zeta(h/2)
\end{equation*}
with $\zeta(h/2)\to 0$ as $h\downarrow 0$. Further, just as in \eqref{equation:subquadratic}:
\begin{equation*}
\lim_{R\to\infty}\sup_{\alpha\in \overline{\mathbb{C}^{\rightarrow}},\vert \alpha\vert\geq R}\sup_{h>0}\frac{1}{\vert \alpha\vert^2}\left\vert\sum_{y\in\mathbb{Z}_h^{--}}\int_{A^h_y}\left(e^{\alpha y}-\alpha y\mathbbm{1}_{[-V,0)}(z)-1\right)\lambda(dz)\right\vert= 0,
\end{equation*}
where, additionally, one should note that for $y\in\mathbb{Z}_h^{--}$ and $z\in A_y^h$, $\vert y\vert\leq 2\vert z\vert$.
This, coupled with $\sigma^2>0$ and \ref{lemma:technical:coercivity:a} of Lemma~\ref{lemma:technical:estimates}, implies that there exist $\{B_0,C_0\}\subset (0,\infty)$ and an $h_0\in (0,h_\star]$, such that for all $h\in (0,h_0)$ and then all $s\in (-\pi/h,\pi/h) \backslash (-C_0,C_0)$: $\vert \psi^h(\beta)-q\vert\geq B_0 s^2$. Finally, since, as $h\downarrow 0$, $\psi^h(\beta)-q\to\psi(\beta)-q$ uniformly in $s$ belonging to bounded sets, and $\psi(\beta)-q$ is bounded away from $0$ on such sets, we obtain the asserted result.
Now suppose $\sigma^2=0$ (so that scheme 2 is in effect) and consider first the case when $\kappa(0)<\infty$. With regard to $\psi(\beta)$, note that $\mu_0\beta$ is linear in $s$, whereas:
\begin{equation}\label{eq:linear_coercivity}
\lim_{R\to\infty}\sup_{\alpha\in \overline{\mathbb{C}^{\rightarrow}},\vert \alpha\vert\geq R}\frac{1}{\vert \alpha\vert}\left\vert\int_{(-\infty,0)}\left(e^{\alpha y}-1\right)\lambda(dy)\right\vert=0,
\end{equation}
by \ref{lemma:technical:3rdorder:i} of Lemma~\ref{lemma:technical:3rdorder} and the DCT. The asserted coercivity follows immediately in the case $h=0$.
To handle $h>0$, it will be observed first that $\mu-\mu^h\to \mu_0>0$ as $h\downarrow 0$, e.g. by the DCT. Also by the DCT, \ref{lemma:technical:3rdorder:i} of Lemma~\ref{lemma:technical:3rdorder}, and the fact that $\kappa(0)<\infty$: $$\lim_{R\to\infty}\sup_{\alpha\in \overline{\mathbb{C}^{\rightarrow}},\vert \alpha\vert\geq R}\sup_{h>0}\frac{1}{\vert\alpha\vert}\left\vert \sum_{y\in \mathbb{Z}_h^{--}} c_y^h(e^{\alpha y}-1)\right\vert=0.$$ Moreover, by \ref{lemma:technical:estimates:iv} of Lemma~\ref{lemma:technical:estimates}, for any finite $h_0\in (0,h_\star]$, there exists $A_0\in (0,\infty)$ such that for all $h\in (0,h_0)$ and then all $s\in (-\pi/h,\pi/h)$, $$\left \vert c_0^h\left(\frac{e^{\beta h}+e^{-\beta h}-2}{2h^2}\right)\right\vert\leq A_0\vert \beta\vert \int_{[-h/2,0)}\vert y\vert\lambda(dy)$$ with $\int_{[-h/2,0)}\vert y\vert\lambda(dy)\to 0$ as $h\downarrow 0$, since $\kappa(0)<\infty$. Coupled with \ref{lemma:technical:coercivity:b} of Lemma~\ref{lemma:technical:estimates}, the asserted coercivity follows.
In the last instance, let $\sigma^2=0$ and $\kappa(0)=\infty$. Necessarily, $V=1$ and Assumption~\ref{assumption:salient} is in effect. We control first $\Re\psi(\beta)$. Clearly $\gamma\mu$, $\int_{(-\infty,-1)}(e^{\gamma y}\cos(sy)-1)\lambda(dy)$ and $\int_{[-1,0)}(e^{\gamma y}-\gamma y-1)\lambda(dy)$ are all bounded in $s$, whereas (by \ref{lemma:technical:coercivity:xx} of Lemma~\ref{lemma:technical:estimates} and \ref{assumption:salient:two} of Assumption~\ref{assumption:salient}): $$\left\vert \int_{[-1,0)}e^{\gamma y}(\cos(sy)-1)\lambda(dy)\right \vert\geq B_1s^2\int_{[-\pi/\vert s\vert,0)}y^2\lambda(dy)\geq B_0\vert s\vert^{\epsilon},$$ for all $s\in\mathbb{R}$ with $\vert s \vert\geq K_0$, and some $\{K_0,B_0,B_1\}\subset (0,\infty)$. Coercivity for $\psi(\beta)-q$ follows.
Now we turn our attention to $\psi^h(\beta)$ and again we control $\Re\psi^h(\beta)$. First observe that the term involving $c_0^h$ decomposes as:
\begin{equation}
\Re\left\{\frac{c_0^h}{2h^2}\left(e^{\beta h}+e^{-\beta h}-2\right)\right\}=\underbrace{\frac{c_0^h}{2h^2}\left(e^{\gamma h}+e^{-\gamma h}-2\right)\cos(sh)}_{\text{bounded in }s}+\underbrace{\frac{c_0^h}{h^2}\left(\cos(sh)-1\right)}_{=:\mytag{$(I)$}{single_star}\leq 0}.
\end{equation}
Next, with respect to the term involving the drift $\mu$, we refer to \ref{lemma:technical:estimates:ii} of Lemma~\ref{lemma:technical:estimates} to obtain a linear bound in $s$. On the other hand we have:
\begin{equation*}
\Re\left\{\left(\frac{e^{\beta h}-1}{h}-\beta\right)\left(-\mu^h\right)\right\}=\underbrace{\frac{e^{\gamma h}(\cos(sh)-1)}{h}(-\mu^h)}_{\leq 0}+\underbrace{\frac{e^{\gamma h}-1-\gamma h}{h}(-\mu^h)}_{\text{bounded in }s},
\end{equation*}
since $-\mu^h \leq 2\kappa(h/2)$ and $\zeta(h/2) \to 0$ as $h\downarrow 0$. Finally, we consider the term:
\begin{equation}\label{eq:coercivity:a_final_term}
\sum_{y\in\mathbb{Z}_h^{--}}c_y^h\left(e^{\gamma y}\cos(s y)-1\right)-\sum_{y\in\mathbb{Z}_h^{--}}\gamma y\int_{A_y^h}\mathbbm{1}_{[-1,0)}(z)\lambda(dz).
\end{equation} Certainly the part of \eqref{eq:coercivity:a_final_term} corresponding to $\mathbbm{1}_{(-\infty,-1)}\cdot\lambda$ is bounded in $s$. The part of \eqref{eq:coercivity:a_final_term} corresponding to $\mathbbm{1}_{[-1,0)}\cdot \lambda$ is:
\begin{eqnarray}
\nonumber &\phantom{=}&\sum_{y\in\mathbb{Z}_h^{--}}\int_{A_y^h\cap [-1,0)}\lambda(dz)\left[e^{\gamma y}\cos(s y)-\gamma y-1\right]\\
&=&\underbrace{\sum_{y\in \mathbb{Z}_h^{--}}\int_{A_y^h\cap [-1,0)}\lambda(dz)\left[e^{\gamma y}-\gamma y-1\right]}_{\text{bounded in }s}+\underbrace{\sum_{y\in\mathbb{Z}_h^{--}}\int_{A_y^h\cap [-1,0)}\lambda(dz)e^{\gamma y}\left(\cos(sy)-1\right)}_{=:\mytag{$(II)$}{double_star}\leq 0}.
\end{eqnarray}
Combining \ref{single_star} and \ref{double_star}, we have, via \ref{lemma:technical:coercivity:xx} of Lemma~\ref{lemma:technical:estimates}, for some $\{A_0,\beta_0,K_0\}\subset (0,\infty)$ and an $h_0\in (0,h_\star]$, for all $h\in (0,h_0)$, and then all $s\in [-\pi/h,\pi/h]\backslash [-K_0,K_0]$:
\begin{equation*}
\vert \Re\psi^h(\beta)\vert\geq \beta_0s^2\left(c_0^h+\sum_{y\in\mathbb{Z}_h^{--},-y\leq \pi/\vert s\vert}y^2c_y^h\right)-A_0\vert s\vert.
\end{equation*}
Also:
\begin{equation*}
c_0^h+\sum_{y\in\mathbb{Z}_h^{--},-y\leq \pi/\vert s\vert}y^2c_y^h\geq\frac{4}{9}\int_{\left(-\left(\left(\frac{\pi}{\vert s\vert}-\frac{h}{2}\right)\lor \frac{h}{2}\right),0\right)}u^2\lambda(du)\geq\frac{4}{9}\int_{(-\frac{1}{2}\frac{\pi}{\vert s\vert},0)}u^2\lambda(du),
\end{equation*}
since either $\pi/\vert s\vert\geq h$, in which case $(\pi/\vert s\vert)-(h/2)\geq \frac{1}{2}\frac{\pi}{\vert s\vert}$, or $\pi/\vert s\vert\leq h$, in which case $\frac{h}{2}\geq \frac{1}{2}\frac{\pi}{\vert s\vert}$.
Using now item \ref{assumption:salient:two} of Assumption~\ref{assumption:salient}, the required coercivity follows at once. \qed
\subsection{Brownian motion with drift ($\sigma^2>0=\lambda(\mathbb R)$)}\label{subsection:scales:BMdrift}
The scale functions can be calculated explicitly here, by using the recursive relations of Proposition~\ref{proposition:calculating_scale_functions:basic}. Then the following two propositions follow readily (essentially by Taylor expansions; recall also the notation from \eqref{eq:diffW} and \eqref{eq:diffZ}):
\begin{proposition}[$\sigma^2>0=\lambda(\mathbb{R})$ ($W^{(q)}$ convergence)]\label{proposition:convergence:BM+drift:Wq}
Suppose $\sigma^2>0=\lambda(\mathbb{R})$ and let $q\geq 0$. If $q\lor \vert \mu\vert=0$, then for all $h\in (0,h_\star)$ and all $x\in \mathbb{Z}_h^{++}$: $\Delta_{W}^{(q)}(x,h)=0$. If, however, $q\lor \vert \mu\vert>0$, then:
\begin{enumerate}[(i)]
\item There exist $\{A_0,h_0\}\subset (0,\infty)$ such that for all $h\in (0,h_0)$ and then all $x\in \mathbb{Z}_h^{++}$ with $xh^2\leq 1$: $$\left\vert\Delta_{W}^{(q)}(x,h)\right\vert\leq A_0h^2(1+x)e^{\alpha_+ x}.$$
\item For any nested sequence $(h_n)_{n\geq 1}\downarrow 0$ and then any $x\in \cup_{n\geq 1} \mathbb{Z}_{h_n}^{++}$: $$\lim_{n\to\infty}\frac{\Delta_{W}^{(q)}(x,h_n)}{h_n^2}=\frac{q^2}{2(\mu^2+2\sigma^2 q)}W^{(q)}(x)+\frac{x}{\sqrt{\mu^2+2\sigma^2 q}}\left(e^{\alpha_+ x}\theta_+-e^{\alpha_- x}\theta_-\right).$$ (In particular, when $q=0$, this limit is $-\frac{2}{3}\frac{\mu^2 x}{(\sigma^2)^3}e^{-2 \mu x/\sigma^2}$.)
\end{enumerate}
Here:
\begin{eqnarray*}
\alpha_\pm&:=&\frac{-\mu\pm\sqrt{\mu^2+2 q\sigma^2}}{\sigma^2}\\
\theta_\pm&:=&\frac{\mu^3\sqrt{2q\sigma^2+ \mu^2}\pm(\frac{1}{2}q^2(\sigma^2)^2-\mu^4-\mu^2\sigma^2 q)}{3(\sigma^2)^3\sqrt{2q\sigma^2+ \mu^2}}.
\end{eqnarray*}
\end{proposition}
\begin{remark}
We note that for all $x\geq 0$: $$W^{(q)}(x)=\frac{1}{\sqrt{\mu^2+2\sigma^2 q}}\left(e^{\alpha_+ x}-e^{\alpha_-x}\right)$$ (when $q\lor \vert \mu\vert>0$) and $W^{(q)}(x)=2x/\sigma^2$ (otherwise, i.e. when $q=\mu=0$). Observe also that, unless $q=0$, $\alpha_\pm\in\pm (0,\infty)$.
\end{remark}
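\begin{remark}
For the reader's convenience, the first formula of the preceding remark may be checked by a partial-fraction computation: since $\psi(\beta)-q=\frac{\sigma^2}{2}(\beta-\alpha_+)(\beta-\alpha_-)$ and $\alpha_+-\alpha_-=\frac{2\sqrt{\mu^2+2q\sigma^2}}{\sigma^2}$, one has, for $q\lor\vert\mu\vert>0$:
$$\frac{1}{\psi(\beta)-q}=\frac{2}{\sigma^2(\alpha_+-\alpha_-)}\left(\frac{1}{\beta-\alpha_+}-\frac{1}{\beta-\alpha_-}\right)=\frac{1}{\sqrt{\mu^2+2\sigma^2q}}\left(\frac{1}{\beta-\alpha_+}-\frac{1}{\beta-\alpha_-}\right),$$
and term-by-term Laplace inversion returns precisely the displayed expression for $W^{(q)}$.
\end{remark}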
\begin{proposition}[$\sigma^2>0=\lambda(\mathbb{R})$ ($Z^{(q)}$ convergence)]\label{proposition:convergence:BM+drift:Zq}
Suppose $\sigma^2>0=\lambda(\mathbb{R})$, let $q>0$.
\begin{enumerate}[(i)]
\item There exist $\{A_0,h_0\}\subset (0,\infty)$ such that for all $h\in (0,h_0)$ and then all $x\in \mathbb{Z}_h^{++}$ with $xh^2\leq 1$: $$\left\vert \Delta_{Z}^{(q)}(x,h)\right\vert\leq A_0\left[h^2(1+x)e^{\alpha_+x}+h(e^{\alpha_+ x}-e^{\alpha_- x})\right].$$
\item For any nested sequence $(h_n)_{n\geq 1}\downarrow 0$ and then any $x\in \cup_{n\geq 1} \mathbb{Z}_{h_n}^{++}$: $$\lim_{n\to\infty}\frac{\Delta_{Z}^{(q)}(x,h_n)}{h_n}=-\frac{1}{2}\frac{q}{\sqrt{\mu^2+2\sigma^2 q}}\left(e^{\alpha_+x}-e^{\alpha_-x}\right).$$
\end{enumerate}
\end{proposition}
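\begin{remark}
Recalling that $Z^{(q)}(x)=1+q\int_0^xW^{(q)}(y)\,dy$, the expression for $W^{(q)}$ recorded above integrates, for $q>0$ (so that $\alpha_\pm\ne 0$), to the explicit formula
$$Z^{(q)}(x)=1+\frac{q}{\sqrt{\mu^2+2\sigma^2q}}\left(\frac{e^{\alpha_+x}-1}{\alpha_+}-\frac{e^{\alpha_-x}-1}{\alpha_-}\right),\quad x\geq 0,$$
against which Proposition~\ref{proposition:convergence:BM+drift:Zq} may be verified directly.
\end{remark}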
\subsection{Non-trivial diffusion component}
We consider the convergence when $\sigma^2>0$. This case is relatively straightforward, as coercivity is very strong (namely, quadratic). Note that $\delta^0=1$ and we work under scheme 1.
\begin{proposition}[$\sigma^2>0$ ($W^{(q)}$ convergence)] \label{proposition:Wqconvergence:non-trivial_diffusion}
Suppose $\sigma^2>0$ and let $q\geq 0$.
\begin{enumerate}[(i)]
\item\label{Wqs:diffusion_positibe:i} For any $\gamma>\Phi(q)$, there are $\{A_0,h_0\}\subset (0,\infty)$ such that for all $h\in (0,h_0)$ and then all $x\in \mathbb{Z}_h^{++}$: $$\left\vert\Delta_{W}^{(q)}(x,h)\right\vert\leq A_0 \left(h+\zeta(h/2)+\xi(h/2)h\log(1/h)\right) e^{\gamma x}.$$ In particular, if $\kappa(0)<\infty$, then $\left\vert\Delta_{W}^{(q)}(x,h)\right\vert\leq A_0 he^{\gamma x}$ and under Assumption~\ref{assumption:salient}, $\left\vert\Delta_{W}^{(q)}(x,h)\right\vert\leq A_0 h^{2-\epsilon}e^{\gamma x}$.
\item \label{Wqs:diffusion_positibe:ii} There exist:
\begin{enumerate}[(a)]
\item\label{Wqs:diffusion_postive:ii:a} a L\'evy triplet $(\sigma^2,\lambda,\mu)$ with $\sigma^2\ne 0$ and $0<\kappa(0)<\infty$;
\item\label{Wqs:diffusion_postive:ii:c} for each $\epsilon\in (1,2)$ a L\'evy triplet $(\sigma^2,\lambda,\mu)$ with $\sigma^2\ne 0$ and $\lambda(-1,-\delta)\sim 1/\delta^\epsilon$ as $\delta\downarrow 0$;
\end{enumerate}
and then in each of the cases \ref{Wqs:diffusion_postive:ii:a}-\ref{Wqs:diffusion_postive:ii:c} a nested sequence $(h_n)_{n\geq 1}\downarrow 0$ such that for each $q\geq 0$ there is an $x\in \cup_{n\geq 1}\mathbb{Z}_{h_n}^{++}$ with: $$\liminf_{n\to\infty}\frac{\left\vert\Delta_{W}^{(q)}(x,h_n)\right\vert}{ h_n\lor \zeta(h_n)}>0,$$ where, as $n\to\infty$, $h_n\lor \zeta(h_n)\sim h_n$ if $\kappa(0)<\infty$, and $h_n\lor\zeta(h_n)\sim h_n^{2-\epsilon}$ if $\kappa(0)=\infty$.
\end{enumerate}
\end{proposition}
\begin{remark}\label{remark:diffusion_positive:asymptotic}
Note that if $\lambda(-1,-\delta)\sim 1/\delta^\epsilon$ as $\delta\downarrow 0$, with $\epsilon\in (1,2)$, then (as $h\downarrow 0$) $\xi(h/2)\sim h^{2-\epsilon}$, and $\zeta(h/2)\sim h^{2-\epsilon}$, so that $h\xi(h/2)\log(1/h)=o(h\kappa(h/2))$. See Proposition~\ref{proposition:generally_on_measure_asymptotics}. More generally, Assumption~\ref{assumption:salient} is fulfilled if $\lambda(-1,-\delta)\sim \delta^{-\epsilon}l(\delta)$ where $0<\liminf_{0+}l\leq\limsup_{0+}l<+\infty$ (see Lemma~\ref{lemma:fubini}).
\end{remark}
\begin{remark}\label{remark:under_assumption_salient_etc}
Under Assumption~\ref{assumption:salient}, it follows that $\zeta(h/2)+\xi(h/2)=O(h^{2-\epsilon})$ as $h\downarrow 0$ (see again Lemma~\ref{lemma:fubini}).
\end{remark}
\begin{proof}
First, with respect to~\ref{Wqs:diffusion_positibe:i}, we have as follows. \ref{W:a} of \eqref{eq:difference:Wqs} is seen immediately to be of order $O(h)$ by coercivity~\ref{(C)}; whereas~\ref{W:b} of \eqref{eq:difference:Wqs} is of order $O(h+h\xi(h/2)\log(1/h)+V\zeta(h/2))$ by coercivity~\ref{(C)} and the estimate of the absolute difference $\vert \psi^h-\psi\vert$~\ref{delta2}. Since $\delta^0=1$, \ref{W:c} of \eqref{eq:difference:Wqs} is void.
Second we prove~\ref{Wqs:diffusion_positibe:ii}.
\begin{itemize}
\item We consider first \ref{Wqs:diffusion_postive:ii:a}. Take $\lambda=\delta_{-1/2}$, $h_n=1/3^n$ ($n\geq 1$), $\mu=0$, $\sigma^2=1$, $x\in \cup_{n \geq 1}\mathbb{Z}_{h_n}^{++}$ ($x$ is now fixed!). The goal is to establish no better than linear convergence in this case.
\end{itemize}
Now,~\ref{W:a} of \eqref{eq:difference:Wqs} is actually of order $O(h^2)$. Indeed, modulo the error incurred by replacing $\psi(\beta)-q$ with $-\frac{1}{2}\sigma^2 s^2$, this is seen immediately to be even of order $O(h^3)$, by coercivity \ref{(C)} and a simple $\vert \int \cdot\vert\leq \int \vert\cdot\vert$ argument. On the other hand: $$\lim_{T\to\infty}\int_{(-T,T)\backslash (-\pi/h,\pi/h)}\frac{e^{isx}}{s^2}ds=O(h^2).$$ This is so by an integration by parts argument, writing: $$\frac{d}{ds}\left(\frac{e^{isx}}{s^2}\right)=\frac{ix e^{isx}}{s^2}-\frac{2e^{isx}}{s^3}.$$ We can thus focus on~\ref{W:b} of \eqref{eq:difference:Wqs}. Consider there the difference:
\begin{equation}
(\psi^h-\psi)(\beta)=\overbrace{\frac{\sigma^2}{2h^2}\left(e^{\beta h}+e^{-\beta h}-2\right)-\frac{1}{2}\sigma^2 \beta^2}^{\mytag{(b.1)}{diffusion:b.1}}+\overbrace{\int_{(-\infty,0)}(e^{\beta y}-1)\lambda(dy)-\sum_{y\in \mathbb{Z}_h^{--}}c_y^h(e^{\beta y}-1)}^{\mytag{(b.2)}{diffusion:b.2}}.
\end{equation} The part of \ref{W:b} in \eqref{eq:difference:Wqs} corresponding to \ref{diffusion:b.1} differs from the analogous term for Brownian motion without drift by a quantity bounded (up to a non-zero multiplicative constant) by: $$\int_{[-\pi/h,\pi/h]}ds \frac{h^2\vert \beta\vert^4 \vert \beta\vert^2}{\vert\beta\vert^8}=O(h^2)$$ (this follows by \ref{lemma:technical:estimates:iv} and \ref{lemma:technical:estimates:rough:I} of Lemma~\ref{lemma:technical:estimates}, the fact that $e^{\beta y}-1$ is uniformly bounded by $2$, and by coercivity \ref{(C)}). Since the term corresponding to just Brownian motion is shown to be of order $O(h^2)$ itself (see Proposition~\ref{proposition:convergence:BM+drift:Wq}), we can thus focus on \ref{diffusion:b.2}. The latter is $e^{-\beta/2}-e^{-(1-h_n)\beta/2}=e^{-\beta/2}(1-e^{\beta h_n/2})$. Replacing $1-e^{\beta h_n/2}$ by $-\beta h_n/2$ produces an error of order $O(h_n^2)$ in $\Delta_{W}^{(q)}(x,h_n)$, this by \ref{lemma:technical:estimates:iii} of Lemma~\ref{lemma:technical:estimates}, and coercivity \ref{(C)}. Hence it is sufficient to study: $$\frac{1}{2\pi i}\int_{[-\pi/h_n,\pi/h_n]}e^{\beta x}\frac{e^{-\beta/2 }\beta}{(\psi-q)(\beta)(\psi^{h_n}-q)(\beta)}ds$$ which we would like bounded away from $0$ as $n\to\infty$. Now, by coercivity \ref{(C)}, and the DCT, this expression in fact converges to: $$\frac{1}{2\pi i}\int_{-\infty}^\infty \frac{e^{\beta (x-1/2)}\beta }{(\psi-q)^2(\beta)}ds=:g(x).$$ Note that $g$ is continuous in its parameter $x\in [0,\infty)$ by the DCT. Moreover, $g$ cannot vanish identically on $\cup_{n\geq 1}\mathbb{Z}_{h_n}^{++}$, since then it would do so on $\mathbb{R}_+$ by continuity. But this cannot be: informally, because in $g$ we are looking at the inverse Laplace transform of a non-vanishing function $T$. Formally, one performs a Laplace transform of $g$, and concludes, via Fubini and Cauchy's Residue Theorem (recalling the quadratic behaviour of $\psi(\beta)$ as $\vert \beta\vert\to\infty$ over $\overline{\mathbb{C}^{\rightarrow}}$, see \eqref{equation:subquadratic}), that $T=\hat{g}$, where $T(\alpha):=\frac{e^{-\alpha /2}\alpha}{(\psi-q)^2(\alpha)}$ ($\alpha>\gamma$). Then $g$ vanishing would imply the same of $T$, which is a clear contradiction.
\begin{itemize}
\item Consider now \ref{Wqs:diffusion_postive:ii:c}. We are seeking to establish strictly worse than linear convergence here, since $\kappa(0)=\infty$.
\end{itemize}
Certainly \ref{W:a} in \eqref{eq:difference:Wqs} is of order $O(h)$. When it comes to~\ref{W:b}, consider its decomposition, in the numerator of the integrand, according to items \ref{item:diffusion>0:first}-\ref{item:diffusion>0:last} from Subsection~\ref{subsection:the_difference}. Now, \ref{item:diffusion>0:first} thus yields in~\ref{W:b} a term of order $O(h)$; \ref{item:diffusion>0:2} one of order $o(h)$; with respect to \ref{item:diffusion>0:3} we will choose a $\lambda$ which falls under the scope of Remark~\ref{remark:diffusion_positive:asymptotic} and hence this will contribute a term of order $o(\zeta(h/2))$; \ref{item:diffusion>0:4} gives a term of order $o(h)$; \ref{item:diffusion>0:5} contributes as $o(h\kappa(h/2))$; whereas finally \ref{item:diffusion>0:last} yields a term of order $O(h)$ in the part corresponding to $\mathbbm{1}_{(-\infty,-1)}\cdot\lambda$, and the part corresponding to $\mathbbm{1}_{[-1,0)}\cdot\lambda$ is where we will get the sharpness of the rate from.
So we take $\sigma^2=1$, $\mu=0$, $h_n=1/3^n$ ($n\geq 1$), $\lambda=\sum_{k=1}^\infty w_k\delta_{-x_k}$, $x_k=\frac{3}{2}h_k$ and $w_k=1/x_k^\epsilon$ ($k\geq 1$). Clearly $\kappa(0)=\infty$ and by checking it on the decreasing sequence $(h_n)_{n\geq 1}$ it is clear that $\lambda(-1,-\delta)\sim \delta^{-\epsilon}$ as $\delta\downarrow 0$. Moreover, $$\left\vert\sum_{y\in \mathbb{Z}_h^{--}}\!\int_{A^h_y\cap [-1,0)}\left(e^{\beta y}-\beta y-(e^{\beta z}-\beta z)-\frac{1}{2}\beta^2(y^2-z^2)\right)\lambda(dz)\right\vert$$
yields in \eqref{eq:difference:Wqs}, by \ref{lemma:technical:2ndorder:ii} of Lemma~\ref{lemma:technical:2ndorder} and coercivity \ref{(C)}, a term of order $O(h\log(1/h))=o(h^{2-\epsilon})$. Therefore it is sufficient to study: $$\sum_{y\in \mathbb{Z}_h^{--}}\!\int_{A^h_y\cap [-1,0)}\beta^2(y^2-z^2)\lambda(dz)=\beta^2(\sigma_2-\sigma_1),$$ where:
\begin{equation*}
\sigma_1:=\int_{[-1,-h_n/2)}u^2\lambda(du)=\sum_{k=1}^nx_k^2w_k,\quad \sigma_2:=\sum_{k=1}^n(x_k-h_n/2)^2w_k
\end{equation*}
and hence $\sigma_1-\sigma_2=2\zeta(h_n/2)-\gamma(h_n/2)\geq \zeta(h_n/2)$. Moreover, $$\int_{[-\pi/h_n,\pi/h_n]}e^{isx}\frac{\beta^2}{ [(\psi-q)(\psi^{h_n}-q)](\beta)}ds\to\int_{\mathbb{R}}e^{isx}\frac{\beta^2}{(\psi-q)^2(\beta)}ds$$ as $n\to\infty$ by the DCT. By the usual arguments, this integral does not vanish simultaneously in all $x\in \cup_{n\geq 1}\mathbb{Z}_{h_n}^{++}$, whence the sharpness of the rate obtains.
\end{proof}
\begin{proposition}[$\sigma^2>0$ ($Z^{(q)}$ convergence)]\label{proposition:Zqconvergence:non-trivial_diffusion}
Suppose $\sigma^2>0$, let $q>0$.
\begin{enumerate}[(i)]
\item \label{diffusion>0:Zq:i} For any $\gamma>\Phi(q)$, there are $\{A_0,h_0\}\subset (0,\infty)$ such that for all $h\in (0,h_0)$ and then all $x\in \mathbb{Z}_h^{++}$: $$\left\vert\Delta_{Z}^{(q)}(x,h)\right\vert\leq A_0 \left(h+\zeta(h/2)\right) e^{\gamma x}.$$ In particular, if $\kappa(0)<\infty$, then $\left\vert\Delta_{Z}^{(q)}(x,h)\right\vert\leq A_0 he^{\gamma x}$ and otherwise $\left\vert\Delta_{Z}^{(q)}(x,h)\right\vert\leq A_0 \zeta(h/2)e^{\gamma x}$.
\item \label{diffusion>0:Zq:ii}
\begin{enumerate}[(a)]
\item\label{Zqs:diffusion_postive:ii:a} There exists a nested sequence $(h_n)_{n\geq 1}\downarrow 0$, such that for any $q>0$ and any $x\in \cup_{n\geq 1}\mathbb{Z}_{h_n}^{++}$, there exists a L\'evy triplet $(\sigma^2,\lambda,\mu)$ with $\sigma^2>0$ and $0<\kappa(0)<\infty$, and such that:
$$\liminf_{n\to\infty}\frac{\left\vert\Delta_{Z}^{(q)}(x,h_n)\right\vert}{ h_n}>0.$$
\item\label{Zqs:diffusion_postive:ii:c} There exists for each $\epsilon\in (1,2)$ a L\'evy triplet $(\sigma^2,\lambda,\mu)$ with $\sigma^2>0$ and $\lambda(-1,-\delta)\sim 1/\delta^\epsilon$ as $\delta\downarrow 0$, and a nested sequence $(h_n)_{n\geq 1}\downarrow 0$ such that for each $q>0$, there is an $x\in \cup_{n\geq 1}\mathbb{Z}_{h_n}^{++}$ with: $$\liminf_{n\to\infty}\frac{\left\vert\Delta_{Z}^{(q)}(x,h_n)\right\vert}{\zeta(h_n)}>0.$$
\end{enumerate}
\end{enumerate}
\end{proposition}
\begin{proof}
With respect to~\ref{diffusion>0:Zq:i}, we have as follows. First, \ref{Z:a} of \eqref{eq:difference:Zqs} is $O(h^2)$ and \ref{Z:b} $O(h)$ by coercivity \ref{(C)} (and by Lemma~\ref{lemma:convergence:a_uniformly_bounded_family} in the case of~\ref{Z:b}). Second, \ref{Z:c} is $O(h+\zeta(h/2))$ by coercivity~\ref{(C)} and the estimate \ref{delta2}.
Next we show~\ref{diffusion>0:Zq:ii}.
\begin{itemize}
\item We consider first \ref{Zqs:diffusion_postive:ii:a}. Take $\sigma^2=1$, $\mu=0$ and $\lambda=\alpha \delta_{-1}$, where we fix $\alpha>0$. The idea is to note that convergence is `tightly linear' in the Brownian motion case (see Proposition~\ref{proposition:convergence:BM+drift:Zq}), and then to show that by taking $\alpha$ small enough, we do not spoil this.
\end{itemize}
Now, remark that:
\begin{itemize}
\item[---] As $\alpha\downarrow 0$, $\psi$ increases pointwise, hence $\Phi(q)$ is nonincreasing, and so a $\gamma>\Phi(q)$ can be chosen uniformly in all bounded $\alpha$.
\item[---] Moreover, the presence of $\alpha$ does not affect coercivity, which is in addition uniform in all $\alpha$ small enough. Indeed, just take $\gamma>\Phi^{BM}(q)$, where $\Phi^{BM}$ corresponds to the Brownian motion part of this L\'evy process. Then $\vert \psi^h_{BM}(\beta)-q\vert\geq B_0\vert \beta\vert^2$, for all $s\in (-\pi/h,\pi/h)$, for all $h\in [0,h_0)$, for some $\{B_0,h_0\}\subset (0,\infty)$. The part of $\psi^h(\beta)$ corresponding to the CP term is bounded uniformly in $s$ and all $h$ small enough (including $0$), and moreover, scales with $\alpha$. Hence there are $\{B_0, h_0,\alpha_0\}\subset (0,\infty)$ such that for all $\alpha\in (0,\alpha_0)$: $\vert \psi^h(\beta)-q\vert\geq B_0\vert \beta\vert^2$, for all $s\in (-\pi/h,\pi/h)$, for all $h\in [0,h_0)$.
\end{itemize}
Take now $h_n=1/2^n$ ($n\geq 1$), and a fixed $x\in \cup_{n\geq 1}\mathbb{Z}_{h_n}^{++}$. We show that \eqref{eq:difference:Zqs}, when compared with the analogous expression for the Brownian motion part, is of order $o(h)+\alpha O(h)$ (i.e. bounded in absolute value by terms either decaying faster than linearly, or else carrying a coefficient that scales with $\alpha$). Indeed, the difference of the~\ref{Z:b} terms in \eqref{eq:difference:Zqs} is readily seen to be $\alpha O(h)$. In addition,~\ref{Z:a} of \eqref{eq:difference:Zqs} is of order $O(h^2)$. Also~\ref{Z:c} is of order $o(h)$, except in the part corresponding to the CP term, which is itself $\alpha O(h)$. Thus, choosing $\alpha$ small enough, the desired sharpness obtains. However, at least in principle, the choice of $\alpha$ depends on $q$ and $x$; hence the formulation of the proposition.
\begin{itemize}
\item Consider now \ref{Zqs:diffusion_postive:ii:c}. Clearly here the same example works as for the functions $W^{(q)}$: the presence of the extra $1/\beta$ in the integrand of~\ref{Z:c} of \eqref{eq:difference:Zqs} is of no consequence (if anything, beneficial), and the rest of the terms are of order $O(h)$ anyhow.
\end{itemize}
\end{proof}
\subsection{$\sigma^2=0$, finite variation paths}\label{subsection:scales:FV}
In this subsection we study the convergence when $\sigma^2=0$ and $\kappa(0)<\infty$. Note that in this case necessarily $\mu_0>0$, whereas $\delta^0=0$, and we work under scheme 2. The estimates are delicate here, since coercivity is weak (namely, linear).
We make the following key observation. By decomposing:
\begin{equation}\label{eq:FVpaths_decomposition}
A(s)=\psi(\beta)-q=\overbrace{\mu_0\gamma+\int(e^{\gamma y}\cos(sy)-1)\lambda(dy)-q}^{=:A_e(s)}+\overbrace{is\mu_0+i\int e^{\gamma y}\sin(sy)\lambda(dy)}^{=:A_o(s)}
\end{equation} into its even and odd part, it can be shown that, crucially,
\begin{equation}\label{eq:scales:crucial}
\int_1^\infty \frac{ds}{s^2}\vert A_e(s)\vert<\infty.
\end{equation}
See Proposition~\ref{appendix:proposition:integrability:main}. On the other hand, a similar argument to \eqref{eq:linear_coercivity} shows that $\vert A_o(s)\vert\geq B_0 \vert s\vert$ for all $s\notin [-a_0,a_0]$, for some $\{a_0,B_0\}\subset (0,\infty)$. We shall refer to the latter property as ``coercivity of $A_o$''.
Note also that by the DCT, $\mu-\mu^h\to \mu_0$ as $h\downarrow 0$.
We next prove a key lemma. While \eqref{lemma:FV:one} and \eqref{lemma:FV:three} thereof will both be used explicitly in the sequel, the same cannot be said of \eqref{lemma:FV:two}. Nevertheless,
the proof of the latter is instructive of the techniques which we will be using, and so (also for the sake of
completeness) we choose to keep it.
\begin{lemma}\label{lemma:FV}
Suppose $\sigma^2=0$, $\kappa(0)<\infty$. Let $\{l,a,b,M\}\subset \mathbb{N}_0$ and let $h_0\in (0,h_\star]$ be given by the coercivity condition \ref{(C)}.
\begin{enumerate}
\item\label{lemma:FV:one} If $a+b+l\geq M+1$, then: $$\sup_{(h,z)\in (0,h_0)\times \mathbb{R}}\left\vert \int_{[-\frac{\pi}{h},\frac{\pi}{h}]}e^{isz}\frac{(h\land 1)^{l-l\land (M+1)}h^{l\land (M+1)}s^M}{A(s)^{a}A^h(s)^{b}}ds\right\vert<\infty.$$
\item\label{lemma:FV:two} If only $a+b+l\geq M$, then: $$\sup_{(h,z)\in (0,h_0\land K)\times \mathbb{R}}\left\vert z \int_{[-\frac{\pi}{h},\frac{\pi}{h}]}e^{isz}\frac{h^ls^M}{A(s)^{a}A^h(s)^{b}}ds\right\vert<\infty$$ for any $K\in (0,\infty)$.
\item\label{lemma:FV:three} If even $a+b+l\geq M+2$, then: $$\sup_{(h,x,z)\in (0,h_0)\times \mathbb{R}\times (\mathbb{R}\backslash \{0\})}\left\vert\frac{1}{z } \int_{[-\frac{\pi}{h},\frac{\pi}{h}]}e^{isx}\frac{(e^{isz}-1)(h\land 1)^{l-l\land (M+2)}h^{l\land (M+2)}s^M}{A(s)^{a}A^h(s)^{b}}ds\right\vert<\infty.$$
\end{enumerate}
\end{lemma}
\begin{remark}
Suppose $l=b=0$ and $a=M+1$ (respectively $a=M$, $a=M+2$) for simplicity (indeed the proof of \eqref{lemma:FV:one} and \eqref{lemma:FV:three} will be reduced to this case using \ref{(C)} and \ref{delta2}, whereas \eqref{lemma:FV:two} will (essentially) follow by an application of \eqref{lemma:FV:one}). Then for large $\vert s\vert$, the integrand in \eqref{lemma:FV:one} (respectively \eqref{lemma:FV:two}, \eqref{lemma:FV:three}) behaves as $\sim e^{isz}/s$ (respectively $e^{isz}$, $e^{isx}(e^{isz}-1)/s^2$) in the variable $s$. It is then not surprising that the proof of the claims is essentially a modification of the argument
implying that (in the sense of Cauchy's principal values, as appropriate) $\int_{[-\pi/h,\pi/h]}(e^{isz}/s)ds$, $\int_{[-\pi/h,\pi/h]} z e^{isz}ds$ and $\int_{[-\pi/h,\pi/h]}(e^{isx}(e^{isz}-1)/s^2)ds/z$ are bounded in the relevant suprema (as they are).
\end{remark}
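\begin{remark}
For instance, the boundedness of the first of the three model integrals of the preceding remark is seen as follows: by oddness (in the sense of principal values) and a change of variables,
$$\int_{[-\pi/h,\pi/h]}\frac{e^{isz}}{s}ds=2i\int_0^{\pi/h}\frac{\sin(sz)}{s}ds=2i\,\mathrm{sgn}(z)\int_0^{\pi\vert z\vert/h}\frac{\sin u}{u}du,$$
which is bounded uniformly in $(h,z)$, since $\sup_{T\geq 0}\left\vert\int_0^T\frac{\sin u}{u}du\right\vert<\infty$.
\end{remark}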
\begin{proof}
We use in the proof, without explicit reference, as has already often been the case, the observation that $\vert sh\vert\leq\pi$ within the integration domain and the basic $\vert \int\cdot ds\vert\leq\int\vert\cdot\vert ds$-argument; it will be clear whenever these are being applied. Further, we sometimes employ, but do not always refer to, some elementary trigonometric inequalities, specifically $\vert \sin(v)\vert\leq \vert v\vert$, $1-\cos(v)\leq v^2/2$ ($v\in \mathbb{R}$), as well as \ref{lemma:technical:3rdorder:i} of Lemma~\ref{lemma:fundamental:inequalities}.
Finally, note that the integrands in the formulation of this lemma are certainly locally bounded, by coercivity \ref{(C)}, and (hence) the integrals well-defined. Indeed, by the same token, it is only non-trivial to show the finiteness of the suprema in \eqref{lemma:FV:one} and \eqref{lemma:FV:three} for $h\in (0,h_0\land 1)$, a restriction which we therefore make outright. In this respect, note that in \eqref{lemma:FV:two} the restriction to bounded $h$ is made a priori.
Consider now first \eqref{lemma:FV:one}. By coercivity \ref{(C)}, it is assumed without loss of generality that $l\leq M$, then $a+b=M-l+1$ and finally, $l=0$.
Next, it will be sufficient to consider the case when $b=0$, since $A(s)$ can then be successively replaced in the denominator by $A^h(s)$ modulo a quantity, which, using coercivity \ref{(C)} and the estimates \ref{delta2}, remains bounded (in the supremum over $(h,z)\in (0,h_0\land 1)\times \mathbb{R}$).
Recall now the decomposition $A=A_e+A_o$ of \eqref{eq:FVpaths_decomposition}. With $a_0$ as above, it will furthermore be sufficient to establish \eqref{lemma:FV:one} with the integration region $[-\pi/h,\pi/h]\backslash [-a_0,a_0]$ in place of $[-\pi/h,\pi/h]$, the integrand being locally bounded in the supremum over $(h,z)\in (0,h_0\land 1)\times \mathbb{R}$ (again by \ref{(C)}). Moreover, one can then successively replace $A(s)$ by $A_o(s)$, using coercivity of $A_o$ and \ref{(C)}, as well as \eqref{eq:scales:crucial}. Again this is done modulo a term which remains bounded in the supremum over $(h,z)\in (0,h_0\land 1)\times \mathbb{R}$. Hence we need only establish the finiteness of the quantity: $$\sup_{(h,z)\in (0,h_0\land 1)\times \mathbb{R}}\left\vert \int_{[-\frac{\pi}{h},\frac{\pi}{h}]\backslash [-a_0,a_0]}e^{isz}\frac{s^{M}}{A_o(s)^{M+1}}ds\right\vert.$$ Owing to the fact that the quotient in the integrand is odd in $s$, we may clearly restrict the supremum to $z\in \mathbb{R}\backslash \{0\}$, replacing also $e^{isz}$ by $\sin(sz)$ therein. A change of variables $u=s\vert z\vert$ then leads us to consider:
\begin{equation}\label{eq:scales:FV:fund}
\int_{[-\frac{\pi}{h}\vert z\vert,\frac{\pi}{h}\vert z\vert]\backslash [-a_0\vert z\vert,a_0\vert z\vert]}\sin(u)\frac{u^M}{(\vert z \vert A_o(u/\vert z\vert))^{M+1}}du,
\end{equation}
whose finiteness in the supremum over $(h,z)\in (0,h_0\land 1)\times \mathbb{R}\backslash \{0\}$ we seek to establish. Let $\mathcal{A}:=[-\frac{\pi}{h}\vert z\vert,\frac{\pi}{h}\vert z\vert]\backslash [-a_0\vert z\vert,a_0\vert z\vert]$. By coercivity of $A_o$, and since $\vert \sin(u) \vert\leq \vert u\vert$ ($u\in \mathbb{R}$), we do indeed get a finite quantity for the integral (in \eqref{eq:scales:FV:fund}) over $\mathcal{A}\cap [-1,1]$. On the other hand, to handle the rest of the domain, $\mathcal{A}\backslash [-1,1]$, we resort to integration by parts;
\begin{eqnarray*}
\frac{d}{du}\left(\cos(u)\frac{u^M}{(\vert z \vert A_o(u/\vert z\vert))^{M+1}}\right)&=&-\sin(u)\frac{u^M}{(\vert z \vert A_o(u/\vert z\vert))^{M+1}}+\\
&+&\cos(u)\left(\frac{Mu^{M-1}}{(\vert z \vert A_o(u/\vert z\vert))^{M+1}}-\frac{u^M(M+1)A_o'(u/\vert z\vert)}{(\vert z\vert A_o(u/ \vert z\vert))^{M+2}}\right).
\end{eqnarray*}
Now, once integration over $\mathcal{A}\backslash [-1,1]$ has been performed, on the left-hand side, a bounded quantity obtains, by coercivity of $A_o$. On the right-hand side we obtain from the first term the desired quantity (modulo the sign), whereas what emerges from the second term is bounded by coercivity of $A_o$, and the boundedness of $A_o'$ (see \ref{delta1D} in Subsection~\ref{subsection:_the_difference_of_derivatives}).
We next consider \eqref{lemma:FV:two}. Here an integration by parts must be done outright, thus:
\footnotesize
\begin{equation*}
\frac{d}{ds}\left( e^{isz} \frac{h^ls^M}{A(s)^{a}A^h(s)^{b}}\right)=iz e^{isz}\frac{h^ls^M}{A(s)^{a}A^h(s)^{b}}+e^{isz}\left(\frac{h^lMs^{M-1}}{A(s)^{a}A^h(s)^{b}}\right)-e^{isz}\left(\frac{h^ls^MaA'(s)}{A(s)^{a+1}A^h(s)^{b}}+\frac{h^ls^MbA^{h\prime}(s)}{A(s)^{a}A^h(s)^{b+1}}\right).
\end{equation*}
\normalsize
Further, once integration over $[-\pi/h,\pi/h]$ has been performed in this last equality, on the left-hand side a bounded quantity obtains by coercivity \ref{(C)}. On the right-hand side, the first term yields the desired quantity (modulo a non-zero multiplicative constant), and the second is bounded by part \eqref{lemma:FV:one}. Now, using \eqref{eq:FV:derivative_of_A} and \eqref{eq:FV:derivative_of_Ah}, via Fubini's Theorem, part \eqref{lemma:FV:one} again, and by elementary estimates such as $e^{\gamma h}$ being bounded for $h$ bounded, $\vert\sum_{y\in\mathbb{Z}_h^{--}}yc_y^he^{\beta y}\vert\leq 2e^{\gamma h/2}\int \vert y\vert e^{\gamma y}\lambda(dy)$, $c_0^h=O(h)$ and \ref{lemma:technical:estimates:i} of Lemma~\ref{lemma:technical:estimates}, the claim obtains.
Finally, we are left to consider \eqref{lemma:FV:three}. Again by coercivity \ref{(C)}, it is assumed without loss of generality that $l\leq M$, then $a+b=M-l+2$ and finally, $l=0$. Moreover, by the same argument as for \eqref{lemma:FV:one}, we may further insist on $b=0$, replace the integration region by $[-\pi/h,\pi/h]\backslash [-a_0,a_0]$ and finally $A$ by $A_o$. Thus we are left to analyse:
\begin{equation}\label{eq:FV:lemma:two_integrals}
\frac{1}{z } \int_{[-\frac{\pi}{h},\frac{\pi}{h}]\backslash [-a_0,a_0]}e^{isx}\frac{(\cos(sz)-1)s^M}{A_o(s)^{M+2}}ds\quad\text{ and }\quad\frac{1}{z } \int_{[-\frac{\pi}{h},\frac{\pi}{h}]\backslash [-a_0,a_0]}e^{isx}\frac{\sin(sz)s^M}{A_o(s)^{M+2}}ds,
\end{equation}
which we require both to be bounded in the relevant supremum.
In the first integral of \eqref{eq:FV:lemma:two_integrals} make the substitution $v=s\vert z\vert$ to obtain: $$\mathrm{sgn}(z) \int_{[-\frac{\pi}{h}\vert z\vert,\frac{\pi}{h}\vert z\vert]\backslash [-a_0\vert z\vert,a_0\vert z\vert]}e^{i vx/\vert z\vert}\frac{(\cos(v)-1)v^M}{(\vert z\vert A_o(v/\vert z\vert))^{M+2}}dv.$$ Letting, as usual, $\mathcal{A}:=[-\frac{\pi}{h}\vert z\vert,\frac{\pi}{h}\vert z\vert]\backslash [-a_0\vert z\vert,a_0\vert z\vert]$, the integral over $\mathcal{A}\backslash [-1,1]$ (respectively $\mathcal{A}\cap [-1,1]$) is bounded by coercivity of $A_o$ (respectively the latter and since $1-\cos(v)\leq v^2/2$).
On the other hand, in the second integral of \eqref{eq:FV:lemma:two_integrals}, note that by the oddness of $A_o$ only $\sin(sx)$ makes a non-zero contribution. Then we may assume $x\ne 0$ and make the substitution $u=s\vert x\vert$ to arrive at: $$\frac{x}{z}\int_{[-\frac{\pi}{h}\vert x\vert,\frac{\pi}{h}\vert x\vert]\backslash [-a_0\vert x\vert,a_0\vert x\vert]}\frac{\sin(u)\sin(uz/\vert x\vert)u^M}{(\vert x\vert A_o(u/\vert x\vert))^{M+2}}du.$$ Let again $\mathcal{A}:=[-\frac{\pi}{h}\vert x\vert,\frac{\pi}{h}\vert x\vert]\backslash [-a_0\vert x\vert,a_0\vert x\vert]$ be the domain of integration. It is clear that the integral over $\mathcal{A}\cap [-1,1]$ is finite, using coercivity of $A_o$ and twice $\vert \sin(w)\vert\leq \vert w\vert$ ($w\in \mathbb{R}$). To handle the remainder of the domain, $\mathcal{A}\backslash [-1,1]$, we use one last time integration by parts, thus;
\footnotesize
\begin{eqnarray*}
\frac{x}{z}\frac{d}{du}\left(\frac{\cos(u)\sin(uz/\vert x\vert)u^M}{(\vert x\vert A_o(u/\vert x\vert))^{M+2}}\right)&=&-\frac{x}{z}\frac{\sin(u)\sin(uz/\vert x\vert)u^M}{(\vert x\vert A_o(u/\vert x\vert))^{M+2}}+\mathrm{sgn}(x)\frac{\cos(u)\cos(uz/\vert x\vert)u^M}{(\vert x\vert A_o(u/\vert x\vert))^{M+2}}+\\
&&\frac{x}{z}\frac{\cos(u)\sin(uz/\vert x\vert)Mu^{M-1}}{(\vert x\vert A_o(u/\vert x\vert))^{M+2}}-\frac{x}{z}\frac{\cos(u)\sin(uz/\vert x\vert)u^M(M+2)A_o'(u/\vert x\vert)}{(\vert x\vert A_o(u/\vert x\vert))^{M+3}}.
\end{eqnarray*}
\normalsize
The claim now obtains by coercivity of $A_o$, boundedness of $A_o'$ \ref{delta1D} and by using the elementary estimate $\vert \sin(w)\vert\leq \vert w\vert$ ($w\in \mathbb{R}$), as appropriate.
\end{proof}
\begin{proposition}[$\sigma^2=0$, $\kappa(0)<\infty$ ($W^{(q)}$ convergence)]\label{proposition:Wqconvergence:trivial_diffusion_FV}
Suppose $\sigma^2=0$ and $\kappa(0)<\infty$. Let $q\geq 0$.
\begin{enumerate}[(i)]
\item\label{diffusion_zero_finite_variation:i} For any $\gamma>\Phi(q)$ there are $\{A_0,h_0\}\subset (0,\infty)$ such that for all $h\in (0,h_0)$ and then all $x\in \mathbb{Z}_h^{++}$: $$\left\vert\Delta_{W}^{(q)}(x,h)\right\vert\leq A_0\frac{h}{x}e^{\gamma x}.$$
\item\label{diffusion_zero_finite_variation:ii} For the L\'evy triplet $(0,\delta_{-1},1)$ and the nested sequence $(h_n:=1/2^n)_{n\geq 1}\downarrow 0$, for each $q\geq 0$ and any $x\in \cup_{n\geq 1}\mathbb{Z}_{h_n}^{+}\cap [0,1)$: $$\lim_{n\to\infty}\frac{\Delta_{W}^{(q)}(x,h_n)}{h_n}=e^{x(1+q)}\frac{1}{2}(1+q)^2x.$$
\end{enumerate}
\end{proposition}
\begin{proof}
With respect to~\ref{diffusion_zero_finite_variation:i}, we estimate the three terms appearing on the right-hand side of \eqref{eq:difference:Wqs} one by one.
First,~\ref{W:a} in \eqref{eq:difference:Wqs} is easily seen to be of order $\frac{1}{x}O(h)$ by an obvious integration by parts argument, using coercivity and the fact that $A'$ is bounded.
Second, when it comes to~\ref{W:b} in \eqref{eq:difference:Wqs}, an integration by parts is also performed immediately: $$\frac{d}{ds}\left(e^{isx}\left(\frac{1}{A(s)}-\frac{1}{A^h(s)}\right)\right)=ix e^{isx}\left(\frac{1}{A(s)}-\frac{1}{A^h(s)}\right)+e^{isx}\left(-\frac{A'(s)}{A(s)^2}+\frac{A^{h\prime}(s)}{A^h(s)^2}\right).$$ Upon integration on $[-\pi/h,\pi/h]$, by coercivity, the left-hand side is of order $O(h)$ and hence will contribute $\frac{1}{x}O(h)$ to the right-hand side of \eqref{eq:difference:Wqs}. Write:
\begin{equation}\label{eq:finite_variation:zero}
-\frac{A'(s)}{A(s)^2}+\frac{A^{h\prime}(s)}{A^h(s)^2}=\underbrace{\frac{A^{h\prime}(s)-A'(s)}{A(s)^2}}_{\mytag{(1)}{FV:(1)}}+\underbrace{A^{h\prime}(s)\frac{(A(s)-A^h(s))(A(s)+A^h(s))}{A(s)^2A^h(s)^2}}_{\mytag{(2)}{FV:(2)}}.
\end{equation}
We treat each term in turn. For \ref{FV:(1)}, there corresponds to it, modulo non-zero multiplicative constants, and by Fubini (using \eqref{eq:FV:derivative_of_A} and \eqref{eq:FV:derivative_of_Ah}):
\begin{eqnarray}\label{eq:finite_variation:one}
\nonumber&\phantom{=}&(\mu-\mu^h)\int_{[-\pi/h,\pi/h]}\frac{e^{\beta h}-1}{A(s)^2}e^{isx}ds+c_0^h\int_{[-\pi/h,\pi/h]}\left(\frac{e^{\beta h}-e^{-\beta h}}{2h}\right)\frac{1}{A(s)^2} e^{isx}ds\\
&+&\sum_{y\in \mathbb{Z}_h^-}\int_{A_y^h}\lambda(dz)\int_{[-\pi/h,\pi/h]}\frac{y(e^{\beta y}-\mathbbm{1}_{[-V,0)}(z))-z(e^{\beta z}-\mathbbm{1}_{[-V,0)}(z))}{A(s)^2}e^{isx}ds.
\end{eqnarray}
There are three summands in \eqref{eq:finite_variation:one}. The first is $O(h)$ by employing the decomposition $e^{\beta h}-1=(e^{\beta h}-\beta h-1)+\beta h$ and then using \ref{lemma:technical:estimates:iii} of Lemma~\ref{lemma:technical:estimates} and coercivity \ref{(C)} (respectively \eqref{lemma:FV:one} of Lemma~\ref{lemma:FV}) for the first (respectively second) term. The same is true of the second summand, noting that, certainly, $c_0^h=O(h)$, employing the decomposition $\frac{e^{\beta h}-e^{-\beta h}}{2h}=\left(\frac{e^{\beta h}-e^{-\beta h}}{2h}-\beta\right)+\beta$ and then using \ref{lemma:technical:estimates:rough:II} of Lemma~\ref{lemma:technical:estimates} and again coercivity \ref{(C)} (respectively \eqref{lemma:FV:one} of Lemma~\ref{lemma:FV}) for the first (respectively second) term. As for the third summand of \eqref{eq:finite_variation:one}, write, when $z\notin [-V,0)$: $$ye^{\beta y}-ze^{\beta z}=\overbrace{(y-z)e^{\beta y}}^{\mytag{(I)}{FV:I}}+ze^{\beta z}(\overbrace{e^{\beta(y-z)}-\beta(y-z)-1}^{\mytag{(II)}{FV:II}}+\overbrace{\beta(y-z)}^{\mytag{(III)}{FV:III}}).$$ By the findings of Lemma~\ref{lemma:technical:estimates}, the fact that $\lambda(-\infty,-V)<\infty$ and coercivity \ref{(C)}, it is clear that \ref{FV:I} and \ref{FV:II} will contribute a term of order $\frac{1}{x}O(h)$ to the right-hand side of \eqref{eq:difference:Wqs}. On the other hand, for \ref{FV:III}, the same follows by \eqref{lemma:FV:one} of Lemma~\ref{lemma:FV}. Next, when $z\in [-V,0)$ we write:
\begin{equation}
ye^{\beta y}-ze^{\beta z}-(y-z)=\overbrace{e^{\beta z}y(e^{\beta (y-z)}-1)}^{\mytag{(I)}{FV_I}}+\overbrace{(y-z)(e^{\beta z}-1)}^{\mytag{(II)}{FV_II}}.
\end{equation}
When it comes to \ref{FV_I}, it is dealt with precisely as it was for $z\notin [-V,0)$ (but note also that $\vert y\vert\leq 2\vert z\vert$). With regard to \ref{FV_II}, apply \eqref{lemma:FV:three} of Lemma~\ref{lemma:FV}.
To handle \ref{FV:(2)} of \eqref{eq:finite_variation:zero}, i.e.: $$A^{h\prime}(s)\frac{(A(s)-A^h(s))(A(s)+A^h(s))}{A(s)^2A^h(s)^2},$$ notice that replacing this by $$2A'(s)\frac{(A(s)-A^h(s))}{A(s)^3}$$ contributes a term of order $\frac{1}{x}O(h)$ to \eqref{eq:difference:Wqs} (just make successive replacements $A^h\to A$ and estimate the differences by taking advantage of coercivity \ref{(C)}, \ref{delta2} and \ref{delta2D}). So it is in fact sufficient to study: $$\int_{[-\pi/h,\pi/h]}e^{isx}A'(s)\frac{(A(s)-A^h(s))}{A(s)^3}ds.$$ Now we first apply Fubini's Theorem to $A'$ (via Remark~\ref{remark:A'forFV}), and obtain, beyond a factor of $i$: $$\mu_0 \int_{[-\pi/h,\pi/h]}e^{isx}\frac{(A(s)-A^h(s))}{A(s)^3}ds+\int \lambda(dy)ye^{\gamma y}\int_{[-\pi/h,\pi/h]}e^{is(x+y)}\frac{(A(s)-A^h(s))}{A(s)^3}ds.$$ What we would like, then, is to show that:
\footnotesize
\begin{eqnarray}
\nonumber&& \int_{[-\pi/h,\pi/h]}e^{isz}\frac{(A^h(s)-A(s))}{A(s)^3}ds\\\nonumber
&=&(\mu-\mu^h)\int_{[-\pi/h,\pi/h]}e^{isz}\frac{\frac{e^{\beta h}-1}{h}-\beta}{A(s)^3}ds+c_0^h\int_{[-\pi/h,\pi/h]}e^{isz}\left(\frac{e^{\beta h}+e^{-\beta h}-2}{2h^2}\right)\frac{1}{A(s)^3}ds\\
&+&\sum_{y\in \mathbb{Z}_h^-}\int_{A_y^h}\lambda(du)\int_{[-\pi/h,\pi/h]}e^{isz}\frac{e^{\beta y}-\beta y\mathbbm{1}_{[-V,0)}(u)-(e^{\beta u}-\beta u\mathbbm{1}_{[-V,0)}(u))}{A(s)^3}ds \label{eq:FV:Wqs:another_one}
\end{eqnarray}
\normalsize
is bounded by a constant multiple of $h$, uniformly in $z\in\mathbb{R}$ (it will then follow immediately that a term of order $\frac{1}{x}O(h)$ is contributed to \eqref{eq:difference:Wqs}).
\begin{itemize}
\item Now, the part corresponding to $\mathbbm{1}_{(-\infty,-V)}\cdot \lambda$, namely: $$\sum_{y\in \mathbb{Z}_h^-}\int_{A_y^h\cap (-\infty,-V)}\lambda(du)\int_{[-\pi/h,\pi/h]}e^{isz}\frac{e^{\beta y}-e^{\beta u}}{A(s)^3}ds,$$ is clearly so.
\item With respect to the term involving $c_0^h=O(h)$, make the decomposition: $$\frac{e^{\beta h}+e^{-\beta h}-2}{2h^2}=\left(\frac{e^{\beta h}+e^{-\beta h}-2}{2h^2}-\frac{\beta^2}{2}\right)+\frac{\beta^2}{2}.$$ Then use coercivity \ref{(C)} and \ref{lemma:technical:estimates:rough:I} of Lemma~\ref{lemma:technical:estimates} (respectively \eqref{lemma:FV:one} of Lemma~\ref{lemma:FV}) for the first (respectively second) term.
\item As regards: $$(\mu-\mu^h)\int_{[-\pi/h,\pi/h]}e^{isz}\frac{\frac{1}{h}(e^{\beta h}-1)-\beta}{A(s)^3}ds$$ write:
\begin{equation}
\frac{1}{h}\left(e^{\beta h}-1\right)-\beta=\frac{1}{h}\left(\overbrace{e^{\beta h}-\frac{\beta^2 h^2}{2}-\beta h-1}^{\mytag{(I)}{FVI}}+\overbrace{\frac{\beta^2 h^2}{2}}^{\mytag{(II)}{FVII}}\right).
\end{equation}
By an expansion into a series, which converges absolutely and locally uniformly, and coercivity \ref{(C)}, it is clear that \ref{FVI} has the desired property, whereas \eqref{lemma:FV:one} of Lemma~\ref{lemma:FV} may be applied to \ref{FVII}.
\item Finally it will be sufficient to consider: $$\sum_{y\in\mathbb{Z}_h^{-}}\int_{A_y^h\cap [-V,0)}\lambda(du)\int_{[-\pi/h,\pi/h]}e^{isz}\frac{e^{\beta u}-\beta u-(e^{\beta y}-\beta y)}{A(s)^3}ds,$$ which we need bounded by a constant multiple of $h$, uniformly in $z\in \mathbb R$. For this it is sufficient that the innermost integral be bounded by a constant multiple of $h\vert u\vert$; moreover, it is enough to produce $\vert y\vert$ (or, a fortiori, $\vert y-u\vert$) in place of $\vert u\vert$. Now write:
\begin{eqnarray*}
e^{\beta u}-\beta u-(e^{\beta y}-\beta y)&=&(e^{\beta y}-1)\left((e^{\beta (u-y)}-\beta(u-y)-1)+\beta(u-y)\right)+\\
&&\left(e^{\beta(u-y)}-\frac{\beta^2(u-y)^2}{2}-\beta(u-y)-1\right)+\frac{\beta^2(u-y)^2}{2}.
\end{eqnarray*}
These terms can now be dealt with in part straightforwardly and in part by employing \eqref{lemma:FV:three} and \eqref{lemma:FV:one} of Lemma~\ref{lemma:FV}.
\end{itemize}
Third, with respect to~\ref{W:c} of \eqref{eq:difference:Wqs}, again an integration by parts is made outright, thus:
$$\frac{d}{ds}\left(e^{isx}\frac{1-e^{ish}}{A^h(s)}\right)=ixe^{isx}\frac{1-e^{ish}}{A^h(s)}+e^{isx}\frac{-ihe^{ish}}{A^h(s)}-e^{isx}\frac{(1-e^{ish})A^{h\prime}(s)}{A^h(s)^2}.$$ Now the left-hand side is handled using coercivity \ref{(C)}. On the right-hand side, we apply \eqref{lemma:FV:one} of Lemma~\ref{lemma:FV} to the second term. Finally, in the third term on the right-hand side we may replace $A^{h\prime}(s)$ by $A'(s)$, followed by Fubini for $A'$ and an application of \eqref{lemma:FV:three} of Lemma~\ref{lemma:FV}. All in all, a term of order $\frac{1}{x}O(h)$ thus emerges on the right-hand side of \eqref{eq:difference:Wqs}.
Part~\ref{diffusion_zero_finite_variation:ii} can be obtained by explicit computation, and is elementary.
\end{proof}
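\begin{remark}
To indicate the nature of this explicit computation: for the triplet $(0,\delta_{-1},1)$ one has $\psi(\beta)=\beta+e^{-\beta}-1$, and, for $\Re\beta$ large,
$$\frac{1}{\psi(\beta)-q}=\frac{1}{(\beta-(1+q))+e^{-\beta}}=\sum_{k=0}^\infty\frac{(-1)^ke^{-k\beta}}{(\beta-(1+q))^{k+1}},$$
so that term-by-term Laplace inversion yields $W^{(q)}(x)=\sum_{0\leq k\leq x}(-1)^k\frac{(x-k)^k}{k!}e^{(1+q)(x-k)}$; in particular, $W^{(q)}(x)=e^{(1+q)x}$ on $[0,1)$. A similarly explicit (finite) expression for $W^{(q)}_h$ is available from the recursive relations of Proposition~\ref{proposition:calculating_scale_functions:basic}, and comparing the two yields the limit in~\ref{diffusion_zero_finite_variation:ii}.
\end{remark}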
\begin{proposition}[$\sigma^2=0$, $\kappa(0)<\infty$ ($Z^{(q)}$ convergence)]
Suppose $\sigma^2=0$ and $\kappa(0)<\infty$. Let $q>0$.
\begin{enumerate}[(i)]
\item\label{diffusion_zero_finite_variation:Zqs:i} For any $\gamma>\Phi(q)$, there are $\{A_0,h_0\}\subset (0,\infty)$ such that for all $h\in (0,h_0)$ and then all $x\in \mathbb{Z}_h^{++}$: $$\left\vert \Delta_{Z}^{(q)}(x,h)\right\vert\leq A_0he^{\gamma x}.$$
\item\label{diffusion_zero_finite_variation:Zqs:ii} For the L\'evy triplet $(0,\delta_{-1},1)$ and the nested sequence $(h_n:=1/2^n)_{n\geq 1}\downarrow 0$, for each $q> 0$ and any $x\in \cup_{n\geq 1}\mathbb{Z}_{h_n}^{++}\cap (0,1)$: $$\lim_{n\to\infty}\frac{\Delta_{Z}^{(q)}(x,h_n)}{h_n}=\frac{1}{2}q(1+q)xe^{x(1+q)}. $$
\end{enumerate}
\end{proposition}
\begin{proof}
With respect to~\ref{diffusion_zero_finite_variation:Zqs:i}, we have as follows. First, \ref{Z:a} of \eqref{eq:difference:Zqs} is of order $O(h)$ by coercivity. Second, in \ref{Z:b}, we employ the decomposition: $$1-\frac{\beta h}{1-e^{-\beta h}}=\left(1-\frac{\beta h}{1-e^{-\beta h}}+\frac{\beta h}{2}\right)-\frac{\beta h}{2}.$$ Then the first term may be estimated via \ref{lemma:technical:coercivity:b} of Lemma~\ref{lemma:technical:estimates} (for the denominator), a Taylor expansion into absolutely and locally uniformly convergent series (for the numerator) and coercivity \ref{(C)}; while to the second term we apply \eqref{lemma:FV:one} of Lemma~\ref{lemma:FV}. It follows that \ref{Z:b} of \eqref{eq:difference:Zqs} is $O(h)$. Finally, when it comes to \ref{Z:c} of \eqref{eq:difference:Zqs}, we have (beyond a non-zero multiplicative constant):
\begin{equation*}
\int_{[-\pi/h,\pi/h]}e^{isx}\frac{(A(s)-A^h(s))}{A(s)A^h(s)\beta}ds.
\end{equation*}
This can now be seen to be $O(h)$ in the same manner as \eqref{eq:FV:Wqs:another_one} was seen to be so (indeed, one can simply follow, word-for-word, the argument pursuant to \eqref{eq:FV:Wqs:another_one}, and recognize that the substitution of $A(s)A^h(s)\beta$ in place of $A(s)^3$ results in no material change).
Part~\ref{diffusion_zero_finite_variation:Zqs:ii} follows by a direct computation.
\end{proof}
\subsection{$\sigma^2=0$, infinite variation paths}\label{subsection:sclaes:diffusion0infvariation}\label{subsection:scales:diffusion=0:infinite_variation}
Finally we consider the case when $\sigma^2=0$ and $\kappa(0)=\infty$. We assume here that Assumption~\ref{assumption:salient} is in effect. Note also that $\delta^0=1$ and we work under scheme 2. We do not establish sharpness of the rates.
\begin{proposition}[$\sigma^2=0$ \& $\kappa(0)=\infty$]\label{proposition:convergence:diffusion_zero_infinite_variation}
Assume $\sigma^2=0$ and Assumption~\ref{assumption:salient}, let $q\geq 0$, $\gamma>\Phi(q)$. Then there are $\{A_0,h_0\}\subset (0,\infty)$ such that for all $h\in (0,h_0)$ and then all $x\in \mathbb{Z}_h^{++}$:
\begin{enumerate}[(i)]
\item\label{diffusion_zero_infinite_variation:one} $\left\vert\Delta_{W}^{(q)}(x,h)\right\vert\leq A_0\frac{h^{2-\epsilon}}{x}e^{\gamma x}$.
\item\label{diffusion_zero_infinite_variation:two} $\left\vert \Delta_{Z}^{(q)}(x,h)\right\vert\leq A_0 h^{2-\epsilon}e^{\gamma x}.$
\end{enumerate}
\end{proposition}
\begin{proof}
With respect to~\ref{diffusion_zero_infinite_variation:one}, we have as follows. Note that~\ref{W:a} in \eqref{eq:difference:Wqs} is $\frac{1}{x}O(h^\epsilon)$ by an integration by parts argument and \ref{delta1D} from Subsection~\ref{subsection:_the_difference_of_derivatives}. We do the same for~\ref{W:b}:
\begin{eqnarray*}
&&\frac{d}{ds}\left(e^{isx}\left(\frac{1}{A(s)}-\frac{1}{A^h(s)}\right)\right)=ixe^{isx}\left(\frac{1}{A(s)}-\frac{1}{A^h(s)}\right)+\\
&+&e^{isx}\left(\frac{A^2(s)(A^{h\prime}(s)-A'(s))+A'(s)(A(s)-A^h(s))(A(s)+A^h(s))}{A^2(s)A^h(s)^2}\right).
\end{eqnarray*}
Upon integration, one gets on the left-hand side a contribution of $\frac{1}{x}O(h^\epsilon)$ to \eqref{eq:difference:Wqs}, by coercivity \ref{(C)}. With regard to the rightmost quotient on the right-hand side, we obtain a contribution of order $\frac{1}{x}O(h^{2-\epsilon})$, as follows again by coercivity \ref{(C)}, Remark~\ref{remark:under_assumption_salient_etc}, \ref{delta1D} and the estimates \ref{delta2} and \ref{delta2D}. Remark that $\epsilon> 2-\epsilon$.
With respect to~\ref{diffusion_zero_infinite_variation:two}, we have as follows. \ref{Z:a} of \eqref{eq:difference:Zqs} is of order $O(h^\epsilon)$ by coercivity \ref{(C)}. Also, \ref{Z:b} is of order $O(h)$. Finally~\ref{Z:c} is of order $O(h^{2-\epsilon})$ immediately, with no need for an integration by parts.
\end{proof}
\subsection{A convergence result for the derivatives of ${W^{(q)}}$ ($\sigma^2>0$)}\label{subsection:functionals_of_scale_fncs_conv}
\begin{proposition}
Let $q\geq 0$, $\sigma^2>0$. Note that $W^{(q)}$ is then differentiable on $(0,\infty)$ \cite[Lemma 2.4]{kuznetsovkyprianourivero}. Moreover, for any $\gamma>\Phi(q)$, there exist $\{A_0,h_0\}\subset (0,\infty)$, such that for all $h\in (0,h_0)$ and then all $x\in \mathbb{Z}_h^{++}\backslash \{h\}$:
\begin{equation*}
\left\vert W^{(q)\prime}(x)-\frac{W^{(q)}_h(x)-W^{(q)}_h(x-2h)}{2h}\right\vert\leq A_0\frac{e^{\gamma x}}{x}\left(h+\zeta(h/2)+\xi(h/2)h\log(1/h)\right).
\end{equation*}
\end{proposition}
\begin{remark}
The case $\sigma^2=0$ appears much more difficult to analyze, since the balance between coercivity and the estimates of the differences of Laplace exponents (and their derivatives) worsens.
\end{remark}
\begin{proof}
First, integration by parts, monotone convergence and the fact that $W^{(q)}(0)=0$ \cite[p. 222, Lemma 8.6]{kyprianou} yield (for $\beta>\Phi(q)$): $\widehat{W^{(q)\prime}}(\beta)=\beta\widehat{W^{(q)}}(\beta)=\beta/(\psi(\beta)-q)$. Then analytic continuation, Laplace inversion and dominated convergence allow us to conclude, for any $x>0$, that: $$W^{(q)\prime}(x)=\frac{1}{2\pi}\int_{-\infty}^\infty\frac{\beta e^{\beta x}}{\psi(\beta)-q}ds.$$ On the other hand, it follows directly from Corollary~\ref{corollary:scale_fncs_laplace-transforms}, that for $h\in (0,h_\star)$ and then $x\in \mathbb{Z}_h^{++}$: $$\frac{W^{(q)}_h(x)-W^{(q)}_h(x-2h)}{2h}=\frac{1}{2\pi}\int_{-\pi/h}^{\pi/h}e^{\beta x}\left(\frac{e^{\beta h}-e^{-\beta h}}{2h}\right)\frac{ds}{\psi^h(\beta)-q}.$$
Now, it will be sufficient to estimate the following integrals:
\footnotesize
\begin{equation*}
\int_{(-\infty,\infty)\backslash [-\pi/h,\pi/h]}\beta e^{\beta x}A(s)^{-1}ds;\ \int_{-\pi/h}^{\pi/h}e^{\beta x}\left(\frac{e^{\beta h}-e^{-\beta h}}{2h}-\beta\right)A^h(s)^{-1}ds; \ \int_{-\pi/h}^{\pi/h}\beta e^{\beta x}\left(A(s)^{-1}-A^h(s)^{-1}\right)ds.
\end{equation*}
\normalsize
An integration by parts, coupled with coercivity \ref{(C)} and the at most linear growth of $A'$ \ref{delta1D}, establishes the first of these three integrals as being of order $\frac{1}{x}O(h)$. The same emerges as being true of the second integral, this time using the at most linear growth of $A^{h\prime}$ instead, but also \ref{lemma:technical:estimates:iv} and \ref{lemma:technical:estimates:rough:II} of Lemma~\ref{lemma:technical:estimates}. Finally, with respect to the third integral, again one performs an integration by parts, and then uses \ref{delta2}, coercivity \ref{(C)}, the decomposition \eqref{eq:finite_variation:zero} and \ref{delta2D}. The claim follows.
\end{proof}
\section{Numerical illustrations and concluding remarks}\label{section:numerical_illustrations}
\subsection{Numerical examples}
We illustrate our algorithm for computing $W$, described in Eq.~\eqref{eq:LinRecursion} of the Introduction, in two concrete examples, applying it to determine some relevant quantities arising in applied probability. The examples are chosen with two criteria in mind:
\begin{enumerate}
\item They are natural from the modeling perspective: the computation of ruin parameters in the classical Cram\'er-Lundberg model with log-normal jumps (Example~\ref{example:log-normal}), and of the L\'evy-Khintchine triplet of the limit law of a CBI process (Example~\ref{example:main}).
\item They do not possess a closed-form formula for the Laplace exponent of the spectrally negative L\'evy processes. Such examples arise often in practice, making it difficult to apply the standard algorithms for scale functions based on Laplace inversion. Our algorithm is well-suited for such applications.
\end{enumerate}
\begin{example}\label{example:log-normal}
A popular choice for the claim-size modeling in the Cram\'er-Lundberg surplus process is the log-normal distribution \cite[Paragraph I.2.b, Example 2.8]{asmussen}. Fixing the values of the various parameters, consider the spectrally negative L\'evy process $X$ having $\sigma^2=0$; $\lambda(dy)=\mathbbm{1}_{(-\infty,0)}(y)\exp(-(\log(-y))^2/2)/(\sqrt{2\pi}(-y))dy$; and (with $V=0$) $\mu=5$ (this satisfies the security loading condition \cite[Section~1.2]{gerber_shiu}). Remark that the log-normal density has fat tails and is not completely monotone.
We complement the computation of $W$ by applying it to the calculation of the density $k$ of the deficit at ruin, on the event that $X$ ventures strictly below $0$ before going strictly above the level $a=5$, conditioned on $X_0=x=2$: $$\mathsf E_x[-X_{\tau_0^-}\in dy,\tau_0^-<\tau_a^+]=k(y)dy$$ ($\tau_0^-$, respectively $\tau_a^+$, being the first entrance time of $X$ to $(-\infty,0)$, respectively $(a,\infty)$). Indeed, $k(y)$ may be expressed as \cite[Theorem 5.5]{gerber_shiu} $k(y)=\int_0^af(z+y)\frac{W(x)W(a-z)-W(a)W(x-z)}{W(a)}dz$, where $f(y):=\exp(-(\log(y))^2/2)/(\sqrt{2\pi}y)$, $y\in (0,+\infty)$. We approximate the integral defining $k$ by the discrete sum $k_h$, given for $y\in (0,\infty)$ as follows: \footnotesize $$k_h(y):=h\left[f(y+a)\frac{W_h(x)W_h(0)}{2W_h(a)}+\sum_{k=1}^{a/h-1}f(kh+y)W_h(a-kh)\frac{W_h(x)}{W_h(a)}-\sum_{k=1}^{x/h-1}W_h(x-kh)f(kh+y)-\frac{W_h(0)f(x+y)}{2}\right].$$ \normalsize Results are reported in Figure~\ref{figure:log-normal}.
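For concreteness, once the grid values $W_h(kh)$, $0\le k\le a/h$, have been produced by the recursion \eqref{eq:LinRecursion}, the sum $k_h$ is immediate to evaluate. The following Python fragment is a minimal illustrative sketch of this evaluation (it takes the array of values of $W_h$ as given, and it is not the code used to generate Figure~\ref{figure:log-normal}):
\begin{verbatim}
# Sketch: evaluate k_h(y) from precomputed grid values W[k] ~ W_h(k*h).
import numpy as np

def f(y):  # log-normal claim-size density of this example
    return np.exp(-np.log(y) ** 2 / 2) / (np.sqrt(2 * np.pi) * y)

def k_h(y, W, h, x, a):
    ix, ia = round(x / h), round(a / h)   # x, a assumed grid points
    total = f(y + a) * W[ix] * W[0] / (2 * W[ia])
    for k in range(1, ia):
        total += f(k * h + y) * W[ia - k] * W[ix] / W[ia]
    for k in range(1, ix):
        total -= W[ix - k] * f(k * h + y)
    total -= W[0] * f(x + y) / 2
    return h * total
\end{verbatim}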
\begin{figure}
\includegraphics[width=\textwidth]{Lognormal_zussamen.pdf}
\caption{The scale function $W$ and the density $k$ of the deficit at ruin on the event $\{\tau_0^-<\tau_5^+\}$, for the log-normal Cram\'er-Lundberg process. The relative errors are consistent with the linear order of convergence predicted by Theorem~\ref{theorem:rates_for_scale_functions}. See Example~\ref{example:log-normal} for details.}\label{figure:log-normal}
\end{figure}
\end{example}
\begin{example}\label{example:main}
We take $\sigma^2=0$; the L\'evy measure $\lambda=\lambda_a+\lambda_c$ has atomic part $\lambda_a=\frac{1}{2}\left(\delta_{-1}+\delta_{-2}\right)$, whilst the density of its absolutely continuous part $\lambda_c(dy)=l(y)dy$ is given by: \footnotesize$$l(y)=\frac{3}{2(-y)^{5/2}}\mathbbm{1}_{[-1,0)}(y)+\frac{1}{2(-y-1)^{1/2}}\mathbbm{1}_{[-2,-1)}(y)+\left(\frac{e^{\cos(y)}(3+y\sin(y))}{(-y)^4}+\frac{e}{(-y)^3}\right)\mathbbm{1}_{(-\infty,-1)}(y),\ y\in \mathbb{R};$$ \normalsize and (with $V=1$) $\mu=15$. Remark
that the case is extreme: there are two atoms, while the density is stable-like at $0$, has a fat tail at $-\infty$, and exhibits a discontinuity (indeed, a pole); in particular, it is not completely monotone. Furthermore, there is no Gaussian component, and the sample paths of the process have infinite variation.
We compute $W$ for the L\'evy process $X$ having the above characteristic triplet, and complement this with the following application. Let furthermore $X^F$ be an independent L\'evy subordinator, given by $X^F_t=t+Z_t$, $t\in [0,\infty)$, where $Z$ is a compound Poisson process with L\'evy measure $m(dy)=e^{-y}\mathbbm{1}_{(0,\infty)}(y)dy$. Denote the dual $-X$ of $X$ by $X^R$. To the pair $(X^F,X^R)$ there is associated, in a canonical way, (the law of) a (conservative) CBI process \cite{kawazu_watanabe}. The latter process converges to a limit distribution $L$, as time goes to infinity, since $\psi'(0+)>0$ \cite[Theorem~2.6(c)]{keller_mijatovic} and since further the log-moment of $m$ away from zero is finite \cite[Corollary~2.8]{keller_mijatovic}. Moreover, the limit $L$ is infinitely divisible, and: $$-\log\int_0^\infty e^{-ux}dL(x)=u\gamma-\int_{(0,\infty)}(e^{-ux}-1)\frac{k(x)}{x}dx,$$ where $\gamma=bW(0)$ vanishes, whilst: $$k(x)=bW'(x+)+\int_{(0,\infty)}[W(x)-W(x-\xi)]m(d\xi),$$ where $b=1$ and $m$ are the drift, respectively the L\'evy measure, of $X^F$ \cite[Theorem~3.1]{keller_mijatovic}. We compute $k$ via approximating, for $x\in \mathbb{Z}_h^{++}$, $k(x)$ by $k_h(x)$: $$k_h(x):=b\frac{W_h(x)-W_h(x-h)}{h}+W_h(x-h)m[h/2,\infty)-\sum_{k=1}^{x/h-1}W_h(x-kh-h)m[kh-h/2,kh+h/2).$$ Results are reported in Figure~\ref{figure:example:main}.
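As with the previous example, the evaluation of $k_h$ is elementary once the grid values of $W_h$ are available. A hedged Python sketch follows (here $m[a,b)=e^{-a}-e^{-b}$ for the exponential L\'evy measure of the compound Poisson part of $X^F$, $b=1$, and the values $W[k]\approx W_h(kh)$ are assumed given; it is an illustration, not the code behind Figure~\ref{figure:example:main}):
\begin{verbatim}
# Sketch: evaluate k_h(x) of this example from grid values W[k] ~ W_h(k*h).
import numpy as np

def k_h_cbi(x, W, h, b=1.0):
    ix = round(x / h)                    # x assumed to be a grid point
    val = b * (W[ix] - W[ix - 1]) / h    # drift term, approximates b*W'(x+)
    val += W[ix - 1] * np.exp(-h / 2)    # m[h/2, infinity) = exp(-h/2)
    for k in range(1, ix):
        mass = np.exp(-(k * h - h / 2)) - np.exp(-(k * h + h / 2))
        val -= W[ix - k - 1] * mass      # m[kh - h/2, kh + h/2)
    return val
\end{verbatim}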
\end{example}
\begin{figure}
\includegraphics[width=\textwidth]{Furtherr_zussamen_X=10.pdf}
\caption{The scale function $W$ for the spectrally negative L\'evy process $X$, as described in Example~\ref{example:main}; followed by the $k$-function $k$ of the weak limit (as time goes to infinity) of the CBI process, whose spectrally positive component is the dual $X^R=-X$ of $X$, whilst the L\'evy subordinator part is the sum of a unit drift and a compound Poisson process of unit intensity and mean one exponential jumps. The relative errors are consistent with the $O(\sqrt{h})$ order of convergence predicted by Theorem~\ref{theorem:rates_for_scale_functions}. See Example~\ref{example:main} for details.
}\label{figure:example:main}
\end{figure}
Let us also mention that we have tested our algorithm on
very simple processes with completely monotone L\'evy densities \cite{albrecher} (Brownian motion with drift; positive drift minus a compound Poisson subordinator with exponential jumps; spectrally negative stable L\'evy process --- see Appendix~\ref{appendix:further_examples}), and the results were in close agreement with the explicit formulae which are available for the scale functions in these cases.
\subsection{Concluding remarks}
(1) Computational cost. To compute $W^{(q)}_h(x)$ or $Z^{(q)}_h(x)$ for some $x\in \mathbb{Z}_h$ one effects recursions \eqref{equation:recursion:Wq} and \eqref{equation:recursion:Zq} (as applied to $Y=X^h/h$, $h\in (0,h_\star)$), at a cost of $O((x/h)^2)$ operations (assuming given the parameters of $X^h$).
\noindent
(2) Quantity $Z^{(q)}_h(x)$ may be obtained from the values of $W^{(q)}_h$ on
$[0,x]\cap \mathbb{Z}_h$ at a cost of order $O(x/h)$ operations by a nonnegative summation
(see Proposition~\ref{proposition:scale_functions_for_USF}).
\noindent
(3) The computation of the functions $W^{(q)}$, $q\geq 0$, can be reduced, under an exponential change of measure, to the computation of $W$ \cite[p. 222, Lemma~8.4]{kyprianou} for a process having $\Phi(0)=0$ \cite{surya}. Under such an exponential tilting $W_h(x)$ will have a temperate growth \cite[Proposition 4.8(ii)]{vidmar:fluctuation_theory}, since $\Phi^h(0)\to \Phi(0)$. \noindent
Finally, in comparison to the Laplace inversion methods discussed in \cite[Section~5.6]{kuznetsovkyprianourivero}, we note:
\noindent (1) Regarding \emph{only} the efficiency of our algorithm (i.e. how costly it is to achieve a given precision): Firstly, that Filon's method (with Fast Fourier Transform) appears to outperform ours when an explicit formula for the Laplace exponent $\psi$ is known. Secondly, that our method is largely insensitive to the degree of smoothness of the target scale function -- and can match or outperform Euler's, the Gaver-Stehfest and the fixed Talbot's method in regimes when the scale function is less smooth, even as $\psi$ remains readily available (such, at least, was the case for Sets~3 and~4 of \cite[pp. 177-178]{kuznetsovkyprianourivero} --- see Appendix~\ref{appendix:further_examples}). Thirdly, that when $\psi$ is not given in terms of elementary/special functions, any Laplace inversion algorithm, by its very nature, must resort to further numerical evaluations of $\psi$ (at complex values of its argument), which hinders its efficiency and makes it hard to control the error. Indeed, such evaluations of $\psi$ appear disadvantageous, as compared to the more innocuous operations required to compute the coefficients present in our recursion.
\noindent (2) In our method there is only one spatial discretization parameter $h$ to vary. On the other hand, Filon's method (which, when coupled with Fast Fourier Transform, appears the most efficient of the Laplace inversion techniques), has additionally a cutoff parameter in the (complex) Bromwich integral.
\bibliographystyle{plain}
\section{Introduction}
\IEEEPARstart{W}{ireless} sensor networks (WSNs) have drawn prominent research interests from both academia and industry in recent years and have been envisioned as key technologies for Internet-of-Things (IoTs) \cite{Razzaque2016Middleware}.
With a massive number of sensors deployed in a network, collecting and reporting diverse environmental data to fusion centers (FCs), WSNs show tremendous potential in a variety of applications, including security and battlefield surveillance, disaster alert, industrial automation, traffic management, smart healthcare and homes, etc \cite{Vo20185G}.
However, delivering sensing data over the air is prone to eavesdropping attacks due to the openness of wireless channels.
Moreover, it is challenging to employ key-based cryptographic techniques for WSNs, where the distribution, maintenance, and management of secret keys are expensive under dynamic and large-scale topologies.
In general, sensors are incapable of implementing complicated cryptographic algorithms due to low energy and computing power \cite{Chen2009Sensor}.
In this context, \emph{physical layer security} \cite{Le2021Physical}--\cite{Yang2015Safeguarding} has emerged as an appealing low-complexity paradigm to realize secure transmissions by exploiting wireless media characteristics, and it promises to be a powerful supplement or even alternative to the cryptographic mechanisms for WSNs.
\subsection{Background and Motivations}
The research of physical layer security dates back to as early as 1975 when Wyner postulated the information-theoretic foundation in his ground-breaking treatise \cite{Wyner1975The}.
Wyner introduced the degraded wiretap channel model and showed that the reliability and secrecy of information delivery can be concurrently guaranteed with appropriate secrecy channel coding.
This pioneering work has motivated substantial endeavors invested in developing physical layer security during the past decade, from the viewpoints of both information theory and signal processing.
Early research on physical layer security was mostly concentrated on point-to-point communication links.
Fundamental information-theoretic limits and optimal secrecy signaling schemes have been investigated by exploiting the channel state information (CSI) of both the main channel (spanning from transmitter to the intended receiver) and the wiretap channel (spanning from transmitter to the undesired receiver, or eavesdropper).
When the eavesdropper's CSI is completely unavailable, Goel and Negi \cite{Goel2008Guaranteeing} proposed to radiate controllable artificial noise or jamming signals along with confidential information, through either centralized multiple antennas or distributed cooperative jammers, to degrade the wiretap channel while without impairing the main channel.
With no need for the eavesdropper's CSI, the idea of artificial noise or cooperative jamming has opened a new avenue for enhancing physical layer security and has sparked a wave of innovation, e.g., see \cite{Zheng2015Multi}--\cite{Wang2020Energy}.
Different from the point-to-point scenarios, secure communications in large-scale wireless networks suffer from severe interference caused by a large amount of concurrent transmissions, and therefore the security performance depends heavily on the network geometry and the locations of nodes in the network.
Against this background, stochastic geometry theory has offered powerful tools to study large-scale wireless networks from a statistical point of view by modeling node positions as some spatial distributions like Poisson point process (PPP) \cite{Haenggi2009Stochastic}, and the research on physical layer security under a stochastic geometry framework has been extensively carried out recently.
For example, for large-scale ad hoc networks, Zhou \emph{et al.} \cite{Zhou2011Throughput} and Zhang \emph{et al.} \cite{Zhang2013Enhancing} respectively explored single- and multi-antenna secure transmissions and identified the tradeoff between reliability and secrecy against eavesdropping attacks.
Zheng \emph{et al.} \cite{Zheng2017Safeguarding,Zheng2017Physical} explored the great benefit of full-duplex receiver jamming in enhancing the network-wide secrecy throughput and energy efficiency.
For multi-cell cellular networks, Wang \emph{et al.} \cite{Wang2013Physical} investigated the secure downlink transmissions and discussed the impact of cell association and the location information of mobile users. Geraci \emph{et al.} \cite{Geraci2014Physical} further evaluated the achievable secrecy rate with regularized channel inversion precoding under a massive multiple-input multiple-output (MIMO) system. Wang \emph{et al.} \cite{Wang2016Physical} comprehensively analyzed the network-wide secrecy for a multi-tier heterogeneous cellular network, where a threshold-based mobile association policy was proposed to balance link quality and secrecy.
Wang \emph{et al.} \cite{Wang2020Physical} further applied the artificial noise aided physical layer security to the cellular vehicle-to-everything (C-V2X) networks.
Interested readers are referred to \cite{Wang2016Physical_book} for a more thorough understanding of the physical layer security in random wireless networks under the stochastic geometry framework.
As mentioned previously, physical layer security is particularly important for WSNs, since employing traditional cryptographic mechanisms is rather costly and difficult. Recently, physical layer security has been advocated to protect WSN communications from eavesdropping.
The majority of existing literature on physical layer security in WSNs has been concentrated on deterministic network geometry, i.e., ignoring the uncertainty of nodes' locations or large-scale path loss \cite{Marano2009Distributed}--\cite{Barcelo-Llado2014Amplify}.
Given that sensors are generally randomly scattered, Lee \emph{et al.} \cite{Lee2013Distributed} first introduced the concept of distributed network secrecy and quantified the secrecy throughput and energy consumption for a multilevel WSN using tools from stochastic geometry.
Deng \emph{et al.} \cite{Deng2016Physical} further analyzed the average secrecy rate for a three-tier WSN.
However, these works only considered access technologies with orthogonal resource blocks (RBs).
Random multiple access has the virtue of being highly convenient and flexible without requiring complicated control scheduling, which makes it well-suited for large-scale WSNs, particularly when the system load is heavy.
Although studies of secure multiple access have been reported for various wiretap channel models, the results cannot be directly applied for large-scale WSNs with stochastic network geometry.
In recent years, the physical layer security of non-orthogonal multiple access (NOMA) for large-scale networks has received considerable attention, e.g., \cite{Liu2017Enhancing}--\cite{Zhang2018Enhancing}, but unfortunately, at present researchers have mainly focused on two-user pairing sharing the same RB.
In fact, random multiple access with non-orthogonal RBs will significantly hamper the analysis of channel statistics for large-scale WSNs, since we have to deal with the combined effect of channel fading, the random locations of external interfering sensors, as well as the uncertainty of the successive interference cancellation (SIC) based decoding order for internal sensors belonging to the same FC.
The intractability of the analysis will no doubt make it challenging to design schemes that optimize the network security performance in terms of, e.g., the sum secrecy throughput. Our work aims to provide an analytical framework and design schemes to address the aforementioned problem.
\subsection{Our Work and Contributions}
In this paper, we study physical layer security for a large-scale WSN consisting of randomly deployed sensors and FCs, coexisting with randomly distributed eavesdroppers attempting to intercept the data broadcast by the sensors.
We establish a joint analysis and design framework to evaluate the transmission reliability and secrecy and optimize the network-wide performance in terms of the sum secrecy throughput.
Our main contributions are summarized as follows:
\begin{itemize}
\item
We propose a random multiple access strategy which associates each sensor to its nearest FC, and each FC randomly selects a certain number of sensors for data acquisition.
We then propose an uncoordinated jamming scheme to combat eavesdropping where those unscheduled sensors, who are not chosen for data collection at the current time slot, independently radiate jamming signals with a certain probability.
\item
We assume that each FC adopts zero-forcing SIC (ZF-SIC) to decode the multiple streams of the scheduled sensors, where the decoding order is determined according to their distances to the associated FC.
We derive new closed-form expressions for the connection outage probability of a typical FC, leveraging tools from the stochastic geometry theory and order statistics. We also provide analytical expressions for the secrecy outage probability of the typical FC, assuming that eavesdroppers employ the minimum mean square error (MMSE) receiver to demodulate signals and have a powerful multi-user detection capability.
\item
We formulate a problem of maximizing the sum secrecy throughput of the typical FC, imposing both reliability and secrecy constraints on each scheduled sensor. We jointly design the optimal parameters, including the code rates of the scheduled sensors and the jamming probability of the unscheduled sensors.
We also provide a computational-convenient sub-optimal solution by forcing each scheduled sensor to attain a target high level of reliability. We derive closed-form expressions for the optimal code rates, and we prove that the sum secrecy throughput is a quasi-concave function of the jamming probability, where the optimal jamming probability can be efficiently calculated via the bisection method.
\end{itemize}
\subsection{Organization and Notations}
The remainder of this paper is organized as follows.
Section II describes the random multiple access WSN and the optimization problem of interest.
Section III analyzes the connection and secrecy outage probabilities of the secure transmission of the scheduled sensors. Section IV details the sum secrecy throughput maximization, with both optimal and sub-optimal solutions provided. Section V concludes this paper.
\emph{Notations}: Bold uppercase (lowercase) letters denote matrices (column vectors). $|\cdot|$, $\|\cdot\|$, $(\cdot)^{\dagger}$, $(\cdot)^{\rm T}$, $\ln(\cdot)$, $\mathbb{P}\{\cdot\}$, and $\mathbb{E}_z[\cdot]$ denote the absolute value, Euclidean norm, conjugate, transpose, natural logarithm, probability, and the expectation over a random variable $z$, respectively.
$f_z(\cdot)$ and $\mathcal{F}_z(\cdot)$ denote the probability density function (PDF) and cumulative distribution function (CDF) of $z$, respectively.
$\mathcal{CN}(\mu,v)$, ${\rm Exp}(\lambda)$, and ${\rm Gamma}(N,\lambda)$ denote the circularly symmetric complex Gaussian distribution with mean $\mu$ and variance $v$, the exponential distribution with parameter $\lambda$, and the gamma distribution with parameters $N$ and $\lambda$, respectively. $\mathbb{R}^{m\times n}$ and $\mathbb{C}^{m\times n}$ denote the $m\times n$ real and complex number
domains, respectively.
In addition, $\binom{n}{m}\triangleq\frac{n!}{m!(n-m)!}$ for integers $n>m\ge0$.
\section{Network Model and Problem Description}
We consider the issue of secure wireless transmissions for a large-scale WSN as illustrated in Fig. \ref{figSys}, where a large number of sensor nodes continually monitor the surrounding environment and report their observations to the FCs which are responsible for decision making, whilst the ongoing data uploading is overheard by eavesdroppers hiding in the network. We assume that the sensors, FCs, and eavesdroppers are all spatially randomly positioned, and their locations are modeled as independent homogeneous PPPs $\Phi_s$, $\Phi_c$, and $\Phi_e$ in a two-dimensional plane $\mathbb{R}^2$, with spatial densities $\lambda_s$, $\lambda_c$, and $\lambda_e$, respectively.\footnote{Throughout this paper, we have a slight abuse with the notation $\Phi$, which is used to represent the set of nodes' locations as well as the nodes themselves.}
\begin{figure}[!ht]
\centering
\includegraphics[width=3.5in]{SystemModel.eps}
\caption{Illustration of a large-scale security-oriented WSN. A great quantity of FCs are deployed in the network (three FCs in the figure), each of which collects different categories of environmental information from a certain number (three sensor nodes as a group for the same FC in the figure) of nearby sensor nodes in the presence of numerous randomly located eavesdroppers (six eavesdroppers in the figure).}
\label{figSys}
\end{figure}
\subsection{Channel Model}
We consider that the sensors are single-antenna devices due to hardware restrictions, and the FCs and eavesdroppers are equipped with $M_c$ and $M_e$ antennas, respectively, for achieving signal enhancement, interference suppression, etc.
All the wireless channels are modeled by the combination of a frequency flat Rayleigh fading and a standard distance-based path loss.
Hence, the channels from a sensor located at $x$ to an FC located at $y$ and to an eavesdropper located at $z$ are characterized as $\bm h_{y,x} r_{y,x}^{-\alpha/2}$ and $\bm g_{z,x} r_{z,x}^{-\alpha/2}$, respectively. To be specific, $\bm h_{y,x}\in\mathbb{C}^{M_c\times 1}$ and $\bm g_{z,x}\in\mathbb{C}^{M_e\times 1}$ represent the small-scale fading channel vectors with independent and identically distributed (i.i.d.) entries obeying the distribution $\mathcal{CN}(0,1)$, $r_{y,x}$ and $r_{z,x}$ denote the corresponding Euclidean distances, and $\alpha>2$ is the path-loss exponent.
\subsection{Random Multiple Access}
We consider random multiple access scheduling, where each sensor is associated with its nearest FC, and each FC randomly chooses a set of $K<M_c$ sensors for data gathering at a given radio resource. Define $\mathcal{K}\triangleq\{1,2,\cdots,K\}$. Note that due to the ultra-dense deployment of sensors, we consider a plausible scenario where the density of sensors is much higher than $K$ times the density of FCs, i.e., $\lambda_s\gg K\lambda_c$, and there always exist more than $K$ sensors assigned to the same FC.
Under this circumstance, all the sensors in a specific time slot can be naturally divided into two thinned PPPs, namely, the scheduled sensors $\Phi_{a}$ with density $\lambda_a=K\lambda_c$ which are communicating with their associated FCs and the unscheduled sensors $\Phi_{i}$ with density $\lambda_i=\lambda_s - K\lambda_c$ which remain silent, respectively.
\subsection{Uncoordinated Random Jamming}
We assume that each FC knows perfectly the instantaneous CSI regarding its $K$ scheduled sensors, whilst only has the statistical CSI of the other sensors and of the eavesdroppers.\footnote{Theoretically, an FC can obtain the perfect CSI of its scheduled sensors via channel estimation by letting them transmit orthogonal training sequences simultaneously.}
In order to combat eavesdropping effectively while avoiding severe interference at the FCs, an uncoordinated random jamming scheme is proposed, in which each unscheduled sensor radiates jamming signals with probability $\rho\in[0,1]$.
By doing this, the distribution of the jamming sensors follows a PPP $\Phi_j$ with density $\lambda_j = \rho\lambda_i$.
We emphasize that the proposed random jamming scheme is suitable for the energy-limited sensor networks owing to its low-level collaboration. This is fundamentally different from those higher-level collaboration schemes such as coordinated ZF jamming, which will cause high overhead and implementation complexity due to information sharing, beamformer design, and synchronization.
Moreover, the jamming probability $\rho$ is carefully designed offline to maximize the network security performance, as will be discussed in Sec. IV, and hence our scheme strikes a good balance between network performance and complexity.
\subsection{Multi-Stream Decoding}
At the FC side, the ZF-SIC method is employed to separate the multiple data streams received from its associated $K$ sensors.\footnote{ZF is a typical linear filter for multi-user communication systems, and ZF-SIC is commonly used in NOMA systems to achieve SIC due to its ease of implementation and low computational complexity \cite{Jiang2017ZF,Jiang2018SIC}. In this sense, ZF-SIC is more suitable for the large-scale random multiple access WSN compared with more advanced but complicated methods, e.g., MMSE-SIC.}
Theoretically, the SIC order should be sorted according to the instantaneous received signal strengths from the strongest to the weakest.
However, given that the impact of large-scale path loss is generally more dominant on the channel impairment and is more stable compared with the small-scale channel fading, we schedule the SIC order based on the sensors' distances to the FC from the nearest to the farthest.
To be more specific, the procedure of ZF-SIC can be described as follows: 1) first decode the signal received from the nearest sensor by removing the aggregated signals received from the $K-1$ farther sensors, i.e., by projecting the received signal onto the null space of the instantaneous channels of these farther sensors, 2) then cancel the decoded signal from the composite received signals, and 3) successively decode the signal from the second nearest sensor in a similar way, and so on.
After the ZF-SIC operation, when decoding the signal from any specific sensor, the FC can successfully eliminate the interfering signals generated by the other $K-1$ sensors.
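The weight construction underlying this procedure (formalized as $\bm w_k$ in Sec. III) can be illustrated numerically. The sketch below is our own illustration rather than a full link-level simulator; it uses Hermitian inner products, which is equivalent, up to conjugation, to the transpose-plus-conjugation convention adopted later:
\begin{verbatim}
# Sketch: distance-ordered ZF-SIC weights; w_k nulls all sensors
# decoded after sensor k and does MRC inside the remaining subspace.
import numpy as np

rng = np.random.default_rng(0)
Mc, K = 8, 3
H = (rng.standard_normal((Mc, K))
     + 1j * rng.standard_normal((Mc, K))) / np.sqrt(2)
# columns: h_{o,s_1}, ..., h_{o,s_K}, sorted from nearest to farthest

W = np.zeros((Mc, K), dtype=complex)
for k in range(K):
    farther = H[:, k + 1:]               # channels still to be decoded
    if farther.size:
        _, _, Vh = np.linalg.svd(farther.conj().T)
        U = Vh.conj().T[:, farther.shape[1]:]   # null-space basis
    else:
        U = np.eye(Mc, dtype=complex)
    v = U @ (U.conj().T @ H[:, k])       # project h_k, then MRC
    W[:, k] = v / np.linalg.norm(v)

# entries (k, j) with j > k vanish: later sensors are nulled
print(np.round(np.abs(W.conj().T @ H), 6))
\end{verbatim}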
\subsection{Performance Metrics and Optimization Problem}
In order to secure the data transmission, Wyner's secrecy-preserving channel code, generally known as the wiretap code, is employed.
In the wiretap code, the rates of the transmitted codewords and the embedded confidential messages are represented by the codeword rate $R_t$ and the secrecy rate $R_s$, respectively.
The rate redundancy $R_e \triangleq R_t - R_s$ is intentionally introduced for guaranteeing secrecy against eavesdropping attacks.
If the capacity of the main channel falls below the codeword rate $R_t$, the desired receiver cannot recover the codeword correctly, which is regarded as connection outage, and the probability that this event happens is termed \emph{connection outage probability} (COP).
If the capacity of the wiretap channel exceeds the rate redundancy $R_e$, perfect secrecy is not possible, which is considered to be secrecy outage, and the probability of this event occurring is referred to as \emph{secrecy outage probability} (SOP).
Without loss of generality, we focus on a typical FC which is placed at the origin $o$ of the polar coordinate, and denote its $K$ scheduled sensors as $S_1,S_2,\cdots,S_K$ with an ascending sort order of their distances.
The codeword rate and the secrecy rate of sensor $S_k$ are denoted as $R_{t,k}$ and $R_{s,k}$, respectively, and the corresponding rate redundancy is given by $R_{e,k} = R_{t,k}- R_{s,k}$.
The COP of $S_k$ can be defined as
\begin{equation}\label{cop_def}
p_{co,k} \triangleq \mathbb{P}\{{\rm SINR}_{o,k}<\beta_{t,k}\}, \quad \forall k\in\mathcal{K},
\end{equation}
where ${\rm SINR}_{o,k}$ denotes the instantaneous signal-to-interference-plus-noise ratio (SINR) of the typical FC for resolving the signal from $S_k$, and $\beta_{t,k} \triangleq 2^{R_{t,k}}-1$ is the threshold SINR for connection outage.
We consider the wiretap scenario in which the eavesdroppers do not collude with each other and decode messages individually.
In this case, a secrecy outage event takes place if confidential information is leaked to the most threatening eavesdropper, i.e., the one with the highest SINR.
Therefore, the SOP of $S_k$ can be defined as
\begin{equation}\label{sop_def}
p_{so,k} \triangleq \mathbb{P}\left\{\max_{e\in\Phi_e}{\rm SINR}_{e,k}>\beta_{e,k}\right\}, \quad \forall k\in\mathcal{K},
\end{equation}
where ${\rm SINR}_{e,k}$ denotes the instantaneous SINR of the eavesdropper located at $e\in\Phi_e$ for intercepting the data from $S_k$, and $\beta_{e,k} \triangleq 2^{R_{e,k}}-1$ is the threshold SINR for secrecy outage.
This paper uses the metric \emph{sum secrecy
throughput} to assess the capacity of multi-access secure transmissions from the viewpoint of secrecy outage, where the sum secrecy throughput of a typical FC is defined as the total average successfully received confidential information bits from its $K$ scheduled sensors per second per Hertz per channel use subject to certain secrecy constraints.
Formally, the sum secrecy throughput can be formulated as
\begin{equation}\label{sst_def}
\mathcal{T} = \sum_{k=1}^K R_{s,k}\left(1-p_{co,k}\right), ~ {\rm s.t.}~ p_{so,k}\le \epsilon, ~ \forall k\in\mathcal{K},
\end{equation}
where $\epsilon\in[0,1]$ is a prescribed threshold representing the maximal tolerable SOP.
In this paper, we aim to maximize the sum secrecy throughput $\mathcal{T} $ via jointly designing the wiretap code rates for each scheduled sensor (including the codeword rate $R_{t,k}$ and the secrecy rate $R_{s,k}$) and the jamming probability $\rho$ for the unscheduled sensors.
Before proceeding to the optimization problem, we will first derive analytical expressions for the COP $p_{co,k}$ and SOP $p_{so,k}$ for $k\in\mathcal{K}$ in the following section.
\section{Analyses of COP and SOP}
This section evaluates the reliability and secrecy performance of the large-scale WSN under investigation with random multiple access. Specifically, we will analyze in detail the COP $p_{co,k}$ and SOP $p_{so,k}$ of the secure transmission from the $k$-th nearest sensor $S_k$ to the typical FC located at the origin $o$, utilizing the stochastic geometry theory and order statistics.
For ease of notation, we define $\delta\triangleq{2}/{\alpha}$ and $\phi \triangleq \pi \Gamma(1+\delta) \Gamma(1-\delta)$ throughout the paper.
\subsection{General Results for COP}
Denote the locations of the $K$ sensors $S_k$ scheduled by the typical FC as $s_k$ for $k\in\mathcal{K}$, with an ascending order of their distances to the FC, $L_{1}\leq L_{2}\leq\cdots\leq L_{K}$.
Note that due to random multiple access and SIC order scheduling, the ordered distance $L_k$ is a random variable, the statistics of which is characterized by the following lemma.
\begin{lemma}\label{lemma_lk}
The PDF of the ordered distance $L_k$ from the typical FC to the $k$-th nearest sensor is given by
\begin{equation}\label{lk_pdf}
f_{L_{k}}(r)=2k \binom{K}{k}\sum_{l=0}^{k-1}\binom{k-1}{l}(-1)^l\pi \lambda_c r e^{-\pi\lambda_c r^2(K-k+l+1)}.
\end{equation}
\end{lemma}
\begin{proof}
The PDF of $L_k$ follows from order statistics \cite{David2003Order},
\begin{equation}
f_{L_{k}}(r)=k\binom{K}{k} \mathcal{F}_{L}(r)^{k-1}\left[1-\mathcal{F}_{L}(r)\right]^{K-k} f_{L}(r),
\end{equation}
where $f_{L}(r)=2\pi r \lambda_ce^{-\pi\lambda_cr^2}$ and $\mathcal{F}_{L}(r)=1-e^{-\pi\lambda_cr^2}$ are the PDF and CDF of the unordered distance $L$ from a sensor to its nearest FC, respectively.
\end{proof}
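Lemma \ref{lemma_lk} can be verified by a quick Monte-Carlo experiment. The following sketch (parameter values are our own, for illustration only) samples $K$ i.i.d.\ copies of the unordered distance via the inverse CDF and compares the empirical mean of the $k$-th order statistic with the mean implied by \eqref{lk_pdf}:
\begin{verbatim}
# Sketch: Monte-Carlo check of the ordered-distance PDF in Lemma 1.
import numpy as np
from math import comb, pi

lam_c, K, k, N = 0.01, 4, 2, 200_000
U = np.random.default_rng(1).uniform(size=(N, K))
# inverse-CDF sampling of the unordered distance, F(r)=1-exp(-pi*lam*r^2)
L = np.sort(np.sqrt(-np.log(U) / (pi * lam_c)), axis=1)[:, k - 1]

def f_Lk(r):
    s = sum(comb(k - 1, l) * (-1) ** l
            * np.exp(-pi * lam_c * r ** 2 * (K - k + l + 1))
            for l in range(k))
    return 2 * k * comb(K, k) * pi * lam_c * r * s

r = np.linspace(1e-3, float(L.max()), 4000)
print(L.mean(), float(np.sum(r * f_Lk(r)) * (r[1] - r[0])))
\end{verbatim}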
According to the ZF-SIC decoding described in Sec. II-B, the instantaneous SINR of $S_k$ can be formulated as
\begin{equation}\label{sinr_ok}
{\rm SINR}_{o,k} = \frac{P_a|\bm w_{k}^{\rm T} \bm h_{o,s_k}|^2L_{k}^{-\alpha}}
{I_a+I_j+\omega},
\end{equation}
where $I_a=\sum_{x\in\Phi_a\setminus o}P_a|\bm w_{k}^{\rm T}\bm h_{o,x}|^2r_{o,x}^{-\alpha}$ denotes the interference generated by those sensors scheduled by the FCs other than the typical FC,
$I_j = \sum_{y\in\Phi_j} P_j|\bm w_{k}^{\rm T}\bm h_{o,y}|^2r_{o,y}^{-\alpha}$ denotes the power of the aggregated jamming signal from the unscheduled sensors, with $P_a$ and $P_j$ being the transmit power of the information-bearing signals and the jamming signals, respectively, and $\omega$ denotes the power of the receiver noise.
Here, $\bm w_{k} = \frac{\bm U_k^{\dagger}\bm U_k^{\rm T}\bm h_{o,s_k}^{\dagger}}{\|\bm U_k^{\rm T}\bm h_{o,s_k}^{\dagger}\|}$ denotes the weight vector designed for the $k$-th sensor as per the ZF-MRC criterion, where $\bm U_k\in\mathbb{C}^{M_c\times M_k}$, with $M_k \triangleq M_c - K + k$, is the projection matrix onto the null space of the matrix $[\bm h_{{o,s_{k+1}}}, \cdots,\bm h_{o,s_K}]$ such that $\bm w_{k}^{\rm T}\bm h_{o,s_j}=0$ for $j>k$. Note that the columns of $\bm U_k$ constitute an orthogonal basis, and hence $|\bm w_{k}^{\rm T} \bm h_{o,s_k}|^2= \|\bm U_k^{\rm T}\bm h_{o,s_k}^{\dagger}\|^2$ and $|\bm w_{k}^{\rm T}\bm h_{o,x}|^2$ obey the gamma distribution ${\rm{Gamma}}(M_k,1)$ and the exponential distribution ${\rm Exp}(1)$, respectively.
The COP of sensor $S_k$ is defined in \eqref{cop_def} with ${\rm SINR}_{o,k}$ given in \eqref{sinr_ok}.
Note that the COP is affected by various uncertainties, including fading channels, node locations, as well as the decoding order.
In the following proposition, we provide an expression for the exact COP.
\begin{proposition}\label{proposition_cop_general}
The COP of the secure transmission from the $k$-th nearest sensor $S_k$ to the typical FC is given by
\begin{align}\label{cop_general}
& p_{co,k}= 1 -\pi\lambda_ck\binom{K}{k}\sum_{l=0}^{k-1}\binom{k-1}{l}\sum_{m=0}^{M_k-1}\sum_{p=0}^m\binom{m}{p} \frac{(-1)^{l}}{m!}\times \nonumber\\
&
\left(\frac{\omega\beta_{t,k}}{P_a}\right)^{m-p}\left[\bm 1_{p=0}\Omega_{\frac{m\alpha}{2}}+\bm 1_{p\neq0}\sum_{n=1}^p\left(\delta\phi\lambda_o\beta_{t,k}^{\delta}\right)^n\Omega_{\mu}\Upsilon_{p,n}\right]
\end{align}
where $\bm 1_{\mathcal{H}}$ is the indicator function with $\bm 1_{\mathcal{H}}=1$ when event $\mathcal{H}$ is true and $\bm 1_{\mathcal{H}}=0$ otherwise, $\lambda_o \triangleq \lambda_a+\left({P_j}/{P_a}\right)^\delta \lambda_j$, $\mu = \frac{\alpha}{2}(m-p)+n$, $\Omega_{\mu}\triangleq \int_0^{\infty}x^{\mu}e^{-\tau_1x^{\alpha/2}-\tau_2x}dx$ with $\tau_1 = \omega\beta_{t,k}/P_a$ and $\tau_2=\phi\lambda_o\beta_{t,k}^{\delta}+\pi\lambda_c(K-k+l+1)$, and $\Upsilon_{p,n}\triangleq\sum_{\psi_j\in \mathrm{comb}\binom{p-1}{p-n}}\prod_
{ \substack{
q_{ij}\in\psi_j\\
i=1,\cdots,p-n} }
\left[q_{ij}-\delta(q_{ij}-i+1)\right]$, with the convention that $\Upsilon_{p,p}=1$ for $p\ge1$.
The term $\mathrm{comb}\binom{p-1}{p-n}$ represents the set of all distinct subsets of the natural numbers $\{1,2,\cdots,p-1\}$ with cardinality $p-n$.
The elements in each subset $\psi_j$ are sorted in an ascending order with $q_{ij}$ being the $i$-th element of $\psi_j$.
\end{proposition}
\begin{proof}
Please refer to Appendix \ref{proof_proposition_cop_general}.
\end{proof}
Although \eqref{cop_general} seems difficult to analyze due to the existence of the integral term $\Omega_{\mu}$,
it provides a general and accurate expression for the COP without requiring time-consuming simulations. More importantly, it can be used as a baseline for comparison with other approximate results.
For a special case with $\alpha=4$, $\Omega_{\mu}$ can be simplified by \cite[Eq. (3.462.1)]{Gradshteyn2007Table} as the following practically closed form,
\begin{equation}\label{omega}
\Omega^{\alpha=4}_{\mu} = (2\tau_1)^{-\frac{\mu+1}{2}}\Gamma(\mu+1)\exp\left(\frac{\tau_2^2}{8\tau_1}\right)\mathcal{D}_{-\mu-1}\left(\frac{\tau_2}{\sqrt{2\tau_1}}\right),
\end{equation}
where $\mathcal{D}_{-\mu}(z)$ denotes the parabolic cylinder function \cite[Eq. (9.241.2)]{Gradshteyn2007Table}.
Note that with \eqref{omega}, the new expression of the COP $p_{co,k}$ becomes rather computationally convenient which requires only the calculation or lookup of a $\mathcal{D}_{-\mu}(z)$ value.
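The identity \eqref{omega} is easily checked numerically. In the sketch below (the values of $\mu$, $\tau_1$, $\tau_2$ are arbitrary illustrative choices), $\mathcal{D}_{-\mu-1}$ is evaluated via \texttt{scipy.special.pbdv}:
\begin{verbatim}
# Sketch: verify the alpha = 4 closed form of Omega_mu against quadrature.
import numpy as np
from scipy.special import pbdv, gamma
from scipy.integrate import quad

mu, tau1, tau2 = 2.0, 0.5, 1.3
direct, _ = quad(lambda x: x**mu * np.exp(-tau1 * x**2 - tau2 * x),
                 0, np.inf)
D, _ = pbdv(-mu - 1, tau2 / np.sqrt(2 * tau1))   # D_{-mu-1}(.)
closed = ((2 * tau1) ** (-(mu + 1) / 2) * gamma(mu + 1)
          * np.exp(tau2**2 / (8 * tau1)) * D)
print(direct, closed)   # the two values agree
\end{verbatim}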
\subsection{Interference-Limited Case for COP}
Owing to a large amount of uncoordinated concurrent transmissions in the network, the aggregate interference at a receiver generally dominates the thermal noise.
Motivated by this fact, we turn to examine the interference-limited WSN by ignoring the receiver noise at the FC side.
The following corollary provides a closed-form expression for the COP $p_{co,k}$ considering the interference-limited case.
\begin{corollary}\label{corollary_cop_int}
For the interference-limited WSN, the COP of $S_k$ is given by
\begin{align}\label{cop_int}
p_{co,k} = &1 - \pi\lambda_ck\binom{K}{k}\sum_{l=0}^{k-1}\binom{k-1}{l}\frac{(-1)^{l}}{\tau_2}\times\nonumber\\
&
\left(1+\sum_{m=1}^{M_k-1}\sum_{n=1}^{m}
\frac{n!}{m!}\left(\frac{\delta\phi\lambda_o\beta_{t,k}^{\delta}}{\tau_2}\right)^n\Upsilon_{m,n}\right).
\end{align}
\end{corollary}
\begin{proof}
Please refer to Appendix \ref{proof_corollary_cop_int}.
\end{proof}
It should be noted that although \eqref{cop_int} is in a closed form, the Diophantine equation therein still makes $p_{co,k}$ time-consuming to calculate when $M_k$ grows large. More importantly, the coupling of various parameters, including the number of scheduled sensors $K$, the SIC order index $k$, the number of receive antennas $M_k$, the COP threshold $\beta_{t,k}$, and the jamming probability $\rho$, makes $p_{co,k}$ complicated to analyze.
In order to circumvent such a difficulty and facilitate the analysis, we focus on the practical requirement of high reliability and low latency. In particular, we examine the secure transmission in the ultra-low COP regime for each sensor. Thereby, we obtain a much more compact expression for $p_{co,k}$ in the following corollary.
\begin{corollary}\label{corollary_cop_low}
In the low COP regime with $p_{co,k}\rightarrow 0$, the COP of $S_k$ can be approximated by
\begin{equation}\label{cop_low}
p_{co,k}\approx \frac{\phi\lambda_o\beta_{t,k}^{\delta}}{\pi\lambda_c}\Lambda_k\Xi_{M_k},
\end{equation}
where $\Lambda_k = k\binom{K}{k}\sum_{l=0}^{k-1}\binom{k-1}{l}(-1)^{l}\frac{1}{(K-k+l+1)^{2}}$ and $\Xi_{M_k}=1+\sum_{m=1}^{M_k-1}\frac{1}{m!}\prod_{i=0}^{m-1}(i-\delta)$.
\end{corollary}
\begin{proof}
Please refer to Appendix \ref{proof_corollary_cop_low}.
\end{proof}
It is worth noting that by means of the approximation given above, key parameters such as $K$, $k$, $M_k$, $\beta_{t,k}$, and $\rho$ are decoupled compared to \eqref{cop_int}, and various analytical relationships between the COP and the parameters can be extracted explicitly, some of which are particularly useful for the subsequent optimization of sum secrecy throughput.
For example, it is clearly shown that $p_{co,k}$ increases as $\beta_{t,k}$ and $\rho$ become larger, as $\lambda_o$ is a monotonically increasing function of $\rho$. Meanwhile, it is as expected that $p_{co,k}$ decreases when $M_k$ grows since $\Xi_{M_k}\in(0,1)$ is monotonically decreasing with $M_k$.
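The approximation in Corollary \ref{corollary_cop_low} is also straightforward to evaluate. A short sketch follows (the parameter values are illustrative and are not tied to any figure in this paper):
\begin{verbatim}
# Sketch: low-COP approximation of the corollary above.
from math import comb, factorial, gamma, pi

def p_co_low(k, K, Mk, beta_t, rho, lam_c, lam_s, Pa, Pj, alpha=4):
    delta = 2.0 / alpha
    phi = pi * gamma(1 + delta) * gamma(1 - delta)
    lam_a = K * lam_c                    # scheduled-sensor density
    lam_j = rho * (lam_s - K * lam_c)    # jamming-sensor density
    lam_o = lam_a + (Pj / Pa) ** delta * lam_j
    Lam = k * comb(K, k) * sum(comb(k - 1, l) * (-1) ** l
                               / (K - k + l + 1) ** 2 for l in range(k))
    Xi, prod = 1.0, 1.0
    for m in range(1, Mk):
        prod *= (m - 1 - delta)          # running product of (i - delta)
        Xi += prod / factorial(m)
    return phi * lam_o * beta_t ** delta / (pi * lam_c) * Lam * Xi

print(p_co_low(k=1, K=3, Mk=14, beta_t=1.0, rho=0.05,
               lam_c=0.01, lam_s=1.0, Pa=10.0, Pj=10.0))
\end{verbatim}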
\begin{figure}[!t]
\centering
\includegraphics[width= 3.5 in]{COP.eps}
\caption{COP $ p_{co,k} $ at $k=1$ v.s. SINR threshold $ \beta_{t,k} $ for different $M_c$ and $K$, with $ P_j = 10 $ dBm, $ \lambda_c = 0.01 $, and $ \rho = 0.05 $. Unless otherwise specified, we always set $ P_a = 10 $ dBm, $ \omega = 0 $ dBm, $ \alpha = 4 $, and $ \lambda_s = 1 $. The labels \{Th, Sim, IL\} refer to the general theoretical result from \eqref{cop_general}, the Monte-Carlo simulated result, and the interference-limited result from \eqref{cop_int}, respectively. }
\label{figCOP}
\end{figure}
Fig. \ref{figCOP} depicts the COP $ p_{co,k} $ versus the SINR threshold $ \beta_{t,k} $ for different values of the number $M_c$ of FC antennas and the number $K$ of sensor nodes associated with the same FC. Obviously, the Monte-Carlo simulation results are in good agreement with the exact theoretical values.
It is expected that $ p_{co,k} $ monotonically increases with $ \beta_{t,k} $, and the interference-limited results are always smaller than the general ones, whereas the gaps are relatively small and even negligible as $M_c$, $K$, or $\beta_{t,k}$ grows large enough.
We can observe that $p_{co,k}$ decreases with a smaller $K$ when $M_c$ is fixed or with a larger $M_c$ for a given $K$.
This indicates that once additional sensor nodes are connected to an FC, the reliability for each sensor node will be degraded, which however can be ameliorated by equipping the FC with more antennas.
\subsection{General Results for SOP}
From a robust secure transmission perspective, we are inclined to consider a worst-case scenario by overestimating the wiretap capability of the eavesdroppers. Specifically, we assume that the eavesdroppers have powerful multi-stream decoding capabilities such that they can distinguish the multiple data streams received from the scheduled sensors by subtracting the mutual interference among the superposed signals. In this case, the aggregate interference received at the eavesdroppers only consists of the signals emitted by the jamming sensors.
We further assume that the eavesdroppers employ the optimal linear receiver, i.e., the MMSE receiver, to
improve the quality of the received signals.
According to the MMSE criterion, the weight vector of the eavesdropper located at $e\in\Phi_e$ for decoding the signal from $S_k$ can be devised in the form of \cite{Gao1998Theoretical}
\begin{equation}\label{weight_mmse}
\bm w_{e,k} = \left(\bm \Psi_{e}+\omega\bm I_{M_e}\right)^{-1}\bm h_{e,s_k},
\end{equation}
where $\bm{\Psi}_{e} \triangleq \sum_{z \in \Phi_{j}} P_{j} \bm{h}_{e,z} \bm{h}_{e,z}^{H} r_{e,z}^{-\alpha}$.
The SINR of the eavesdropper at $e$ can be given by
\begin{equation}\label{sinr_ek}
{\rm{SINR}}_{e,k}=P_{a} \bm{h}_{e,s_k}^{H} \left(\bm \Psi_{e}+\omega\bm I_{M_e}\right)^{-1} \bm{h}_{e,s_k} r_{e,s_k}^{-\alpha}.
\end{equation}
The SOP $p_{so,k}$ of $S_k$ is defined in \eqref{sop_def} with ${\rm{SINR}}_{e,k}$ given above. The following proposition provides a general result for $p_{so,k}$.
\begin{proposition}\label{proposition_sop_general}
The SOP of the secure transmission from the $k$-th nearest sensor $S_k$ to the typical FC is given by
\begin{align}\label{sop_general}
p_{so,k} = 1 - \exp\left(-\pi\lambda_e\sum_{m=1}^{M_e}\sum_{n=0}^{M_e-m}\frac{\zeta_1^{m-1}\zeta_2^n}{(m-1)!n!}\Omega^{\circ}_{u}\right),
\end{align}
where $\Omega^{\circ}_{u}$ has the same form as $\Omega_{\mu}$ defined in Proposition \ref{proposition_cop_general} simply with $\mu = u=\frac{\alpha}{2}(m-1)+n$, $\tau_1 =\zeta_1 ={\omega\beta_{e,k}}/{P_a} $, and $\tau_2=\zeta_2 = \phi\lambda_j\left({P_j\beta_{e,k}}/{P_a}\right)^{\delta}$.
\end{proposition}
\begin{proof}
Please refer to Appendix \ref{proof_proposition_sop_general}.
\end{proof}
Proposition \ref{proposition_sop_general} clearly shows that the SOP $p_{so,k}$ exponentially increases with the eavesdropper density $\lambda_e$. That is to say, secrecy is severely compromised when facing dense eavesdroppers.
Note that for the special case of $\alpha=4$, $\Omega^{\circ}_{u}$ in \eqref{sop_general} can be recast into the same form as \eqref{omega}, which further leads to a practically closed-form expression for the SOP $p_{so,k}$.
\subsection{Interference-Limited Case for SOP}
Proposition \ref{proposition_sop_general} is not so straightforward for extracting key properties regarding the SOP, which motivates us to seek simplifications.
To this end, we consider the interference-limited scenario, just as the discussion for the COP, where the receiver noise at eavesdroppers is ignored. Note that this is reasonable since the noise power of eavesdroppers is typically unknown to the sensors.
In the following corollary, we provide an analytically tractable expression for the SOP.
\begin{corollary}\label{corollary_sop}
For the interference-limited WSN, the SOP of $S_k$ is given by
\begin{equation}\label{int_sop}
p_{so,k} = 1 - \exp\left(-\frac{\pi\lambda_e M_e}{\phi\rho\lambda_i}\left(\frac{P_a}{P_j\beta_{e,k}}\right)^{\delta}\right).
\end{equation}
\end{corollary}
\begin{proof}
The result follows easily by plugging $\omega=0$ into \eqref{sop_general} and leveraging some algebraic operations.
\end{proof}
Corollary \ref{corollary_sop} reveals that the SOP $p_{so,k}$ exponentially increases with $M_e$ but decreases with $\rho$ and $\beta_{e,k}$. This indicates that secrecy performance is dramatically degraded if eavesdroppers use a large number $M_e$ of receiving antennas, whereas it can be significantly ameliorated by making a larger fraction $\rho$ of sensors send jamming signals and choosing a larger rate redundancy $R_{e,k}$ for channel coding.
Note that this is fundamentally different from the case of COP where increasing the jammer fraction $\rho$ becomes harmful, which reflects an intrinsic trade-off between reliability and secrecy when introducing jamming signals.
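The closed form \eqref{int_sop} makes this reliability-secrecy trade-off easy to visualize. The sketch below (with illustrative parameters of our own choosing) sweeps the jamming probability:
\begin{verbatim}
# Sketch: interference-limited SOP of Corollary 4 as a function of rho.
from math import exp, gamma, pi

def p_so_il(beta_e, rho, lam_e, lam_i, Me, Pa, Pj, alpha=4):
    delta = 2.0 / alpha
    phi = pi * gamma(1 + delta) * gamma(1 - delta)
    return 1 - exp(-pi * lam_e * Me / (phi * rho * lam_i)
                   * (Pa / (Pj * beta_e)) ** delta)

for rho in (0.01, 0.05, 0.2):
    print(rho, p_so_il(beta_e=1.0, rho=rho, lam_e=1e-4,
                       lam_i=0.97, Me=2, Pa=10.0, Pj=10.0))
\end{verbatim}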
\begin{figure}[!t]
\centering
\includegraphics[width= 3.5 in]{SOP.eps}
\caption{SOP $ p_{so,k} $ at $k=1$ v.s. SINR threshold $ \beta_{e,k} $ for different $M_e$ and $\lambda_e$, with $ P_j = 10 $ dBm, $ \lambda_c = 0.01 $, $ K = 3 $, and $ \rho = 0.05 $. The labels \{Th, Sim, IL\} refer to the general theoretical result from \eqref{sop_general}, the Monte-Carlo simulated result, and the interference-limited result from \eqref{int_sop}, respectively. }
\label{figSOP}
\end{figure}
The monotonicity of the SOP $ p_{so,k} $ w.r.t. the SINR threshold $ \beta_{e,k} $, the number $M_e$ of eavesdropping antennas, and the density $\lambda_e$ of eavesdroppers is validated by both numerical and simulated results as shown in Fig. \ref{figSOP}.
Different from the situation of COP, the interference-limited SOP is apparently larger than the general one. This implies that the interference-limited SOP embodies an overestimation of the eavesdropping capability, which is generally preferred when investigating physical layer security for the purpose of robust designs.
\section{Maximization of Sum Secrecy Throughput}
This section maximizes the sum secrecy throughput for a typical FC in the large-scale WSN with random multiple access, by jointly determining the optimal parameters, including the codeword rate $R_{t,k}$ and the secrecy rate $R_{s,k}$ of the wiretap code for the $K$ scheduled sensors, and the jamming probability $\rho$ of the proposed random jamming scheme.
Recalling the definition of sum secrecy throughput in \eqref{sst_def}, the optimization problem can be formulated as
\begin{subequations}\label{sst_max}
\begin{align}
\max_{R_{t,k},R_{s,k},\forall k\in\mathcal{K},\rho} ~&\mathcal{T} = \sum_{k=1}^K R_{s,k}\left(1-p_{co,k}\right)\\
{\rm s.t.}~~ \label{sst_max_c1}
&p_{co,k}\le\sigma,~ \forall k\in\mathcal{K},\\
\label{sst_max_c2}
&p_{so,k}\le\epsilon,~\forall k\in\mathcal{K},\\
\label{sst_max_c3}
&0\le R_{e,k} = R_{t,k}-R_{s,k},~ \forall k\in\mathcal{K},\\
\label{sst_max_c4}
&0\le \rho \le 1.
\end{align}
\end{subequations}
Note that constraints \eqref{sst_max_c1} and \eqref{sst_max_c2} describe the reliability and secrecy requirements, respectively;
constraints \eqref{sst_max_c3} and \eqref{sst_max_c4} are imposed by the wiretap code scheme and the random jamming scheme, respectively.
The original problem \eqref{sst_max} can be decomposed into the following two subproblems.
\begin{itemize}
\item[1)] We first design the optimal $R_{s,k}$ and $R_{e,k}$ (or $R_{t,k}$) for the $k$-th scheduled sensor to maximize its secrecy throughput $\mathcal{T}_k \triangleq R_{s,k}\left(1-p_{co,k}\right)$ conditioned on a fixed $\rho$, as
\begin{align}\label{opt_sst_max1}
\max_{R_{s,k},R_{e,k}} &\mathcal{T}_k, ~\forall k\in\mathcal{K},
~~{\rm s.t.}~~\eqref{sst_max_c1}-\eqref{sst_max_c3}.
\end{align}
\item[2)] With the resultant maximal $\mathcal{T}_k$ for $k\in\mathcal{K}$, we then design the optimal $\rho$ to maximize $\mathcal{T}$ expressed in \eqref{sst_max}, i.e.,
\begin{align}\label{opt_sst_max2}
\max_{\rho} ~\mathcal{T},~~
{\rm s.t.}~~
\eqref{sst_max_c4}.
\end{align}
\end{itemize}
In the following two subsections, we first discuss an optimal design scheme in which the optimal $R_{s,k}$ and $R_{e,k}$ can be efficiently calculated by the bisection method while the optimal $\rho$ can only be obtained by one-dimensional search. We then examine a sub-optimal scheme for the purpose of a low computational complexity, where closed-form expressions are derived for the optimal $R_{s,k}$ and $R_{e,k}$, and $\mathcal{T}$ is proved to be quasi-concave w.r.t. $\rho$ such that the optimal $\rho$ can be computed using the bisection method.
\subsection{Optimal Design}
Based on the above discussion, we first examine the subproblem \eqref{opt_sst_max1} and design the optimal $R_{e,k}$ and $R_{s,k}$ successively. For a fixed $R_{s,k}$, it is apparent from \eqref{cop_def} that the COP $p_{co,k}$ monotonically increases with $R_{e,k}$, since $R_{t,k}=R_{s,k}+R_{e,k}$.
This suggests that the optimal $R_{e,k}$ for maximizing $\mathcal{T}_{k}$ should be the minimal $R_{e,k}$ while satisfying the secrecy constraint $p_{so,k}\le\epsilon$. Note that $p_{so,k}$ decreases with $R_{e,k}$ (since $\beta_{e,k}=2^{R_{e,k}}-1$ shown in \eqref{sop_def}), the optimal $R_{e,k}$ is given as the inverse of
$p_{so,k}(R_{e,k})$ at $\epsilon$, which is
\begin{equation}
R^*_{e,k} = p_{so,k}^{-1}(\epsilon).
\end{equation}
Obviously, $R^*_{e,k}$ monotonically decreases with $\epsilon$, which means that a larger rate redundancy is required to combat the eavesdropper in order to meet a more rigorous secrecy constraint.
Although it is intractable to
express $R^*_{e,k}$ in an explicit form due to the complicated expression of $p_{so,k}$, the value of $R^*_{e,k}$ can be efficiently obtained through bisection search with the equation $p_{so,k}(R_{e,k})=\epsilon$.
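As an illustration of this bisection, the sketch below solves $p_{so,k}(R_{e,k})=\epsilon$ against the interference-limited SOP of Corollary \ref{corollary_sop}, reusing \texttt{p\_so\_il} from the sketch above (so the output can also be cross-checked against the closed form \eqref{be} derived later):
\begin{verbatim}
# Sketch: bisection for R*_e solving p_so(R_e) = eps (p_so decreasing).
def R_e_star(eps, p_so_of_Re, lo=1e-9, hi=30.0, tol=1e-9):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p_so_of_Re(mid) > eps:
            lo = mid                     # still too leaky: raise R_e
        else:
            hi = mid
    return 0.5 * (lo + hi)

Re = R_e_star(0.1, lambda Re: p_so_il(2 ** Re - 1, rho=0.05, lam_e=1e-4,
                                      lam_i=0.97, Me=2, Pa=10.0, Pj=10.0))
print(Re)
\end{verbatim}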
For designing the optimal $R_{s,k}$, we focus on the low COP regime and substitute the approximate COP $p_{co,k}$ given in \eqref{cop_low} into problem \eqref{opt_sst_max1}. Moreover, since $R_{t,k} = R_{e,k}^{*}+R_{s,k}\Rightarrow\beta_{t,k}=\beta_{e,k}^{*}+(1+\beta_{e,k}^{*})\beta_{s,k}$ with $\beta_{s,k}\triangleq 2^{R_{s,k}}-1$, problem \eqref{opt_sst_max1} can be equivalently translated into
\begin{align}\label{st_max}
\max_{0\le\beta_{s,k}\le \beta^{\rm max}_{s,k}}~ \mathcal{T}_k=\left(1-A_k\left(\beta_{s,k}+B_k\right)^{\delta}\right)\log_2(1+\beta_{s,k}),
\end{align}
where $A_k\triangleq\frac{\phi\lambda_o}{\pi\lambda_c}\Lambda_k\Xi_{M_k}$ with $\Lambda_k$ and $\Xi_{M_k}$ defined in Corollary \ref{corollary_cop_low}, $B_k \triangleq \frac{\beta^*_{e,k}}{1+\beta^*_{e,k}}$, and $\beta^{\rm max}_{s,k}\triangleq A_k^{-\alpha/2}-B_k$.
It is noteworthy that $\beta^{\rm max}_{s,k}$ is introduced to guarantee a non-negative value of $\mathcal{T}_k$.
The solution to the above problem is provided by the following proposition.
\begin{proposition}\label{proposition_st_opt}
The secrecy throughput $\mathcal{T}_k$ given in \eqref{st_max} is a quasi-concave function of $\beta_{s,k}$, and the optimal $\beta_{s,k}^*$ maximizing $\mathcal{T}_k$ satisfies the following equation,
\begin{equation}\label{opt_rs}
\frac{d\mathcal{T}_k}{d\beta_{s,k}}|_{\beta_{s,k}=\beta_{s,k}^*} = 0,
\end{equation}
i.e., it is the unique zero-crossing point $\beta_{s,k}$ of the derivative $\frac{d\mathcal{T}_k}{d\beta_{s,k}}$ given below
\begin{equation}\label{opt_bs_db_eqn}
\frac{d\mathcal{T}_k}{d\beta_{s,k}}= \frac{1-A_k(\beta_{s,k}+B_k)^{\delta}}{(1+\beta_{s,k})\ln2}-\frac{A_k\delta\log_2(1+\beta_{s,k})}{(\beta_{s,k}+B_k)^{1-\delta}}.
\end{equation}
\end{proposition}
\begin{proof}
For brevity, the subscripts of $\beta_{s,k}$, $A_k$, and $B_k$ are omitted.
It is intractable to prove the concavity of $\mathcal{T}_k$ on $\beta$ by determining the sign of the second-order derivative $\frac{d^2\mathcal{T}_k}{d\beta^2}$.
Instead, it can be easily confirmed that the two boundary values of $\beta$ yield $\frac{d\mathcal{T}_k}{d\beta}|_{\beta=0}>0$ and $\frac{d\mathcal{T}_k}{d\beta}|_{\beta=\beta^{\rm max}_{s,k}}<0$.
Combined with the fact that $\mathcal{T}_k$ is continuously differentiable on $\beta$, there at least exists one zero-crossing point of $\frac{d\mathcal{T}_k}{d\beta}$.
Let $\beta^{\circ}$ denote an arbitrary one such that $\frac{d\mathcal{T}_k}{d\beta}|_{\beta=\beta^{\circ}}=0$, and then the second-order derivative $\frac{d^2\mathcal{T}_k}{d\beta^2}$ at $\beta=\beta^{\circ}$ can be calculated as
\begin{align}
&\frac{d^2\mathcal{T}_k}{d\beta^2}|_{\beta=\beta^{\circ}}=-\frac{2A\delta(\beta^{\circ}+B)^{\delta-1}}{(1+\beta^{\circ})\ln2}-\frac{1-A(\beta^{\circ}+B)^{\delta}}{(1+\beta^{\circ})^2\ln2}+\nonumber\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad A\delta(1-\delta)(\beta^{\circ}+B)^{\delta-2}\log_2(1+\beta^{\circ})\nonumber\\
&\stackrel{\mathrm{(a)}}=\frac{A\delta(\beta^{\circ}+B)^{\delta-2}}{\ln2}\bigg((1-\delta)\ln(1+\beta^{\circ})-\nonumber\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
\frac{(\beta^{\circ}+B)\left[2+\ln(1+\beta^{\circ})\right]}{1+\beta^{\circ}}\bigg)\nonumber\\
&\stackrel{\mathrm{(b)}}<
\frac{A\delta(\beta^{\circ}+B)^{\delta-2}}{\ln2}\left(\ln(1+\beta^{\circ})-\frac{\beta^{\circ}\left[1+\ln(1+\beta^{\circ})\right]}{1+\beta^{\circ}}\right)\nonumber\\
&=\frac{A\delta(\beta^{\circ}+B)^{\delta-2}}{(1+\beta^{\circ})\ln2}\left[{\ln(1+\beta^{\circ})}-{\beta^{\circ}}\right]\nonumber\\
&\stackrel{\mathrm{(c)}}\leq0,
\end{align}
where $\mathrm{(a)}$ holds by noting that $\frac{d\mathcal{T}_k}{d\beta}|_{\beta=\beta^{\circ}}=0\Rightarrow \frac{1-A(\beta^{\circ}+B)^{\delta}}{(1+\beta^{\circ})^2}=A\delta(\beta^{\circ}+B)^{\delta-1}\ln(1+\beta^{\circ})$ in \eqref{opt_bs_db_eqn}, $\mathrm{(b)}$ gives an upper bound, and $\mathrm{(c)}$ follows from the fact $\ln(1+\beta^{\circ})\le \beta^{\circ}$.
The above result indicates that $\mathcal{T}_k$ is a quasi-concave function of $\beta$ \cite{Boyd2004Convex}, and $\beta^{\circ}$ is the unique zero-crossing point of $\frac{d\mathcal{T}_k}{d\beta}$ and is also the solution to problem \eqref{st_max}.
\end{proof}
Due to the quasi-concavity of $\mathcal{T}_k$ on $\beta_{s,k}$, the value of the optimal $\beta_{s,k}^*$ can be efficiently calculated using the bisection method with \eqref{opt_rs}.
After that, by substituting the obtained optimal $R_{e,k}^*$ and $R_{s,k}^*$ for $k\in\mathcal{K}$ into \eqref{sst_max}, the optimal jamming probability $\rho^*$ can be numerically searched by solving problem \eqref{opt_sst_max2}.
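Both one-dimensional searches are elementary to implement. A sketch of the inner bisection based on the derivative \eqref{opt_bs_db_eqn} follows (the values of $A_k$, $B_k$, and $\delta$ below are placeholders, not computed from any concrete network):
\begin{verbatim}
# Sketch: bisection for beta*_s as the zero of dT_k/dbeta.
import numpy as np

def dT(beta, A, B, delta):
    return ((1 - A * (beta + B) ** delta) / ((1 + beta) * np.log(2))
            - A * delta * np.log2(1 + beta) / (beta + B) ** (1 - delta))

def beta_s_star(A, B, delta, tol=1e-10):
    lo, hi = 0.0, A ** (-1.0 / delta) - B   # hi = beta_max keeps T_k >= 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dT(mid, A, B, delta) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(beta_s_star(A=0.05, B=0.5, delta=0.5))
\end{verbatim}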
\begin{figure}[!t]
\centering
\includegraphics[width= 3.5 in]{Tk.eps}
\caption{Secrecy throughput $ \mathcal{T}_k $ at $ k = 1 $ v.s. secrecy rate $ R_{s,k} $ for different $\lambda_c$ and $\lambda_e$, with $ P_j = 0 $ dBm, $ \rho = 0.01 $, $ M_c = 16 $, $ M_e = 2$, $ K = 4 $, $ \epsilon = 0.1 $, and $ \sigma = 0.1 $.}
\label{figTk}
\end{figure}
Fig. \ref{figTk} depicts the secrecy throughput $ \mathcal{T}_k $ as a function of the secrecy rate $ R_{s,k} $. Just as analyzed in Proposition \ref{proposition_st_opt}, we see that $ \mathcal{T}_k $ indeed initially increases and then decreases with $ R_{s,k} $, and there is a unique $ R_{s,k} $ maximizing $ \mathcal{T}_k $. It is expected that increasing the density $\lambda_e$ of eavesdroppers is harmful to the secrecy throughput. We also observe that, with our proposed random access scheme, the secrecy throughput can be dramatically improved by increasing the FC density $ \lambda_c $. This seemingly counter-intuitive result can be understood once one realizes that although deploying more FCs accommodates more sensor nodes and hence causes more severe network interference, it also reduces the distances between each FC and its associated sensor nodes, thereby significantly enhancing transmission reliability.
\subsection{Sub-optimal Design}
Note that the optimal code rates and optimal jamming probability for the optimal design scheme can only be obtained numerically via bisection search or exhaustive search, which not only results in high computational complexity but also makes it difficult to develop useful insights into practical system designs.
To this end, this subsection examines a sub-optimal solution to problem \eqref{sst_max} by focusing on the sum secrecy throughput with COP constraints $p_{co,k}=\sigma$ and SOP constraints $p_{so,k}=\epsilon$ for $k\in\mathcal{K}$.
The corresponding sum secrecy throughput can be written as
\begin{equation}\label{sub_st}
\mathcal{T}=(1-\sigma)\sum_{k=1}^K\log_2\frac{1+\beta_{t,k}^*}{1+\beta_{e,k}^*},
\end{equation}
where $\beta_{t,k}^*$ and $\beta_{e,k}^*$ are the unique roots of $\beta_{t,k}$ and $\beta_{e,k}$ of the equations $p_{co,k}(\beta_{t,k})=\sigma$ and $p_{so,k}(\beta_{e,k})=\epsilon$, respectively.
The rationale behind the sub-optimal design is that the values of $\sigma$ and $\epsilon$ are generally set small enough to guarantee a high level of reliability and secrecy.
Moreover, when the COP and SOP constraints can be controlled, we are able to maximize $\mathcal{T}$
by finding the optimal values of $\sigma$ and $\epsilon$ in \eqref{sub_st}.
By solving the equations $p_{co,k}(\beta_{t,k}^*)=\sigma$ and $p_{so,k}(\beta_{e,k}^*)=\epsilon$, with $p_{co,k}$ given in \eqref{cop_low} and $p_{so,k}$ in \eqref{int_sop}, we obtain closed-form expressions for $\beta_{t,k}^*$ and $\beta_{e,k}^*$ (and hence $R_{t,k}^*$ and $R_{e,k}^*$), as stated in the following proposition.
\begin{proposition}\label{value_bt_be}
The values of $\beta_{t,k}^*$ and $\beta_{e,k}^*$ for $k\in\mathcal{K}$ that satisfy $p_{co,k}(\beta_{t,k}^*)=\sigma$ and $p_{so,k}(\beta_{e,k}^*)=\epsilon$ are respectively given by
\begin{equation}\label{bt}
\beta_{t,k}^*=\left(\frac{\sigma\pi\lambda_c}{\lambda_a\phi\Lambda_k\Xi_{M_k}}\right)^{\frac{\alpha}{2}}\left[1+\frac{\lambda_iP_j^{\delta}}{\lambda_aP_a^{\delta}}\rho\right]^{-\frac{\alpha}{2}},
\end{equation}
\begin{equation}\label{be}
\beta_{e,k}^*=\frac{P_a \rho^{-\frac{\alpha}{2}}}{P_j}\left(\frac{\pi\lambda_eM_e }{\phi\lambda_i \ln\frac{1}{1-\epsilon} }\right)^{\frac{\alpha}{2}}.
\end{equation}
\end{proposition}
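The closed forms in Proposition \ref{value_bt_be} can be evaluated directly. The following Python sketch transcribes \eqref{bt} and \eqref{be} one-to-one, with $\delta=2/\alpha$ as in our notation; all arguments simply mirror the paper's symbols.
\begin{verbatim}
import math

# Direct transcription of eqs. (bt) and (be); delta = 2/alpha.

def beta_t_star(sigma, lam_c, lam_a, phi, Lambda_k, Xi_Mk,
                lam_i, P_j, P_a, rho, alpha):
    delta = 2.0 / alpha
    x = (sigma * math.pi * lam_c
         / (lam_a * phi * Lambda_k * Xi_Mk)) ** (alpha / 2.0)
    y = lam_i * P_j ** delta / (lam_a * P_a ** delta)
    return x * (1.0 + y * rho) ** (-alpha / 2.0)

def beta_e_star(P_a, P_j, rho, lam_e, M_e, phi, lam_i, eps, alpha):
    return (P_a / P_j) * rho ** (-alpha / 2.0) * \
           (math.pi * lam_e * M_e
            / (phi * lam_i * math.log(1.0 / (1.0 - eps)))) ** (alpha / 2.0)
\end{verbatim}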
The next step is to design the optimal jamming probability $\rho$ to maximize the sum secrecy throughput $\mathcal{T}$.
Before proceeding to the optimization procedure, we introduce three auxiliary variables $X_k\triangleq \left(\frac{\sigma\pi\lambda_c}{\lambda_a\phi\Lambda_k\Xi_{M_k}}\right)^{\frac{\alpha}{2}}$, $Y\triangleq \frac{\lambda_iP_j^{\delta}}{\lambda_aP_a^{\delta}}$, and $Z\triangleq \frac{P_a}{P_j}\left(\frac{\pi\lambda_eM_e }{\phi\lambda_i \ln\frac{1}{1-\epsilon} }\right)^{\frac{\alpha}{2}}$ for \eqref{bt} and \eqref{be}, with which $\mathcal{T}$ in \eqref{sub_st} can be recast as a function of a single variable $\rho$ as given below:
\begin{equation}\label{sub_st_xyz}
\mathcal{T}=\sum_{k=1}^K \mathcal{T}_k=(1-\sigma)\sum_{k=1}^K\log_2\frac{1+X_k(1+Y\rho)^{-\frac{\alpha}{2}}}{1+Z\rho^{-\frac{\alpha}{2}}}.
\end{equation}
Remarkably, $X_k$, $Y$, and $Z$ have clear physical significance.
Specifically, $X_k$ can be interpreted as the capability of boosting the achievable rate of the legitimate channel.
For example, a looser COP constraint (a larger $\sigma$) and a larger number of receive antennas at the FC side (a smaller $\Xi_{M_k}$) both increase $X_k$ and are beneficial for improving transmission reliability.
Similarly, $Z$ can be interpreted as the wiretapping capability of the eavesdroppers, and $Y$ reflects the jamming power level a sensor can afford.
It is obvious from \eqref{sub_st_xyz} that, in order to guarantee a non-negative secrecy throughput for each sensor, i.e., $\mathcal{T}_k\ge 0$ for $k\in\mathcal{K}$, the condition $X_k(1+Y\rho)^{-\frac{\alpha}{2}}\ge Z\rho^{-\frac{\alpha}{2}}$ must be satisfied, which yields $\rho\ge \left(\left(\frac{X_k}{Z}\right)^\delta-Y\right)^{-1}$.
In other words, we should ensure $\rho\ge \rho_{\rm min}\triangleq\max_{k\in\mathcal{K}}\left(\left(\frac{X_k}{Z}\right)^\delta-Y\right)^{-1}$. Hence, the optimal $\rho^*$ maximizing $\mathcal{T}$ can be obtained by solving the following equivalent problem, in which the positive constant factor $\frac{1-\sigma}{\ln 2}$ has been dropped.
\begin{align}\label{sub_st_max}
\max_{\rho_{\rm min}\le\rho\le 1} ~T(\rho)\triangleq \sum_{k=1}^K\ln\frac{1+X_k(1+Y\rho)^{-\frac{\alpha}{2}}}{1+Z\rho^{-\frac{\alpha}{2}}}.
\end{align}
Although the above problem is not convex, in the following proposition we introduce a derivative reconstruction method to prove that $T(\rho)$ is actually first-increasing-then-decreasing w.r.t. $\rho$ such that the optimal $\rho^*$ maximizing $T(\rho)$ must be unique.
\begin{proposition}\label{proposition_subopt_rho}
The objective function $T(\rho)$ in \eqref{sub_st_max} initially increases and then decreases with increasing $\rho$, and the optimal $\rho^*$ that maximizes $T(\rho)$ is given by
\begin{align}\label{opt_rho}
\rho^*=
\begin{cases}
\rho_{\rm min}, & Z<\frac{Y\rho_{\rm min}^{\alpha/2+1}\left(1-\kappa(\rho_{\rm min})\right)}{1+ Y\rho_{\rm min}\kappa(\rho_{\rm min})}\\
1,& Z\ge \frac{Y(1-\kappa(1))}{1+Y\kappa(1)}\\
\rho^{\circ}, & {\rm otherwise}
\end{cases}
\end{align}
where $\kappa(\rho)\triangleq\frac{1}{K}\sum_{k=1}^K\frac{1}{X_k\left(1+Y\rho\right)^{-\alpha/2}+1}<1$, and $\rho^{\circ}$ is the unique root of the equation $G(\rho)=0$, with $G(\rho)$ being a monotonically decreasing function of $\rho$ given by
\begin{equation}\label{G}
G(\rho)=1 + Y\rho \kappa(\rho)-\frac{Y\rho^{\alpha/2+1}}{Z}\left(1-\kappa(\rho)\right).
\end{equation}
\end{proposition}
\begin{proof}
Please refer to Appendix \ref{proof_proposition_subopt_rho}.
\end{proof}
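For illustration, Proposition \ref{proposition_subopt_rho} translates into the following Python sketch of \eqref{opt_rho} and \eqref{G}. It assumes a feasible parameter set, in particular $(X_k/Z)^{\delta}>Y$ for all $k$ with $\delta=2/\alpha$, so that $\rho_{\rm min}$ is well defined; the tolerance is an illustrative placeholder.
\begin{verbatim}
import math

def kappa(rho, X, Y, alpha):
    # kappa(rho) = (1/K) sum_k 1 / (X_k (1 + Y rho)^{-alpha/2} + 1)
    return sum(1.0 / (xk * (1.0 + Y * rho) ** (-alpha / 2.0) + 1.0)
               for xk in X) / len(X)

def G(rho, X, Y, Z, alpha):
    k = kappa(rho, X, Y, alpha)
    return 1.0 + Y * rho * k \
           - (Y * rho ** (alpha / 2.0 + 1.0) / Z) * (1.0 - k)

def optimal_rho(X, Y, Z, alpha, tol=1e-9):
    delta = 2.0 / alpha
    # Assumes (X_k/Z)^delta > Y for all k (feasible setup).
    rho_min = max(1.0 / ((xk / Z) ** delta - Y) for xk in X)
    k_min = kappa(rho_min, X, Y, alpha)
    # Case 1: marginal eavesdropping advantage -> minimal jamming.
    if Z < Y * rho_min ** (alpha / 2.0 + 1.0) * (1.0 - k_min) \
           / (1.0 + Y * rho_min * k_min):
        return rho_min
    # Case 2: strong eavesdroppers -> all idle sensors jam.
    k1 = kappa(1.0, X, Y, alpha)
    if Z >= Y * (1.0 - k1) / (1.0 + Y * k1):
        return 1.0
    # Otherwise: unique root of the decreasing G on (rho_min, 1).
    lo, hi = rho_min, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if G(mid, X, Y, Z, alpha) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
\end{verbatim}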
Some observations regarding the design of the optimal jamming probability $\rho^*$ can be obtained from Proposition \ref{proposition_subopt_rho}:
1) As previously explained, the variable $Z$ embodies the eavesdroppers' advantage in performing interception attacks.
When such advantage is marginal, i.e., $Z<\frac{Y\rho_{\rm min}^{\alpha/2+1}\left(1-\kappa(\rho_{\rm min})\right)}{1+ Y\rho_{\rm min}\kappa(\rho_{\rm min})}$, it is not necessary to activate too many sensors to radiate jamming signals to confuse the eavesdroppers.
In this case, we can simply set the optimal $\rho^*$ to its minimal achievable value $\rho_{\rm min}$.
Nevertheless, the maximization of the sum secrecy throughput comes at the expense of fairness, since there exists at least one sensor whose secrecy throughput is reduced to zero, namely the sensor with index $k^{\circ}=\arg\max_{k\in\mathcal{K}}\left(\left(\frac{X_k}{Z}\right)^\delta-Y\right)^{-1}$.
2) If the eavesdroppers' superiority exceeds a certain level, i.e., $Z\ge \frac{Y(1-\kappa(1))}{1+Y\kappa(1)}$, all the idle sensors have to be mobilized for anti-eavesdropping.
Hence, $\rho^*=1$ is optimal for the sum secrecy throughput maximization.
3) Beyond the above two situations, we should properly set the jamming probability $\rho$ to strike a good balance between throughput and secrecy. Although an explicit form of $\rho^{\circ}$ cannot be derived, we can still develop some useful properties of $\rho^{\circ}$ to guide practical designs, as summarized in the following corollary.
\begin{corollary}\label{corollary_opt_rho}
The optimal jamming probability $\rho^{\circ}$ decreases with the maximal endurable COP $\sigma$ and SOP $\epsilon$, the number $M_c$ of antennas at the FC side, the sensor density $\lambda_s$, and the ratio $P_j/P_a$ of jamming power to transmit power of a sensor, while it increases with the FC density $\lambda_c$, the number $K$ of sensors associated with each FC, the eavesdropper density $\lambda_e$, and the number $M_e$ of antennas at the eavesdropper side.
\end{corollary}
\begin{proof}
Please refer to Appendix \ref{proof_corollary_opt_rho}.
\end{proof}
\begin{figure}[!t]
\centering
\includegraphics[width= 3.5 in]{Trho.eps}
\caption{Secrecy throughput $ \mathcal{T} $ vs. jamming probability $ \rho $ for different $\epsilon$ and $\sigma$, with $ P_j = 1 $ dBm, $ \lambda_c = 0.01 $, $ M_c = 16 $, $ K = 4 $, $ M_e = 2$, and $ \lambda_e = 0.0001 $.}
\label{figTrho}
\end{figure}
Fig. \ref{figTrho} illustrates the secrecy throughput $\mathcal{T}$ as a function of jamming probability $\rho$ for both the optimal and sub-optimal schemes. As proved previously, $ \mathcal{T} $ first increases and then decreases as $ \rho $ increases, and there exists a unique optimal $ \rho $ that maximizes $ \mathcal{T} $.
We show that as either $\epsilon$ or $\sigma$ becomes larger, the optimal $\rho$ becomes smaller, producing a higher $ \mathcal{T} $ for both the optimal and sub-optimal schemes, which validates Corollary \ref{corollary_opt_rho}.
The reason is that under a looser SOP constraint (a larger $\epsilon$), fewer sensor nodes are required to send jamming signals against eavesdropping; meanwhile, when a larger COP $\sigma$ can be tolerated, activating fewer jammers significantly benefits the secrecy throughput by supporting a much larger secrecy rate.
We find that as $\rho$ increases, the two curves with different $\epsilon$'s but identical $\sigma$ merge. This implies that the jamming probability is then sufficiently large to defeat the eavesdroppers, such that the secrecy throughput becomes less sensitive to variations in the SOP constraint.
It is interesting to observe that the gap between the optimal and sub-optimal schemes narrows noticeably as the COP threshold $\sigma$ decreases.
This is because for a more stringent COP constraint, the feasible region of the secrecy rate for the optimal scheme shrinks such that the optimal secrecy rate maximizing secrecy throughput approaches that of the sub-optimal scheme.
\begin{figure}[!t]
\centering
\includegraphics[width= 3.5 in]{Tstar.eps}
\caption{Maximal secrecy throughput $ \mathcal{T}^* $ vs. eavesdropper density $\lambda_e$ for different $\epsilon$, $\lambda_c$, and $M_c$, with $ P_j = 1 $ dBm, $ K = 4 $, $ M_e = 2 $, and $ \sigma = 0.2 $.}
\label{figTstar}
\end{figure}
Fig. \ref{figTstar} plots the maximal secrecy throughput $ \mathcal{T}^* $ of both optimal and sub-optimal schemes.
It is easy to understand that $\mathcal{T}^*$ decreases with increasing density $\lambda_e$ of eavesdroppers and grows with increasing SOP threshold $\epsilon$, density $\lambda_c$ of FCs, and number $M_c$ of antennas at the FC side.
We also show that the gap between the optimal and sub-optimal schemes decreases as
$ \epsilon $ or $ \lambda_c$ increases or as $\lambda_e$ decreases.
The underlying reason is that in these situations, adopting a larger secrecy rate can be more beneficial for maximizing the secrecy throughput, even at the cost of reliability. This makes the resultant COP of the optimal scheme approach the COP threshold $\sigma$, which is exactly the COP of the sub-optimal scheme, and hence the secrecy throughput performance of the two schemes becomes similar.
\section{Conclusions}
Physical layer security was investigated for a large-scale WSN with random multiple access under a stochastic geometry framework.
An uncoordinated jamming scheme was devised to thwart the randomly distributed eavesdroppers.
Analytical expressions were derived for both the COP and SOP of the secure data delivery from sensors to a typical FC against eavesdropping.
Afterwards, the optimal wiretap code rates and the jamming probability were jointly designed to maximize the sum secrecy throughput subject to both COP and SOP constraints, with both optimal and sub-optimal algorithms examined.
Furthermore, some insights into how the optimal parameters should be adjusted to the communication environment and performance requirements were provided.
Numerical results were presented to validate the theoretical findings.
In particular, it was shown that for a stringent COP constraint or a loose SOP constraint, the performance gap between the optimal and sub-optimal schemes becomes insignificant, meaning that the sub-optimal scheme can be adopted as a low-complexity alternative to the optimal one.
\section{Introduction}
Surgical phase recognition is an important step for analyzing and optimizing surgical workflow and has been an important area of research within the computer-assisted interventions (CAI) community. Real-time surgical phase recognition technology is essential for developing context-aware systems, which can be used to provide automatic notifications regarding the progress of surgeries and can also be used for alerting the surgeon in the case of an inconsistency in the surgical workflow. Additionally, context-aware systems are important for human-machine interaction within the OR and find applications in surgical education.
The rise of laparoscopic surgery, in addition to improving the quality of surgery for the patient in terms of recovery, safety and cost, provides a rich source of information in the form of videos. Our approach relies purely on these videos for automatically extracting surgical phase information in real-time and does not utilize other sources of information such as tool usage signals, RFID data or data from other specialized instruments since these are not ubiquitous in laparoscopic procedures.
State-of-the-art surgical phase recognition algorithms using laparoscopic videos have achieved good levels of performance with accuracies greater than 80 percent \citep{thesistwinanda}. However, these algorithms are based on fully supervised learning, which limits their potential impact on the development of context-aware systems, since there is a dearth of manually annotated data. Creating manual annotations is an expensive process in terms of time and personnel. Even though there are a few large datasets of manually annotated data, they are available for only a few types of surgeries and cover just a small fraction of the total laparoscopic videos available. For actual clinical deployment there is a clear need for algorithms which are less reliant on manually annotated data in order to scale up surgical phase recognition to different types of surgeries and to be able to use all available data to obtain optimal performance. An unsupervised algorithm would be the ideal solution, however, no purely unsupervised method for training neural networks to effectively recognize surgical phases solely from the frames of laparoscopic videos is known at present. As a result, some degree of supervised learning is necessary. In this regard, we propose an effective semi-supervised algorithm for tackling the problem.
Previous work on self-supervised pre-training \citep{doersch2017multitask} has demonstrated that neural networks can learn a representation of certain inherent characteristics of data by first being trained to perform an auxiliary task for which labels are generated automatically. We propose to pre-train convolutional neural networks (CNN) and long short-term memory (LSTM) networks on the self-supervised task of predicting remaining surgery duration (RSD) from laparoscopic videos \citep{twinanda2018rsd}. We hypothesize that the progression of time in a surgical procedure is closely related to the phases of the surgery and that the variations in surgical phases often correspond to variations in the duration of surgeries. Hence, a model pre-trained to predict RSD could more easily adapt to surgical phase recognition and generalize better to variations in surgical phases. Additionally, the use of self-supervised learning makes it feasible to pre-train the network on a large number of laparoscopic videos. This could enable the network to generalize better to surgeries involving differences in patient characteristics, surgeon skill levels and surgeon styles. In this work, we modify the architecture and training approach used in \cite{twinanda2018rsd} for RSD prediction in order to make it more suitable for pre-training CNN-LSTM networks for surgical phase recognition. Our results show that the pre-training improves performance on the subsequent supervised surgical phase recognition task. Consequently, similar levels of performance could be obtained with less annotated data.
Despite its importance, very few publications have addressed the topic of semi-supervised surgical phase recognition. \cite{bod2017}, the only prior work that we know of to address this problem, presented a method to pre-train CNNs by predicting the correct temporal order of randomly sampled pairs of frames from laparoscopic videos. The idea is to enable the model to understand the temporal progression of laparoscopic workflow, quite similar to the goal of RSD pre-training. While this method does improve performance, its limitations are that it does not utilize complete video sequences to learn about surgical workflow and only the CNN is pre-trained. With the proposed RSD prediction task, the network is pre-trained on complete laparoscopic video sequences. Furthermore, LSTMs, which are responsible for learning temporal features, are pre-trained alongside CNNs. We believe this to be a more effective approach for learning about the temporal workflow of surgical procedures. The experimental results validate the advantages of our proposed RSD pre-training approach. In this work, we also present the first detailed analysis of the effect of self-supervised pre-training on surgical phase recognition performance when different amounts of annotated laparoscopic videos are available.
It has become a popular choice to combine recurrent neural networks (RNNs) with CNNs for surgical phase recognition \citep{jin2016,bod2017,thesistwinanda}. \cite{jin2016} and \cite{bod2017} trained CNN-RNN models in an end-to-end manner, which enables better correlation between spatial features extracted by the CNN and the temporal knowledge acquired by the RNN. However, due to the high space complexity of such an approach, the RNN is not unrolled over complete video sequences and is optimized on video segments. \cite{thesistwinanda} optimize their model, which we refer to as \textit{EndoLSTM}, over complete video sequences, which is ideal for capturing long range relationships within the surgical procedure, but achieve this by training the CNN and RNN separately in a two-step process. The aforementioned publications have not provided an apples-to-apples comparison between the two methods of training CNN-RNN networks, which we look to address. We propose a model (\textit{EndoN2N}), which optimizes a CNN-RNN network in an end-to-end manner on complete video sequences using an approximate backpropagation through time (BPTT) and compare it to the \textit{EndoLSTM} model based on the same architecture. Understanding the best method of training surgical phase recognition models is important for obtaining optimal performance. This helps when scaling up surgical phase recognition to different types of surgeries, since a better optimized model will require less annotated data to obtain the required levels of performance. We observe that end-to-end training leads to superior performance and better generalization within the different surgical phases.
The innovation presented in this paper can be summarized as follows: (1) introduction of RSD prediction as a self-supervised pre-training task, which outperforms the previous self-supervised pre-training approach proposed for surgical phase recognition, (2) self-supervised pre-training of CNN-LSTM networks in an end-to-end manner on long duration surgical videos, (3) the first systematic study of semi-supervised surgical phase recognition performance with variation in the amount of annotated data and (4) apples-to-apples comparison between an end-to-end CNN-LSTM training approach (\textit{EndoN2N}) and the two step optimization used in the \textit{EndoLSTM} model. We also present additional experiments to better understand the characteristics of the proposed RSD pre-training model and examine the potential of our models for actual clinical application.
\section{Related Work}\label{sec:related_work}
\subsection{Self-Supervised Learning}
Unsupervised representation learning has been an active area of research within the context of deep learning. Initial work on the topic focused on methods for initializing deep neural networks with weights close to a good local optimum, since no method was known at the time to effectively train randomly initialized deep networks. One of the most popular approaches was to learn compact representations which could be used for reconstruction of the input data \citep{hinton504,bengio2006}. \cite{hinton504} demonstrated a method for initializing the weights of a deep autoencoder through unsupervised training of stacked single-layer restricted Boltzmann machines (RBMs). \cite{bengio2006} showed that deep neural networks can be initialized with meaningful weights by training each layer as an individual autoencoder. Both these works highlighted the performance improvement obtained from unsupervised pre-training for subsequent supervised learning tasks.
The availability of large datasets containing millions of labeled high-resolution images made it possible to effectively train deep CNNs for vision tasks without relying on any form of pre-training. \cite{kri2012alexnet} trained a randomly initialized deep CNN on the ImageNet dataset for image recognition, which considerably outperformed previous state-of-the-art machine learning approaches. Since then, for several computer vision applications \citep{girshick2014, donahue2017}, pre-training CNNs using supervised learning on large datasets, such as ImageNet, has become the norm as it tends to outperform unsupervised pre-training approaches. However, large datasets of annotated data are not available in all domains and are difficult to generate. Hence, unsupervised representation learning is still a very attractive option, especially if it can match the performance of purely supervised pre-training approaches.
Self-supervised learning has been recently introduced as an alternate method for unsupervised pre-training. The goal is to learn underlying relationships in real-world data by utilizing inherent labels. Several approaches were presented to capture visual information from static images which would be beneficial for subsequent supervised learning tasks such as image classification and object detection. \cite{doersch2015} built a siamese network to predict the relative position between randomly sampled pairs of image patches in order to learn spatial context within images. \cite{noroozi2016} extended the method to arrange multiple randomly shuffled image patches in the correct order, essentially making the network solve a jigsaw puzzle. \cite{zhang2016} and \cite{larsson2016} proposed to pre-train CNNs by making them predict the original color of images which have been converted to grayscale. \cite{dosovitskiy2014} created surrogate classes corresponding to single images and extended the classes by applying several transformation to the images. They then pre-trained CNNs by learning to differentiate between different surrogate classes while being invariant to the transformations applied. However, the aforementioned self-supervised pre-training approaches are not ideal for a task such as surgical phase recognition, where it is beneficial to utilize video data rather than static images, as it possesses temporal information in addition to visual information.
Several works have explored self-supervised representation learning using video data. \cite{mobahi2009} presented a method to learn temporal coherence in videos by enforcing that the features extracted using a CNN from consecutive images be similar. \cite{agarwal2015} utilize egomotion as a supervisory signal for self-supervised pre-training. \cite{wang2015} proposed to learn video representations by tracking image patches through a video. \cite{misra2016unsupervised}, \cite{fernando2017}, \cite{lee2017opn}, and \cite{bod2017} all aimed to learn representations that capture the temporal structure of video data. \cite{misra2016unsupervised} pre-trained CNNs by predicting if a set of frames are in the correct temporal order and they formulated the task as a binary classification problem. \cite{fernando2017} sampled subsequences, containing both correct and incorrect temporal sequences, from videos and trained a network to distinguish the subsequences that have an incorrect temporal order. \cite{lee2017opn} trained a network to sort a sequence of randomly shuffled image frames into the correct temporal sequence. The method proposed by \cite{bod2017} involved predicting the correct order of a pair of frames, which have been randomly sampled from a laparoscopic video, and is very similar to the approach of \cite{misra2016unsupervised}. All of these approaches focus on pre-training CNNs. CNN-LSTM networks are often utilized for applications related to action recognition, where the LSTM is the critical component for learning temporal structure within video data. Hence, we claim that it is not always optimal to merely pre-train the CNN. We believe that pre-training both the CNN and LSTM networks would be ideal for learning representations that capture correlations between the spatial and temporal structure of video data.
For pre-training LSTM networks, future prediction in videos has been proposed as a self-supervised learning task \citep{srivastava2015,lotter2017}. It was argued that a learned representation which could be used to predict future frames of a video, would gather knowledge about temporal and spatial variations. \cite{srivastava2015} train an LSTM encoder-decoder network to simultaneously predict future frames and reconstruct a video sequence. This pre-training approach was shown to improve performance on activity recognition tasks. \cite{lotter2017} present a network for video prediction, comprising of CNNs and Convolutional LSTM networks \citep{shi2015}, inspired by neuroscience research on 'predictive coding'. The network was trained in an end-to-end manner and was shown to be more effective in predicting the future frames of a video as compared to an LSTM encoder-decoder network, but the potential for utilizing this approach as a pre-training step for action recognition was not explored. Despite future prediction approaches being viable for pre-training CNN-LSTM networks, they have only been validated on short video sequences. Our proposed method aims to obtain long-range spatio-temporal knowledge by utilizing complete laparoscopic videos, which are of long durations.
\begin{figure*}[t]
\centering
\includegraphics[width=18cm]{./phases_2.pdf}
\caption{Cholecystectomy surgical phases with mean ($\pm$ std) duration in seconds within the \textit{Cholec120} dataset.}\label{fig:phases}
\end{figure*}
\subsection{Surgical Phase Recognition}
Previous surgical phase recognition approaches have usually relied on either visual data \citep{twinanda2017endonet, blum2010}, tool usage signals \citep{padoy2012, forestier2013} or surgical action triplets \citep{forestier2015, katic2014}. Among these, visual information is the only source of data that is ubiquitous in all laparoscopic surgical procedures, whereas tool usage and triplet information are obtained either through specialized equipment or manual provision. Since the focus of this work is on the development of real-time surgical phase recognition approaches suitable for widespread deployment in ORs, we propose models which rely purely on visual data, though they could be extended to utilize other data too. In this section, only the previous works which also utilize visual data are discussed.
Various statistical models have been utilized for modeling the temporal structure of surgical videos. Hidden Markov Models (HMMs) were used in \cite{padoy2008}, \cite{lalys2012} and \cite{cadene2016}. \cite{derga2016} implemented a Hidden semi-Markov Model. \cite{twinanda2017endonet} utilized hierarchical HMMs; and Conditional Random Fields have also been a popular choice \citep{quellec2014, charrire2017, lea2015}. Some works have also utilized Dynamic Time Warping \citep{blum2010, lalys2013}, which is not applicable for real-time surgical phase prediction though, as the algorithm requires information from the entire video and is also not well suited for complex non-sequential workflows.
\cite{padoy2008} and \cite{derga2016} combined tool usage signals and visual cues from laparoscopic images for real-time surgical phase recognition, however widespread application of surgical phase recognition algorithms relying on tool usage signals seems to be a difficult task. Several works proposed effective approaches for surgical phase recognition in cataract surgeries \citep{lalys2012, quellec2014, quellec2015, charrire2017}. These approaches relied on handcrafted features though and it was shown in \cite{twinanda2017endonet} that automatically extracting features using a CNN significantly outperformed commonly utilized handcrafted features.
The approach proposed by \cite{cadene2016} is to provide a HMM with features extracted using a deep CNN. However, the use of a RNN, such as the LSTM, for temporal sequence learning is shown to perform better than HMMs \citep{thesistwinanda}.
Approaches that combine CNNs with RNNs have been presented in \cite{thesistwinanda}, \cite{jin2016} and \cite{bod2017}. The \textit{EndoLSTM} model presented in \cite{thesistwinanda} utilized a two-step approach of training a CNN and a LSTM independently for surgical phase recognition. The CNN was used to extract features specific to the surgical phases from the frames of laparoscopic videos, which were then provided to the LSTM during training. The LSTM was optimized on complete video sequences. However, theoretically, end-to-end training is ideal for combining the complementary spatial and temporal knowledge captured by the CNN and RNN networks respectively \citep{hajj2017}.
Practically, end-to-end training of a CNN-RNN network on complete laparoscopic video sequences is not feasible due to the high space complexity of the approach. Previous works have presented end-to-end training approaches optimized on video subsequences. \cite{jin2016} performed end-to-end optimization of a CNN-LSTM network over a set of 3 frames sampled at regular intervals from a laparoscopic video. This was the best performing model at the M2CAI 2016 surgical workflow challenge\footnote{http://camma.u-strasbg.fr/m2cai2016/index.php/workflow-challenge-results/}, outperforming the \textit{EndoLSTM} model. However, it is not evident if the performance improvement is due to the alternate training approach or the utilization of a deeper CNN. \cite{bod2017} incorporated a gated recurrent unit (GRU) \citep{cho2014} and trained a CNN-GRU network in an end-to-end manner on laparoscopic video subsequences. They copy the GRU's hidden state between consecutive subsequences belonging to the same video sequence. However, this model was unable to match the performance of \textit{EndoLSTM} on the EndoVis15Workflow\footnote{http://endovissub-workflow.grand-challenge.org/} dataset and the authors attributed this to the large cholecystectomy specific surgical dataset used to train \textit{EndoLSTM}. Essentially, no previous publication has provided an apples-to-apples comparison between an end-to-end optimization approach and the two-step optimization approach of \cite{thesistwinanda}.
\begin{figure*}[t]
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=8.5cm]{./cnn_finetuning_3-opt.pdf}
\caption{CNN Fine-Tuning Architecture}\label{fig:cnn_finetuning}
\end{subfigure}
\begin{subfigure}[t]{0.55\textwidth}
\centering
\includegraphics[width=9cm]{./endon2n_resized-opt.pdf}
\caption{CNN-LSTM Architecture}\label{fig:cnn_lstm}
\end{subfigure}
\caption{\textit{EndoN2N} model for surgical phase recognition. The initial CNN fine-tuning network is depicted in (a) and (b) depicts the CNN-LSTM network which is trained in an end-to-end manner, where the layers within the dotted line are initialized by the CNN fine-tuning step.}\label{fig:endon2n}
\end{figure*}
\section{Methodology}
In this paper, surgical phase recognition approaches for cholecystectomy surgeries are discussed. It is to be noted though that the proposed approaches are generalizable to other surgery types as well. We divide cholecystectomy surgical procedures into 7 distinct surgical phases, depicted in Figure \ref{fig:phases}, similar to \cite{twinanda2017endonet}. We classify each time step of a laparoscopic video as one of the 7 surgical phases, hence formulating the surgical phase recognition task as a multi-class classification problem.
This section will first discuss the CNN-LSTM architecture of the proposed \textit{EndoN2N} model. This will be followed by a detailed presentation of our end-to-end training approach to contrast it with existing CNN-LSTM training approaches.
Then, the proposed RSD pre-training will be presented. The motivation for using this pre-training task along with the proposed RSD prediction model and the corresponding surgical phase recognition model will be discussed. Finally, we will briefly describe the temporal context pre-training approach of \cite{bod2017}, which we use as a comparison baseline for our pre-training approach, since it is the only self-supervised pre-training approach previously validated for surgical phase recognition.
\begin{figure}[ht]
\begin{subfigure}[ht]{0.15\textwidth}
\centering
\includegraphics[height = 3.3cm]{./rolled_network.pdf}
\caption{Rolled graph}\label{fig:rolled}
\end{subfigure}
\begin{subfigure}[ht]{0.35\textwidth}
\centering
\includegraphics[height = 3.3cm]{./unrolled_network.pdf}
\caption{Unrolled graph}\label{fig:unrolled}
\end{subfigure}
\caption{Computation graph for end-to-end training of a CNN-LSTM network. $I_t$ and $P_t$ are the input frame and network prediction at the $t^{\textnormal{th}}$ time-step of a sequence, respectively. (a) shows the rolled CNN-LSTM network. (b) depicts the unrolled computational graph, with the blue lines illustrating the backpropagation of the loss through the CNN-LSTM network for a sequence of length $T$.}\label{fig:graph}
\end{figure}
\begin{figure*}[t]
\begin{subfigure}[t]{0.4\textwidth}
\centering
\includegraphics[height=2.3cm]{./progress_cnn-opt.pdf}
\caption{Progress Regression Architecture}\label{fig:progress_cnn}
\end{subfigure}
\begin{subfigure}[t]{0.6\textwidth}
\centering
\includegraphics[height=2.3cm]{./rsd_model-opt.pdf}
\caption{RSD and Progress Multi-Task Network}\label{fig:rsdnet2}
\end{subfigure}
\begin{center}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[height=2.3cm]{./updated_endon2n-opt.pdf}
\caption{Updated \textit{EndoN2N} Model}\label{fig:endon2n_updated}
\end{subfigure}
\end{center}
\caption{Proposed RSD pre-training model. CNN architecture for progress regression is shown in (a). CNN-LSTM network designed for multi-task RSD and progress regression, incorporating elapsed time and predicted progress as additional LSTM features, is depicted in (b). (c) is the updated \textit{EndoN2N} CNN-LSTM network for compatibility with the RSD pre-training model. The layers within the dotted lines are fine-tuned after having been initialized on the training task mentioned within the brackets. The layers outside the dotted lines are randomly initialized.}\label{fig:RSD_pretraining}
\end{figure*}
\subsection{\textit{EndoN2N}}
The \textit{EndoN2N} model, depicted in Figure \ref{fig:cnn_lstm}, combines a CNN with a LSTM. The model is adaptable to any CNN architecture and RNN variant. We utilize the LSTM as our recurrent network due to its robustness to the vanishing gradient problem \citep{hoc1997, bengio1994vanish}. Our experiments are performed using the CaffeNet \citep{Jia2014caffe} CNN architecture, which is a slight modification of the AlexNet \citep{kri2012alexnet} architecture. Although we observed in our preliminary experiments that a deeper CNN architecture improves performance, we utilize CaffeNet, a relatively shallow architecture, for two reasons: (1) our aim is to provide an apples-to-apples comparison between the end-to-end training approach of the proposed \textit{EndoN2N} model and the two-step training approach of \textit{EndoLSTM}. This only requires a common CNN architecture to be used in both models and is not dependent on any specific architecture. Additionally, the advantages of the proposed RSD pre-training approach can also be demonstrated using any CNN architecture. (2) end-to-end CNN-LSTM training is computationally intensive. The use of a relatively shallow CNN makes it possible to perform an extensive experimental evaluation in order to clearly demonstrate the advantages of our proposed semi-supervised approach.
As the CNN-LSTM network is trained in an end-to-end manner, we have named the model \textit{EndoN2N}. Before the end-to-end training step, the CNN is first separately fine-tuned for surgical phase recognition, as depicted in Figure \ref{fig:cnn_finetuning}, in order to initialize the CNN-LSTM network with informative features corresponding to the surgical phases. In our experiments, we observe that the CNN-LSTM training converges to a poor local optimum unless the CNN is first independently fine-tuned for surgical phase recognition.
The CNN fine-tuning is accomplished by replacing the final fully connected layer of the CaffeNet architecture with a fully connected layer, $\textnormal{fc}'_{phase}$ as shown in Figure \ref{fig:cnn_finetuning}, of size equal to the number of surgical phases. In our case, there are $7$ output neurons. All the fine-tuned layers of the CNN, except $\textnormal{fc}'_{phase}$, are then appended to an LSTM which in turn is followed by a new fully connected layer, $\textnormal{fc}_{phase}$, also containing as many output neurons as the number of surgical phases (Figure \ref{fig:cnn_lstm}). The softmax function is applied at the end of both $\textnormal{fc}'_{phase}$ and $\textnormal{fc}_{phase}$ layers to obtain a probability distribution over the different surgical phases.
Since surgical phase recognition is formulated as a multi-class classification problem, we compute the classification loss using the multinomial logistic function defined as:
\begin{equation}
\mathcal{L} = \frac{-1}{T}\sum_{t=1}^{T}\sum_{p=1}^{M}y^{t}_{p}\log(\sigma(z^{t})_{p})
\label{eq:loss}
\end{equation}
where $T$ is the total number of frames in a laparoscopic video, $M$ refers to the number of distinct surgical phases, $y^{t}_{p}\in\{0,1\}$ is the ground truth for phase $p$, $z^{t}$ is the vector of activations of $\textnormal{fc}_{phase}$ at the $t^{\textnormal{th}}$ time step of the surgery, and $\sigma(\cdot)_{p}$ is the softmax function computing the predicted probability of phase $p$.
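As a minimal illustration, the loss in \eqref{eq:loss} can be computed for one video as in the following sketch; it uses NumPy arrays and is not tied to any particular deep learning framework.
\begin{verbatim}
import numpy as np

def phase_loss(z, y):
    """Multinomial logistic loss over one video of T frames.

    z: (T, M) array of fc_phase activations.
    y: (T,) array of ground-truth phase indices in {0, ..., M-1}."""
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_softmax[np.arange(len(y)), y].mean()
\end{verbatim}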
\subsubsection{Training Approach}\label{sec:training}
Here we present a detailed explanation of our end-to-end training approach for optimizing CNN-LSTM networks on long duration video sequences. The aim is to contrast our approach with the training approach employed in \textit{EndoLSTM} and existing approaches for end-to-end training of CNN-LSTM networks on laparoscopic video subsequences. Since the following discussion focuses on the approximation in the BPTT algorithm, a generic description of the layer-wise gradient computation is presented. While the discussion is based on the basic stochastic gradient descent algorithm for simplicity, any other optimization algorithm can also be used.
As illustrated in Figure \ref{fig:graph}, end-to-end training of a CNN-LSTM network requires the loss to be backpropagated through both the LSTM and CNN. Additionally, the BPTT algorithm requires the loss to be backpropagated from the last time step of a sequence, $t=T$, to the very first time step, $t=1$. If we denote the CNN weights as $W_{cnn}$ and the weights belonging to the LSTM as $W_{lstm}$, the gradients of the loss function $\mathcal{L}$, with respect to the network weights at a time instant $t$ can be expressed as:
\begin{equation}\label{eq:lstm_grad}
\frac{\partial\mathcal{L}}{\partial W_{lstm}}^{t} = f\left(P_t , \frac{\partial\mathcal{L}}{\partial W_{lstm}}^{t+1}\right),
\end{equation}
\begin{equation}\label{eq:cnn_grad}
\frac{\partial\mathcal{L}}{\partial W_{cnn}}^{t} = g\left(\frac{\partial\mathcal{L}}{\partial W_{lstm}}^{t}\right),
\end{equation}
where $f$ and $g$ are generic functions used to express the computation of gradients for different layers of the LSTM and CNN and $P_{t}$ is the network prediction at the $t^{\textnormal{th}}$ time step as illustrated in Figure \ref{fig:graph}.
The boundary condition of the BPTT algorithm at the end of the sequence is:
\begin{equation}\label{eq:bc_true}
\frac{\partial\mathcal{L}}{\partial W_{lstm}}^{T+1} = 0.
\end{equation}
\begin{figure}[t]
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{./true_backprop_2.pdf}
\caption{}\label{fig:true_backprop}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\vspace{0.25cm}
\includegraphics[width=\textwidth]{./backprop_2.pdf}
\caption{}\label{fig:backprop}
\end{subfigure}
\caption{Illustration of the BPTT algorithm. Red arrow denotes a forward pass and a green arrow denotes a backward pass. (a) depicts the standard algorithm and (b) illustrates our approximation.}
\end{figure}
The stochastic gradient descent weight update when utilizing a mini-batch of one video sequence of length $T$ is given by:
\begin{equation}\label{eq:sgd_backprop}
W^{\tau+1} = W^{\tau} - \eta\sum_{t=1}^{T}\frac{\partial\mathcal{L}}{\partial W^{\tau}}^{t},
\end{equation}
where $W^{\tau}$ are the learned CNN-LSTM weights at the end of $\tau$ training iterations. The weights are updated using the gradients computed for the entire video sequence, as shown in Figure \ref{fig:true_backprop}. The recursive structure of Equation \eqref{eq:lstm_grad} implies that to calculate the gradient of the loss function with respect to the network weights, $W$, at the first time step of the sequence, we require the gradients from the final time step. For this to be possible, the entire unrolled computational graph of the CNN-LSTM network, Figure \ref{fig:unrolled}, needs to be stored in memory. Due to the long duration of cholecystectomy surgeries and the large number of CNN parameters, end-to-end training of a CNN-LSTM network on complete laparoscopic video sequences has a high space complexity. Since no efficient method for storing the complete unrolled graph of the CNN-LSTM network during training is known, we utilize an approximation of the BPTT algorithm.
In our approach, we restructure the loss shown in Equation \eqref{eq:loss} as:
\begin{equation}
\mathcal{L} = \frac{-1}{\ell}\sum_{k=1}^{\ell}\frac{\ell}{T}\sum_{t=\frac{(k-1)T}{\ell}+1}^{\frac{kT}{\ell}}\sum_{p=1}^{M}y^{t}_{p}\log(\sigma(z^{t})_{p})
\label{eq:new_loss}
\end{equation}
In Equation \eqref{eq:new_loss} we divide the complete laparoscopic video into $\ell$ consecutive subsequences. $\ell$ is appropriately selected such that the available computational resources are sufficient for storing the unrolled CNN-LSTM graph for $\sfrac{T}{\ell}$ time-steps. The loss is backpropagated for every subsequence and the gradients are accumulated independently over the $\ell$ different subsequences before updating the weights.
In this method, at the boundaries between consecutive subsequences, the LSTM cell states and hidden states are forward propagated, while the BPTT algorithm is truncated as illustrated in Figure \ref{fig:backprop}. This implies an approximation in Equation \eqref{eq:lstm_grad} at the subsequence boundaries as:
\begin{equation}\label{eq:approx_lstm}
\frac{\partial\mathcal{L}}{\partial W_{lstm}}^{\frac{kT}{\ell}} = f(P_{\frac{kT}{\ell}},0), \hspace{0.5cm}\forall k = 1,2,...,\ell \\
\end{equation}
Since the gradients of the loss with respect to the weights of the CNN, Equation \eqref{eq:cnn_grad}, depend on the loss gradients with respect to the LSTM weights, they are approximated as well. The stochastic gradient descent weight update step is now computed as:
\begin{equation}\label{eq:sgd_backprop2}
W^{\tau+1} = W^{\tau} - \eta\sum_{k=1}^{\ell}\sum_{t=\frac{(k-1)T}{\ell}+1}^{\frac{kT}{\ell}}\frac{\partial\mathcal{L}}{\partial W^{\tau}}^{t}.
\end{equation}
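A minimal PyTorch-style sketch of this approximate BPTT is given below; it assumes a \texttt{model} that maps a chunk of frames and the incoming LSTM state to phase logits and the updated state, and a cross-entropy \texttt{criterion} with mean reduction, neither of which is our actual implementation. Dividing each chunk loss by $\ell$ before backpropagation reproduces the $\frac{1}{\ell}\cdot\frac{\ell}{T}$ weighting of \eqref{eq:new_loss}, and calling the optimizer once per video yields the update of \eqref{eq:sgd_backprop2}.
\begin{verbatim}
import torch

def train_on_video(model, optimizer, criterion, frames, labels, l):
    """Approximate BPTT over one video split into l chunks.

    Assumes the number of frames is divisible by l."""
    chunk = frames.size(0) // l
    state = None                 # (h_0, c_0) initialized to zeros
    optimizer.zero_grad()
    for k in range(l):
        x = frames[k * chunk:(k + 1) * chunk]
        y = labels[k * chunk:(k + 1) * chunk]
        logits, state = model(x, state)
        # Truncate BPTT: carry the state forward, block its gradient.
        state = tuple(s.detach() for s in state)
        loss = criterion(logits, y) / l
        loss.backward()          # gradients accumulate across chunks
    optimizer.step()             # one weight update per video
\end{verbatim}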
\subsection{\textit{EndoLSTM}}
The architecture adopted for the \textit{EndoLSTM} model in our experiments is exactly the same as that of the \textit{EndoN2N} model, Figure \ref{fig:endon2n}. This is essential to be able to provide an accurate comparison between the two models. Similar to the \textit{EndoN2N} model, the CNN is first fine-tuned for surgical phase recognition to provide the LSTM with informative features. The difference between the two models lies in the backpropagation of the gradients through the CNN-LSTM network. In the \textit{EndoLSTM} model, only the weights of the LSTM are updated using the BPTT algorithm, while the weights of the CNN remain fixed, as depicted in Figure \ref{fig:lstm_graph}. Since the LSTM does not contain a large number of parameters like a CNN, it is feasible to store in memory the computational graph of the unrolled LSTM network over complete cholecystectomy video sequences. Hence, we do not need to approximate the BPTT algorithm.
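For contrast with the end-to-end loop above, the two-step optimization can be sketched as follows; here \texttt{cnn} and \texttt{lstm\_head} (the LSTM plus $\textnormal{fc}_{phase}$, unrolled over the full sequence) are assumed modules rather than our actual code.
\begin{verbatim}
import torch

def train_endolstm_on_video(cnn, lstm_head, optimizer, criterion,
                            frames, labels):
    """Two-step optimization: frozen CNN, exact BPTT for the LSTM."""
    with torch.no_grad():        # gradients never reach the CNN
        feats = cnn(frames)      # (T, feature_dim) for all T frames
    optimizer.zero_grad()
    logits = lstm_head(feats)    # unrolled over the complete video
    loss = criterion(logits, labels)
    loss.backward()              # exact BPTT, no truncation needed
    optimizer.step()
\end{verbatim}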
\begin{figure}[t]
\begin{subfigure}[ht]{0.15\textwidth}
\centering
\includegraphics[height = 2.5cm]{./rolled_lstm.pdf}
\label{fig:lstm_rolled}
\end{subfigure}
\begin{subfigure}[ht]{0.35\textwidth}
\centering
\includegraphics[height = 2cm]{./unrolled_lstm.pdf}
\label{fig:lstm_unrolled}
\end{subfigure}
\caption{Computation graph for two-step optimization of a CNN-LSTM network. $I_t$ and $P_t$ are the input frame and network prediction at the $t^{\textnormal{th}}$ time-step of a sequence, respectively. $F_t$ is the extracted CNN features corresponding to $I_t$. The rolled graph is shown on the left. The unrolled LSTM network is depicted on the right.}\label{fig:lstm_graph}
\end{figure}
\subsection{Remaining Surgery Duration Pre-training}
A key contribution of this work is the self-supervised pre-training of CNN-LSTM networks on the RSD prediction task. We hypothesize that accurate prediction of the time remaining in a surgery requires a good understanding of the surgical workflow. It is likely that a network that has been trained to accurately predict RSD would have indirectly gained knowledge related to the different surgical phases that occur, the duration of each phase and the variations in these surgical phases, since these correspond to variations in the remaining surgery duration. This could make it easier for the network to later be adapted for surgical phase recognition, thereby requiring less manually annotated data and making it easier to scale up surgical phase recognition to many types of surgeries.
The RSD prediction task is formulated as a supervised regression task, where the network is provided with labels for the remaining surgery duration. Since for a given laparoscopic video the remaining surgery duration at a time instant is simply the remaining time in that video, the labels are available without the need for any manual annotation. As the labels are obtained for \textit{free}, RSD prediction is a self-supervised learning task. This makes it feasible to utilize a large number of laparoscopic videos to train a network for RSD prediction, ensuring that potentially valuable information from even unlabeled videos is exploited. For example, the network could acquire knowledge related to variable patient conditions and surgeon styles, thereby making it generalize better.
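Generating these labels is trivial, as the short sketch below shows for one video; the frame rate argument is an illustrative assumption matching the 1 fps annotation rate used in our experiments.
\begin{verbatim}
def rsd_labels(num_frames, fps=1.0):
    """Self-supervised RSD and progress labels for one video.

    At frame t the remaining surgery duration is simply the time
    left in the video, and progress is the fraction elapsed, so no
    manual annotation is required."""
    T = num_frames / fps                         # total duration (s)
    t_el = [t / fps for t in range(num_frames)]  # elapsed time
    rsd = [T - t for t in t_el]
    progress = [t / T for t in t_el]
    return rsd, progress
\end{verbatim}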
\subsubsection{RSD Prediction Model}
The CNN-LSTM network for RSD prediction is shown in Figure \ref{fig:rsdnet2}. The model is similar to the RSDNet model presented in \cite{twinanda2018rsd}, but for two key changes: (1) elapsed time and predicted progress are taken as additional input features into the LSTM and (2) the CNN-LSTM network is trained in an end-to-end manner. Although the original RSDNet model uses a two-step optimization, similar to the \textit{EndoLSTM} model, end-to-end training is the most natural choice for pre-training a CNN-LSTM network. End-to-end training enables the optimal correlation between the features learned by the CNN and by the LSTM.
We adopt the approach proposed by \cite{twinanda2018rsd} to learn a RSD prediction model without any manual annotation, which is contrary to previous approaches \citep{aksamentov2017miccai}. This involves first the fine-tuning of the CNN for progress estimation, as depicted in Figure \ref{fig:progress_cnn}, which is the task of predicting the percentage of the surgery that has been completed at a given time instant. Progress estimation is also formulated as a self-supervised regression task. The CNN-LSTM model is then trained for the multi-task objective of RSD prediction and progress prediction. \cite{twinanda2018rsd} showed that training for this multi-task objective was better than training for RSD alone.
End-to-end training of a CNN-LSTM network on laparoscopic video sequences for RSD prediction is performed with the same approach used in \textit{EndoN2N} (Section \ref{sec:training}). We restructure the loss function as:
\begin{equation}
\begin{split}
\mathcal{L} &= \frac{1}{T}\sum_{t=1}^{T}\left[\Omega(z^{t}_{rsd} - y^{t}_{rsd}) + \Omega(\rho(z^{t}_{prog}) - y^{t}_{prog})\right],\\
&= \frac{1}{\ell}\sum_{k=1}^{\ell}\frac{\ell}{T}\sum_{t=\frac{(k-1)T}{\ell}+1}^{\frac{kT}{\ell}}\left[\Omega(z^{t}_{rsd} - y^{t}_{rsd}) + \Omega(\rho(z^{t}_{prog}) - y^{t}_{prog})\right]
\label{eq:smooth_l1_loss}
\end{split}
\end{equation}
where $y^{t}_{rsd}$ and $y^{t}_{prog}$ are the ground truths for RSD and progress, and $z^{t}_{rsd}$ and $z^{t}_{prog}$ are the activations of the fully connected layers $\textnormal{fc}_{\textnormal{rsd}}$ and $\textnormal{fc}_{\textnormal{prog}}$ for the $t^{\textnormal{th}}$ frame of the laparoscopic video. $\rho(\cdot)$ is the sigmoid function, and $\Omega$ is the smooth L1 loss \citep{girshick2015} defined as:
\begin{equation}
\Omega(x) = \begin{dcases}
0.5x^{2},& \textnormal{if } |x|<1 \\
|x|-0.5,& \textnormal{otherwise}.
\end{dcases}
\end{equation}
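A compact PyTorch-style sketch of the multi-task objective in \eqref{eq:smooth_l1_loss}, assuming the activations and targets for one (sub)sequence are available as tensors, is:
\begin{verbatim}
import torch
import torch.nn.functional as F

def rsd_progress_loss(z_rsd, z_prog, y_rsd, y_prog):
    """Multi-task RSD + progress objective with the smooth L1 loss.

    z_rsd, z_prog: fc_rsd and fc_prog activations over a sequence.
    y_rsd, y_prog: self-supervised RSD and progress targets."""
    rsd_term = F.smooth_l1_loss(z_rsd, y_rsd)
    prog_term = F.smooth_l1_loss(torch.sigmoid(z_prog), y_prog)
    return rsd_term + prog_term
\end{verbatim}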
\subsubsection{Updated Surgical Phase Recognition Model}
In \cite{twinanda2018rsd}, it was argued that knowledge of the elapsed time ($t_{el}$) and progress ($prog$) was beneficial for RSD prediction, since they possess a fundamental relation with RSD ($t_{rsd}$), as shown below:
\begin{equation}\label{eq:time}
t_{rsd} = T - t_{el} = \frac{t_{el}}{prog} - t_{el},
\end{equation}
where $T$ is the total duration of the surgery. The RSDNet model simply concatenated elapsed time along with the output of the LSTM and incorporated progress only as an additional output to be predicted. We incorporate elapsed time as well as estimated progress, $\textnormal{fc}'_{\textnormal{prog}}$, as input features to the LSTM itself as shown in Figure \ref{fig:rsdnet2}. We believe the LSTM is then capable of learning more complex relationships in between the elapsed time and the model's perception of surgery progress and RSD.
The goal of utilizing these additional features in our CNN-LSTM model is to make RSD prediction a more effective pre-training approach for surgical phase recognition. The \textit{EndoN2N} model, however, needs to be modified in order to be compatible with the proposed RSD pre-training model. The updated \textit{EndoN2N} model architecture, which includes these additional features as inputs to the LSTM, is shown in Figure \ref{fig:endon2n_updated}. We later present an ablation study to demonstrate the advantages of proposed RSD pre-training model and the updated \textit{EndoN2N} architecture.
\subsection{Temporal Context Pre-training}
\begin{figure}[t]
\centering
\includegraphics[width=4.5cm]{./tempcon-opt.pdf}
\caption{TempCon pre-training model.}\label{fig:tempcon}
\end{figure}
The TempCon pre-training approach of \cite{bod2017}, which we use as a baseline for comparison, aims to learn the temporal order of laparoscopic workflow by training a Siamese network to predict the relative order of two randomly sampled frames of a laparoscopic video. The specific model used in our experiments is derived from the architecture of \cite{misra2016unsupervised}, which is designed for the similar task of predicting the correct temporal order of randomly sampled frames, since we observed it to perform better than the architecture of \cite{bod2017} in our preliminary experiments ($74$ percent vs. $72$ percent accuracy on the temporal context prediction task). Additionally, such an architecture ensures that all the layers of the CaffeNet CNN used in the subsequent surgical phase recognition task will be pre-trained.
This approach is designed for pre-training CNNs only. A two-stream Siamese network is created by replicating layers conv1 to fc7 of the CaffeNet architecture, as shown in Figure \ref{fig:tempcon}. Two randomly sampled frames from a laparoscopic video are provided as inputs to the network. Weights are shared between the two streams. The final layers of the Siamese network are concatenated and are followed by a fully connected layer comprising two neurons, which provides the classification output. Each neuron respectively corresponds to one of the input frames. The output of the network is either 0 or 1 depending on whether frame 1 or frame 2 is predicted as occurring first in the surgical sequence.
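Training examples for this task are generated on the fly. A minimal sketch of the pair sampling and automatic labeling, under our assumption of a uniform sampling strategy, reads:
\begin{verbatim}
import random

def sample_tempcon_pair(video_frames):
    """Sample one self-labeled example for TempCon pre-training."""
    i, j = sorted(random.sample(range(len(video_frames)), 2))
    if random.random() < 0.5:
        return video_frames[i], video_frames[j], 0  # frame 1 first
    else:
        return video_frames[j], video_frames[i], 1  # frame 2 first
\end{verbatim}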
\begin{table*}[t]
\begin{centering}
\begin{tabular}{|l|c|c||c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{Model}} & \multirow{2}{*}{Task} & \multicolumn{7}{c|}{Hyperparameters}\tabularnewline
\cline{4-10}
\multicolumn{2}{|c|}{} & & Optimizer & Iterations & $\alpha$ & Step-Size& $\gamma$ & Batch-Size & $\lambda$ \tabularnewline
\hline
\hline
\multirow{2}{*}{\textit{EndoN2N}} & CNN Fine-tuning & Phase & SGD & 50k & $10^{-3}$ & 20k & 0.1 & 50& 5$\cdot10^{-4}$\tabularnewline
\cline{2-10}
& CNN-LSTM training & Phase & Adam & 8k & $10^{-4}$ & 2k & 0.25& 500 $\times$12 & 5$\cdot10^{-4}$\tabularnewline
\hline
\multirow{2}{*}{\textit{EndoLSTM}} & CNN fine-tuning & Phase & SGD & 50k &$10^{-3}$ &20k & 0.1 & 50& 5$\cdot10^{-4}$\tabularnewline
\cline{2-10}
& LSTM training & Phase & SGD & 30k & $10^{-3}$& 10k& 0.1& 6000& 5$\cdot10^{-4}$\tabularnewline
\hline
\hline
\multirow{2}{*}{RSD Prediction} & CNN fine-tuning & Progress & SGD & 50k &$10^{-3}$ &15k & 0.1 & 64 & 5$\cdot10^{-4}$\tabularnewline
\cline{2-10}
& CNN-LSTM training & RSD - Progress & SGD & 8k &$10^{-3}$ & 2k& 0.5& 500 $\times$12& $10^{-3}$\tabularnewline
\hline
\multicolumn{2}{|c|}{Temporal Context prediction} & Relative Order & SGD & 50k & 5$\cdot10^{-4}$& n/a&n/a &160 &5$\cdot10^{-4}$ \tabularnewline
\hline
\end{tabular}\tabularnewline
\par\end{centering}
\vspace{0.2cm}
\caption{Training hyperparameters for each individual model, including their different training steps and their respective training task.\label{tab:hyperparameters}}
\end{table*}
\section{Experimental Setup}
The experiments are carried out on \textit{Cholec120}, a dataset of 120 cholecystectomy laparoscopic videos. The surgical procedures contained in the dataset were performed by 33 surgeons at the University Hospital of Strasbourg. The videos are recorded at 25 fps and have an average duration of 38.1 mins ($\pm$16.0 mins). In total, the dataset comprises over 75 hours of recordings. All 120 videos have been annotated at a frame rate of 1 fps with surgical phase labels corresponding to the 7 phases shown in Figure \ref{fig:phases}.
We have designed experiments for two specific goals: (1) to evaluate the improvement in performance obtained by the \textit{EndoN2N} model over \textit{EndoLSTM} and (2) to demonstrate the benefits of the proposed self-supervised RSD pre-training approach in reducing the reliance of supervised surgical phase recognition algorithms on annotated data. The division of data for the experiments and the evaluation metrics are described below.
\subsection{\textit{EndoN2N} Evaluation}
Surgical phase recognition performance is evaluated on the \textit{Cholec120} dataset using a 4-fold cross-validation setup. Each fold is divided into 80 training, 10 validation and 30 test videos. 60 randomly sampled training videos (75 percent) are used to first fine-tune the CNN individually for surgical phase recognition. All 80 videos are then used to train the combined CNN-LSTM network of the \textit{EndoN2N} model. In the case of the \textit{EndoLSTM} model, the LSTM is independently trained on the 80 training videos by utilizing features extracted from the fine-tuned CNN. All 10 validation videos are used during CNN fine-tuning as well as CNN-LSTM or just LSTM training in case of \textit{EndoN2N} or \textit{EndoLSTM}, respectively. The final \textit{EndoN2N} model weights selected for testing correspond to the best performing model on the validation set. The validation videos are also used to perform the hyperparameter search discussed in Section \ref{sec:model_training}. Both models are evaluated on all 30 of the test videos. The final results presented are the averages over the four folds.
\subsection{RSD Pre-training Evaluation}\label{sec:pretraining_data}
The \textit{EndoN2N} model is utilized to evaluate the advantages of the proposed RSD pre-training approach in reducing the amount of annotated data required for successful surgical phase recognition. We perform a comparison between the following three pre-training approaches: (1) RSD pre-training, (2) TempCon pre-training and (3) no self-supervised pre-training. The experiments are conducted using the same 4-fold cross-validation setup. For each fold, all 80 training videos are used for pre-training the network, without relying on the available annotations. The \textit{EndoN2N} model is then fine-tuned for surgical phase recognition using 10, 20, 25, 40, 50, 80 and 100 percent of the labeled training videos from each fold, i.e., 8, 16, 20, 32, 40, 64 and 80 videos respectively. The 80 training videos of each fold are divided into four quartiles based on the surgery durations and the supervised fine-tuning subsets are created by sampling an equal number of videos from each quartile. 4 different subsets of 8, 16, 20, 32 and 40 videos along with 2 different subsets of 64 videos are sampled from each fold. The average performance over these different subsets is evaluated in order to ensure that the model is not biased by the particular videos selected. 75 percent of the total fine-tuning videos are randomly sampled in each case for the initial CNN fine-tuning step. The evaluation is again performed on the 30 test videos of each fold, and the final results are the averages over all four folds.
\subsection{Evaluation Metrics}
To provide a quantitative measure of the performance of the proposed surgical phase recognition models, we utilize the metrics of accuracy, precision and recall as defined in \cite{padoy2012}. Accuracy is defined as the percentage of correct surgical phase predictions within a laparoscopic video. Precision is defined as the ratio between correct predictions and the total number of predictions, while recall is the ratio between correct predictions and the total number of instances in the ground truth. In every laparoscopic video, precision and recall are computed for each individual phase and the average values are reported as well.
To compare the performance of the various models utilizing self-supervised pre-training, we use the F1-score metric, which is the harmonic mean of the average precision and recall values, since it provides a balanced measure of the combined precision and recall metrics. The use of a single score allows us to concisely quantify the performance of different models and eases comparisons.
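As a minimal illustration of these definitions, the metrics can be computed from per-frame ground-truth and predicted labels as follows (a sketch under the stated definitions; the function and variable names are our own):
\begin{verbatim}
# Accuracy, per-phase precision/recall and F1-score for one video,
# assuming integer phase labels 0..6 sampled at 1 fps.
import numpy as np

def phase_metrics(gt, pred, n_phases=7):
    gt, pred = np.asarray(gt), np.asarray(pred)
    accuracy = 100.0 * np.mean(gt == pred)
    prec, rec = [], []
    for p in range(n_phases):
        tp = np.sum((pred == p) & (gt == p))        # correct predictions
        n_pred, n_gt = np.sum(pred == p), np.sum(gt == p)
        prec.append(100.0 * tp / n_pred if n_pred else np.nan)
        rec.append(100.0 * tp / n_gt if n_gt else np.nan)
    avg_p, avg_r = np.nanmean(prec), np.nanmean(rec)
    f1 = 2 * avg_p * avg_r / (avg_p + avg_r)        # harmonic mean
    return accuracy, prec, rec, avg_p, avg_r, f1
\end{verbatim}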
\begin{table*}[t]
\begin{center}
\begin{tabular}{|c c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{ Model }} & \multirow{2}{*}{ Accuracy } & \multirow{2}{*}{ Average Precision } & \multirow{2}{*}{ Average Recall } & \multirow{2}{*}{ F1-Score } \tabularnewline
\multicolumn{2}{|c|}{} & & & &\tabularnewline
\hline
\multicolumn{2}{|c|}{ \textit{EndoN2N} } & \textbf{86.7$\pm$9.3} & \textbf{81.4$\pm$23.0} & \textbf{80.9$\pm$22.1} & \textbf{81.1$\pm$7.5}\tabularnewline
\hline
\multicolumn{2}{|c|}{ \textit{EndoLSTM} } & 83.0$\pm$10.8 & 77.5$\pm$24.0 & 77.2$\pm$24.2 & 77.3$\pm$8.0\tabularnewline\hline
\end{tabular}
\end{center}
\caption{Surgical phase recognition performance in terms of accuracy, average precision, average recall and F1-score (percentages) evaluated on the complete Cholec120 dataset. Results have been calculated using 4-fold cross validation.}\label{tab:full_cholec120}
\end{table*}
\begin{table*}[t]
\begin{subtable}{\linewidth}
\begin{center}
\begin{tabular}{|l| *{7}{c|}}
\hline
Precision & P1 & P2 & P3 & P4 & P5 & P6 & P7 \tabularnewline
\hline
\textit{EndoN2N} & \textbf{84.5$\pm$25.0} & \textbf{91.5$\pm$11.2} & \textbf{76.3$\pm$22.9} & \textbf{90.2$\pm$13.9} & \textbf{79.4$\pm$18.6} & 72.3$\pm$32.9 & \textbf{75.4$\pm$28.1} \tabularnewline
\hline
\textit{EndoLSTM} & 77.5$\pm$28.3 & 90.8$\pm$11.5 & 64.4$\pm$28.1 & 83.8$\pm$18.7 & 76.6$\pm$20.2 & \textbf{74.9$\pm$29.1} & 74.5$\pm$26.6 \tabularnewline
\hline
\end{tabular}
\end{center}
\vspace{0.2cm}
\end{subtable}
\begin{subtable}{\linewidth}
\begin{center}
\begin{tabular}{|l| *{7}{c|}}
\hline
Recall & P1 & P2 & P3 & P4 & P5 & P6 & P7 \tabularnewline
\hline
\textit{EndoN2N} & \textbf{84.5$\pm$24.0} & \textbf{93.7$\pm$11.4} & \textbf{70.6$\pm$26.0} & 90.2$\pm$14.3 & 77.8$\pm$19.1 & 71.1$\pm$29.3 & \textbf{78.8$\pm$24.8} \tabularnewline
\hline
\textit{EndoLSTM} & 71.4$\pm$28.5 & 87.8$\pm$18.6 & 63.2$\pm$29.6 & \textbf{90.6$\pm$18.1} & \textbf{78.2$\pm$18.4} & \textbf{73.6$\pm$28.5} & 75.6$\pm$24.0 \tabularnewline
\hline
\end{tabular}
\end{center}
\end{subtable}
\caption{Surgical phase recognition performance for each individual phase in terms of precision and recall metrics, evaluated on the complete Cholec120 dataset using 4-fold cross validation.}\label{tab:per_phase}
\end{table*}
\begin{figure*}[t]
\begin{center}
\begin{subfigure}[ht]{0.45\textwidth}
\centering
\includegraphics[height = 6cm]{./accuracy.pdf}
\caption{Accuracy}
\end{subfigure}
\begin{subfigure}[ht]{0.45\textwidth}
\centering
\includegraphics[height = 6cm]{./f1.pdf}
\caption{F1-Score}
\end{subfigure}
\par\end{center}
\caption{Comparison of surgical phase recognition performance of the \textit{EndoN2N} model initialized using (1) only ImageNet pre-training without any self-supervised pre-training (vanilla \textit{EndoN2N}), (2) the proposed RSD pre-training or (3) temporal context pre-training. The effect of variation in the number of annotated training videos on surgical phase recognition performance in terms of (a) accuracy and (b) F1-score is illustrated.\label{fig:pretraining}}
\end{figure*}
\section{Model Training}\label{sec:model_training}
All the experiments are performed using the \textit{Caffe} library \citep{Jia2014caffe}. In order to obtain an effective training setup for the \textit{EndoN2N} and RSD pre-training models, a hyperparameter search was performed over the optimizer as well as several parameter values such as the learning rate ($\alpha$), size of LSTM hidden state vectors, learning rate decay factor ($\gamma$), learning rate decay step-size and regularization factor ($\lambda$). Table \ref{tab:hyperparameters} details the hyperparameters for the different models discussed in this paper. It is to be noted that all layers with random initializations are assigned a learning rate 10 times higher than that of the pre-trained layers, and that the weights of the $\textnormal{fc}'_{\textnormal{prog}}$ layer of the RSD-Progress multi-task network (Figure \ref{fig:rsdnet2}) are not updated.
The stochastic gradient descent (SGD) optimizer is utilized with a momentum of 0.9 and the Adam optimizer is implemented with the parameters proposed in \cite{kingma2015adam}. While utilizing the Adam optimizer was beneficial for the \textit{EndoN2N} model, we did not find it effective for \textit{EndoLSTM}.
\subsection{\textit{EndoN2N} Weight Initialization}\label{sec:initialization}
In the experiment comparing the \textit{EndoN2N} and the \textit{EndoLSTM} models, the network weights are initialized from the open-source CaffeNet model, which has been pre-trained on the ImageNet dataset. No self-supervised pre-training is utilized. This initializes the CNN layers with pre-trained weights, while the LSTM is randomly initialized. We refer to this as the vanilla \textit{EndoN2N} model.
For the self-supervised pre-training experiments, the \textit{EndoN2N} model is pre-trained on either the RSD prediction or TempCon prediction task. Transferring weights from a model trained for RSD prediction enables both the CNN and LSTM to be initialized with pre-trained weights. However, in the case of TempCon pre-training, only the CNN weights are pre-trained while the LSTM weights are once again randomly initialized.
\subsection{End-to-End CNN-LSTM Training}
The batch size used in a single forward pass corresponds to subsequences of 500 consecutive frames. We trained our models on NVidia GeForce GTX TitanX and NVidia GeForce GTX 1080 GPUs, with 12 GB and 11 GB of RAM respectively, which is sufficient for storing the complete unrolled CNN-LSTM graph for 500 time-steps. Since the longest video comprises 5987 frames when sampled at 1 fps, all videos are padded with blank images to a length of 6000 frames. During training, the loss is accumulated over 12 forward passes before performing a weight update. The padded images are excluded from the loss computation. Hence, one complete iteration corresponds to an effective batch size of one video. Additionally, the total number of iterations during end-to-end CNN-LSTM training always corresponds to 100 epochs. The iterations and step-size are scaled proportionally when different amounts of videos are used for training, as discussed in Section \ref{sec:pretraining_data}.
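The accumulation scheme can be summarized by the following schematic sketch, written in PyTorch-style Python purely for readability (the actual implementation uses \textit{Caffe}, and carrying the LSTM hidden state across chunks is elided here):
\begin{verbatim}
# One effective iteration = one padded video of 6000 frames processed as
# 12 chunks of 500 frames; gradients accumulate, then a single update.
import torch

def train_one_video(model, optimizer, criterion, frames, labels,
                    pad_mask, chunk=500, n_chunks=12):
    optimizer.zero_grad()
    for i in range(n_chunks):
        sl = slice(i * chunk, (i + 1) * chunk)
        logits = model(frames[sl])            # CNN-LSTM over 500 steps
        loss = criterion(logits, labels[sl])  # per-frame loss, no reduction
        n = pad_mask[sl].sum().clamp(min=1)   # count of non-padded frames
        ((loss * pad_mask[sl]).sum() / n).backward()  # accumulate grads
    optimizer.step()                          # one weight update per video
\end{verbatim}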
\subsection{RSD Pre-training}
The CNN is first trained for progress estimation after initializing the weights from the CaffeNet model pre-trained on the ImageNet dataset. Unlike in the phase recognition pipeline, data from all training videos are used for fine-tuning the CNN with self-supervision since this leads to optimal semi-supervised surgical phase recognition performance.
As discussed in \cite{twinanda2018rsd}, the naturally high range of RSD target values for cholecystectomy surgeries (longest surgery in \textit{Cholec120} being 100 minutes) would need to be normalized in order to be able to regress the target values, while using a sufficiently large regularization parameter to prevent overfitting. We use the same normalization factor that was used by \cite{twinanda2018rsd} on the Cholec120 dataset, \textit{i}.\textit{e}.,\@\xspace $s_{norm}=5$.
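In code, the regression targets then read, e.g. (the function name and arguments are illustrative assumptions):
\begin{verbatim}
# RSD targets in minutes, scaled by s_norm = 5 as quoted above, so that
# even the longest (100 min) surgery maps its targets into [0, 20].
S_NORM = 5.0

def rsd_target(total_duration_min, elapsed_min):
    return (total_duration_min - elapsed_min) / S_NORM
\end{verbatim}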
\subsection{TempCon Pre-training}
The CNN weights are first initialized from the model pre-trained on the ImageNet dataset, as for the RSD pre-training network. Unlike our proposed RSD pre-training, which utilizes complete video sequences, TempCon pre-training requires pairs of frames to be sampled from the videos. From each of the 80 training videos, 50k pairs of frames are sampled. The model is trained for two full epochs over all the sampled pairs of frames.
\section{Results}
\subsection{\textit{EndoN2N} Evaluation}
Table \ref{tab:full_cholec120} shows a comparison between the accuracy as well as average precision and recall across all surgical phases obtained by the \textit{EndoN2N} and \textit{EndoLSTM} models on the complete Cholec120 dataset with the 4-fold cross-validation setup. \textit{EndoN2N} outperforms \textit{EndoLSTM} in each of the metrics.
A comparison between the recognition performance of \textit{EndoN2N} and \textit{EndoLSTM} for each of the individual surgical phases is shown in Table \ref{tab:per_phase}. As expected from the results of Table \ref{tab:full_cholec120}, \textit{EndoN2N} performs better in most of the phases in terms of both precision and recall. The 3rd phase, the clipping and cutting phase, is the most crucial phase of cholecystectomy surgeries. It is also a short duration phase and occurs in between two of the longest duration phases, making it difficult for a surgical phase recognition algorithm to recognize. \textit{EndoN2N} is seen to be considerably better at recognizing this phase. It can also be seen that even in the few cases that \textit{EndoLSTM} outperforms \textit{EndoN2N}, the difference is not significant in any of the metrics.
\subsection{RSD Pre-training Evaluation}
The graphs depicted in Figure \ref{fig:pretraining} illustrate the variation in surgical phase recognition performance with different amounts of annotated training data. The proposed RSD pre-training approach (shown in red) leads to superior performance for all sets of training data in terms of both accuracy and F1-score. TempCon pre-training is only effective when the ratio of the quantity of annotated training videos to pre-training videos is small. When the number of annotated videos increases, the pre-training approach starts to become detrimental, which is a common trend in semi-supervised learning \citep{paine2014}. On the other hand, the proposed RSD pre-training improves performance even when the training data is fully annotated, further highlighting the superiority of the approach.
\begin{figure}[t]
\begin{subfigure}[ht]{0.5\textwidth}
\centering
\includegraphics[width = 7cm]{./80ratio.pdf}
\caption{}\label{fig:80ratio}
\end{subfigure}
\begin{subfigure}[ht]{0.5\textwidth}
\centering
\includegraphics[width = 7cm]{./50ratio.pdf}
\caption{}\label{fig:50ratio}
\end{subfigure}
\caption{Relative performance of the RSD pre-trained \textit{EndoN2N} model with respect to the vanilla \textit{EndoN2N} model, derived from Figure \ref{fig:pretraining}. The RSD pre-trained model is supervised using either (a) 20\% or (b) 50\% fewer annotated videos than the vanilla \textit{EndoN2N} model. The pair of numbers on the horizontal axis represents the number of annotated training videos used by the RSD pre-trained model/vanilla \textit{EndoN2N} model, respectively.}\label{fig:ratios}
\end{figure}
To highlight the effectiveness of the proposed RSD pre-training approach in reducing the reliance of surgical phase recognition models on annotated laparoscopic videos, we show in Figure \ref{fig:ratios} the relative performance of the RSD pre-trained \textit{EndoN2N} model, trained using fewer annotated videos, as compared to the same model without any self-supervised pre-training, but trained on more annotated videos. We notice that similar levels of performance can be achieved with less annotated data by adopting our proposed pre-training approach. Figures \ref{fig:80ratio} and \ref{fig:50ratio}, which are derived from Figure \ref{fig:pretraining}, show the difference in performance when (a) 20\% and (b) 50\% fewer annotated training videos are utilized, respectively. The pre-training is still performed using 80 videos. In Figure \ref{fig:80ratio}, the RSD pre-trained model using fewer labeled videos generally performs better when the number of pre-training videos is higher than the number of annotated videos. This is of particular significance for actual clinical application, where there is a vast amount of data, but only a small fraction of it can be annotated. In Figure \ref{fig:50ratio} we see that the relative accuracy of the RSD pre-trained model drops further as the number of labeled videos increases. This is expected, since pre-training is more effective when the ratio of the amount of pre-training data to annotated data is high. Yet, the difference in accuracy always remains under 5\%, even though only half the number of annotated videos is used after the pre-training. The difference in F1-score also remains within a similar range, the largest difference observed being 5.1\%.
\section{Discussion}
\subsection{Ablation Study}
\begin{figure}[ht]
\centering
\includegraphics[width=7cm]{./ablation_study.pdf}
\caption{Comparison between the surgical phase recognition performance of \textit{EndoN2N} when pre-trained using either our proposed RSD prediction architecture or the RSDNet architecture of \cite{twinanda2018rsd}.}\label{fig:ablation}
\end{figure}
An ablation study is presented to understand the benefits of utilizing elapsed time and estimated progress as additional features in the RSD pre-training model. Figure \ref{fig:ablation} illustrates the improvement in surgical phase recognition performance of the \textit{EndoN2N} model when it is pre-trained with our proposed RSD prediction model as compared to the RSDNet model of \cite{twinanda2018rsd}. The study is performed on the first fold of the \textit{Cholec120} dataset. All 80 training videos are used for self-supervised pre-training while either 20, 40 or 80 annotated training videos are used to fine-tune \textit{EndoN2N} for surgical phase recognition. The smaller subsets of 20 and 40 videos have been sampled from the 80 training videos using the method described in Section \ref{sec:pretraining_data}. It is to be noted that the vanilla \textit{EndoN2N} architecture, Figure \ref{fig:cnn_lstm}, is pre-trained when using RSDNet, while the updated model architecture, Figure \ref{fig:endon2n_updated}, is required when using our proposed RSD pre-training approach.
It can clearly be seen that our proposed RSD pre-training approach leads to superior surgical phase recognition performance in terms of both accuracy and F1-Score. It is also noteworthy that when the \textit{EndoN2N} model is trained on all 80 annotated videos using the RSDNet model for pre-training (86.9\% accuracy and 80.4\% F1-score), it performs worse than the \textit{EndoN2N} model without any self-supervised pre-training (88.2\% accuracy and 81.9\% F1-score). However, this is not the case with our proposed RSD pre-training model, which leads to an improvement in performance (89.6\% accuracy and 83.4\% F1-score) even when all annotated training data is used.
\subsection{Amount of Pre-Training Data}
\begin{figure}[t]
\centering
\includegraphics[width=7cm]{percentage_nolab.pdf}
\caption{Graph illustrating the effect of the amount of pre-training videos utilized on surgical phase recognition performance.}\label{fig:percentage}
\end{figure}
Here, we design an experiment to study the effect of the amount of pre-training data available on our RSD pre-training approach. We first divide the 80 training videos of each fold into four quarters. 20 videos are used to fine-tune the network for surgical phase recognition. Increasing amounts of the remaining training videos, i.e., 20, 40 and 60 training videos, are used for RSD pre-training. Figure \ref{fig:percentage} shows the results of the RSD pre-trained \textit{EndoN2N} model with different amounts of pre-training videos. The results shown are the averages over the four folds. As we would intuitively expect, the accuracy and F1-score increase with greater amounts of pre-training data.
\subsection{Phase Boundary Detection}
\begin{figure*}[ht]
\begin{subfigure}[ht]{0.5\textwidth}
\centering
\includegraphics[width=8cm]{./boundaries_distance.pdf}
\caption{Temporal distance}\label{fig:temporal_distance}
\end{subfigure}
\begin{subfigure}[ht]{0.5\textwidth}
\centering
\includegraphics[width=8cm]{./boundary_noise.pdf}
\caption{Noise}\label{fig:noise}
\end{subfigure}
\caption{Graphs depicting the variation in quality of phase boundary predictions when different amounts of annotated training videos are used, with or without the proposed RSD pre-training. (a) shows the temporal distance between the actual phase boundaries in the ground truth and the phase boundaries predicted by the RSD pre-trained and vanilla \textit{EndoN2N} models. The temporal distance is calculated with respect to both the first predicted and closest predicted phase boundaries. (b) shows the percentage of noise in the predictions.}\label{fig:boundary}
\end{figure*}
We perform an additional experiment to study how well the surgical phase recognition model is able to locate phase boundaries within a laparoscopic procedure. We measure this using the temporal distance, which is the absolute time difference in seconds between the actual phase boundary in the ground truth and the corresponding predicted phase boundary. For comparison, the temporal distance is measured with respect to both the first prediction and to the closest prediction. On the one hand, the first prediction of a phase boundary is important during actual clinical application. For example, an automatic notification system will alert the required hospital staff at the first instance a new phase is detected. On the other hand, high accuracy of the closest predicted phase boundary enables good initial annotations to be generated that can be of assistance to manual annotators. This can facilitate the creation of annotated data and be further beneficial for scaling up surgical phase recognition algorithms.
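One plausible reading of this metric, used here only for illustration (the exact matching rule is not fixed by the text above), is sketched below:
\begin{verbatim}
# Temporal distance (in seconds, at 1 fps) between each ground-truth
# phase boundary and the first / closest predicted boundary that opens
# the same phase.
import numpy as np

def boundaries(labels):
    idx = np.flatnonzero(np.diff(labels)) + 1   # frames where phase changes
    return [(i, labels[i]) for i in idx]

def temporal_distances(gt, pred):
    first, closest = [], []
    for i_gt, phase in boundaries(gt):
        cand = [i for i, p in boundaries(pred) if p == phase]
        if cand:
            first.append(abs(cand[0] - i_gt))
            closest.append(min(abs(i - i_gt) for i in cand))
    return first, closest
\end{verbatim}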
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{./predictions.pdf}
\caption{Visualization of ground truth (above) and RSD pre-trained \textit{EndoN2N} predictions (below) for one video of \textit{Cholec120}. The 7 color coded phase labels are displayed at the bottom. An instance of noise and temporal distance (TD) has been highlighted. (Best seen in color.)}\label{fig:predictions}
\end{figure}
Another metric we compute is the noise. The model predicts certain incorrect phase intervals which do not appear in the ground truth, as shown in Figure \ref{fig:predictions}. Noise is computed as the percentage of the total time steps of a laparoscopic video which belong to such incorrect phase intervals. Higher noise is detrimental both in clinical applications and when aiming to create a good set of initial annotations.
The calculations presented in Figure \ref{fig:boundary} are obtained after the predictions of \textit{EndoN2N} are filtered using a 5 second window to remove any short-term noise or spikes. A 5 second delay in prediction is deemed acceptable for practical real-time applications. The results presented are the averages over all four folds of \textit{Cholec120}.
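A plausible implementation of such a filter (an assumption on our part; only the window size is stated above) is the following:
\begin{verbatim}
# Merge predicted phase intervals shorter than 5 frames (5 s at 1 fps)
# into the preceding phase, removing short-term spikes.
import numpy as np

def filter_spikes(pred, window=5):
    pred = list(pred)
    i = 0
    while i < len(pred):
        j = i
        while j < len(pred) and pred[j] == pred[i]:
            j += 1                    # [i, j) is one phase interval
        if j - i < window and i > 0:
            pred[i:j] = [pred[i - 1]] * (j - i)   # absorb the spike
        i = j
    return np.array(pred)
\end{verbatim}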
It can be seen from the graphs in Figure \ref{fig:boundary} that using as few as 40 annotated training videos along with RSD pre-training leads to a performance similar to the vanilla \textit{EndoN2N} model trained on all 80 annotated videos. The proposed RSD pre-training is particularly effective in improving the accuracy of the first predicted phase boundaries and reducing false predictions which contribute to noise, making it beneficial for clinical applications. The reduction in prediction noise is also beneficial for creating good initial annotations. Though the closest phase boundaries are generally more accurately predicted by the vanilla \textit{EndoN2N} model, the difference is very small (less than 5 seconds on average). It should be noted that the temporal distance is computed with respect to the phase boundaries in the ground truth, which are very strict. For practical applications, a slight error is acceptable since the phase transitions are actually more gradual. The use of a bi-directional LSTM based model could further improve predictions for creating initial annotations, although such a model cannot be used in real-time applications.
\section{Conclusion}
A new self-supervised pre-training approach based on RSD prediction has been presented and shown to be particularly effective in reducing the amount of annotated data required for successful surgical phase recognition. This makes our approach beneficial for scaling up surgical phase recognition algorithms to different types of surgeries. When the proposed RSD pre-training is utilized, surgical phase recognition performance generally remains within 5\% of the fully supervised baseline even when only half the amount of annotated data is used. Additionally, when a sufficiently large amount of pre-training data is utilized, surgical phase recognition performance can even be slightly improved despite relying on 20\% less annotated data. This is especially significant for real-world clinical applications, where despite a scarcity of annotated data, which is time-consuming and difficult to generate, there exists an abundance of unlabeled data. The use of self-supervised pre-training ensures that no data remains unexploited. The proposed RSD pre-training approach also outperforms the temporal context pre-training approach, the only self-supervised pre-training approach previously implemented for surgical phase recognition. Further, it is interesting to note that the proposed RSD pre-training approach leads to an improvement in performance even when all the training data is annotated.
This work also presents an apples-to-apples comparison between the end-to-end optimization and the two-step optimization of surgical phase recognition models based on CNN-LSTM networks. The results show that the proposed end-to-end optimization approach leads to better performance. Additional experiments were presented, which provide a greater insight into the proposed RSD pre-training model as well as the effectiveness of our models for both deployment in ORs and generation of initial surgical phase annotations.
We hope this paper serves as a motivation for other works to address the important problem of developing surgical phase recognition approaches, which are less reliant on annotated data. In future work, the effectiveness of other semi-supervised approaches, such as the application of generative adversarial networks or the use of synthetic data, for example, can be explored. Additional CNN-LSTM pre-training approaches based on self-supervised learning can also prove to be effective. We would also like to carry out the proposed RSD pre-training using a much larger number of laparoscopic videos than the 80 used in this work, to study the benefit in performance that can be obtained.
\section*{Acknowledgements}
This work was supported by French state funds managed within the Investissements d'Avenir program by BPI France (project CONDOR) and by the ANR (references ANR-11-LABX-0004 and ANR-10-IAHU-02). The authors would also like to acknowledge the support of NVIDIA with the donation of a GPU used in this research.
\bibliographystyle{model2-names}
\input{Pretraining_CNN_LSTM.bbl}
\end{document} |
Recently a rapid progress\cite{Novo,Zha,Cai,Kato} has been made in the
fabrication of graphene nanoribbons\cite{Fuj,Neto,Brey,Son,Yang,Lee} .
They have a great potential for spintronic applications, where
electronic many-body interactions may play a significant
role\cite{Son,Yang,Lee}. Nanoribbons may also provide numerous
interesting issues in fundamental physics, such as topological insulators\cite{Bern,topoins1,topoins2,Kit,Hee,Wen,Zhang,Jeong}.
A nanoribbon is an excellent candidate for a Chern-Simons topological insulator since it obeys particle-hole symmetry, which is one of the
symmetry requirements of low dimensional Chern-Simons topological insulators
(this symmetry plays an important role in the $Z_2$ classification of the Hamiltonians).
A $(1+1)$ dimensional Chern-Simons topological insulator is defined by a Lagrangian density\cite{Zhang}
which involves the electric polarization $\mathcal{P}$\cite{Van,Res,Zak}
\begin{eqnarray}
\mathcal{L}=\mathcal{P}\epsilon^{\mu\nu}\partial_{\mu}A_{\nu},
\label{Lang1}
\end{eqnarray}
where $A_{\nu}$ is the electromagnetic field.
From the Lagrangian density it follows that
the current density is related to the polarization,
\begin{eqnarray}
j_{\mu}=-\epsilon_{\mu\nu}\frac{\partial \mathcal{P}}{\partial y_{\nu}}.
\label{jvsP}
\end{eqnarray}
According to this Lagrangian one
needs to show the presence of electric polarization to establish that a nanoribbon is indeed a Chern-Simons topological insulator. A $(0+1)$ dimensional topological insulator can also be defined by a Chern-Simons
effective Lagrangian\cite{Zhang}
\begin{eqnarray}
\mathcal{L}=Tr[B_0],
\label{Lang0}
\end{eqnarray}
where $B_0$ is a Berry vector potential and
the trace stands for the sum over the occupied states.
In the Chern-Simons theory one-dimensional Hamiltonians are labeled\cite{Zhang} by the parameter $\theta$ such that for $\theta=0$ the Hamiltonian is $Z_2$ trivial and for $\theta=\pi$
the Hamiltonian is $Z_2$ non-trivial. As the parameter $\theta$ changes by $2\pi$, the polarization change, $\Delta \mathcal{P}=\mathcal{P}(2\pi)-\mathcal{P}(0)$, is given by
a surface integral over a toroidal surface whose value is an integer in units of $e$ (a first Chern number).
Here the polarization of an occupied band is defined by the Zak phase $Z$, obtained from {\it dimensional reduction} by performing a
one-dimensional cut of a two-dimensional Brillouin zone\cite{Del},
\begin{eqnarray}
\mathcal{P}(\theta)=\frac{e}{2\pi}Z=\frac{e}{2\pi}\oint dk_y (-i)\left\langle \theta,k_y \right|\frac{\partial}{\partial k_y}\left|\theta,k_y\right\rangle,
\label{Zak}
\end{eqnarray}
where the $\left|\theta,k_y\right\rangle$ are the periodic parts of the two-dimensional Bloch wavefunctions
(from now on we will call this quantity the {\it reduced} Zak phase, to contrast it with the one-dimensional Zak phase\cite{Zak} defined in Eqs. (\ref{pol}) and (\ref{vecp})).
When the integration over $k_y$ in the reduced Zak phase is performed along a zigzag edge, the Zak phase is zero, while when it is performed along an armchair edge it is $\pi$\cite{Del}.
In contrast, it should be noted that in translationally invariant one-dimensional systems the ordinary definition of the Zak phase is
a multivalued quantity with a quantum uncertainty\cite{Zak,topoins2,Hat} (we will show this explicitly for graphene nanoribbons).
When translational invariance is broken, the polarization can change abruptly in space and domain walls can be present in graphene nanoribbons: in a graphene zigzag ribbon translational invariance is broken along the direction {\it perpendicular} to the zigzag edges and
the polarization changes suddenly across the zigzag edges. This effect is described by the Chern-Simons field theoretical result Eq.(\ref{jvsP}), where the coordinate $y$ is along the perpendicular direction. The domain wall has nearly zero energy zigzag {\it edge} states, similar to quantum Hall systems. A periodic armchair ribbon is translationally invariant along
the ribbon direction. This invariance can be broken by cutting the bonds transversely, which will produce two extra zigzag edges (see Fig.\ref{rect}).
This gives rise to a spatial variation of the polarization along the ribbon direction, and produces {\it end} states (domain walls) with nearly zero energy\cite{Kit}. In such finite graphene armchair ribbons\cite{Fuj,Yang,Brey,Lee}
we find that a $(1+1)$ Chern-Simons topological insulator is realized as the aspect ratio between the lengths of armchair and zigzag edges
goes $L_y/L_x\rightarrow \infty$. We will show from the relation between charge and polarization, Eq.(\ref{jvsP}),
that end states have a fractional charge, analogous to polyacetylene\cite{Hee}: the polarization $P(y)$ varies from $0$ to $e/2$ as a function of $y$ along the ribbon direction. This is an important implication of the Lagrangian density, and is
a hallmark of a non-trivial topological insulator.
The corresponding charge density of an end state for metallic and semiconducting armchair ribbons decays exponentially, with very different
decay lengths. In the metallic case with a pseudogap the decay length is comparable to the system size, and only a {\it quasi-topological} insulator is formed.
When the aspect ratio is $L_y/L_x\sim 1$ under the condition $L_x \lesssim 100\AA$, a rectangular graphene sheet (RGS)\cite{Tang,Kim0} is formed. We show that it is a $(0+1)$ Chern-Simons topological insulator with particle-hole symmetry intact.
In this island-like system a Berry phase can be generated through modulation of the hopping parameter of a C-C
bond. We find that some states of a RGS have a cyclic Berry phase of $\pi$ mod $2\pi$
while other states have zero Berry phase. In addition, the Berry phases of particle-hole conjugate states are equal to each other mod $2\pi$.
We will show that it
also has nearly zero energy edge states, which is another hallmark of topological insulators.
In both $(1+1)$ and $(0+1)$ Chern-Simons topological insulators we find, using the Atiyah-Singer index theorem\cite{atiyah},
that the number of nearly zero energy end states is proportional to the length of the zigzag edges.
We also find a correction to this Atiyah-Singer result due to effects beyond the effective mass approximation.
This paper is organized as follows. In Sec. II we compute the polarization near a domain wall in a finite graphene ribbon.
In Sec. III, we compute a Berry phase of a RGS, and show that such a system is a $(0+1)$ dimensional topological Chern-Simons insulator.
The number of zero modes of a finite length zigzag edge is computed in Sec. IV, in addition to a correction to the Atiyah-Singer index theorem on the lower bound on the number of zero modes. Conclusions are given in Sec. V. In the Appendix we show that the polarization of a one-dimensional nanoribbon has a quantum uncertainty between the values $0$ and $\frac{e}{2}$ (modulo $e$).
\section{Armchair ribbon with broken translational invariance: domain wall and fractional charge }
\begin{figure}[!hbpt]
\begin{center}
\includegraphics[width=0.25\textwidth]{armzig.eps}
\end{center}
\caption{ A finite graphene ribbon has
two zigzag edges and two armchair edges.
When the length of the zigzag edges is $L_x=3La_0$ or $(3L+1)a_0$
a gap is present, but for $L_x=(3L+2)a_0$ no gap exists ($L$ is an integer and $a_0=\sqrt{3}a$ is the unit cell length of the honeycomb lattice). Vectors $\vec{\tau}_l$ connect an A carbon atom to three neighboring B carbon atoms. A unit cell of an armchair ribbon is shown in the dashed box. }
\label{rect}
\end{figure}
Here we consider a spatial variation of the polarization and the formation of a fractional charge state.
Transversely cutting the bonds of a periodic armchair ribbon breaks translational invariance and generates a {\it finite} length one-dimensional system with two zigzag edges at the ends of the system, see Fig.\ref{rect}. Then outside of the ribbon the system is trivial with $\theta=0$, while inside the ribbon the system is non-trivial
with $\theta=\pi$. In such an armchair ribbon, as the length of armchair edges becomes longer, doubly degenerate end states with nearly zero energy appear, as our tight-binding numerical results shown in Fig.\ref{rect-anal1}(a) demonstrate (the magnetic flux $\phi$ is set to zero). Here the length of zigzag edges is $L_x=3La_0$ so that an energy gap\cite{Lee} exists. In the limit where the length of the armchair edges $L_y\rightarrow \infty$ we find that the decay length of the probability density of the end state is short, comparable to $a_0$, see Fig.\ref{rect-anal1}(b).
This behavior is typical of a zigzag edge state\cite{Neto}.
These results are also true for
a semiconducting armchair ribbon with a different length of the zigzag edges, $L_x=(3L+1)a_0$.
In the metallic armchair ribbon with the zigzag-edge length
$L_x=(3L+2)a_0$ nearly zero energy
end states also appear, see Fig.\ref{rect-anal2}(a). However, as shown in Fig.\ref{rect-anal2}(b) the decay length of these states is much longer, comparable to the system length. We will thus call this metallic armchair ribbon with a pseudogap a {\it quasi-topological insulator}.
\begin{figure}[!hbpt]
\begin{center}
\includegraphics[width=0.3\textwidth]{ShToSArm.eps}
\caption{Results for a finite semiconducting armchair ribbon with a short width equal to the length of the zigzag edges $L_x=3La_0=3a_0$. (a) The eigenenergy of an end state vs the length of the armchair edges $L_{y}$. (b) Plot of the probability density, $|\psi_M(x=0,y)|^2$, of the state shown in (a) vs $y$ for the length of the armchair edges $L_y=30a_0$. The left zigzag edge ($y=0$) consists of $A$-carbons and the right edge ($y=3.52a_0$) of $B$-carbons. The red line indicates the value of the probability density on $A$-carbons and the blue line indicates the probability density on $B$-carbons. }
\label{rect-anal1}
\end{center}
\end{figure}
\begin{figure}[!hbpt]
\begin{center}
\includegraphics[width=0.3\textwidth]{ShToMArm.eps}
\caption{Same as in Fig.\ref{rect-anal1} but for the metallic case with the length of the zigzag edges $L_x=(3L+2)a_0=2a_0$. From (b) we see that the width of the domain wall is long, comparable to the length of the system $ L_y=300a_0$. }
\label{rect-anal2}
\end{center}
\end{figure}
These probability densities satisfy the relation between polarization and charge of a Chern-Simons topological insulator, Eq.(\ref{jvsP}):
for a semiconducting armchair ribbon (with a gap in the DOS) the A- or B-type probability density decays rapidly
\begin{eqnarray}
\rho(y)&=&\frac{\partial \mathcal{P}(y)}{\partial y}\sim e^{-\alpha\,y/a_c}.\nonumber
\end{eqnarray}
For a metallic armchair ribbon (with a pseudogap in the DOS) it decays slowly
\begin{eqnarray}
\rho(y)&=&\frac{\partial \mathcal{P}(y)}{\partial y}\sim e^{-\beta\,y/L_y},\nonumber
\end{eqnarray}
where $L_y$ is the length of the armchair edges. In the limit $L_y\rightarrow \infty$ the integrated probability density on one type of carbon atoms
gives the fractional charge\cite{Zhang}
\begin{eqnarray}
Q&=&\int _{0}^{\infty}dy \frac{\partial \mathcal{P}(y)}{\partial y}=P(\infty)-P(0)=-\frac{e}{2}.
\label{frac}
\end{eqnarray}
The existence of this charge of one-half is thus related to the variation of the polarization $P(y)$ from $0$ to $-e/2$.
These end states with zero energy represent a fractional charge of $1/2$\cite{Kit,Hee,Jeong}
of a domain wall (they appear in degenerate pairs\cite{Jeong,Kane0}).
\section{Rectangular sheet: $(0+1)$ dimensional Chern-Simons topological insulator }
\begin{figure}[!hbpt]
\begin{center}
\includegraphics[width=0.3\textwidth]{GraShe.eps}
\caption{ A RGS is shown. The length of the armchair (zigzag) edges is $L_y (L_x)$. We choose a C atom and distort its hopping integral $t_1$. An adiabatic cycle is performed through the variation of $t_1$. }
\label{recsh}
\end{center}
\end{figure}
In the opposite regime $L_y/L_x\sim 1$ with $L_x\lesssim 100\AA$ a RGS is realized, see Fig.\ref{recsh}. A RGS is an island-like, $(0+1)$ dimensional system.
However, the effects of
armchair and zigzag edges may compete, so it is a priori unclear whether a RGS is a topological insulator. Here we investigate whether a RGS is really a Chern-Simons topological insulator.
To establish that its effective Lagrangian is given by a $(0+1)$ Chern-Simons topological field theory, Eq.(\ref{Lang0}),
we need to construct non-vanishing Berry vector potentials.
It is not trivial to choose appropriate adiabatic cyclic parameters; we proceed as follows.
We induce a time-dependent
change of a C-C bond, see Fig.\ref{recsh}. Its hopping parameter has time dependence $t_1=0.9te^{i2\pi \tau/\tau_0}$ with $0\leq \tau/\tau_0 \leq1$.
Note that this perturbation preserves particle-hole symmetry.
The Berry vector potential of this $(0+1)$ dimensional system is
\begin{eqnarray}
B_0=\sum_{\alpha }' \left\langle\psi_{\alpha} \right|i\frac{d}{d \tau}\left|\psi_{_{\alpha}} \right\rangle,
\end{eqnarray}
where the summation is over the occupied states. We use the tight-binding method in the site representation to compute the Berry phase, since the tight-binding approach is more accurate than the effective mass approach for small island-like systems.
We use a gauge fixing to ensure numerically stable results\cite{Hat}.
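For reference, the discretized Berry phase can be evaluated through the gauge-invariant product of overlaps between occupied subspaces at successive values of $\tau$; the following is a minimal sketch of this standard discretization (our own illustration, not the code used here), assuming no level crossing among the occupied states along the cycle:
\begin{verbatim}
# Discrete Berry phase of the occupied subspace over one adiabatic
# cycle; H_of_tau(tau) returns the tight-binding Hamiltonian matrix.
import numpy as np

def berry_phase(H_of_tau, taus, n_occ):
    frames = []
    for tau in taus:                      # taus samples one full cycle
        E, V = np.linalg.eigh(H_of_tau(tau))
        frames.append(V[:, :n_occ])       # occupied eigenvectors
    frames.append(frames[0])              # close the loop (fixes the gauge)
    prod = 1.0 + 0.0j
    for Va, Vb in zip(frames[:-1], frames[1:]):
        prod *= np.linalg.det(Va.conj().T @ Vb)   # overlap determinant
    return -np.angle(prod)                # Berry phase, defined mod 2*pi
\end{verbatim}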
\begin{figure}[!hbpt]
\begin{center}
\includegraphics[width=0.35\textwidth]{Elevelvst.eps}
\caption{ Energy levels of a RGS vs $\tau$. The lengths are $(L_x,L_y)=(3a_0,3.52a_0)$. Particle-hole symmetry is present. }
\label{energy}
\end{center}
\end{figure}
\begin{figure}[!hbpt]
\begin{center}
\includegraphics[width=0.3\textwidth]{BerryVsE2.eps}
\includegraphics[width=0.3\textwidth]{gapberry.eps}
\caption{Berry Phase of a RGS vs eigenenergies $E$ when $(L_x,L_y)=(3a_0,3.52a_0)$ (a) and $(L_x,L_y)=(5a_0,3.52a_0)$ (b).
In (a) we exclude points at $E=\pm t$ because they are degenerate.}
\label{Berry}
\end{center}
\end{figure}
To compute the Berry phase we first need to obtain eigenstates and eigenenergies as a function of the adiabatic parameter. The energy levels of a RGS are shown in Fig.\ref{energy} as a function of time $\tau$.
Note that the computed energies display particle-hole symmetry.
The computed Berry phases of the levels are
either zero or $\pi$ mod $2\pi$\cite{Hat}, see Fig.{\ref{Berry}}(a) ($L_x=3La_0$). Note that the Berry phases of a pair of particle-hole conjugate states are equal to each other mod $2\pi$. We have also found that nearly zero energy zigzag edge states exist, see Fig.\ref{energy}, which is another hallmark of topological insulators.
When the length of the zigzag edges is
$L_x=(3L+2)a_0$, the Berry phase is again either zero or $\pi$ mod $2\pi$, and
the states in a particle-hole conjugate pair have the same value of the Berry phase mod $2\pi$.
\section{Number of zero modes}
So far we have investigated nearly zero modes of narrow ribbons, i.e., zero modes of one-dimensional systems. We now study quasi-one-dimensional ribbons that have broader widths.
We compute, as a function of the ribbon width, the number of nearly zero energy edge modes\cite{Del,Ryu} that a finite length zigzag edge supports. The localization length of these
edge modes increases with increasing energy.
We choose to employ
the Atiyah-Singer index theorem\cite{Pac,atiyah} to compute the number of edge modes. We will show that it
is proportional to the length of the zigzag edges. There is also a correction to this result, which we compute.
A RGS can be made out of an armchair ribbon by cutting C-C bonds along the transverse direction of the ribbon, which produces two zigzag edges.
Mathematically this can be done by applying a chiral vector potential $\vec{A}_c=(A_{c,x}\Theta(y),0,0)$ along the x-axis in a rectangle \cite{Jeong} (this area can be chosen as the unit cell shown in Fig.\ref{rect}). The bonds will be cut at the critical strength
\begin{eqnarray}
\frac{eA_cv_F}{c}=t,
\end{eqnarray}
where $\hbar v_F=\frac{3}{2}t a$ with the C-C distance $a$.
Since the vector potential is chiral the relevant $\mathbf{K}$ and $\mathbf{K'}$ Dirac Hamiltonians are:
\begin{eqnarray}
H_{\mathbf{K}}=v_{F}\vec{\sigma}\cdot
(\vec{p}-\frac{e}{c}\vec{A}_c(\vec{r}))
\end{eqnarray}
for the $\mathbf{K}$ valley, and
\begin{eqnarray}
H_{\mathbf{K'}}=v_{F}\vec{\sigma}'\cdot
(\vec{p}+\frac{e}{c}\vec{A}_c(\vec{r}))
\end{eqnarray}
for the $\mathbf{K'}$ valley ($e>0$).
The x and y components of
$\vec{\sigma}'$ are $-\sigma_{x}$ and $\sigma_{y}$. Note that,
unlike for the real vector potential, the chiral vector potential
appears with opposite signs for $\mathbf{K}$ and
$\mathbf{K'}$ valleys (it is chiral in the sense that it
distinguishes valleys).
In a RGS $\mathbf{K}$ and $\mathbf{K'}$ valleys are coupled by the armchair edges\cite{hclee}, and the effective mass Hamiltonian of a RGS is
a block diagonal matrix with
$H_\mathbf{K}$ and $H_\mathbf{K'}$ as blocks.
Since the Dirac operator is an elliptic operator, it is possible to
employ the Atiyah-Singer index theorem\cite{Pac} (in the case of non-compact
surfaces the theorem is applicable only when there is no net flux going through the open faces\cite{Pac0}).
To apply the theorem we rewrite the $\mathbf{K}$ valley Hamiltonian as
\begin{equation}
H_{\mathbf{K}}=\left(
\begin{array}{cc}
0 & P^{\dag}\\
P&0\\
\end{array}
\right),
\end{equation}
where the operator $P^{\dag}=\hbar v_F(\partial_{x}-i\partial_y-\frac{ie}{\hbar c}A_{c,x})$. Then
\begin{equation}
H_{\mathbf{K}}^2=\left(
\begin{array}{cc}
P^{\dag}P & 0 \\
0 & PP^{\dag}
\end{array}
\right).
\end{equation}
We define $\nu_{+}$ as the number of zero modes of $P^{\dag}P$ and $\nu_{-}$ as the number of zero modes of $PP^{\dag}$. Since there is no net flux, the Atiyah-Singer index theorem dictates that the difference in the numbers of zero modes is related to the vector potential,
\begin{eqnarray}
\mathrm{index}(H_{\mathbf{K}})=\frac{1}{2\pi\phi_0}\oint_C\vec{A}_c\cdot d\vec{l}=\nu_{+}-\nu_{-}=0,\nonumber
\end{eqnarray}
where the contour C is along the edges of the rectangle. Non-zero contributions come from contour pieces $C_1$ and $C_2$, where $C_1=-C_2$.
We have checked numerically that the numbers of zero modes associated with the contours $C_1$ and $C_2$
are identical, i.e., $\nu_{+}=\nu_{-}$. It should be noted that these zero modes also have {\it identical} wavefunctions.
This implies that the number of zero modes $\nu_{+}$ is proportional to the line integral along $C_1$.
The proportionality constant can be determined from the comparison with the tight-binding numerical result for the number of zero modes:
we find that the total number of zero energy modes is proportional to
the length of the zigzag edges $L_x$
\begin{eqnarray}
N_0= \frac{1}{2\phi_0}\int_{C_1} \vec{A}_c\cdot d\vec{l} =\frac{A_cL_x}{2\phi_0}=\frac{1}{\sqrt{3}}\frac{L_x}{a_0}.
\label{AS}
\end{eqnarray}
This number is equal to the lower limit of the number of zero modes, see Fig.\ref{number}.
The tight-binding result shows that there is a correction to this result due to effects beyond the effective mass approximation, see Fig.\ref{number}.
\begin{figure}[!hbpt]
\begin{center}
\includegraphics[width=0.35\textwidth]{zerovslen.eps}
\end{center}
\caption{The number of zero modes of a RGS, computed using the tight-binding Hamiltonian, is plotted
as a function of the length of the zigzag edges $L_x/a_0$.
Here the length of the armchair edges is $L_y=143.6a_0$. Squares are for $L_x=3La_0$ and circles are for $L_x=(3L+1)a_0$. Note that the number does not increase linearly with $L_x$. The lowest curve represents the lower bound and is given by the Atiyah-Singer result of Eq.(\ref{AS}).}
\label{number}
\end{figure}
\begin{figure}[!hbpt]
\begin{center}
\includegraphics[width=0.35\textwidth]{PZGR.eps}
\end{center}
\caption{A unit cell of a periodic zigzag ribbon is shown. The length of the unit cell is $a_c=a_0$.}
\label{PZGR}
\end{figure}
\begin{figure}[!hbpt]
\begin{center}
\includegraphics[width=0.35\textwidth]{number-zero-mode.eps}
\end{center}
\caption{ For a periodic zigzag ribbon nearly zero modes exist at wavevectors $k_na_0=2\pi/N, 4\pi/N, \dots, 14\pi/N$. These values are plotted as a function of $N$.
Solid and dashed lines represent lower and upper bounds $k_c^-(N)$ and $k_c^+(N)$.}
\label{deviation}
\end{figure}
There is another way to compute the number of nearly zero energy modes of a zigzag edge of length $L_x$. Consider a periodic zigzag ribbon with a finite length $L_x$, see Fig.\ref{PZGR}. For this system we can compute the exact number of zero modes using the tight-binding approach.
In the tight-binding calculation\cite{Waka,Kim1} the nearly
zero eigenenergies are positive in the interval $k_c^+(N)<k_n<\pi/a_0$ while they are negative in the interval $\pi/a_0<k_n<k_c^-(N)$,
where the values of the critical wavevectors are
\begin{eqnarray}
k_c^{\pm}(N)a_0= 2\cos^{-1}\left(\pm\frac{1/2}{1+1/N}\right)
\end{eqnarray}
with the periodic length $L_x=Na_0$ ($N$ is the number of unit cells and $a_0$ is the unit cell length, see Fig.\ref{PZGR}). At the values $k_c^+(N)$ and $k_c^-(N)$ eigenenergies split from the bulk graphene
energy spectrum. The number of values of $k_n$ satisfying the inequalities $k_c^+(N)<k_n<k_c^-(N)$ is shown in Fig.\ref{deviation}. We find that, as $N$ increases,
this number does not always increase monotonically, in contrast to the result of Atiyah-Singer, Eq.(\ref{AS}). Note that solutions at each $k_n$ are nearly
degenerate due to particle-hole symmetry, which gives an additional factor of $2$. Thus, in total, the number of solutions
must be multiplied by 2. This result for a zigzag ribbon is in agreement with the result for a RGS when $L_x\gg a_c$:
for $N=21$ we find $12$ zero modes in both systems, see Figs.\ref{number} and \ref{deviation}.
Also it should be noted that the numerical values of the number of zero modes are all even, consistent with the $Z_2$
classification\cite{Kane0}.
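As a cross-check of this counting, the number of nearly zero modes can be evaluated directly from the critical wavevectors; the short sketch below (our own illustration) returns $12$ for $N=21$, in agreement with the tight-binding result quoted above:
\begin{verbatim}
# Count the allowed k_n = 2*pi*n/(N*a0) inside (k_c^+(N), k_c^-(N)),
# doubled for the particle-hole conjugate partners.
import numpy as np

def n_zero_modes(N):
    kcp = 2 * np.arccos(+0.5 / (1 + 1.0 / N))   # k_c^+(N) * a0
    kcm = 2 * np.arccos(-0.5 / (1 + 1.0 / N))   # k_c^-(N) * a0
    kn = 2 * np.pi * np.arange(1, N) / N        # allowed k_n * a0
    return 2 * int(np.sum((kn > kcp) & (kn < kcm)))
\end{verbatim}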
\section{Summary}
One of the
symmetry requirements of low dimensional Chern-Simons topological insulators is particle-hole symmetry. Another important ingredient of the $(1+1)$ Chern-Simons theory is a finite electric polarization defined by the Zak phase obtained from dimensional reduction. In a finite armchair ribbon with broken translational invariance a $(1+1)$ Chern-Simons topological insulator is realized as the aspect ratio between the lengths of armchair and zigzag edges goes $L_y/L_x\rightarrow \infty$. However, only a quasi-topological insulator is formed in such a system when the energy gap goes to zero, i.e., in the metallic case with a pseudogap. In the opposite limit $L_y/L_x \sim 1$ a $(0+1)$ Chern-Simons topological insulator is realized, and
the cyclic Berry phase is quantized as $\pi$ or $0$ (mod $2\pi$). The Berry phases of particle-hole conjugate states are equal to each other.
In both armchair ribbon and RGS the number of nearly zero energy end modes is proportional to the length of the zigzag edges. A correction to this result that includes effects ignored in the effective mass approach is computed.
It will be interesting to measure the number of zero modes and the fractional charge in these systems.
When a magnetic field is applied to a RGS time reversal symmetry is broken and more zero energy states will be formed\cite{Kim0,KimAP}.
|
Recently, spacetimes whose geometry changes with the probed scale have been introduced (see \cite{frc1,frc2,frc6,frc7} and references therein). Dynamics is defined through an action of the form $S=\intd\varrho(x)\,\mathcal{L}$, where $\varrho(x)$ is a Lebesgue--Stieltjes measure with anomalous scaling $\varrho(\lambda x)\sim\lambda^{d_{\rm H}}\varrho(x)$. The anomalous exponent $d_{\rm H}$ is the Hausdorff dimension of spacetime and its value is, in general, scale dependent and different from the integer topological dimension $D$. Assuming that $d\varrho(x)=d^Dx\,v(x)$ and that the weight $v$ is positive semidefinite and factorizable in the coordinates, $v(x)=\prod_{\mu=0}^{D-1}v_\mu(x^\mu)$, one can realize anomalous and multiscale geometries via multifractional measures, i.e., such that each $v_\mu$ is the sum of noninteger powers of the coordinates. The purpose of fractional models is twofold. On one hand, to understand renormalization properties of field theories living in nonconventional geometries \cite{frc2,frc9}. On the other hand, to better characterize effective spacetimes in quantum gravity by introducing an ``alternative toolbox'' of instruments borrowed from multifractal geometry \cite{frc1,frc2} and transport and probability theory \cite{frc7,frc4} (all fields where multifractional measures play an important role). Anomalous geometries arise most naturally in quantum gravity, either by construction (in the quest for theories with absent or tamed infinities in the ultraviolet) or from quantum effects, or both. Then, multiscale geometries and the tools of fractional calculus may serve as effective models describing certain regimes or features of other theories \cite{ACOS,fra7,CES}.
Presently, we will continue this trend of investigation by examining in detail the relativistic particle. The particle's worldline is described by a parametrization of the spacetime position coordinates $x^\mu(\t)$, which are the fundamental variables of the problem. As in standard Minkowski spacetime, imposing the action to be invariant under \emph{ordinary} Lorentz or Poincar\'e symmetries severely constrains the dynamics, while demanding the nonrelativistic limit \cite{frc5} to be recovered removes any further ambiguity in the multiscale formulation. The case with only Lorentz (and neither translation nor parametrization) invariance is considerably more complicated and we develop it in full only in the uncharged case.
The coordinates $x^\mu$, however, act also as parameters of fields, which transform under the Poincar\'e algebra of generators according to the rules of tensor calculus. These rules are modified in multiscale spacetimes, and the Poincar\'e algebra is deformed \cite{frc6,frc8}. Thus, when relativistic particles are embedded as sources in field theory, we expect the equations of motion to reflect the multiscale nature of the problem. This is indeed the case, as we will illustrate for a charged particle in Maxwell theory. In the presence of a source $J^\mu(x)$, the energy-momentum tensor of a $U(1)$ gauge field $A_\mu$ is not conserved. To get a conserved tensor density, we must specify the dynamics of the source term. In ordinary Maxwell theory, one adds to the action the contribution of a relativistic charged particle \cite{LL2,Zwi09}. Building on the multiscale version of Maxwell theory \cite{frc8}, here we will do the same, eventually obtaining the expected continuity equation of the full energy-momentum tensor, but only in geometries which are multiscale only in spatial directions. This anisotropic configuration is precisely the one which can effectively describe both noncommutative geometries with cyclicity-preserving measures \cite{ACOS} and Ho\v{r}ava--Lifshitz spacetimes \cite{fra7} as multifractional systems.
Before starting, we comment on one particular multiscale theory where the discussion is simple enough to be summarized in one paragraph. In a model [dubbed ``$q$ theory'' from the use of composite coordinates $q(x)$ here called $\varrho$] \cite{frc1,frc2,frc7} invariant under the nonlinear Poincar\'e-like transformation $\varrho^\mu({x'}^\mu)=\Lambda^\mu_{\ \nu}\,\varrho^\nu(x^\nu)+a^\mu$, all coordinates in the \emph{usual} action are replaced by the distributions
\begin{equation}\label{xvr}
x^\mu\to \varrho^\mu(x^\mu)\,.
\end{equation}
The resulting theory is not a trivial reformulation of ordinary physics because the ``coordinates'' $\varrho^\mu$ are actually composite objects and have scale-dependent, multiply anomalous scaling. This means that the physical momentum is canonically associated with the nonanomalous coordinates $x$, not with $\varrho$, and the structure of momentum space is different. In this geometry, the line element is
\begin{equation}\label{svr}
d s_\varrho:=\sqrt{-d \varrho^\mu(x)\,d \varrho_\mu(x)}\,,
\end{equation}
the action for the relativistic neutral particle is simply $S_{\rm p}=-m\intd s_\varrho$, and the nonrelativistic action $S_{\rm nonrel}\sim (m/2)\intd\varrho^0(t)\,{\varrho^i}'{\varrho_i}'$ is recovered. Here, roman indices $i=1,\dots,D-1$ run over spatial directions, primes denote derivatives with respect to $\varrho^0$, the sum convention for repeated indices is adopted, and we take the Minkowski metric with signature $\eta={\rm diag}(-,+,\cdots,+)$. The equations of motion $m\partial_s^2 \varrho^\mu(x)=0$ follow suit. If the particle is charged, its coupling with a Maxwell field is straightforward and is formally identical to the standard case under the replacement \Eq{xvr}. Although the $q$ theory can have a number of physical applications \cite{frc1,frc7,fra7}, regarding the specific problem of the relativistic particle we will focus on another multiscale model, namely, the one with weighted Laplacian \cite{frc6,frc7,frc8}, which is mathematically more interesting.
In Sec.\ \ref{unch}, we propose two versions of the uncharged system. The choice between the cases will be determined by various factors, including whether we impose nonrelativistic mechanics as the limit of the more fundamental action $S_{\rm p}$. This will select the anisotropic case, but the two cases will actually collapse into each other when asking compatibility with Maxwell theory (Secs.\ \ref{ch} and \ref{emt}). This restriction (optional, as argued in Sec.\ \ref{disc}) is $v_0(t)=1$, eventually leading back to the Poincar\'e-invariant scenario. Still, the geometry is nontrivial, since it is multiscale along spatial directions.
\section{Uncharged particle}\label{unch}
Consider first a noncharged relativistic particle. In ordinary spacetime, the only Poincar\'e-invariant action recovering the nonrelativistic Lagrangian $L=(m/2)\dot x_i \dot x^i$ is, in $c=1$ units,
\begin{equation}\label{Sp1}
\bar S_{\rm p}[x]=-m\intd \bar s\,,\qquad d \bar s[x]:=\sqrt{-d x_\mu\,d x^\mu}\,,
\end{equation}
where $d \bar s$ is the infinitesimal line element. In other words, the action measures the proper-time interval between the initial and final point of the particle worldline. In time-space components, the line element can be also written as $d \bar s=d t/\bar\gamma$, where $\bar\gamma:=1/\sqrt{1-\dot x_i\dot x^i}$ and dots denote derivatives with respect to time $t=x^0$.
In the context of field theory, the free field action of multiscale spacetimes with weighted Laplacians is invariant under the Poincar\'e algebra of transformations defined by fractional momentum and Lorentz operators \cite{frc6} associated with ordinary translations and Lorentz transformations
\begin{equation}\label{lore}
{x'}^\mu=\Lambda^\mu_{\ \nu}\, x^\nu
\end{equation}
on the coordinates $x^\mu$. A relativistic particle's worldline is represented by the coordinates themselves; so, when it is added to a field action as a matter content, a natural possibility is to require the particle action $S_{\rm p}$ to be invariant under Eq.\ \Eq{lore}. This considerably limits the form of the dynamics. If we ask for Poincar\'e invariance, we end up with the action \Eq{Sp1}, which, however, does not lead asymptotically to the action of a fractional nonrelativistic particle \cite{frc5}
\begin{equation}\label{nore}
S_{\rm nonrel}= \frac12 m\sum_i\int d t\, v_0(t)\, ({}_v\mathcal{D}_t x^i)^2\,,
\end{equation}
where $v_0(t)$ is the measure weight along the time direction and
\begin{equation}\label{vD}
{}_v\mathcal{D}_t=\frac{1}{\sqrt{v_0(t)}}\partial_t \left[\sqrt{v_0(t)}\,\cdot\,\right]\,.
\end{equation}
This weighted derivative actually stems from the spacetime derivative with the full measure weight $v(x)$,
\begin{equation}\label{wed}
\mathcal{D}_\mu:=\frac{1}{\sqrt{v(x)}}\partial_\mu \left[\sqrt{v(x)}\,\cdot\,\right]\,.
\end{equation}
Since $v$ is assumed to be factorizable, on a function $f(t)$ one has $\mathcal{D}_\mu f={}_v\mathcal{D}_t f$. This derivative is the natural choice in multiscale quantum mechanics, as it gives rise to a self-adjoint momentum operator \cite{frc5}. In order to recover \Eq{nore} for $v_0(t)\neq 1$, we have to give up translation and Lorentz invariance. This is still compatible with field theory, since the action $S_{\rm p}$ is, like Eq.\ \Eq{Sp1}, nonquadratic in the fundamental variables.
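For illustration, the weighted derivative can be realized numerically as follows (a sketch only meant to make Eq.\ \Eq{vD} concrete; the grid and the power-law weight are arbitrary choices, not fixed by the text):
\begin{verbatim}
# Weighted derivative vD f = v^(-1/2) d/dt [v^(1/2) f] on a grid.
import numpy as np

t = np.linspace(0.1, 10.0, 2001)
v0 = t ** (-0.5)                  # example measure weight v_0(t)

def weighted_D(f, t, v):
    return np.gradient(np.sqrt(v) * f, t) / np.sqrt(v)

x = t ** 2                        # any trial trajectory x(t)
vDx = weighted_D(x, t, v0)
\end{verbatim}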
\subsection{Isotropic action weight}
\subsubsection{Line element and action}
Multiscale actions feature a Lebesgue measure with a nontrivial weight, and one may ask what happens if one applies the same criterion to augment the action \Eq{Sp1} by a generic positive semidefinite weight function $\tilde\omega(s)$:
\begin{equation}\label{Sp2}
S_{\rm p}=-m\intd s\,\tilde \omega(s)\,,\qquad d s:=\sqrt{-\omega(s)\,d_\omega x_\mu\,d_\omega x^\mu}\,,
\end{equation}
where
\begin{equation}
d_\omega:=\frac{1}{\sqrt{\omega(s)}}\,d \left[\sqrt{\omega(s)}\,\cdot\,\right]
\end{equation}
is the weighted fractional differential with respect to another weight function $\omega(s)$. For the time being, we regard $\omega$ and $\tilde \omega$ as independent. Notice that $s$ is not the Lorentz distance in multiscale Minkowski spacetime; the actual distance is $s_\varrho$ in all versions of the theory, Eq.\ \Eq{svr} \cite{frc1,frc2}.
The above definition of $d s$ is implicit; after some algebraic manipulation, one finds
\begin{eqnarray}
d s&=&\frac{2}{4/\omega+\Omega^2\, x^\nu x_\nu}\left\{-\Omega\,x_\mu\, d x^\mu\vphantom{\sqrt{\frac12}}\right.\nonumber\\
&&\left.+\sqrt{\Omega^2\,(x_\mu\, d x^\mu)^2-\left[\frac{4}{\omega}+\Omega^2\,x^\nu x_\nu\right] d x_\mu d x^\mu}\right\},\nonumber\\\label{tras}
\end{eqnarray}
where $\Omega(s):=\partial_s \omega/\omega$, and the sign in front of the square root has been chosen to get the correct signature in the limit $\Omega\to 0$. This expression is transcendental in $s$ and, typically, cannot be inverted to get $s=s(x)$; we will use it once later for a check.
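As an independent cross-check (ours, not part of the original derivation), the defining quadratic relation can be solved symbolically. In the minimal SymPy sketch below, the symbols $a$, $b$ and $c$ are placeholders for the contractions $x_\mu d x^\mu$, $d x_\mu d x^\mu$ and $x_\mu x^\mu$, treated as independent scalars:
\begin{verbatim}
# Hedged SymPy check of Eq. (tras); a, b, c stand for the
# contractions x.dx, dx.dx, x.x treated as independent scalars.
import sympy as sp

w = sp.symbols('omega', positive=True)
W, a, b, c, S = sp.symbols('Omega a b c S')

# implicit definition: S^2 = -omega*(b + Omega*a*S + Omega^2*c*S^2/4)
quadratic = S**2 + w*(b + W*a*S + sp.Rational(1, 4)*W**2*c*S**2)

# candidate root, Eq. (tras):
ds = 2*(-W*a + sp.sqrt(W**2*a**2 - (4/w + W**2*c)*b))/(4/w + W**2*c)

print(sp.simplify(quadratic.subs(S, ds)))   # expected output: 0
\end{verbatim}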
Let us parametrize the worldline $x^\mu(\t)$ of the particle with an arbitrary parameter $\t$, as in the ordinary case \cite{Zwi09}. The action \Eq{Sp2} becomes
\begin{equation}\label{Sp3}
S_{\rm p}=\intd \t\, w(\t)\,L=-m\intd \t\,\tilde w(\t)\,\sqrt{-w(\t)\,\hat u_\mu\hat u^\mu}\,,
\end{equation}
where the Lagrangian density $L$ has been defined according to the weight carried by the derivatives \cite{frc5}, the functions $\tilde w(\t):=\tilde \omega[s(\t)]$ and $w(\t):=\omega[s(\t)]$ are fixed profiles of the parameter $\t$, and $\hat u^\mu = \mathcal{D}_\t x^\mu$, with the understanding that $\mathcal{D}_\t$ has weight $w(\t)=\omega[s(\t)]$ and $\mathcal{D}_s$ has weight $\omega(s)$. Therefore, under a reparametrization $\t\to \t'$ they will change form as $w(\t)\to w'(\t')=\omega[s(\t')]$, and the action is parametrization invariant only if $\omega=1=w$. Here we called $\hat u^\mu$ the $D$-dimensional ``fractional velocity'' vector, using a hat to avoid confusion with the relativistic velocity symbol
\begin{equation}\label{uu}
u^\mu := \mathcal{D}_s x^\mu\,,\qquad \omega\,u^2=\omega\,u_\mu u^\mu \stackrel{\text{\tiny \Eq{Sp2}}}{=}-1\,.
\end{equation}
The measure weight $\tilde w$ can be naively fixed by requiring that the action \Eq{Sp3} in the integer picture reduces to the usual one \Eq{Sp1}. This is not mandatory but it is somewhat expected from the behaviour of the nonrelativistic particle mechanics \cite{frc5} and of field theories \cite{frc6}. In particular, the nonrelativistic case suggests to define \cite{frc5}
\begin{equation}\label{chiw}
\chi^\mu(\t):=\sqrt{w(\t)}\,x^\mu(\t)\,,
\end{equation}
so that $d s=\sqrt{-d\chi_\mu\, d\chi^\mu}=d\bar s[\chi]$ and
\begin{equation}\label{Sp4}
S_{\rm p}[x]=-m\intd \t\,\tilde w\,\sqrt{-\partial_\t\chi_\mu\partial_\t\chi^\mu}=\bar S_{\rm p}[\chi]
\end{equation}
if, and only if,
\begin{equation}\label{ww}
\tilde w(\t)=1= \tilde \omega(s)\,,
\end{equation}
which will be also required by the dynamics for self-consistency.
\subsubsection{Equations of motion}
We do \emph{not} yet impose Eq.\ \Eq{ww}. Since $\tilde w$ and $w$ are fixed functions of the parameter $\t$, their variation with respect to $\delta x^\mu$ when calculating the equations of motion is zero. In fact, from
\begin{eqnarray}\nonumber
\delta (d s)^2 &=& 2d s\,\delta(d s)=-2w \delta (\mathcal{D}_\t x^\mu) \mathcal{D}_\t x_\mu\, (d\t)^2\\
&=&-2w \mathcal{D}_\t \delta x^\mu \mathcal{D}_\t x_\mu\, (d\t)^2,\nonumber
\end{eqnarray}
one gets $\delta (d s)=-d\t\,w\,u_\mu \mathcal{D}_\t \delta x^\mu$. Applying this equation and the variational principle to the action \Eq{Sp3} with measure \Eq{ww}, after integrating by parts we obtain
\begin{equation}\label{deS}
\delta S_{\rm p} =-\intd\t\,w\,(m\mathcal{D}_\t u_\mu)\, \delta x^\mu
\end{equation}
and the equations of motion
\begin{equation}\label{cdp}
\mathcal{D}_\t p_\mu=0\,,\qquad p_\mu:=\tilde \omega\,m u_\mu\,.
\end{equation}
The vector $p^\mu$ is the canonical momentum of the relativistic particle, as one can check when treating the latter as a constrained Hamiltonian system \cite{HRT}. Given the Lagrangian density $L=-m(\tilde w/w)\sqrt{-w\hat u^2}$ in \Eq{Sp3},
\begin{eqnarray}
p^\mu &:=&\frac{\partial L}{\partial \hat u_\mu}= \frac{\tilde w}{\sqrt{w}}\frac{m\hat u^\mu}{\sqrt{-\hat u^2}}
= \frac{\tilde w}{\sqrt{w}}\frac{m u^\mu}{\sqrt{-u^2}}\nonumber\\
&\ \stackrel{\text{\tiny \Eq{uu}}}{=}\ &\tilde w\,m u^\mu\,.\label{pcan}
\end{eqnarray}
From the expression $p^2=\tilde\omega^2\, m^2 u^2$, one gets the momentum-mass relation
\begin{equation}\label{1st}
p_\mu p^\mu+m_w^2= 0\,,\qquad m_w=\tilde w\,\frac{m}{\sqrt{w}}=\tilde\omega\,\frac{m}{\sqrt{\omega}}\,.
\end{equation}
This is not the usual dispersion relation one would have expected from the findings in nonrelativistic mechanics \cite{frc5} or in scalar field theory \cite{frc6}. The ultimate reason is the dynamical role of Eq.\ \Eq{1st}, where $-p^2$ is (the square of) a dynamical canonical variable and not the eigenvalue of the Laplace--Beltrami operator. The presence of an effective varying mass is reminiscent of the analogous varying electric charge found in Maxwell theory and electrodynamics \cite{frc8}, although the density currents associated with, respectively, the actual mass $m$ and the electric charge will obey crucially different continuity equations.
Let
\begin{equation}
\check{\mathcal{D}}_\t:=\frac{1}{w(\t)}\partial_\t \left[w(\t)\,\cdot\,\right]
\end{equation}
be the weighted derivative acting on bilinear densities: $\check{\mathcal{D}}_\t (AB)=B \mathcal{D}_\t A+A\mathcal{D}_\t B$. Applying $\check{\mathcal{D}}_\t$ to Eq.\ \Eq{1st} and using the equations of motion \Eq{cdp}, we uniquely fix the function $\tilde w$ as in Eq.\ \Eq{ww}. Thus, $p_\mu=m u_\mu$, and setting $\t=s$ ($w=\omega$) in the equations of motion \Eq{cdp} yields
\begin{equation}\label{pmu}
\mathcal{D}_s p^\mu= m\mathcal{D}_s^2 x^\mu=0\,,
\end{equation}
in agreement with the equations of motion of a massless free particle in nonrelativistic mechanics ($\mathcal{D}_t^2 x^i=0$) \cite{frc5} and with the Klein--Gordon equation in massless scalar field theory ($\mathcal{D}_\mu\mathcal{D}^\mu \phi=0$) \cite{frc6}.
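As a simple illustration, Eq.\ \Eq{pmu} can be integrated explicitly. Since $\mathcal{D}_s^2 x^\mu=\omega^{-1/2}\partial_s^2(\sqrt{\omega}\,x^\mu)$, the general free worldline is
\[
x^\mu(s)=\frac{c_1^\mu+c_2^\mu\, s}{\sqrt{\omega(s)}}\,,\qquad u^\mu=\mathcal{D}_s x^\mu=\frac{c_2^\mu}{\sqrt{\omega(s)}}\,,
\]
with constant vectors $c_{1,2}^\mu$; the normalization \Eq{uu} fixes $c_{2\mu}c_2^\mu=-1$. The motion is thus uniform in the integer-picture coordinates $\sqrt{\omega}\,x^\mu$ but, for nonconstant $\omega$, accelerated in the fractional coordinates $x^\mu$.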
Notice that the integer picture \Eq{Sp4} stemming from the change of variables \Eq{chiw} is consistent. We have seen that the action \Eq{Sp4} corresponds to the one of a standard particle with worldline $\chi^\mu$. Since $\bar s=s$, the momentum associated with $\chi$ is $\bar p^\mu=m\partial_{\bar s}\chi^\mu=\sqrt{\omega}\,p^\mu$, and Eq.\ \Eq{pmu} coincides with the standard equation $\partial_{\bar s}\bar p^\mu=m\partial_{\bar s}^2\chi^\mu=0$.
In Hamiltonian formalism, Eq.\ \Eq{1st} is a first-class constraint, which we implement by replacing the symbol $=$ with a weak equality $\approx$ on the constraint surface. As for the relativistic particle in ordinary spacetime, the presence of a constraint stems from the fact that the momentum $p$ and the Lagrangian $L$ are, respectively, zeroth and first order in the ``velocities'' $\hat u$. As a consequence, there is no unique solution $\hat u^\mu(x,p)$, the Lagrangian is singular, and the canonical Hamiltonian $H:= p_\mu \hat u^\mu-L=0$ vanishes identically. The Dirac Hamiltonian can then be written as the sum of the canonical Hamiltonian plus the first-class constraint multiplied by a function $f(\t)$: $H_{\rm D}:= H+f(\t)\,(p_\mu p^\mu+m_w^2)\approx 0$. The fractional Hamilton equations \cite{frc5} correctly stem from this object. The Poisson bracket between momentum and $H_{\rm D}$ yields the equations of motion \Eq{cdp},
\begin{equation}
\mathcal{D}_\t p^\mu=\{p^\mu,H_{\rm D}\}=-\frac{\partial H_{\rm D}}{\partial x_\mu}=-\frac{\partial f}{\partial x_\mu}(p^2+m_w^2)\approx 0\,,
\end{equation}
while the bracket for $x^\mu$ fixes the function $f$: $\hat u^\mu=\mathcal{D}_\t x^\mu=\{x^\mu,H_{\rm D}\}=\partial H_{\rm D}/\partial p_\mu=2fp^\mu$, hence
\begin{equation}
H_{\rm D}= \frac{w}{2m}\sqrt{-\hat u^2}\,(p^2+m_w^2)\,.
\end{equation}
When $w(\t)=1$, the system is parametrization invariant and the residual arbitrariness in the parameter $\t$ can be removed by considering a gauge constraint $x^0-\t\approx 0$ and treating Eq.\ \Eq{1st} as a second-class constraint \cite{HRT}, but we will not do it here.
\subsubsection{Nonrelativistic limit}
The representation in the integer picture can be used as the starting point to discuss the nonrelativistic limit. Expanding Eq.\ \Eq{Sp4} ($\tilde w=1$) in an inertial frame where $|\partial_\t\chi^i/\partial_\t\chi^0|\ll 1$, we get
\begin{eqnarray}
S_{\rm p} &=&-m\intd \t\,\sqrt{-\partial_\t\chi_\mu\partial_\t\chi^\mu}\nonumber\\
&=& -m\intd \t\,\sqrt{(\partial_\t\chi^0)^2-(\partial_\t\chi^i)^2}\nonumber\\
&\approx& -m\intd \chi^0+S_{\rm nr}\,,\label{24}
\end{eqnarray}
where we assumed, without loss of generality, that $\partial_\t \chi^0>0$ and
\begin{eqnarray}
S_{\rm nr} &=& \frac12 m\sum_i\int d \t\, \frac{1}{\partial_\t \chi^0}\,(\partial_\t \chi^i)^2\nonumber\\
&=& \frac12 m\sum_i\int d t\, \frac{1}{\dot\chi^0}\,v_0(t)\,({}_v\mathcal{D}_t x^i)^2\,,\label{nore2}
\end{eqnarray}
where we set $w[\t(t)]=v_0(t)$ for any identification of the worldline parameter $\t$ with a function of time $x^0=t$. The limit \Eq{24} is also obtained from Eq.\ \Eq{tras}, dropping $O(\Omega^2)$ and $O(x_i^3)$ terms.
To recover the nonrelativistic limit \Eq{nore}, one should impose $\dot\chi^0(t)=\partial_t(\sqrt{v_0}\,t)=1$, which implies $w(\t)=1=v_0(t)$ and a spacetime whose multiscale structure is encoded only in spatial directions. This result motivates a different scenario, where the measure weight $w\to w_\mu$ in the derivatives is anisotropic.
\subsection{Anisotropic action weight}
\subsubsection{Line element and action}
A generalization of the previous case is obtained by allowing for the differential $d_\omega$ in Eq.\ \Eq{Sp2} to carry different weights $\omega_\mu$ for each of the $D$ directions:
\begin{equation}
(d_\omega x)^\mu:=\frac{1}{\sqrt{\omega_\mu(s)}}\,d \left[\sqrt{\omega_\mu(s)}\,x^\mu\right]\,.
\end{equation}
The notation of the previous section can be used, provided expressions of the form $w \hat u^2$ and similar are replaced by their counterpart where the sum over spacetime indices is extended also to the weights, $w \hat u^2\to w\cdot \hat u\cdot \hat u=\sum_\mu w_\mu \hat u^\mu \hat u_\mu$. For instance, the action can be written as ($\tilde \omega=1$ from the start)
\begin{eqnarray}
S_{\rm p}&=&-m\intd s=-m\int\sqrt{-d(\sqrt{\omega_\mu}\,x_\mu)\,d(\sqrt{\omega_\mu}\,x^\mu)}\nonumber\\
&=&-m\intd \t\,\sqrt{w\cdot\hat u\cdot \hat u}\,,\qquad \hat u^\mu=(\mathcal{D}_\t x)^\mu\,.\label{Sp5}
\end{eqnarray}
Consequently, the normalization \Eq{uu} of the velocity holds, $ \omega\cdot u\cdot u=-1$. We call $\omega_\mu$ the ``action weights'' to avoid confusion with the spacetime weight $v(x)$. A system with anisotropic action weights is not necessarily associated with a spacetime with anisotropic spacetime weight.
\subsubsection{Equations of motion}
The variation $\delta (d s)=-d\t\,w\cdot u\cdot\mathcal{D}_\t \delta x$
yields the equations of motion
\begin{equation}\label{cdp2}
(\mathcal{D}_\t p)^\mu=0\,,\qquad p^\mu= m (\mathcal{D}_s x)^\mu\,.
\end{equation}
In particular, for $\t=s$ one has
\begin{equation}\label{pmu2}
(\mathcal{D}_s p)^\mu= m(\mathcal{D}_s^2 x)^\mu=0\,.
\end{equation}
The momentum $p$ can be also defined by the generalization of Eq.\ \Eq{pcan} working in the integer picture and computing $\bar p^\mu=\sqrt{\omega_\mu}\,p^\mu$.
The dispersion relation \Eq{1st} cannot be cast in terms of $p^2$ but, rather, in terms of a weighted squared momentum:
\begin{equation}\label{1st2}
\omega\cdot p\cdot p+m^2= 0\,.
\end{equation}
\subsubsection{Nonrelativistic limit}
Reconciling Eqs.\ \Eq{nore} and \Eq{nore2} is now straightforward. It is sufficient to make the following anisotropic identifications of the action weights, such that derivatives acting on $x^0$ are normal and those acting on $x^i$ all have the same weight:
\begin{equation}
\omega_0=1=w_0\,,\qquad \omega_i[s(t)]=v_0(t)=w_i[\t(t)]\,.
\end{equation}
Then, $\dot\chi^0(t)=\partial_t(\sqrt{w_0}\,t)=1$ and $S_{\rm nr}= S_{\rm nonrel}$. The line element of this system is
\begin{subequations}\label{es}\begin{equation}
d s=\sqrt{d t^2-v_0(t)\,(d_v x^i)^2}=\frac{d t}{\gamma}\,,
\end{equation}
where
\begin{equation}
\gamma=\frac{1}{\sqrt{1-v_0(t)\,({}_v\mathcal{D}_t x^i)^2}}\,.
\end{equation}\end{subequations}
The nature of the nonrelativistic approximation $\gamma\sim 1$ in multiscale spacetimes depends not only on how much smaller the modulus of the velocity $\dot {\bf x}$ is than the speed of light $c=1$, but also on the time at which the approximation is taken with respect to the evolution of the Universe, determined by the hierarchy of scales contained in the measure weight (which can be better appreciated in the multifractional realization of these measures \cite{frc6,frc7,frc4,frc8}).
The case of anisotropic geometries with standard time direction is of particular interest. Setting $v_0=1$ makes the relativistic-particle system with line element \Eq{es} ordinary ($\chi^\mu=x^\mu$ for all $\mu$) and Poincar\'e invariant, Eq.\ \Eq{Sp1}. The nontrivial multiscale structure along spatial directions [$v_i(x^i)\neq 1$] can be seen only in quantum mechanics \cite{frc5} or when fields are coupled to the particle, as in the case of electromagnetism that we will illustrate below. In fact, the application of ordinary Lorentz transformations to the coordinates labeling fields will be instrumental to couple the relativistic particle and multiscale field theory consistently.
Ultimately, the complications of the case with isotropic action weights stem from the fact that, for the relativistic particle, the time coordinate is also a degree of freedom of the system. In contrast, in nonrelativistic mechanics time is a parameter and spatial coordinates are the degrees of freedom, while in field theory all coordinates are parameters and the degrees of freedom are tensorial densities. In other words, the coordinate or field redefinitions mapping the fractional picture to the integer picture [e.g., $\chi^i(t)=\sqrt{v_0(t)}\, x^i(t)$ in nonrelativistic mechanics and $\mathcal{A}_\mu=\sqrt{v(x)}\,A_\mu$ in electrodynamics] do not combine different degrees of freedom, since the measure weights are functions of parameters. On the other hand, in relativistic mechanics the relation \Eq{chiw} does entangle different degrees of freedom when moving to a parametrization choice $\t=f(t)$ or $\t=f(s)$.
\section{Charged particle}\label{ch}
When the relativistic particle interacts with fields, there is no clean representation in the integer picture except in anisotropic geometries with $v_0(t)=1$. We can see this in electromagnetism. The Maxwell action in multiscale spacetimes is \cite{frc8}
\begin{equation}\label{clf}
S_F=-\frac1{4}\intd^Dx\,v(x)\,F_{\mu\nu} F^{\mu\nu}\,,
\end{equation}
where
\begin{equation}
F_{\mu\nu} =\mathcal{D}_\mu A_\nu-\mathcal{D}_\nu A_\mu\label{maxf}
\end{equation}
is the field strength of the Abelian gauge field density vector $A$. To recast the action in the integer picture, it is sufficient to make the field redefinition $A_\mu\to \mathcal{A}_\mu=\sqrt{v(x)}\,A_\mu$, so that $S_F=(-1/4)\intd^Dx\,\mathcal{F}_{\mu\nu} \mathcal{F}^{\mu\nu}$, where $\mathcal{F}_{\mu\nu} =\partial_\mu \mathcal{A}_\nu-\partial_\nu \mathcal{A}_\mu=\sqrt{v(x)} F_{\mu\nu}$. However, at the level of particle mechanics the integer picture entails a change of coordinates, Eq.\ \Eq{chiw}, which should not happen in a field-theory context. Combined with the above field redefinition, the net result is considerably more intricate. First, we notice that (sum over the thrice-repeated index $\sigma$)
\begin{equation}
\frac{d}{d x^\mu}=\frac{d\chi^\sigma}{d x^\mu}\frac{d}{d \chi^\sigma}=\sqrt{w_\sigma}\frac{\mathcal{D}_\t x^\sigma}{\partial_\t x^\mu}\frac{d}{d \chi^\sigma}=:\xi_{\mu\sigma}(x,\t)\frac{d}{d \chi^\sigma}\,,
\end{equation}
so that
\begin{eqnarray}
F_{\mu\nu} &=& \frac{1}{\sqrt{v(x)}}\left(\partial_\mu \mathcal{A}_\nu-\partial_\nu \mathcal{A}_\mu\right)\nonumber\\
&=& \frac{1}{\sqrt{v(x)}}\left[\xi_\mu^{\, ~\sigma}(x,\t)\frac{\partial\mathcal{A}_\nu}{\partial\chi^\sigma}-\xi_\nu^{\, ~\sigma}(x,\t)\frac{\partial\mathcal{A}_\mu}{\partial\chi^\sigma}\right].
\end{eqnarray}
The expression in brackets coincides with $\mathcal{F}_{\mu\nu}$ only if $\xi_\mu^{\,~\sigma}=\delta_\mu^\sigma$. This happens only when $w_\mu=1$ for all $\mu$. In particular, the case where $w_0=1$ and $w_i=v_0(t)$ is compatible with the integer-picture field theory if, and only if,
\begin{equation}\label{vw}
v_0(t)=1\,,\qquad w_\mu(\t)=1\,,
\end{equation}
consistently with the nonrelativistic limit. This corresponds to a geometry which is multiscale \emph{only along spatial directions}. The alternative to imposing this condition is to forfeit the integer-picture field theory altogether. This is not an issue \emph{per se}, since the physics is defined in the fundamental variables $x^\mu$ and field $A_\mu$, while the integer picture is only a useful calculational tool which, as known, fails in interacting systems \cite{frc6}. Here we have the double complication of having both an interaction (between the particle and the electromagnetic field) and a system (the relativistic particle) which is nonlinear even when taken alone, at least at the level of the action. Overall, it would seem reasonable to abandon the integer picture.
However, one may raise at least three objections against doing so. First, the electromagnetic interaction should at most produce an effective spacetime-dependent electric charge while allowing for the integer picture \cite{frc8}. Second, the equations of motion of the relativistic particle are indeed linear in the coordinates. Last, failing to impose Eq.\ \Eq{vw} renders the equations of motion and the treatment of the energy-momentum tensor far more complicated. Therefore, from now on we do assume \Eq{vw}.
We will keep the same velocity symbols as before, with the understanding that $\hat u^\mu=\partial_\t x^\mu$ and $u^\mu=\partial_s x^\mu$. A charged particle produces an electromagnetic field $A_\mu[x^\nu(\t)]$ according to the action $S_{\rm p}+S_e$, where
\begin{equation}\label{Se}
S_e=\intd x^\mu\, \tilde e A_\mu=\intd \t\,\hat u^\mu\, \tilde e A_\mu \,.
\end{equation}
Here, $\tilde e=e_0\sqrt{v(x)}$ is the effective spacetime-dependent charge appearing in the covariant derivatives of electrodynamics \cite{frc8} and $e_0$ is the electron charge.
The equations of motion can be easily found following almost the usual steps \cite{LL2,Zwi09}. Varying the action \Eq{Se} with respect to $\delta x^\mu$,
\begin{eqnarray}
\delta S_e &=& \intd\t\left[\hat u^\nu\delta(\tilde e A_\nu)+\tilde e A_\mu\delta(\partial_\t x^\mu)\right]\nonumber\\
&=& \intd\t\left[\hat u^\nu\partial_\mu (\tilde e A_\nu)\delta x^\mu+\tilde e A_\mu\partial_\t \delta x^\mu\right]\nonumber\\
&=& \intd\t\left[\hat u^\nu\partial_\mu (\tilde e A_\nu)\delta x^\mu-\partial_\t(\tilde e A_\mu) \delta x^\mu\right]\nonumber\\
&=& \intd\t\,\hat u^\nu\left[\tilde e\,(\mathcal{D}_\mu A_\nu)\delta x^\mu-\partial_\nu(\tilde e A_\mu) \delta x^\mu\right]\nonumber\\
&=& \intd\t\,\hat u^\nu\,\tilde e F_{\mu\nu}\,\delta x^\mu\,.
\end{eqnarray}
Adding this to Eq.\ \Eq{deS} with $w=1$, one obtains the equations of motion
\begin{equation}\label{deS2}
\partial_\t p_\mu=\hat u^\nu\tilde e F_{\mu\nu}\,,
\end{equation}
or, for $\t=s$, $m\partial_s u_\mu= u^\nu\tilde e F_{\mu\nu}$.
Equation \Eq{Se} can be written in terms of a source $J^\mu_e$. First, one replaces $\tilde e$ with a spatial charge distribution $\rho_e$ of pointwise charges $\tilde e_n\propto \tilde e({\bf x}_n)$ located at various points ${\bf x}_n$. In our case, $\rho_e$ should take into account the nontrivial measure weight of space, so that the ordinary Dirac distribution is replaced by a multiscale spatial delta distribution $\delta_v({\bf x},{\bf x}_n):=\prod_i\delta(x^i-x^i_n)/\sqrt{v_i(x^i)\,v_i(x^i_n)}$ (e.g., Ref.\ \cite{frc4}). Multiplying $\rho_e$ times $\dot x$ yields the charge density current
\begin{equation}
J^\mu_e:=\rho_e\,\dot x^\mu\,,\qquad \rho_e=\sum_n\tilde e_n\delta_v({\bf x},{\bf x}_n)\,.
\end{equation}
Thus, $\tilde e\to d\tilde e:=\rho_e\,d\varrho({\bf x})$ in Eq.\ \Eq{Se}, which becomes ($\t=t$)
\begin{equation}\label{Se2}
S_e=\intd x^\mu\, d\tilde e A_\mu=\intd^Dx\,v(x)\, J^\mu_e A_\mu\,.
\end{equation}
In particular, in the integer picture one has $S_e=\bar S_e=\intd^Dx\, \mathcal{J}^\mu_e \mathcal{A}_\mu$, where $\mathcal{J}^\mu_e:=\sqrt{v}\,J^\mu_e$. The equations of motion \Eq{deS2} can be recast in terms of mass and charge densities,
\begin{equation}\label{deS3}
\rho_m\partial_s u_\mu=\rho_e u^\nu F_{\mu\nu}\,,
\end{equation}
where $\rho_m=\sum_n m_n\delta_v({\bf x},{\bf x}_n)$.
\section{Energy-momentum tensor}\label{emt}
To get the energy-momentum tensor \cite{LL2}, one notices that the four-momentum density should be of the form ${}^{({\rm p})}T^{0\nu}=\rho_m u^\nu$, so that
\begin{equation}
p^\nu =\intd\varrho({\bf x})\,{}^{({\rm p})}T^{0\nu}\,.
\end{equation}
Moreover, the mass density is the $0$ component of $\rho_m \dot x^\mu$, so that the symmetric energy-momentum tensor of the relativistic particle is
\begin{equation}\label{Tp}
{}^{({\rm p})}T^{\mu\nu}:=\rho_m \gamma u^\mu u^\nu=J_m^\mu u^\nu\,,
\end{equation}
where $\gamma=d s/d t$ and in the second step we defined a mass density current $J_m^\mu:=\rho_m\dot x^\mu$. Despite its similarity with the charge density current, $J_m^\mu$ obeys a different continuity law. Let
\begin{equation}
\check{\mathcal{D}}_\mu:=\frac1v \partial_\mu \left[v\,\cdot\,\right]
\end{equation}
be the weighted spacetime derivative for bilinears. In the present case, $\check{\mathcal{D}}_0=\partial_t$. While it was shown in Ref.\ \cite{frc8} that the charge density current is conserved with respect to the weighted derivative \Eq{wed}, $\mathcal{D}_\mu J_e^\mu=0$, due to conservation of the total mass $M=\sum_n m_n$ we have
\begin{equation}\label{JJ}
0=\check{\mathcal{D}}_\mu J_m^\mu= \dot \rho_m+ \frac{1}{v_i}\,\partial_i (v_i\,J_m^i)\,.
\end{equation}
Integrating over the spatial volume and throwing away a boundary term, one gets $\dot M=0$.
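For completeness, the integration step can be spelled out. Writing $\intd\varrho({\bf x})=\int d^{D-1}x\,\prod_i v_i(x^i)$ and using factorizability of the weight, $(v/v_i)\,\partial_i(v_i\,J_m^i)=\partial_i(v\,J_m^i)$ is a total spatial derivative, so Eq.\ \Eq{JJ} gives
\[
0=\intd\varrho\,\check{\mathcal{D}}_\mu J_m^\mu=\partial_t\intd\varrho\,\rho_m=\dot M\,,
\]
where we used $\intd\varrho\,\delta_v({\bf x},{\bf x}_n)=1$ for every $n$, so that $\intd\varrho\,\rho_m=\sum_n m_n=M$.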
The multiscale continuity equation for ${}^{({\rm p})}T$ is obtained by applying these results:
\begin{eqnarray}
\check{\mathcal{D}}_\mu\, {}^{({\rm p})}T^\mu_{\, ~\nu} &=& u_\nu\check{\mathcal{D}}_\mu J_m^\mu+ J_m^\mu \partial_\mu u_\nu\ \stackrel{\text{\tiny \Eq{JJ}}}{=}\ J_m^\mu \partial_\mu u_\nu\nonumber\\
&=& \rho_m \dot x^\sigma \frac{d s}{d x^\sigma} \partial_s u_\nu\ \stackrel{\text{\tiny \Eq{deS3}}}{=}\ -\rho_e \dot x^\sigma \frac{d s}{d x^\sigma} u^\mu F_{\mu\nu}\nonumber\\
&=& -J_e^\mu F_{\mu\nu}\label{Tpcon}\,.
\end{eqnarray}
If we add the dynamics for the $U(1)$ gauge field $A$, we obtain conservation of the energy-momentum tensor of Maxwell theory in multiscale spacetimes with pointwise particle sources. The total action is $S=S_{\rm p}+S_e+S_F$. The energy-momentum tensor associated with the Maxwell term is \cite{frc8}
\begin{equation}\label{emt2}
{}^{(F)}T_{\mu\nu}=-\frac{1}{4}F^{\sigma\tau} F_{\sigma\tau}\eta_{\mu\nu} +F_\mu^{\ \sigma}F_{\nu\sigma}\,,
\end{equation}
which, upon using the cyclic relation $\check{\mathcal{D}}_\sigma F_{\mu\nu}+\check{\mathcal{D}}_\mu F_{\nu\sigma}+\check{\mathcal{D}}_\nu F_{\sigma\mu}=0$ and Maxwell equations $\mathcal{D}_\nu F^{\mu\nu}=J^\mu_e$, obeys the (non)conservation law \cite{frc8}
\begin{equation}\label{TFcon}
\check{\mathcal{D}}_\mu\, {}^{(F)}T^\mu_{\, ~\nu}=J^\mu_e F_{\mu\nu}\,.
\end{equation}
Combining this with Eq.\ \Eq{Tpcon}, we get
\begin{equation}
\check{\mathcal{D}}_\mu\,T^\mu_{\, ~\nu}=\check{\mathcal{D}}_\mu\, [{}^{(F)}T^\mu_{\, ~\nu}+{}^{({\rm p})}T^\mu_{\, ~\nu}]=0\,,
\end{equation}
as announced in \cite{frc8}.
\section{Discussion}\label{disc}
We conclude by drawing some consequences for the spectral dimension $d_{\rm S}$ of multiscale spacetimes \cite{frc7} and electrodynamics \cite{frc8}.
The spectral dimension of a geometry is obtained by letting a test particle diffuse via a transport equation which can be constructed from stochastic nonrelativistic mechanics. While in ordinary geometries this procedure is straightforward, in multiscale spacetimes there is an ambiguity in the choice of the scaling of the abstract diffusion time. In turn, this scaling determines the value of the spectral dimension \cite{frc7}. If the scaling is nonanomalous, $d_{\rm S}=D$. In this paper, we found that the charged relativistic particle admits an integer picture and is compatible with field theory only in spatially multiscale geometries where time direction is ordinary. An ordinary nonrelativistic limit, as implied by Eq.\ \Eq{vw}, would tell us that the diffusion equation should be standard and, thus, $d_{\rm S}=D$.
The same reasoning would also lead to the conclusion that uniform charge distributions in multiscale spacetimes with $v_0(t)=1$ are associated with a measured constant electric charge. This would rule out the effects of the time-varying fine-structure constant of multiscale origin discussed in Ref.\ \cite{frc8}. Yet, nontrivial effects coming from a space-dependent electric charge (hence, a spatially varying fine-structure constant) would not be excluded.
All these conclusions are based upon the nonrelativistic limit \Eq{nore} and compatibility of the charged relativistic particle with field theory. Concerning the first assumption, a simple alternative avoiding the condition $d_{\rm S}=D$ is to give up the nonrelativistic limit \Eq{nore}, assume the relativistic formulation as fundamental, and define the nonrelativistic limit through it, as in Eq.\ \Eq{nore2}. Then, one would not need to impose constraints on the action weights, but the nonrelativistic action would nevertheless lead to a modified mechanics and a modified diffusion equation with respect to that found in Ref.\ \cite{frc7}. On the other hand, if one insists on keeping Eq.\ \Eq{nore}, one should also remark that the particle-plus-fields system is not fundamental, in contrast with a pure field-theory approach where the electromagnetic source producing the current $J_e$ is a fermionic density field \cite{frc8}. At a level more fundamental than classical mechanics, the requirement \Eq{vw} is not compelling. This suggests turning around the above conclusions and reinterpreting them in a more conservative way: Electromagnetic sources may be well approximated by the relativistic mechanics presented here only in multiscale geometries with ordinary time direction. When time is anomalous, one should resort either to field theory or to a more complicated mechanics model.
\bigskip
\begin{acknowledgments}
The author thanks Giuseppe Nardelli and especially David Rodr\'iguez for useful discussions. This work is under a Ram\'on y Cajal tenure-track contract.
\end{acknowledgments}
\section{I.~Clebsch-Gordan coefficients}
Upon addition of a quasi-particle to a cluster, the probabilities to realize a state with a total spin $S'$ and spin projection $S'_z$ are determined by the corresponding Clebsch-Gordan coefficients for combining the spin $S$ of the existing cluster with the spin $1/2$ of the incoming quasi-particle [See, e.g., L. D. Landau and E. M. Lifshitz, {\em Quantum Mechanics: Non-Relativistic Theory}, Vol. 3 (3rd ed., Pergamon Press, 1977)].
The states we consider, labelled by $N$ and $m$, correspond to $S=S_z=(N-m)/2$. Thus, the relevant Clebsch-Gordan coefficients are the following:
\begin{eqnarray}
\left\langle\frac{N-m}2,\frac{N-m}2;\frac12,\frac12\Big|\frac{N-m+1}2,\frac{N-m+1}2\right\rangle&=&1\\
\left\langle\frac{N-m}2,\frac{N-m}2;\frac12,-\frac12\Big|\frac{N-m+1}2,\frac{N-m-1}2\right\rangle&=&\sqrt{\frac1{N-m+1}}\\
\left\langle\frac{N-m}2,\frac{N-m}2;\frac12,-\frac12\Big|\frac{N-m-1}2,\frac{N-m-1}2\right\rangle&=&\sqrt{\frac{N-m}{N-m+1}}
\end{eqnarray}
The first two lines correspond to processes with $\Delta m=0$ whereas the third line describes a process with $m\to m+1$.
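These values can be cross-checked with SymPy's Clebsch-Gordan module; the sketch below is ours, and the values $N=5$, $m=1$ are arbitrary illustrative choices.
\begin{verbatim}
# Hedged check of the quoted Clebsch-Gordan coefficients.
from sympy import S, sqrt, Rational, simplify
from sympy.physics.quantum.cg import CG

N, m = 5, 1                      # arbitrary illustrative values
j = Rational(N - m, 2)           # spin of the existing cluster

c1 = CG(j, j, S.Half,  S.Half, j + S.Half, j + S.Half).doit()
c2 = CG(j, j, S.Half, -S.Half, j + S.Half, j - S.Half).doit()
c3 = CG(j, j, S.Half, -S.Half, j - S.Half, j - S.Half).doit()

print(c1)                                                # expected: 1
print(simplify(c2 - sqrt(Rational(1, N - m + 1))))       # expected: 0
print(simplify(c3 - sqrt(Rational(N - m, N - m + 1))))   # expected: 0
\end{verbatim}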
\section{II.~Master equation and Fokker-Planck equation}
The master equation that describes the evolution of the probability $P_{N,m}$, taking into account all processes listed in the main text, is
\begin{eqnarray}
\label{master}
\frac{d}{dt}P_{N,m}&=&\frac{AV}2\left[\left(1+\frac1{N-m}\right)P_{N-1,m}+\left(1-\frac1{N-m}\right)P_{N-1,m-1}-2P_{N,m}\right]\\
&&+\frac{\bar\Gamma}V\left[(N-m+1)(m+1)P_{N+2,m+1}-(N-m)mP_{N,m}\right]\nonumber\\
&&+\frac{\bar\Gamma_{\rm t}}{2V}\left[(m+2)(m+1)P_{N+2,m+2}+2(N-m+1)(m+1)P_{N+2,m+1}+(N-m+2)(N-m+1)P_{N+2,m}\right.\nonumber\\
&&\left.-N(N-1)P_{N,m}\right]
+\frac1{\tau_{\rm sf}}\left[(m+1)P_{N,m+1}+(N-m+1)P_{N,m-1}-NP_{N,m}\right].\nonumber
\end{eqnarray}
Assuming that the number of particles exceeds the average density of the unpolarized system, we may write a simplified master equation for the fully polarized system, $m\to0$, where singlet annihilation is the dominant process. In that case, injection of a quasi-particle with opposite spin plus singlet annihilation yields a process $N+1\to N$ with rate $AV/2$. Similarly, spin flip plus singlet annihilation yields a process $N+2\to N$ with rate $1/\tau_{\rm s}$. In particular,
\begin{eqnarray}
\label{master-P}
\frac{d}{dt}P_{N}&=&\frac{AV}2\left[\left(1+\frac1{N}\right)P_{N-1}+\left(1-\frac1{N+2}\right)P_{N+1}-2P_{N}\right]\\
&&+\frac{\bar\Gamma_{\rm t}}{2V}\left[(N+2)(N+1)P_{N+2}-N(N-1)P_{N}\right]
+\frac1{\tau_{\rm s}}\left[(N+2)P_{N+2}-NP_{N}\right].\nonumber
\end{eqnarray}
In the continuum limit, the probability $P(N)$ obeys the equation
\begin{equation}
\frac{\partial P}{\partial t}=\frac{AV}2\frac\partial{\partial N}\left[N^2\frac\partial{\partial N}\left(\frac P{N^2}\right)\!+\!\left(\frac{2\bar\Gamma_{\rm t}}{AV^2}N^2
\!+\!\frac4{AV\tau_{\rm s}}N\right)P\right]\!.\!\!\!\!\!\end{equation}
The stationary solution is Eq.~(7) of the main text.
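For completeness, the zero-flux (stationary) solution of this Fokker--Planck equation can be obtained symbolically; the following SymPy sketch is ours, with $\bar\Gamma_{\rm t}$ renamed {\tt g\_t}.
\begin{verbatim}
# Hedged SymPy sketch: stationary zero-flux solution of the
# Fokker-Planck equation above (g_t stands for Gamma_t-bar).
import sympy as sp

N = sp.symbols('N', positive=True)
A, V, g_t, tau = sp.symbols('A V g_t tau_s', positive=True)

drift = 2*g_t/(A*V**2)*N**2 + 4/(A*V*tau)*N
Q = sp.Function('Q')                       # Q = P / N^2
sol = sp.dsolve(sp.Eq(N**2*Q(N).diff(N) + drift*N**2*Q(N), 0), Q(N))
P = sp.simplify(sol.rhs * N**2)
print(P)  # ~ C1 * N**2 * exp(-2*g_t*N**3/(3*A*V**2) - 2*N**2/(A*V*tau_s))
\end{verbatim}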
\section{III.~The cross-over to an unpolarized state}
While, in principle, our model is only valid at $m\ll N$, we may still use it to better characterize the cross-over to an unpolarized state. The full stationary solution of Eqs.~(3) and (4) of the main text can be written as
\begin{eqnarray}
\frac{AV}{N-m}&=&2\frac {\bar\Gamma}Vm-\frac{\bar\Gamma_{\rm t}}V\frac{N^2}{N-m}\label{stat-2}\\
&=&\left(\frac{\bar\Gamma_{\rm t}}VN+\frac2{\tau_{\rm s}}\right)(N-2m).\label{stat-1}
\end{eqnarray}
If the polarization $p=1-2m/N$ is not too close to 1, namely $1-p\gg N_c^{-1}$, the second term in Eq.~\eqref {stat-2} may be neglected.
Using dimensionless units $a=A/A_c$, $v=V/V_c$, and $n=N/N_c$, we then obtain
\begin{eqnarray}
n=\frac{1-p}p-2v, \qquad a=2(1-p^2)\left(\frac{1-p}{2vp}-1\right)^2.\label{eq-res-full}
\end{eqnarray}
The second equation yields the polarization as a function of $a$ and $v$. Substitution of the result into the first equation then allows one to obtain the number of particles as function of $a$ and $v$. Equations (5) and (6) of the main text are reproduced at $1-p\ll1$. The polarization as well as the number of particles in the $(a,v)$-plane is shown in Fig.~\ref{fig-av}. The polarization smoothly decreases as $a$ or $v$ are increased, see Fig.~\ref{fig-av}a). While $n$ increases with $a$ or $v$, see Fig.~\ref{fig-av}b), the ratio $N/N_{\rm unpol.}$ decreases, see Fig.~\ref{fig-av}c). A large enhancement of the quasi-particle number, $N/N_{\rm unpol.}\gg1$, can be seen at small $a$ and $v$.
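The curves in Fig.~\ref{fig-av} can be regenerated numerically; the sketch below is ours (SciPy), and the root bracket simply enforces the physical branch $n\geq0$, i.e. $p\leq 1/(1+2v)$.
\begin{verbatim}
# Hedged numerical sketch for p(a, v) and n(a, v) from
# Eqs. (eq-res-full); bracketing choices are our assumptions.
import numpy as np
from scipy.optimize import brentq

def polarization(a, v):
    # solve a = 2(1 - p^2) ((1 - p)/(2 v p) - 1)^2 on the branch n >= 0
    f = lambda p: 2*(1 - p**2)*((1 - p)/(2*v*p) - 1)**2 - a
    return brentq(f, 1e-9, 1.0/(1.0 + 2.0*v))

def particle_number(a, v):
    p = polarization(a, v)
    return (1 - p)/p - 2*v

print(polarization(0.1, 0.2), particle_number(0.1, 0.2))
\end{verbatim}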
\begin{figure}[h!]
a)\resizebox{.25\textwidth}{!}{\includegraphics{figSMa.png}}
b)\resizebox{.26\textwidth}{!}{\includegraphics{figSMb.png}}
c)\resizebox{.28\textwidth}{!}{\includegraphics{figSMc.png}}
\caption{\label{fig-av} {Cross-over between the polarized and unpolarized state as described by Eqs.~\eqref{eq-res-full}.} a) The polarization as a function of $v$ and $a$. b) The number of quasi-particles as a function of $v$ and $a$. c) Detail of the enhancement of the number of quasi-particles compared to the unpolarized case, $N/(c_0V)$.}
\end{figure}
Within this description, the cross-over between the unpolarized regime and the SR dominated polarized regime appears smoother than sketched in Fig.~1 of the main text. Namely, according to Eqs.~\eqref{eq-res-full}, a finite polarization $p$ builds up at vanishing injection, $a\to 0$, for a sufficiently small volume, $v<v_0$ with $v_0=(1-p)/(2p)$. As the injection increases, the volume corresponding to the same polarization decreases,
\begin{equation}
a\approx 8p^2\frac{1+p}{1-p}(v_0-v)^2\quad {\rm at}\quad v_0-v\ll v_0.
\end{equation}
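The quadratic law above can be verified by a series expansion; a short SymPy check (ours):
\begin{verbatim}
# Hedged SymPy check of the small-(v0 - v) expansion above.
import sympy as sp

p, x = sp.symbols('p x', positive=True)   # x = v0 - v
v0 = (1 - p)/(2*p)
a = 2*(1 - p**2)*((1 - p)/(2*(v0 - x)*p) - 1)**2
lead = sp.series(a, x, 0, 3).removeO()
print(sp.simplify(lead - 8*p**2*(1 + p)/(1 - p)*x**2))   # expected: 0
\end{verbatim}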
\section{IV.~Ising spin model}
While the $1/(N-m)$ corrections in the first line of Eq.~\eqref{master} were essential to obtain the mean field result, they are much less important for the fluctuations. To confirm this, we study classical (or Ising) spins as considered in [A. Bespalov {\it et al.},
Phys. Rev. Lett. {\bf 117}, 117002 (2016)] for comparison. As the spin quantization axis is now {\lq\lq pinned\rq\rq}, one loses the $1/N$-asymmetry that favors the spontaneous polarization in the quantum (or Heisenberg) case. We note, however, that, if a polarized cluster is generated due to fluctuations, spin relaxation or triplet annihilation are necessary for it to relax, as the singlet recombination process does not change the total spin.
The master equation in that case is very similar to Eq.~\eqref{master}, however, without the $1/(N-m)$ terms in the first line. Note that, by contrast to the quantum case, where the condition $m\ll N$ was used to estimate the number of available states, here no approximations are necessary: the equation holds for all $m$. Assuming that the number of particles exceeds the average density of the unpolarized system, we may again write a simplified equation for the polarized system, $m\to0$ (or $m\to N$), adapting Eq.~\eqref{master-P} (i.e., omitting the terms $\sim 1/(N\pm 1)$). In the continuum limit, the stationary solution now obeys the equation
\begin{equation}
0=\frac\partial{\partial N}\left[\frac\partial{\partial N}P^{\rm cl}\!+\!\left(\frac{2\bar\Gamma_{\rm t}}{AV^2}N^2
\!+\!\frac4{AV\tau_{\rm s}}N\right)P^{\rm cl}\right]\!,\end{equation}
yielding
\begin{eqnarray}
P^{\rm cl}(N)=C_{\rm cl}\exp\left[-\frac{2\bar\Gamma_{\rm t}}{3AV^2}N^3-\frac2{AV\tau_{\rm s}}N^2\right].
\end{eqnarray}
If spin relaxation dominates, one finds
\begin{eqnarray}
\langle N\rangle_{\rm cl}^{\rm (s)}=\sqrt{\frac{AV\tau_{\rm s}}{2\pi}}.
\end{eqnarray}
If triplet annihilation dominates, one finds
\begin{eqnarray}
\langle N\rangle_{\rm cl}^{\rm (t)}=\frac{6^{1/3}\sqrt\pi}{\Gamma(1/6)}\left(\frac{AV^2}{\bar\Gamma_{\rm t}}\right)^{1/3}.
\end{eqnarray}
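Both averages follow from elementary moment integrals of $P^{\rm cl}(N)$; a symbolic cross-check (ours):
\begin{verbatim}
# Hedged SymPy check of <N>_cl in the two limiting regimes.
import sympy as sp

N = sp.symbols('N', positive=True)
A, V, tau, g_t = sp.symbols('A V tau_s g_t', positive=True)

# spin relaxation dominated: P ~ exp(-2 N^2 / (A V tau_s))
P_s = sp.exp(-2*N**2/(A*V*tau))
mean_s = sp.integrate(N*P_s, (N, 0, sp.oo))/sp.integrate(P_s, (N, 0, sp.oo))
print(sp.simplify(mean_s/sp.sqrt(A*V*tau/(2*sp.pi))))   # expected: 1

# triplet annihilation dominated: P ~ exp(-2 g_t N^3 / (3 A V^2))
P_t = sp.exp(-2*g_t*N**3/(3*A*V**2))
mean_t = sp.integrate(N*P_t, (N, 0, sp.oo))/sp.integrate(P_t, (N, 0, sp.oo))
target = 6**sp.Rational(1, 3)*sp.sqrt(sp.pi)/sp.gamma(sp.Rational(1, 6)) \
         * (A*V**2/g_t)**sp.Rational(1, 3)
print((mean_t/target).evalf())                          # expected: 1.000...
\end{verbatim}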
Surprisingly, the average number of particles $\langle N \rangle_{\rm cl}$ differs from the quantum case only by a prefactor. The same holds true for the fluctuations. By contrast, the mean field solution is now given by $N_0^{\rm cl}=N_{\rm unpol.}\ll\langle N \rangle_{\rm cl}$ and $m_0^{\rm cl}=N_{\rm unpol.}/2$.
\end{document}
\section{Quantum erasure channel with maximally entangled inputs}
We show that the fidelity would decrease exponentially with the number of channel uses for
rates above capacity for maximally entangled inputs that have the full Schmidt rank.
A quantum erasure channel transmits the input state with probability $1-p$ and
``erases" it, i.e., replaces it with an orthogonal erasure state with probability $p$
\cite{grassl-1997} (see also Ref. \cite{bennett-1996-pra}).
The dimension of the output Hilbert space is one larger than that of the input.
A quantum erasure channel $\mathcal{N}_{p}^{A^\prime \to B}$, defined in
Ref. \cite{wilde-book}, is given by the following Kraus operators
$\Big\{ \sqrt{(1-p)} \sum_{i=1}^{|A^\prime|} \ket{i}^B\bra{i}^{A^\prime},$
$\sqrt{p}\ket {e}^B\bra{1}^{A^\prime},...,\sqrt{p}\ket {e}^B\bra{|A^\prime|}^{A^\prime} \Big\}$,
$i = 1,..., |A^\prime|$, $p \in [0,1]$, $|B| = |A^\prime|+1$,
$\left\{\ket{i}^{A^\prime}\right\}, \left\{\ket{i}^B\right\}$ are orthonormal bases in
$\mathcal{H}_{A^\prime}$ and $\mathcal{H}_B$ respectively, and $\ket{e}^B = \ket{j}^B$ for
$j = |B|$. The action of the channel can be understood as follows
\[
\mathcal{N}_{p}^{{A^\prime} \to B}(\rho^{A{A^\prime}}) = (1-p) \sigma^{AB} + p \rho^A \otimes
\ket{e} \bra{e}^B.
\]
Let $\sigma^{AB}$ $= \mathcal{G}^{A^\prime \to B} (\rho^{AA^\prime})$, where $\mathcal{G}$
increases the dimension but leaves the state intact.
Then with probability $1-p$, the channel
leaves the state as $\sigma^{AB}$
and with probability $p$, it erases the state and replaces it by $\ket{e}^B$.
It is not difficult to see that $\sigma^{AB}$ is orthogonal
to $\rho^A \otimes \ket{e} \bra{e}^B$.
Taking this further for $n$ channel uses, let $\sigma^{AB^n}$
$= ( \mathcal{G}^{A^\prime \to B} )^{\otimes n} (\rho^{AA^{\prime n}})$.
The output can be written as the sum of $2^n$ orthogonal density
matrices, where each of these matrices results from $i$ erasures, $i \in\left\{0,...,n \right\}$,
occurring with probability $(1-p)^{n-i}p^i$. The number of states that have
suffered exactly $i$ erasures is ${n \choose i}$.
Let $B_{i_1} \cdots B_{i_{n-k}}$ be the
quantum systems that have not suffered erasures; we can then write the state in this
case, using $\sigma^{AB^n}$, as
\begin{fleqn}
\begin{align*}
\zeta_{i_1,...,i_{n-k}}^{AB_{i_1} \cdots B_{i_n}}
~ = ~ & \sigma^{AB_{i_1} \cdots B_{i_{n-k}}} \otimes
\bigotimes_{j=1}^k \ket{e} \bra{e}^{B_{i_{n-k+j}}}.
\end{align*}
\end{fleqn}
It now follows that
\begin{equation}
\label{outstate}
\rho^{A B^n} = \sum_{2^n {\mathrm{terms}}}
\alpha_{k,n} \times \zeta_{i_1,...,i_{n-k}}^{AB_{i_1} \cdots B_{i_n}},
\end{equation}
where $\alpha_{k,n} = (1-p)^{n-k} p^k$.
To prove the strong converse, we find an upper bound for $K_{\lambda}(A \rangle B^n)$.
We assume that $\rho^{A A^{\prime n}}$ is a maximally entangled state with a Schmidt
rank of $d_A^n$ where $d_A = |A^\prime|$. Note that this is the capacity-achieving input
for this channel and $Q(\mathcal{N}) = (1-2p)^+ \ln d_A$ is the single-letter
quantum capacity for this channel \cite{bennett-1997} (see also Ref. \cite{wilde-book}).
Note that $d_A^k \times \rho^{A A^\prime_1 \cdots A^\prime_{n-k}}$ is a projector of
rank $d_A^k$ and $\rho^{A^\prime_1 \cdots A^\prime_{n-k}}$ is
the maximally mixed state.
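This projector statement can be verified numerically for the smallest nontrivial case; the NumPy sketch below is ours, with illustrative values $d_A=2$, $n=2$, $k=1$.
\begin{verbatim}
# Hedged NumPy check: trace out k = 1 primed system of a maximally
# entangled state of A with A'_1 A'_2 (d_A = 2, n = 2) and verify
# that d_A^k * rho is a projector of rank d_A^k.
import numpy as np

dA, n, k = 2, 2, 1
psi = np.zeros((dA,)*(2*n))            # axes (a1, a2, p1, p2)
for i1 in range(dA):
    for i2 in range(dA):
        psi[i1, i2, i1, i2] = 1.0/dA**(n/2)

rho = np.einsum('abcp,defp->abcdef', psi, psi)   # trace over p2
rho = rho.reshape(dA**(n + 1), dA**(n + 1))      # state on A A'_1

P = dA**k * rho
print(np.allclose(P @ P, P))           # expected: True  (projector)
print(round(np.trace(P).real))         # expected: 2     (rank d_A^k)
\end{verbatim}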
\begin{theorem}
\label{erasureconverse}
The strong converse holds for the quantum erasure channel for the above
chosen maximally entangled channel inputs.
\end{theorem}
\begin{proof}
Note the following set of inequalities for $s = \lambda^{-1}-1$, $\lambda \in (1,2]$
\begin{fleqn}
\begin{align}
K_{\lambda}(A \rangle B^n) &\stackrel{a}= -\frac{1}{s}
\ln \mathrm{Tr}\left[\mathrm{Tr}_{A}(\rho^{AB^n})^\lambda\right]^{\frac{1}{\lambda}} \nonumber \\
& \stackrel{b} = -\frac{1}{s} \ln \sum_{2^n {\mathrm{terms}}}
\alpha_{k,n}\mathrm{Tr}\left[\mathrm{Tr}_A(\zeta_{i_1,...,i_{n-k}}^{AB_{i_1} \cdots B_{i_n}})^\lambda\right]^\frac{1}{\lambda} ~~~~~~ \nonumber \\
& \stackrel{c} \leq -\frac{1}{s} \ln \Bigg\{ \sum_{2^n {\mathrm{terms}}}
\alpha_{k,n} \nonumber \\
& ~~~~~~~~~~ \exp \left[\frac{-K_{\lambda}
\left(A \rangle A^\prime_{i_1} \cdots A^\prime_{i_{n-k}}\right)}{s} \right] \Bigg\} \,, \label{last}
\end{align}
\end{fleqn}
where $a$ follows from Lemma \ref{qsibson}, $b$ follows from \eqref{outstate} and the orthogonality of the $\zeta$'s, and $c$ follows because $K_{\lambda}$ satisfies monotonicity and Lemma \ref{qsibson}. Using the fact that
$d_A^k\times\rho^{AA^\prime_{i_1} \cdots A^\prime_{i_{n-k}}}$
is a projector of rank $d_A^k$, we get
$K_{\lambda}(A \rangle B^n) \leq n E_0(s)/s$ where we define (with some abuse of notation)
\[
E_0(s):= -\ln \Big[(1-p) d_A^{-s} + p d_A^{s} \Big]
\]
and $E_0(0)=0$.
Using \eqref{temp3002}, we have
\[
\mathbb{F} \leq \exp \left\{ n \left[ s\mathcal{R} - E_0(s) \right] \right\}.
\]
Furthermore, for $p \in [0,1/2]$,
\[
\lim_{s \uparrow 0} \frac{E_0(s)}{s} = Q(\mathcal{N}).
\]
Hence, for all $\mathcal{R} > Q(\mathcal{N})$, $\exists$ $s \in [-1/2,0)$ s.t.
$\mathcal{R} - E_0(s)/s$ $> 0$, and thus the strong converse holds.
For $p > 1/2$, $E_0^\prime(0)$ $< 0$ and hence, using similar arguments
as above, for any $\mathcal{R}>0$, the strong converse holds.
\end{proof}
An alternate proof of Theorem \ref{erasureconverse} using the hockey stick divergence
is provided in the Appendix.
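The two analytic facts used in the proof, $E_0(0)=0$ and $\lim_{s \uparrow 0} E_0(s)/s = (1-2p)\ln d_A$, are quickly verified symbolically (our sketch):
\begin{verbatim}
# Hedged SymPy check of E_0(s) = -ln[(1-p) d^(-s) + p d^(s)].
import sympy as sp

s = sp.symbols('s')
p = sp.symbols('p', nonnegative=True)
d = sp.symbols('d_A', positive=True)

E0 = -sp.log((1 - p)*d**(-s) + p*d**s)
print(E0.subs(s, 0))                                            # expected: 0
print(sp.simplify(sp.limit(E0/s, s, 0) - (1 - 2*p)*sp.log(d)))  # expected: 0
\end{verbatim}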
To summarise our results, we have given an exponential upper bound on the
reliability of quantum information transmission. The bound is fundamental in
the same vein as the bounds known for transmission of classical information
across classical/quantum channels (see Refs.
\cite{arimoto-1973-converse, ogawa-1999-converse, konig-2009-converse})
and holds under general conditions. We then apply
our bound to yield the first known example for exponential decay of reliability at
rates above capacity for quantum information transmission.
The authors gratefully acknowledge the comments by A. Winter.
\section{Introduction}
Graded Lie algebras or Lie superalgebras ({\it i.e.} algebras having commuting
and anti-commuting generators) were at one time considered as models for a
more complete unified electroweak theory \cite{dondi} \cite{neeman} \cite{fairlie}
\cite{squires} \cite{taylor1} as well as Grand Unified Theories
\cite{taylor}. Such graded algebras had many attractive features such as including
both vector and scalar bosons within the same theory, fixing the Weinberg angle,
and in some formulations the mass of the Higgs. However these graded algebras had
shortcomings \cite{eccle} \cite{eccle1} such as giving rise to negative kinetic energy terms
for some of the gauge fields when the graded trace or supertrace was used.
In this paper we point out that this negative kinetic energy feature
of the original graded algebras can be used to
construct a model for phantom energy \cite{caldwell}. In addition to the phantom field
there are other fields which arise in this model which could act as dark matter. The
advantage of the combined phantom energy/dark matter model presented here is that
it is derived from a modified gauge principle ({\it i.e.} the gauge principle applied
to graded algebras) rather than being introduced phenomenologically. This feature
fixes the parameters, such as the coupling between the phantom energy and dark
matter, that are free in more phenomenological models.
Phantom energy is a form of dark energy which has a ratio of density to pressure less than
$-1$ {\it i.e.} $w = p / \rho < -1$. Dark energy in general is a cosmological
``fluid'' with $w < -1/3$, which gives rise to an accelerated
cosmological expansion. Dark energy was proposed to explain the accelerated
expansion that was observed in studies of distant type Ia supernovae \cite{riess} \cite{riess1}
\cite{perlmutter}. There are various proposals as to the nature of dark energy
which include a small, positive cosmological constant, quintessence models \cite{zlatev},
brane world models \cite{ddg} \cite{deff}, Chaplygin gas \cite{kam}, k-essence \cite{gonzalez1},
axionic tensor fields \cite{gonzalez2} and others. A good review of the
current hypotheses as to the nature of dark energy and a guide to the literature can be
found in reference \cite{sahni}. Phantom energy is simply an extreme form of dark energy. The simplest
model for phantom energy involves a scalar field with a negative kinetic energy term \cite{caldwell}
\begin{equation}
\label{scalar-pe}
{\mathcal L}_p = -\frac{1}{2} (\partial _\mu \phi) (\partial ^\mu \phi) - V(\phi)
\end{equation}
The negative sign in front of the kinetic energy term makes this an
unusual field theory. Such theories with negative kinetic energies have been
investigated theoretically starting with \cite{bronn} \cite{ellis} where they were used to
study wormhole solutions. Other papers considering scalar fields with
negative kinetic energies can be found in \cite{nega-ke} \cite{picon}
\cite{lobo} \cite{sushkov}. However these theoretical
studies did not garner much attention because of various problems with
negative kinetic energies. Quantum mechanically such a field theories
violate either conservation of probability or it has no
stable vacuum state due to an energy density that is unboundedly negative.
Nevertheless such unusual field theories are not absolutely ruled out
\cite{caldwell}, but one can place significant constraints on them \cite{cline}.
Despite the theoretical problems of a scalar field with a negative kinetic energy
term the reason to consider such a strange field theory is that recent
observations give $-1.48 < w < -0.72$ \cite{hann} \cite{melch} \cite{knop} \cite{sper}
\cite{tegmark} and thus favor $w < -1$. A very recent
comparison of data from various sources can be found in \cite{jass}.
The result $w<-1$ coming from the Lagrangian in \eqref{scalar-pe} not only depends on the
negative kinetic energy term, but also requires that the potential, $V(\phi )$, be present
and satisfy some conditions. From \eqref{scalar-pe} one can calculate $p$ and $\rho$
in the standard way as
\begin{equation}
\label{p-rho}
\rho = T_{00} = -\frac{1}{2}{\dot \phi} ^2 + V(\phi ) \; ; \hspace{2cm}
p = - T_{ii} = -\frac{1}{2}{\dot \phi} ^2 - V(\phi )
\end{equation}
where it is assumed that the scalar field is spatially homogeneous enough so that only the
time variation is important. Using \eqref{p-rho} to calculate $w$ gives
\begin{equation}
\label{w}
w = \frac{p}{\rho} = \frac{-\frac{1}{2}{\dot \phi} ^2 - V(\phi )}{-\frac{1}{2}{\dot \phi} ^2 + V(\phi )}
\end{equation}
If $V (\phi ) >0$ and satisfies $ \sqrt { 2 V (\phi ) } > | {\dot \phi} |$ then one has $w <-1$, since in that case $\rho > 0$ while $\rho + p = -{\dot \phi} ^2$ is negative for ${\dot \phi} \neq 0$. For example, ${\dot \phi} ^2 = 1/4$ and $V = 1$ give $\rho = 7/8$, $p = -9/8$ and $w = -9/7 < -1$.
We will show that it is possible, using graded algebras, to construct a field theory that satisfies
these conditions and so gives rises to phantom energy with $w <-1$. Unlike other models, the
negative kinetic term comes from the structure of the graded algebras rather than being put
in by hand. In addition there are other fields which could play the role of dark matter.
\section{SU(2/1) algebra}
We briefly review the graded algebra SU(2/1), which we will use to construct the phantom energy
model in the next section. The basic idea of using graded algebras to give phantom energy
works for larger graded algebras like SU(N/1) with $N > 2$. We have taken SU(2/1) for
simplicity.
We use the representation for SU(2/1) which consists of the following eight
$3 \times 3$ matrices:
\begin{align*}
EVEN: \quad T_{1} & =\frac{1}{2}\left(
\begin{array}
[c]{ccc}%
0 & 1 & 0\\
1 & 0 & 0\\
0 & 0 & 0
\end{array}
\right) ,\quad T_{2}=\frac{1}{2}\left(
\begin{array}
[c]{ccc}%
0 & -i & 0\\
i & 0 & 0\\
0 & 0 & 0
\end{array}
\right) ,\quad T_{3}=\frac{1}{2}\left(
\begin{array}
[c]{ccc}%
1 & 0 & 0\\
0 & -1 & 0\\
0 & 0 & 0
\end{array}
\right) \quad T_{8}=\frac{1}{2}\left(
\begin{array}
[c]{ccc}%
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 2
\end{array}
\right)
\\
ODD: \quad T_{4} & =\frac{1}{2}\left(
\begin{array}
[c]{ccc}%
0 & 0 & 1\\
0 & 0 & 0\\
1 & 0 & 0
\end{array}
\right) ,\quad T_{5}=\frac{1}{2}\left(
\begin{array}
[c]{ccc}%
0 & 0 & -i\\
0 & 0 & 0\\
i & 0 & 0
\end{array}
\right) ,\quad T_{6}=\frac{1}{2}\left(
\begin{array}
[c]{ccc}%
0 & 0 & 0\\
0 & 0 & 1\\
0 & 1 & 0
\end{array}
\right) , \quad T_{7} =\frac{1}{2}\left(
\begin{array}
[c]{ccc}%
0 & 0 & 0\\
0 & 0 & -i\\
0 & i & 0
\end{array}
\right) .
\end{align*}
Except for $T_8$ this is the standard, fundamental
representation of SU(3). The matrices on the first line above ({\it i.e.}
$T_1, T_2, T_3, T_8$) are the even generators, and those on the second line
({\it i.e.} $T_4, T_5, T_6, T_7$) are odd generators. The even generators satisfy
commutation relationships among themselves which can be written symbolically as
$[EVEN, EVEN] = EVEN$. Mixtures of even and odd generators satisfy commutators of
the form $[EVEN, ODD] = ODD$. Finally the odd generators satisfy anti-commutation
relationships of the form $\{ ODD, ODD \} = EVEN$. The further details of the SU(2/1)
graded algebra can be found in the paper by Dondi and Jarvis \cite{dondi} or
in Ecclestone \cite{eccle} \cite{eccle1}. The odd generators above are different than those
usually taken in the literature. The connection of the odd generators above with
those in \cite{dondi} is given by ${\bar Q} ^1 , Q_1 = T_4 \pm i T_5$
and ${\bar Q} ^2 , Q_2 = T_6 \pm i T_7$. In the rest of the article we will use the
convention that generators with indices from the middle of the alphabet ($i, j, k$)
are the even generators, $T_1, T_2, T_3, T_8$, while indices from the beginning of the
alphabet ($a, b, c$) are the odd generators $T_4, T_5, T_6, T_7$.
For the graded algebra one replaces the concept of the trace by the supertrace.
For SU(2/1) this means that one writes some general element of the group as
\begin{align*}
\quad M =\left(
\begin{array}
[c]{cc}%
A_{2 \times 2} & B_{2 \times 1} \\
C_{1 \times 2} & d_{1 \times 1} \\
\end{array}
\right) .
\end{align*}
The subscripts indicate the size of the sub-matrix. The supertrace is now defined as
\begin{equation}
\label{supertrace}
{\operatorname {str}}(M) = {\operatorname {tr}} [A] - {\operatorname {tr}}[d]
\end{equation}
which differs from the regular trace due to the minus sign in front of $d$.
In the next section we will need the supertraces of the various products of the
eight generators $(T_i , T_a)$, thus we collect these results here. For products of
even generators we have
\begin{equation}
\label{even}
{\operatorname {str}}(T_i T_j) = \delta _{ij} \frac{1}{2} \quad \text{except} \quad
{\operatorname {str}}(T_8 T_8) = -\frac{1}{2}
\end{equation}
for the odd generators we have
\begin{equation}
\label{odd}
{\operatorname {str}} (T_4 T_5) = -{\operatorname {str}}(T_5 T_4) = \frac{i}{2} , \quad
{\operatorname {str}} (T_6 T_7) = -{\operatorname {str}}(T_7 T_6) = \frac{i}{2}
\end{equation}
All other supertraces of the product of two matrices are zero.
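All of these relations are elementary matrix identities; the following NumPy sketch (ours) checks the nonzero supertraces in Eqs.\ \eqref{even} and \eqref{odd}, together with one of the commutators used in the next section.
\begin{verbatim}
# Hedged NumPy check of the supertrace relations above and of the
# sample commutator [T_4, T_8] = i T_5 / 2.
import numpy as np

def M(rows):
    return 0.5*np.array(rows, dtype=complex)

T = {1: M([[0,1,0],[1,0,0],[0,0,0]]),  2: M([[0,-1j,0],[1j,0,0],[0,0,0]]),
     3: M([[1,0,0],[0,-1,0],[0,0,0]]), 8: M([[1,0,0],[0,1,0],[0,0,2]]),
     4: M([[0,0,1],[0,0,0],[1,0,0]]),  5: M([[0,0,-1j],[0,0,0],[1j,0,0]]),
     6: M([[0,0,0],[0,0,1],[0,1,0]]),  7: M([[0,0,0],[0,0,-1j],[0,1j,0]])}

def strace(X):                         # supertrace: tr(A) - tr(d)
    return X[0,0] + X[1,1] - X[2,2]

print(strace(T[8] @ T[8]))             # expected: (-0.5+0j)
print(strace(T[4] @ T[5]))             # expected: 0.5j
print(np.allclose(T[4] @ T[8] - T[8] @ T[4], 0.5j*T[5]))  # expected: True
\end{verbatim}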
\section{Phantom energy from SU(2/1)}
In \cite {dondi} vector fields were associated with the even
generators and scalar fields with the odd generators as
\begin{equation}
\label{graded-alg}
A_\mu = i g A_\mu ^i T^{even} _i \; \; , \quad \phi = -g \varphi ^a T^{odd} _a
\end{equation}
The fields $A_\mu ^i$ are regular commuting fields while $\varphi ^a$ are Grassmann fields.
In block form one can write \eqref{graded-alg} as
\begin{align*}
\quad A_M =\left(
\begin{array}
[c]{ccc}%
A_\mu ^3 + A_\mu ^8 & A_\mu ^1 - i A _\mu ^2 & \varphi ^4 - i \varphi ^5\\
A_\mu ^1 + i A_\mu ^2 & -A_\mu ^3 + A_\mu ^8 & \varphi ^6 - i \varphi^7\\
\varphi ^4 + i \varphi ^5 & \varphi ^6 + i \varphi ^7 & 2 A_\mu ^8
\end{array}
\right) .
\end{align*}
In this fashion, {\it and by using the regular trace}, Dondi and
Jarvis \cite{dondi} showed that the Lagrangian
\begin{equation}
\label{g-lag}
{\mathcal L} = \frac{1}{2 g^2} {\operatorname {tr}} ( F_{MN} F^{MN} ) , \quad F_{MN} = \partial_M A_N
-\partial _N A_M + [A_M, A_N] ,
\end{equation}
reduced to an $SU(2) \times U(1)$ Yang-Mills Lagrangian for $A_\mu$ and a
Higgs-like Lagrangian for $\phi$. In \eqref{g-lag} we use a different overall
sign for the Lagrangian as compared to \cite{dondi}. This comes because we
have chosen different factors of $i$ in the vector potentials defined
below in \eqref{graded-alg2}. This was a more unified electroweak
theory since $SU(2/1)$ is simple so that the Weinberg angle was fixed rather
than being a parameter. However, if in \eqref{g-lag} one used the correct
SU(2/1) invariant supertrace then the Yang-Mills part of the reduced Lagrangian
would have the wrong sign for the kinetic term for the $U(1)$ gauge field
and the kinetic energy term for the scalar field
would be lost. In \cite{eccle} \cite {eccle1} this was used to develop arguments against the
use of SU(2/1) as a unified electroweak theory.
Here we would like to use these apparent negative features of the graded algebras
to construct a model for phantom energy. Instead of making the association between
even/odd generators and vector/scalar fields made in \eqref{graded-alg} we will make
the opposite choice
\begin{equation}
\label{graded-alg2}
A_\mu = i g A_\mu ^a T^{odd} _a \; \; , \quad \phi = - g \varphi ^i T^{even} _i
\end{equation}
Because of the reversal of roles relative to \eqref{graded-alg} the fields $A_\mu ^a$
are Grassmann fields while $\varphi ^i$ are regular, commuting fields. Then taking the
correct SU(2/1) invariant supertrace we find that one of the scalar fields develops
a negative kinetic energy term in addition to having a potential term which is positive
definite. Thus the graded algebra gives rise to a phantom field.
With the choice in \eqref{graded-alg2} the Lagrangian in \eqref{g-lag} reduces as
follows
\begin{equation}
\label{g-lag2}
{\mathcal L} = \frac{1}{2 g^2} {\operatorname{str}} ( F_{MN} F^{MN} ) =
\frac{1}{2g^{2}}{\operatorname{str}}\left[ \left( \partial_{\lbrack\mu}A_{\nu]}+[A_{\mu},A_{\nu}]\right)
^{2}\right] +\frac{1}{g^{2}}{\operatorname{str}}\left[ \left( \partial
_{\mu}\phi + [A_{\mu},\phi ]\right) ^{2}\right]
\end{equation}
We have introduced the notation
$\partial_{\lbrack\mu}A_{\nu]} = \partial _\mu A_\nu - \partial _\nu A_\mu$. Note
that in comparison to other works such as \cite{dondi} and \cite{eccle} \cite{eccle1} we have not
introduced extra Grassmann coordinates, $\zeta^\alpha$ in addition to the normal
Minkowski coordinates $x^\mu$. Thus in \cite{dondi} and \cite{eccle} \cite{eccle1} coordinates and
indices ran over six values -- four Minkowski and two Grassmann. The final result in \eqref{g-lag2}
can be obtained from \cite{dondi} by dropping the Grassmann coordinates.
We first focus on the scalar term in \eqref{g-lag2}. Inserting $\phi$ and $A_\mu$ from
\eqref{graded-alg2} into the last term in \eqref{g-lag2} we find
\begin{eqnarray}
\label{scalar}
{\mathcal L}_S & = & \frac{1}{g^{2}}{\operatorname{str}}\left[
\left( \partial_{\mu}\phi+[A_{\mu},\phi]\right) ^{2}\right] \nonumber \\
& = & {\operatorname{str}}\left[ \left( \partial_{\mu}\varphi
^{8}T_{8}+igA_{\mu}^{a}\varphi^{8}[T_{a},T_{8}]\right) ^{2}\right]
+ {\operatorname{str}}\left[ \left( \partial_{\mu}\varphi
^i T_i +igA_{\mu}^{a}\varphi^i[T_{a},T_i]\right) ^{2}\right]
\end{eqnarray}
We now show that the first term in \eqref{scalar} takes the form of a phantom energy field.
Expanding the first term in \eqref{scalar} gives
\begin{eqnarray}
\label{phantom}
\mathcal{L}_{Ph} & = & {\operatorname{str}}\left[ \left(
\partial_{\mu}\varphi^{8}T_{8}+igA_{\mu}^{4}\varphi^{8}[T_{4},T_{8}]+igA_{\mu
}^{5}\varphi^{8}[T_{5},T_{8}] + igA_{\mu}^{6}\varphi^{8}[T_{6},T_{8}]
+ igA_{\mu}^{7}\varphi^{8}[T_{7},T_{8}] \right) ^{2}\right] \nonumber \\
& = & {\operatorname{str}}\left[ \left( \partial_{\mu}\varphi
^{8}T_{8}-gA_{\mu}^{4}\varphi^{8}T_{5}/2+gA_{\mu}^{5}\varphi^{8}%
T_{4}/2 -gA_{\mu}^{6}\varphi^{8}T_{7}/2 + gA_{\mu}^{7}\varphi^{8}%
T_{6}/2\right) ^{2}\right]
\end{eqnarray}
We have used the representation of the SU(2/1) matrices from the previous
section to evaluate the commutators. Using the supertrace results from \eqref{even}
and \eqref{odd} the expression in \eqref{phantom} yields
\begin{eqnarray}
\label{phantom2}
\mathcal{L}_{Ph} & = &-\frac{1}{2}(\partial_{\mu}\varphi^{8})^{2}+\frac{1}%
{4}g^{2}(\varphi^{8})^{2}{\operatorname{str}} \left[ \left( A_{\mu}
^{4}T_{5}-A_{\mu}^{5}T_{4} + A_{\mu}^{6}T_{7}-A_{\mu}^{7}T_{6} \right) ^2 \right] \nonumber \\
& = & -\frac{1}{2}(\partial_{\mu}\varphi^{8})^{2}- \frac{i}{8}g^{2}(\varphi
^{8})^{2} \left( A_{\mu}^{5}A^{4 \mu} - A_{\mu}^{4}A^{5 \mu} + A_{\mu}^{7} A^{6 \mu}
- A_{\mu}^{6}A^{7 \mu} \right) \\
& = & -\frac{1}{2}(\partial_{\mu}\varphi^{8})^{2}- \frac{1}{16}g^{2}(\varphi
^{8})^{2}\left( A_{\mu}^{+}A^{- \mu} - A_{\mu}^{-}A^{+ \mu} + B_{\mu}^{+}B^{- \mu}
- B_{\mu}^{-}B^{+ \mu} \right) \nonumber
\end{eqnarray}
with $A_{\mu}^\pm=A_{\mu}^{4} \pm iA_{\mu}^{5}$
and $B_{\mu}^\pm =A_{\mu}^{6} \pm iA_{\mu}^{7}$. Both $A_{\mu} ^\pm$
and $B_{\mu} ^\pm$ are Grassmann so the last line in \eqref{phantom2}
can be written
\begin{equation}
\label{phantom3}
\mathcal{L}_{Ph} = -\frac{1}{2}(\partial_{\mu}\varphi^{8})^{2}- \frac{1}{8}g^{2}(\varphi
^{8})^{2}\left( A_{\mu}^{+}A^{- \mu} + B_{\mu}^{+}B^{- \mu} \right)
\end{equation}
This is of the form of the phantom energy Lagrangian in \eqref{scalar-pe} but with the
potential involving not only the scalar field, $\varphi ^8$, but also the Grassmann vector fields,
$A_\mu ^\pm$ and $B_\mu ^\pm$. We will discuss these shortly. The minus sign in front of the kinetic
energy term comes from taking the SU(2/1) invariant supertrace rather
than the ordinary trace (see the second supertrace result in \eqref{even}).
We next focus on the other scalar fields, $\varphi ^i$, $i =1,2,3$ which come from the
second term in \eqref{scalar}. The calculation proceeds as in equations \eqref{phantom} -
\eqref{phantom2} but with $\varphi ^8$ replaced by $\varphi ^i$, $i= 1, 2, 3$. For example
for $\varphi ^1$ \eqref{phantom} becomes
\begin{equation}
\label{dm}
\mathcal{L}_{\varphi ^1} = {\operatorname{str}}\left[ \left( \partial_{\mu}\varphi
^{1}T_{1}+gA_{\mu}^{4}\varphi^{1}T_{7}/2-gA_{\mu}^{5}\varphi^{1}%
T_{6}/2 -gA_{\mu}^{6}\varphi^{1}T_{5}/2 + gA_{\mu}^{7}\varphi^{1}%
T_{4}/2\right) ^{2}\right]
\end{equation}
and \eqref{phantom2} becomes
\begin{eqnarray}
\label{dm2}
\mathcal{L}_{\varphi^1} & = & \frac{1}{2}(\partial_{\mu}\varphi^{1})^{2}- \frac{1}{16}g^{2}(\varphi
^{1})^{2}\left( A_{\mu}^{+}A^{- \mu} - A_{\mu}^{-}A^{+ \mu} + B_{\mu}^{+}B^{- \mu}
- B_{\mu}^{-}B^{+ \mu} \right) \nonumber \\
& = & \frac{1}{2}(\partial_{\mu}\varphi^{1})^{2}- \frac{1}{8}g^{2}(\varphi
^{1})^{2}\left( A_{\mu}^{+}A^{- \mu} + B_{\mu}^{+}B^{- \mu} \right)
\end{eqnarray}
There are two key points: the kinetic term for $\varphi ^1$ is positive since
${\operatorname{str}} (T_1 T_1) = + 1/2$, and the potential term is the same as
for $\varphi ^8$. The other two even scalar fields follow
the same pattern so that in total one can write
\begin{equation}
\label{dm3}
\mathcal{L}_{DM} = \frac{1}{2}(\partial_{\mu}\varphi^{i})^{2}- \frac{1}{8}g^{2}(\varphi
^{i})^{2}\left( A_{\mu}^{+}A^{- \mu} + B_{\mu}^{+}B^{- \mu} \right)
\end{equation}
where $i$ is summed from 1 to 3. Thus the total scalar field Lagrangian resulting from
\eqref{scalar} is the sum of \eqref{phantom3} and \eqref{dm3}. The scalar field in
\eqref{phantom3} has the ``wrong'' sign for the kinetic term and acts as a phantom
field. The scalar fields in \eqref{dm3} are ordinary scalar fields which we will interpret
as a dark matter candidate. The phantom field and dark matter fields are coupled
through the $A^\pm _\mu$ and $B^\pm _\mu$ fields. Thus our model provides a coupling
between phantom energy and dark matter. Other models in which dark/phantom energy
couples to dark matter have been considered \cite{cai}.
We will now examine the Grassmann vector fields, $A^4 _\mu ,
A^5 _\mu, A^6 _\mu , A^7 _\mu$. The final Lagrangian for these fields will
have a nonlinear interaction between the $A^{\pm}_\mu$ and $B^{\pm}_\mu$ fields.
In analogy with QCD we argue that these fields form permanently confined condensates like
$\langle A^4 _\mu A^5 _\mu \rangle$ or $\langle A^+ _\mu A^- _\mu \rangle$.
These then supply potential (mass-like)
terms for the phantom energy and scalar fields of \eqref{phantom3} and \eqref{dm3}.
This also avoids violation of the spin-statistics theorem since these condensates
have bosonic statistics (they are composed of two Grassmann fields) and integer
spin (they are composed of two integer spin fields).
Having a potential term is crucial for the interpretation of $\varphi ^8$ as a
phantom energy field, since for a massless, non-interacting scalar field reversing the
sign of the kinetic energy term does not lead to a phantom field with $w<-1$ as can be seen
from \eqref{w} if $V( \phi) =0$.
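As a quick consistency check, for a spatially homogeneous field the standard expressions
$\rho = -\frac{1}{2}\dot{\varphi}^{2}+V(\varphi)$ and $p = -\frac{1}{2}\dot{\varphi}^{2}-V(\varphi)$
for the reversed-sign kinetic term give
\begin{equation}
w = \frac{p}{\rho} = \frac{-\frac{1}{2}\dot{\varphi}^{2}-V(\varphi)}{-\frac{1}{2}\dot{\varphi}^{2}+V(\varphi)} ~,
\end{equation}
so that $w<-1$ requires $V(\varphi)>\frac{1}{2}\dot{\varphi}^{2}>0$, while $V=0$ gives $w=+1$.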
From \eqref{g-lag2} the vector part of the Lagrangian can be expanded as
\begin{eqnarray}
\label{vector}
\mathcal{L}_V & = & \frac{1}{2g^{2}}{\operatorname{str}}
\left[ \left( \partial_{\lbrack\mu}A_{\nu]}+[A_{\mu},A_{\nu}]\right)^{2}\right]
= - \frac{1}{2}{\operatorname{str}}
\left[ \left( \partial_{\lbrack\mu}A^a _{\nu]} T_a + ig A^a _\mu A^b _\nu \{ T_a,T_b \} \right)
^{2}\right] \nonumber \\
& = & - \frac{1}{2}{\operatorname{str}}
\left[ \left( \partial_{\lbrack\mu}A^a _{\nu]} T_a \right) ^2 \right]
+ \frac{g^2}{2} {\operatorname{str}}\left[ \left( A^a _\mu A^b _\nu \{ T_a,T_b \} \right)
^{2}\right] = \mathcal{L}_{V1} + \mathcal{L}_{V2}
\end{eqnarray}
The commutator has become an anticommutator due to the Grassmann nature of the $A^a _\mu$'s. Also
note that there is no cubic cross term between the derivative and anticommutator part. This
comes about since the anticommutator, $\{ T_a , T_b \}$ results in even generators, and
the supertrace between odd and even generators vanishes. $\mathcal{L}_{V1}$ is
a kinetic term for the fields and $\mathcal{L}_{V2}$ a potential term. We will now consider each
of these in turn.
The kinetic part can be written explicitly as
\begin{equation}
\label{v-kinetic}
\mathcal{L}_{V1} = - \frac{1}{2}{\operatorname{str}}
\left[ \left( \partial_{\lbrack\mu}A^4 _{\nu]} T_4 + \partial_{\lbrack\mu}A^5 _{\nu]} T_5
+\partial_{\lbrack\mu}A^6 _{\nu]} T_6 + \partial_{\lbrack\mu}A^7 _{\nu]} T_7\right) ^2 \right]
\end{equation}
Due to the property of the supertrace of the odd generators given in \eqref{odd} it is
only the cross terms between $T_4, T_5$ and $T_6, T_7$ which survive.
\begin{eqnarray}
\label{v-kinetic2}
\mathcal{L}_{V1} = - \frac{i}{2}
\left( \partial_{\lbrack\mu}A^4 _{\nu]} \partial_{\lbrack\mu}A^5 _{\nu]}
+\partial_{\lbrack\mu}A^6 _{\nu]} \partial_{\lbrack\mu}A^7 _{\nu]} \right)
= - \frac{1}{4}
\left( \partial_{\lbrack\mu}A^- _{\nu]} \partial_{\lbrack\mu}A^+ _{\nu]}
+\partial_{\lbrack\mu}B^- _{\nu]} \partial_{\lbrack\mu}B^+ _{\nu]} \right)
\end{eqnarray}
where we have used the anticommuting properties of the $A ^a _\mu$'s.
In the last step we have replaced the $A ^a _\mu$ by $A^\pm _\mu$ and
$B^\pm _\mu$. This kinetic part is reminiscent of the kinetic terms for
a charged ({\it i.e.} complex) vector field.
Next we work out the form of the interaction terms coming from
${\mathcal L}_{V2}$. We do this explicitly for $A^4 _\mu$; the
results for the other vector fields can be obtained in a similar manner.
The $A^a _\mu = A^4 _\mu$ part of $\mathcal{L}_{V2}$ expands as
\begin{equation}
\label{vector-int}
{\mathcal L}_{V2} = \frac{g^2}{2} {\operatorname{str}}\left[ \left( A^4 _\mu A^4 _\nu \{ T_4,T_4 \}
+ A^4 _\mu A^5 _\nu \{ T_4,T_5 \} + A^4 _\mu A^6 _\nu \{ T_4,T_6 \}
+ A^4 _\mu A^7 _\nu \{ T_4,T_7 \}
\right)^{2}\right]
\end{equation}
Using the explicit representations of the odd matrices we have $\{T_4 , T_4 \} =
(T_3 + T_8)/2$, $\{ T_4 , T_5 \} = 0$, $\{ T_4 , T_6 \} = T_1 /2 $,
$\{ T_4 , T_7 \} = -T_2 / 2$. Squaring and using the supertrace results
of \eqref{even} one finds that \eqref{vector-int} becomes
\begin{equation}
\label{vector-int2}
{\mathcal L}_{V2} = \frac{g^2}{16}\left( A^4 _\mu A^6 _\nu A^{4 \mu} A^{6 \nu}
+ A^4 _\mu A^7 _\nu A^{4 \mu} A^{7 \nu} \right)
\end{equation}
Note that there is no quartic term in $A^4 _\mu$ since the contributions from
$T_3$ and $T_8$ cancel. The contribution from $A^5 _\mu$ looks the same as
\eqref{vector-int2} but with $A^4 _\mu \rightarrow A^5 _\mu$. The $A^6 _\mu$
and $A^7 _\mu$ terms can be obtained by making the exchange $A^4 _\mu
\leftrightarrow A^6 _\mu$ and $A^5 _\mu \leftrightarrow A^7 _\mu$. Using the
Grassmann character of the $A^a _\mu$ 's one can see that the $A^4 _\mu$ and $A^6 _\mu$
contributions, and also the $A^5 _\mu$ and $A^7 _\mu$ contributions are the same.
In total the interaction part of the vector Lagrangian can be written as
\begin{eqnarray}
\label{vector-int3}
{\mathcal L}_{V2} & = & \frac{g^2}{8}\left( A^4 _\mu A^6 _\nu A^{4 \mu} A^{6 \nu}
+ A^4 _\mu A^7 _\nu A^{4 \mu} A^{7 \nu} + A^5 _\mu A^6 _\nu A^{5 \mu} A^{6 \nu}
+ A^5 _\mu A^7 _\nu A^{5 \mu} A^{7 \nu}\right) \nonumber \\
& = & \frac{g^2}{16} \left( A^+ _\mu B^+ _\nu A^{- \mu} B^{- \nu} + A^+ _\mu B^- _\nu
A^{- \mu} B^{+ \nu} \right)
\end{eqnarray}
In the last line we have written the interaction in terms of $A^{\pm} _\mu$ , $B^{\pm} _\mu$.
The total Lagrangian for the vector Grassmann fields is, ${\cal L}_{V1} + {\cal L}_{V2}$, where
${\cal L}_{V1}$ is a kinetic term and ${\cal L}_{V2}$ gives a nonlinear
interaction term between $A^\pm _\mu$ and $B^\pm _\mu$. We assume that the interaction
is strong enough that the fields, $A^\pm _\mu$ and $B^\pm _\mu$ are permanently confined into
condensates
\begin{equation}
\label{vev}
\langle A^+ _\mu A^{- \mu} \rangle = \langle B^+ _\mu B^{- \mu} \rangle = v
\end{equation}
Given the symmetry between the $A^{\pm}_\mu$ and $B^{\pm} _\mu$ fields we have
taken them to have the same vacuum expectation value. This conjectured condensation
is similar to the {\it gauge variant} mass dimension 2 condensate in regular
Yang-Mills theory, $\langle {\cal A}^a _\mu {\cal A}^{a \mu} \rangle$. Despite being
{\it gauge variant} this quantity has been shown \cite{boucaud} \cite{boucaud1} \cite{gubarev} \cite{gubarev1} to
have real physical consequences in
QCD. Here ${\cal A}^a _\mu$ is a normal SU(N) Yang-Mills field. In \cite{kondo} a BRST-invariant
mass dimension 2 condensate was constructed which was a combination of the quadratic
gauge field term -- $\langle {\cal A}^a _\mu {\cal A}^{a \mu} \rangle$ --
plus a quadratic Faddeev-Popov \cite{fp} ghost field term -- $i \alpha \langle {\cal C}^a {\bar{\cal C}}^a \rangle$ --
where $\alpha$ was a gauge parameter. In the Landau gauge, $\alpha =0$, this reduced to a
pure quadratic gauge field condensate $\langle {\cal A}^a _\mu {\cal A}^{a \mu} \rangle$.
Note that the ghost fields, ${\cal C}^a , {\bar{\cal C}}^a$, are scalar ({\it i.e.} spin 0)
Grassmann fields. This mass dimension 2 condensate gives the gluon a mass
\cite{ds} \cite{ds1} \cite{shakin}. Estimates have been made for
$\sqrt{\langle {\cal A}^a _\mu {\cal A}^{a \mu} \rangle}$ using lattice methods
\cite{boucaud} \cite{boucaud1} \cite{shakin}, analytical techniques \cite{dudal}, or some mixture of the two.
All these various methods give a condensate value in the range
$\sqrt{\langle {\cal A}^a _\mu {\cal A}^{a \mu} \rangle} \approx 1$ GeV.
Given the similarities between the regular gauge field condensate
of \cite{boucaud} \cite{boucaud1} \cite{gubarev} \cite{gubarev1} \cite{kondo} and that on the left
hand side of \eqref{vev} we can estimate the vacuum expectation value as
$v \approx 1$ GeV$^2$.
Now inserting these vacuum expectation values from \eqref{vev}
into \eqref{phantom3} yields
\begin{equation}
\label{phantom4}
\mathcal{L}_{Ph} = -\frac{1}{2}(\partial_{\mu}\varphi^{8})^{2}- \frac{v}{4}g^{2}(\varphi
^{8})^{2}
\end{equation}
This is of the form \eqref{scalar-pe} with $V(\varphi ^8) = \frac{v}{4}g^{2}(\varphi
^{8})^{2}$. This will give phantom energy with $w < -1$ if
$\frac{g}{2} |\varphi ^8 | {\sqrt{2 v}} > | \dot \varphi ^8 |$.
If the vacuum expectation value, $v$, changes over time it is possible
to cross into (out of) the phantom regime if $v$ increases (decreases).
Thus whether one has phantom energy or not would depend on the dynamical
evolution of $v$. Such models, where one crosses the ``phantom divide'', have been
considered in \cite{feng1} \cite{phantom-divide} \cite{andri} \cite{noji}. Usually in such models it is the sign in front of the
kinetic energy term that is modified, whereas in the present case it is a modification
of the potential which causes the transition between phantom and non-phantom phases.
Further extensions of these ``quintom" models can be found in \cite{feng2} \cite{feng3}
\cite{feng4} \cite{zhao} \cite{xia}.
Inserting the vacuum expectation values into the Lagrangian for the scalar fields
$\varphi ^1 , \varphi ^2 , \varphi ^3$, equation \eqref{dm3} becomes
\begin{equation}
\label{dm4}
\mathcal{L}_{DM} = \frac{1}{2}(\partial_{\mu}\varphi^{i})^{2}- \frac{v}{4}g^{2}(\varphi
^{i})^{2}
\end{equation}
The Lagrangian for these fields is for a standard, non-interacting scalar with mass
$m= \frac{g}{2}{\sqrt{2 v}}$. These massive scalar fields could be taken as a
candidate for cold dark matter if $m$ (i.e. $v$) is chosen appropriately. For example,
using the similarity between the condensate of \eqref{vev} and the mass dimension 2
condensate of \cite{boucaud} \cite{boucaud1} \cite{gubarev}
\cite{gubarev1} \cite{kondo} one might set $v \approx 1$ GeV$^2$.
This would give $m \approx 1$ GeV, making $\varphi ^i$ a cold dark matter candidate.
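Explicitly, matching the potential in \eqref{dm4} to a standard mass term,
$\frac{v}{4}g^{2}(\varphi ^{i})^{2} = \frac{1}{2}m^{2}(\varphi ^{i})^{2}$, gives
$m = \frac{g}{2}\sqrt{2v} = g\sqrt{v/2}$, so that $v \approx 1$ GeV$^2$ yields
$m \approx 0.7\, g$ GeV, {\it i.e.} $m \approx 1$ GeV for a coupling $g$ of order unity.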
The original Lagrangian we started with in \eqref{g-lag2} has no coupling to the
usual Standard Model fields except through gravity. This would certainly explain
why these phantom energy and dark matter fields have not been seen since they
could only be detected through their gravitational influence. However, if this
is the path nature chooses, it would be hard, if not impossible, to get any kind
of experimental signal from these phantom energy/dark matter candidates. One could introduce
some effective coupling between the phantom energy/dark matter fields of \eqref{g-lag2}
and the usual Standard Model fields. More rigorously, one might use some larger
SU(N/1) group, with some of the vector fields associated with the even generators
and some with the odd generators, and similarly for the scalar fields. In this way
it might be possible to have a new kind of ``Grand Unified Theory": from a single Lagrangian
one could have Standard Model gauge fields as well as new fields that would be phantom energy and
dark matter candidates, instead of extra Grand Unified gauge bosons.
The Grassmann vector fields are an odd feature of this model since they would violate the
spin-statistics theorem. These Grassmann vector fields are similar to the Faddeev-Popov ghosts \cite{fp},
which are scalar fields with Fermi-Dirac statistics. The Faddeev-Popov ghosts however are
not real particles in that they never appear as asymptotic states. In order to avoid having
the Grassmann vector fields violate the spin-statistics theorem, we have postulated that
the composite states, $A^+ _\mu A^{- \mu}$ and $B^+ _\mu B^{- \mu}$ are permanently confined so that
the particles associated with $A^\pm _\mu$ and $B^\pm _\mu$ never appear as asymptotic states.
Since the composites are ordinary fields (integer spin with bosonic statistics) violation
of the spin-statistics theorem is avoided. These vector fields act as a
second dark matter component in addition to the three scalar fields $\varphi ^i$.
There have been other recent proposals for dark matter candidates with non-standard
relationships between spin and mass dimension. In \cite{grumiller} \cite{grumiller1} a spin 1/2 dark matter
candidate was proposed which has mass dimension 1. In the present case our vector fields,
$A^{\pm} _\mu$, $B^{\pm} _\mu$, have the same mass dimension -- 1 -- and statistics --
fermionic -- as the dark matter candidate in \cite{grumiller} \cite{grumiller1}, and only differ in the
value of spin -- 1 versus 1/2.
\section{Summary and Conclusions}
We have given a model for phantom energy using a modification of the
graded Lie algebra models which attempted to give a more unified electroweak
theory, or Grand Unified Theories. Despite the interesting features of the original graded Lie
algebra models ({\it e.g.} prediction of the Weinberg angle and having
both vectors and scalars coming from the same Lagrangian) they
had shortcomings. Chief among these was that if one used the correct SU(N/1) invariant
supertrace then some of the vector fields had the wrong sign for the kinetic energy term in
the Lagrangian. In the original models the vector fields were associated with the even
generators of the algebra and the scalar fields were associated with the odd
generators. Here we took the reverse identification (scalar field $\rightarrow$ even generators
and vector field $\rightarrow$ odd generators) which led to the wrong sign kinetic energy
term coming from a scalar field rather than from a vector field. The wrong sign scalar field, $\varphi ^8$,
gives a model of phantom energy, while the other scalar fields, $\varphi ^i$, and the
vector fields, $A^a _\mu$, act as dark matter components. In the way our model is
formulated here all the fields are truly dark in that they have no coupling to any of
the Standard Model fields and would thus only be detectable via their gravitational interaction.
This would make the experimental detection of these dark fields impossible through
non-gravitational interactions. However the above is intended only as a toy model of how a phantom energy
field can emerge naturally from a gauge theory with a graded Lie algebra. A more experimentally testable
variation of the above toy model could have some coupling between
the scalar and vector fields of the present model and the Standard Model fields. Such a
coupling could be introduced in a phenomenological fashion via some {\it ad hoc} coupling term. A more interesting
option would be to consider some larger graded algebra, such as SU(N/1). Some of the fields could be given
the standard assignment of even or odd generators ({\it i.e.} as in \eqref{graded-alg}) while others could be given
the assignment in \eqref{graded-alg2}. The fields given the standard assignment would give standard gauge
fields, while fields given the non-standard assignment would give phantom energy and dark matter
fields. This would give a new type of ``Grand Unified Theory'' with the phantom energy and
dark matter fields replacing the extra gauge bosons of ordinary Grand Unified Theories.
Other authors \cite{wei} have used non-standard gauge groups such as SO(1,1) to give models of phantom energy.
An important feature of the above model is the assumption that the Grassmann
vector fields form permanently confined condensates.
This was crucial to our phantom energy model since it
leads to a condensate of the $A^\pm _\mu$ and $B^\pm _\mu$ fields. This in turn gave
a potential $V(\varphi ^8) = \frac{v}{4}g^{2}(\varphi ^{8})^{2} $
for the $\varphi ^8$ field which was of the correct form to allow $\varphi ^8$ to act
as phantom energy. Aside from the present application to phantom energy one
might try to use the above mechanism to generate standard symmetry breaking by starting with
a graded Lie algebra but using all vector fields rather than mixing vector and scalar fields. In
this way some of the vector fields would be standard vector fields, while others would be
Grassmann vector fields. By the above mechanism the Grassmann vector fields would form
condensates which would then give masses to the standard vector fields ({\it i.e.} one would have a
Higgs mechanism with only vector fields).
One additional avenue for future investigation is to see if one could have a phantom energy model
with the original graded Lie algebra models ({\it i.e.} with vector fields assigned to
even generators and scalars to odd) but using the supertrace. One would then have the problem
of some of the vector fields having the wrong sign in the kinetic term, but this might
then give a phantom energy model with a vector rather than scalar field.
\begin{flushleft}
{\bf Acknowledgments} DS acknowledges
the CSU Fresno College of Science and Mathematics for a sabbatical leave during
the period when this work was completed.
\end{flushleft}
\section{Introduction}
Networks are ubiquitous in the real world, and many real-world datasets take the form of networks such as social networks, citation networks, language networks and biological networks. Generally, networks can be divided into two categories: structural networks \cite{wang2016structural,xie2019tpne} and attributed networks \cite{chen2020relation,yu2020structured}. Compared with attributed networks, structural networks are more widely applicable, since they can be constructed from the most fundamental network structure information alone, without auxiliary information from node attributes.
In structural networks, node classification is one of the most typical learning tasks, which focuses on exploiting the node interactions to predict the missing labels of unlabeled nodes in a structural network. Many real world applications can be modeled as the node classification problem, such as profession identification \cite{tu2017prism} and persona classification \cite{kaul2020persona}.
Generally speaking, existing node classification methods can be divided into two categories: traditional methods \cite{neville2000iterative,yamaguchi2015omni} and network embedding based methods \cite{tang2015line,grover2016node2vec,xie2019sim2vec,dai2019adversarial, xie2019tpne}. Compared with the traditional methods that directly infer the posterior distribution of node labels from neighborhood information, network embedding based methods can achieve better performance by alleviating the curse of dimensionality for large-scale structural networks and avoiding cascading errors. However, nearly all the existing embedding based node classification methods in structural networks suffer from the following two problems: (1) they can hardly capture the actual category features hidden in the highly nonlinear network structure, because of the linear inseparability problem in low-dimensional space; (2) they only preserve network structure information in the network embedding, without node label information.
To address the above problems, we propose a novel Deep Kernel Supervised Hashing, called DKSH, to learn node representations for node classification in structural networks. Specifically, deep multiple kernel learning is first employed to map nodes into a suitable Hilbert space, which deals with the linear inseparability of category features. Then, instead of only considering structural similarity and ignoring category similarity between two nodes, a novel similarity matrix is designed to merge both network structure information and node label information. Supervised by the similarity matrix, the learned hashing representations of nodes can simultaneously preserve the two kinds of information from the learned Hilbert space. Extensive experiments show that the proposed method significantly outperforms the state-of-the-art baselines over three real world benchmark datasets.
The main contributions of our work are summarized as follows:
\begin{itemize}
\item We design a deep kernel hashing that maps nodes into a suitable Hilbert space, dealing with the linear inseparability of category features so as to generate good-quality hashing representations of nodes for node classification.
\item We define a novel similarity matrix in the network embedding area to merge both network structure information and node label information. Supervised by the similarity matrix, the proposed method can simultaneously incorporate the two kinds of information into the network embedding.
\item Extensive experiments over three real world benchmark datasets show that the proposed method significantly outperforms the state-of-the-art baselines.
\end{itemize}
The rest of the paper is arranged as follows: In Section 2, we first review the related work on node classification and kernel hashing. Then, the details of our DKSH are presented in Section 3. Moreover, in Section 4, we present the experimental results of node classification over three real world benchmark datasets. Finally, conclusions summarizing our work are given in Section 5.
\section{Related Work}
\subsection{Node Classification in Structural Networks}
In this section, we discuss the recent trends and some state-of-the-art node classification methods in structural networks instead of attributed networks. Generally, existing node classification methods can be divided into two categories: traditional methods and embedding based methods.
Traditional methods \cite{neville2000iterative,yamaguchi2015omni} pose node classification as an inference in an undirected Markov network, and then use iterative approximate inference algorithms to directly compute the posterior distribution of labels given the network structure. For example, OMNI-Prop \cite{yamaguchi2015omni} assigns each node a prior belief about its label and then updates the label using the evidence from its neighbors, i.e., if most of its neighbors have the same label, then the node is assumed to have that label as well. However, these methods have a high computational complexity and suffer from the curse of dimensionality for large-scale structural networks; meanwhile they cannot avoid cascading errors.
Different from traditional methods, network embedding based methods learn a classifier from the learned low-dimensional node representations, which can achieve better performance by alleviating the curse of dimensionality for large-scale structural networks and avoiding cascading errors. Nowadays, network embedding based methods have become the recent trend for node classification. Essentially, this type of method adopts the following three steps \cite{cui2018survey}: (1) a network embedding algorithm, such as a deep neural network \cite{perozzi2014deepwalk,wang2016structural} or matrix factorization \cite{ou2016asymmetric}, is applied to learn low-dimensional node representations that preserve rich network structure information; (2) the nodes with known labels are used as the training set; (3) a classifier, such as a support vector classifier \cite{dai2019adversarial} or a logistic regression classifier \cite{grover2016node2vec}, is learned from the representations and labels of the training nodes to perform node classification. The representative methods include DeepWalk \cite{perozzi2014deepwalk}, node2vec \cite{grover2016node2vec}, SDNE \cite{wang2016structural} and Dwns \cite{dai2019adversarial}. DeepWalk first adopts random walks to extract the local structure information of a node into its representation and then uses a one-vs-rest logistic regression for classification. node2vec adopts a flexible method for sampling node sequences to strike a balance between local and global structure information in the network embedding process and then also uses a one-vs-rest logistic regression classifier. SDNE first adopts a deep autoencoder to simultaneously extract both the first-order and the second-order similarity into node representations and then uses a support vector classifier for classification. Dwns improves DeepWalk with generative adversarial network (GAN) based regularization to generate better node representations and then also uses a support vector classifier. With these methods, the performance of node classification has been greatly improved.
The major problems of previous methods are that: (1) they can hardly capture the actual category features of a node because of the linear inseparability problem; (2) they only preserve network structure information in the network embedding without considering node labels. Among these methods, node2hash \cite{wang2018feature} is the closest to our DKSH, as it also uses a kernel hashing method to obtain node representations from network information. However, it adopts a shallow kernel hashing method \cite{shi2009hash}, which still suffers from the above two problems. In contrast, the proposed method adopts a deep kernel supervised hashing, which addresses both problems well. Before introducing our deep kernel supervised hashing, the existing kernel hashing methods are briefly described in the next subsection.
\begin{figure*}[t]
\centering
\includegraphics[width=15.5cm,height=8.2cm]{figure1.pdf}
\caption{The conceptual framework of Deep Kernel Supervised Hashing (DKSH). The framework contains three parts: (1) Preprocessing: extracting the structure matrix from random walks (bold red) and then deriving the similarity matrix; (2) Learning the deep kernel matrix: training the DKL built on the structure matrix; (3) Optimized deep kernel hashing: mapping nodes into the learned Hilbert space and then learning hashing functions $H$ supervised by the similarity matrix.}
\label{fig1}
\end{figure*}
\subsection{Kernel Hashing}
Kernel hashing is a useful method for nonlinear data, which maps the original data into a suitable Hilbert space and then learns hashing representations from this space. Generally, existing kernel hashing methods can be categorized into single kernel hashing \cite{he2010scalable,wang2018feature} and multiple kernel hashing \cite{liu2014multiple}. Compared with multiple kernel hashing, which is designed for multi-feature data, single kernel hashing is the most fundamental method in kernel hashing. Thus, we take single kernel hashing as an example to introduce kernel hashing.
Single kernel hashing is a useful method for classification tasks, which learns hashing functions to map data from the Hilbert space to the hashing space. The formulation of the hashing functions is:
\begin{equation}
\begin{split}
B_{mi} = h_{m}(X_i) = sign(V_m^T \varphi(X_i)-b_m)
\end{split}
\label{eq1}
\end{equation}
with
$$
V_m = \sum_{r=1}^R W_{rm} \varphi(X_r)
$$
where $h_{m}$ gives the $m$-th hashing bit of data $X_i$, $\varphi$ is the function that maps the original data to the Hilbert space, and $b_m$ is a threshold scalar. Besides, $V_m$ is the $m$-th hyperplane vector in the Hilbert space, which is a linear weighted combination of $R$ landmarks, i.e., $X_r, r = 1, \cdots, R$, with weight matrix $W_{R \times M}$. Note that the landmarks are cluster centers produced by clustering or random selection.
According to the definition of kernel matrix $K_{ij}=\varphi(X_i)^T\varphi(X_j)$, Equation~\eqref{eq1} can be rewritten in a kernel form:
\begin{equation}
\begin{split}
B_{i} = sign(W^T K_i-b)
\end{split}
\label{eq2}
\end{equation}
where $W_{R \times M}$ is the weight matrix of the $R$ landmarks and $K_i$ is the $i$-th column of a designed kernel matrix $K_{R \times N}$. In this way, the formulation of the hashing functions is obtained, which can be used to learn nonlinear features from many data modalities, especially images \cite{he2010scalable}.
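For concreteness, the following minimal NumPy sketch evaluates the hashing map of Equation~\eqref{eq2}; the array shapes and the tie-breaking of $sign(0)$ are our own conventions rather than part of the original formulation.
\begin{verbatim}
import numpy as np

def hash_codes(K, W, b):
    # Equation (2): B = sign(W^T K - b).
    # K: (R, N) kernel between R landmarks and N data points,
    # W: (R, M) landmark weights, b: (M,) threshold scalars.
    B = np.sign(W.T @ K - b[:, None])
    B[B == 0] = 1                   # break sign(0) ties to +1
    return B                        # (M, N) codes in {-1, +1}
\end{verbatim}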
Nevertheless, both single kernel hashing and multiple kernel hashing adopt shallow kernels \cite{zhuang2011two}, which are often powerless to capture the actual features of highly nonlinear network data.
\section{Deep Kernel Supervised Hashing}
In this section, we first describe the problem formulation of node classification in structural networks, and then introduce the details of the proposed Deep Kernel Supervised Hashing (DKSH) method. The conceptual framework of DKSH is shown in Figure~\ref{fig1}.
\subsection{Problem Formulation}
Formally, an undirected network is denoted as $G=(V,E,Y)$, where $V=\{v_i\}_{i=1}^N$ represents the set of $N$ nodes, $E=\{e_{ij}\}_{i,j=1}^N$ represents the set of edges between nodes, and $Y$ denotes the label set. If $v_i$ and $v_j$ are linked by an edge, $e_{ij} = 1$; otherwise, $e_{ij} = 0$. Network hashing embedding is to learn a set of hash functions $H=\{h_m\}_{m=1}^M$, which are used to map each node in $G$ into a low-dimensional hashing representation $B_i \in \{-1, 1\}^M$, where $M$ is the dimension of the hashing representations.
Given the labeled node set $V_L$ and the unlabeled node set $V_N$, where each node $v_i \in V_L$ is associated with a label $y_i \in Y$ while nodes in $V_N$ are not, our goal is to predict the missing labels of the unlabeled nodes $V_N$ with the learned hashing representations $B$.
\begin{algorithm}[t]
\caption{Building Structure Matrix}
\begin{algorithmic}[1]
\REQUIRE Network denoted as $G=(V, E, Y)$, window size $p$, walk length $l$ and walks per nodes $\gamma$.
\ENSURE Structure matrix $P$.
\STATE Initialize structure matrix $P=O$, $paths = \emptyset$.
\STATE {\bf for} t = 0 to $\gamma$ {\bf do}
\STATE ~~~~Nodes = Shuffle($V$)
\STATE ~~~~{\bf for} each $v_i$ $\in$ Nodes {\bf do}
\STATE ~~~~~~~~$path_{v_i}$ = RandomWalk(G, $v_i$, $l$)
\STATE ~~~~~~~~Put $path_{v_i}$ in $paths$
\STATE ~~~~{\bf end for}
\STATE {\bf end for}
\STATE Store $paths$
\STATE {\bf for} each $path$ in $paths$ {\bf do}
\STATE ~~~~{\bf for} each $v_i$ $\in$ $path$ {\bf do}
\STATE ~~~~~~~~{\bf for} each $v_j$ $\in$ $path[i-p : i+p]$ {\bf do}
\STATE ~~~~~~~~~~~~$P_{ij} = P_{ij}+(p+1-dis(v_i, v_j))/p$
\STATE ~~~~~~~~{\bf end for}
\STATE ~~~~{\bf end for}
\STATE {\bf end for}
\STATE Store structure matrix $P$
\end{algorithmic}
\label{alg1}
\end{algorithm}
\subsection{Preprocessing Algorithms}
The preprocessing algorithms are used to construct the structure matrix by sampling network structure information, and then the similarity matrix by merging the node labels with the structure matrix.
\subsubsection{Structure Matrix}
In the network embedding area, the random walk is one of the most popular and powerful network sampling methods, reflecting the rich network structure information around a center node in $G$. Generally, the relationship extracted from random walks takes values $0$ and $1$, where $0$ denotes an unknown node pair and $1$ denotes a similar node pair \cite{perozzi2014deepwalk}. However, this type of relationship ignores the relative distance between the center node and its context nodes within the window. Therefore, in this paper, we assign different weights to the context nodes in the window, according to their relative distance to the center node.
We initialize the structure matrix as $P=O$, where $O$ is the zero matrix. For each similar node pair $(v_i, v_j)$ in $window_{v_i}^p$, where $v_i$ is the center node, $v_j$ is a context node of $v_i$ and $p$ is the window size, the recursive definition of $P$ is:
\begin{equation}
\begin{split}
P_{ij}^{\prime} = P_{ij} + \frac{p+1-dis(v_i, v_j)}{p}
\end{split}
\label{eq4}
\end{equation}
where $dis(v_i, v_j)$ computes the relative distance between $v_i$ and $v_j$ in the window. Note that $(p+1-dis(v_i, v_j))/p$ is the weight contributed by $(v_i, v_j)$, which is negatively related to the relative distance. In this way, $P_{ij}$ can simultaneously reflect the relative distance and the co-occurrence frequency of $(v_i, v_j)$ in the random walks. More details on constructing the structure matrix are given in Algorithm~\ref{alg1}.
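For illustration, a compact Python sketch of Equation~\eqref{eq4} and Algorithm~\ref{alg1} is given below; the adjacency-list input (integer node indices) is an assumed format, and the relative distance is taken as the position offset within the walk.
\begin{verbatim}
import random
import numpy as np

def structure_matrix(adj, p=50, l=200, gamma=10):
    # Weighted co-occurrences from truncated random walks.
    # adj: dict mapping node index -> list of neighbour indices.
    n = len(adj)
    P = np.zeros((n, n))
    nodes = list(adj)
    for _ in range(gamma):                 # gamma walks per node
        random.shuffle(nodes)
        for start in nodes:
            walk, v = [start], start
            for _ in range(l - 1):         # truncated random walk
                if not adj[v]:
                    break
                v = random.choice(adj[v])
                walk.append(v)
            for i, vi in enumerate(walk):
                lo = max(0, i - p)
                hi = min(len(walk), i + p + 1)
                for j in range(lo, hi):
                    if j != i:             # weight decays with distance
                        P[vi, walk[j]] += (p + 1 - abs(i - j)) / p
    return P
\end{verbatim}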
\subsubsection{Similarity Matrix}
According to DeepWalk \cite{perozzi2014deepwalk}, the structure matrix reflects rich structure information and can be treated as a feature matrix. Thus, in order to simultaneously preserve network structure information and node label information, the similarity matrix $S$ is defined as:
\begin{equation}
\begin{split}
S_{ij}=
\left\{
\begin{array}{lcl}
\exp{(-\frac{\|P_i-P_j\|^2}{\max(dis^2)})} &, & {y_j = y_i}\\
0 &, & otherwise\\
\end{array}
\right.
\end{split}
\label{eq5}
\end{equation}
where $P_i$ and $P_j$ are the feature vectors of $v_i$ and $v_j$, $\max(dis^2)$ is the globally maximal squared distance between feature vectors, and $y_i$ is the label of node $v_i$.
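A direct NumPy realization of Equation~\eqref{eq5} could read as follows; restricting the computation to the labeled (training) nodes and the dense pairwise distance computation are simplifying assumptions of this sketch.
\begin{verbatim}
import numpy as np

def similarity_matrix(P, y):
    # Equation (5): RBF similarity of structure rows, masked by labels.
    # P: (N, d) structure matrix rows for labeled nodes, y: (N,) labels.
    diff = P[:, None, :] - P[None, :, :]
    sq = (diff ** 2).sum(-1)               # pairwise squared distances
    S = np.exp(-sq / sq.max())             # normalise by the global maximum
    S[y[:, None] != y[None, :]] = 0.0      # zero out cross-label pairs
    return S
\end{verbatim}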
\subsection{Learning Deep Kernel Matrix}
Learning a deep kernel matrix aims to map nodes into a suitable Hilbert space. This section details the architecture and implementation of deep multiple kernel learning.
\subsubsection{Deep Multiple Kernel Learning}
Multiple kernel learning (MKL) \cite{liu2014multiple} is a widely used technique for kernel designing. Its principle consists in learning, for a given support vector classifier, the most suitable convex linear combination of standard elementary kernels. However, this kind of linear combination of kernels is a shallow approach, which often cannot capture highly nonlinear features. For this reason, deep multiple kernel learning (DKL) was proposed \cite{strobl2013deep,jiu2017nonlinear}. Network structure information is exactly such highly nonlinear information. Thus, in this section, we describe how to adapt DKL to network data.
Figure~\ref{fig2} shows the architecture of our DKL, which adopts a nonlinear multi-layered combination of multi-kernels. The recursive definition of our deep kernel is:
\begin{equation}
\begin{split}
& K_{t}^{(l)} = g_t\left(\sum_{t'=1}^T \mu_{t'}^{(l-1)} K_{t'}^{(l-1)}\right)\\
& \begin{array}{l@{\quad}l}
s.t. & \mu \geq 0
\end{array}
\end{split}
\label{eq3}
\end{equation}
where $g_t(\cdot)$ is the activation function for kernel matrix $K_t$, such as $rbf(\cdot)$, which maps a feature matrix (or kernel matrix) to a kernel matrix. Moreover, we assume that the architecture of DKL has $L$ layers and each layer contains $T$ single kernel matrices, $l \in \{1, \cdots, L\}$, $t \in \{1, \cdots, T\}$. In this case, $K_{t}^{(l)}$ denotes the kernel matrix of the $l$-th layer and $t$-th unit in this model. Besides, $K_{t}^{(1)} = g_t(P)$, where $P$ is the structure matrix of a structural network (see Algorithm~\ref{alg1}), and the final output of the proposed DKL is $K = \sum_{t=1}^T \mu_{t}^{(L)} K_t^{(L)}$.
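To make the recursion concrete, the sketch below evaluates the output kernel of Equation~\eqref{eq3}; using three elementary activations (the paper uses $T=4$) and treating intermediate kernel matrices as Gram matrices of implicit features are illustrative assumptions.
\begin{verbatim}
import numpy as np

def rbf_on_gram(G, gamma=1.0):
    # Treat G as a Gram matrix and apply an RBF map to its implicit features.
    d = np.diag(G)
    return np.exp(-gamma * (d[:, None] + d[None, :] - 2.0 * G))

def deep_kernel(P, mu):
    # Output kernel of Equation (3). P: (N, d) features, mu: (L, T=3) weights.
    acts = [lambda G: G,                          # linear kernel
            rbf_on_gram,                          # RBF kernel, gamma = 1
            lambda G: np.tanh(-1e-4 * G + 1.0)]   # sigmoid kernel
    G = P @ P.T
    Ks = [g(G) for g in acts]                     # layer 1: K_t^(1) = g_t(P)
    for l in range(1, len(mu)):                   # layers 2 .. L
        mix = sum(m * K for m, K in zip(mu[l - 1], Ks))
        Ks = [g(mix) for g in acts]
    return sum(m * K for m, K in zip(mu[-1], Ks)) # K = sum_t mu_t^(L) K_t^(L)
\end{verbatim}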
\begin{figure}[t]
\centering
\includegraphics[width=7.75cm,height=4cm]{figure2.pdf}
\caption{The architecture of a 3-layers deep multiple kernel learning. Lines represent the weight for each kernel matrix, $\mu_{t}^{(l)}$.}
\label{fig2}
\end{figure}
\begin{algorithm}[t]
\caption{Learning Deep Kernel Matrix}
\begin{algorithmic}[1]
\REQUIRE Initial network parameters $\mu=\frac{1}{T}$, structure matrix $P$, the number of nodes $N$.
\ENSURE Optimal $\mu$, coefficients of SVM $\alpha$.
\STATE {\bf repeat}
\STATE ~~~~Fix $\mu$, compute output deep kernel $K$.
\STATE ~~~~$\alpha$ is optimized by SVM solver.
\STATE ~~~~Fix $\alpha$, compute the gradient $\nabla_{\mu}$ of $\mathcal{T}_{span}$ w.r.t $K$.
\STATE ~~~~Update weight $\mu$, according to $\nabla_{\mu}$, and keep $\mu \geq 0$.
\STATE {\bf until} Convergence
\end{algorithmic}
\label{alg2}
\end{algorithm}
\subsubsection{Implementation}
In order to optimize the network parameters $\mu$ of the proposed DKL, we use the backward information from a one-vs-rest SVM classifier. The SVM classifier for network data is designed as $sgn(\sum_{i=1}^N \alpha_i y_i K(v_i, v)+b)$, where $y_i$ is the label of node $v_i$. Generally, $\mu$ can be optimized by minimizing an objective function, such as a global hinge loss of the classifier as in \cite{jiu2017nonlinear}. However, in order to reduce the risk of over-fitting, we use the span bound method to learn the deep kernel \cite{liu2011learning}. Under the assumption that the set of support vectors remains the same during the leave-one-out procedure, the span bound can be stated as:
\begin{equation}
\begin{split}
\mathcal{L}((v_1, y_1), \cdots, (v_N, y_N)) \leq \sum_{i=1}^N \varphi(\alpha_i D_i^2 - 1)
\end{split}
\label{eq6}
\end{equation}
where $\mathcal{L}$ is the leave-one-out error, and $D_i$ is the distance between the point $\varphi(v_i)$ and the set $\Gamma_i = \{\sum_{j \neq i, \alpha_j>0} \lambda_j \varphi(v_j)|\sum_{j \neq i} \lambda_j=1\}$. Based on Equation~\eqref{eq6}, we can formulate the objective of deep multiple kernel learning as minimizing this upper bound on the leave-one-out error:
\begin{equation}
\begin{split}
\min_{\mu, \alpha} \mathcal{T}_{span} = \sum_{i=1}^N \varphi(\alpha_i D_i^2 - 1)
\end{split}
\label{eq7}
\end{equation}
Here, the objective function is optimized w.r.t. two parameters: $\mu$ and $\alpha$. An alternating optimization strategy is adopted, i.e., we fix $\mu$ to optimize $\alpha$, and then vice versa. At each iteration, when $\mu$ is fixed, the deep kernel $K$ is also fixed, and $\alpha$ can be optimized using an SVM solver \cite{fan2008liblinear}. When $\alpha$ is fixed, $\mu$ can be directly optimized by computing the gradient of Equation~\eqref{eq7} \cite{strobl2013deep}. The iterative procedure continues until convergence or until a maximum number of iterations is reached (see Algorithm~\ref{alg2}).
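The control flow of Algorithm~\ref{alg2} can be sketched as below, reusing the deep\_kernel sketch above (so $T=3$); note that the analytic span-bound gradient is replaced here by a crude finite-difference surrogate on the SVM training objective, purely for illustration.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

def fit_deep_mkl(P, y, L=3, T=3, iters=20, lr=0.1, eps=1e-4):
    mu = np.full((L, T), 1.0 / T)                 # uniform initialisation
    for _ in range(iters):
        K = deep_kernel(P, mu)                    # fix mu -> output kernel
        base = SVC(kernel="precomputed").fit(K, y).score(K, y)
        grad = np.zeros_like(mu)
        for idx in np.ndindex(*mu.shape):         # surrogate gradient w.r.t. mu
            mu2 = mu.copy()
            mu2[idx] += eps
            K2 = deep_kernel(P, mu2)
            s2 = SVC(kernel="precomputed").fit(K2, y).score(K2, y)
            grad[idx] = (s2 - base) / eps
        mu = np.clip(mu + lr * grad, 0.0, None)   # update, keep mu >= 0
    return mu
\end{verbatim}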
\subsection{Optimized Deep Kernel Hashing}
Deep kernel hashing is used to learn the hashing representations of nodes from the learned Hilbert space. As noted in the related work, both single kernel hashing and multiple kernel hashing are shallow kernel hashing methods, which are often powerless to learn a suitable Hilbert space for the highly nonlinear network structure. In order to address this problem, we replace the shallow kernel with the learned deep kernel. According to Equation~\eqref{eq2} and Equation~\eqref{eq3}, the proposed deep kernel hashing functions can be written as:
\begin{equation}
\begin{split}
B_{i} = sign(W^T K_i-b)
\end{split}
\label{eq8}
\end{equation}
with
$$
K_{R \times N} = (\sum_{t=1}^T \mu_{t}^{(L)} K_t^{(L)})_{R \times N}
$$
where $W_{R \times M}$ is the weight matrix of the landmarks, and $K_i$ is the $i$-th column of the learned deep kernel matrix $K_{R \times N}$.
With the similarity matrix and the hashing functions, the form of similarity-distance product minimization \cite{wang2017survey} is adopted to design the following objective function for network hashing representation:
\begin{equation}
\begin{split}
& \min\limits_{W} \quad \frac{1}{2}\sum\limits_{i,j=1}^N S_{ij}\|B_i-B_j\|^2 + \lambda \sum_{m=1}^M \|V_m\|^2\\
& \begin{array}{l@{\quad}l}
s.t.& \sum\limits_{i=1}^N B_i = 0\\ &\frac{1}{N}\sum\limits_{i=1}^N B_iB_i^T = I\\
\end{array}
\end{split}
\label{eq9}
\end{equation}
where $S$ is the similarity matrix and $B_i$ is the hashing representation of $v_i$ obtained from Equation~\eqref{eq8}. $\sum_{m=1}^M \|V_m\|^2$ is utilized as a regularization term to control the smoothness of the hyperplane vectors $V_m$. The constraint $\sum_{i=1}^N B_i = 0$ ensures bit balance, i.e., $50\%$ of the bits are $1$ and $50\%$ are $-1$. The constraint $\frac{1}{N}\sum_{i=1}^N B_iB_i^T = I$ ensures bit uncorrelation. In this way, we can obtain compact hashing representations of nodes.
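Note that the pairwise term can be rewritten via the standard graph-Laplacian identity
$$
\frac{1}{2}\sum\limits_{i,j=1}^N S_{ij}\|B_i-B_j\|^2 = tr(B L B^T), \qquad L = diag(S\mathbf{1})-S
$$
where the columns of $B_{M \times N}$ are the codes $B_i$.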
Using this identity, the objective function can be derived as:
\begin{equation}
\begin{split}
& \min\limits_{W} \quad tr(W^T \frac{(C+C^T)}{2} W)\\
& \begin{array}{l@{\quad}l}
s.t.& W^T G W=I
\end{array}
\end{split}
\label{eq10}
\end{equation}
with
$$
C = K_{R \times N} L K_{R \times N}^T + \lambda K_{R \times R}
$$
$$
G = \frac{1}{N}K_{R \times N}(I-\frac{1}{N}\mathbf{1}\mathbf{1}^T)K_{R \times N}^T
$$
Here, $b = -\frac{1}{N}W^TK_{R \times N} \mathbf{1}$ and $W_{R \times M}$ is the weight matrix of the landmarks. Note that the derivation follows \cite{he2010scalable}.
\subsubsection{Implementation}
For a simpler implementation, Equation~\eqref{eq10} can be further rewritten as an eigenvector problem:
\begin{equation}
\begin{split}
& \min\limits_{W} \quad tr(\hat W^T \hat C \hat W)\\
& \begin{array}{l@{\quad}l}
s.t.& \hat W^T \hat W=I
\end{array}
\end{split}
\label{eq11}
\end{equation}
with
$$
\hat C = \Lambda^{-\frac{1}{2}} T^T \frac{(C+C^T)}{2} T \Lambda^{-\frac{1}{2}}
$$
$$
G = T_0 \Lambda_0 T_0^T
$$
$$
W = T \Lambda^{-\frac{1}{2}} \hat W
$$
where $\Lambda$ is a diagonal matrix consisting of the $M$ largest elements of $\Lambda_0$, and $T$ consists of the corresponding columns of $T_0$. In this way, the solution of this eigenvector problem is the matrix $\hat W$, which consists of $M$ eigenvectors of the matrix $\hat C$. Given $\hat W$, $W$ can be directly obtained by $W = T \Lambda^{-\frac{1}{2}} \hat W$. Based on $W$, we can get the hashing representations of $v_i, i=1, \cdots, N$, according to Equation~\eqref{eq8}.
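Putting the pieces together, a NumPy sketch of this closed-form solution could look as follows; treating the $M$ kept eigenvalues of $G$ as strictly positive is an assumption of the sketch.
\begin{verbatim}
import numpy as np

def learn_hashing(K_RN, K_RR, Lap, M, lam=1e-4):
    # Solve Equations (10)-(11). K_RN: (R, N) landmark-node deep kernel,
    # K_RR: (R, R) landmark-landmark kernel, Lap: (N, N) Laplacian of S.
    N = K_RN.shape[1]
    C = K_RN @ Lap @ K_RN.T + lam * K_RR
    Hc = np.eye(N) - np.ones((N, N)) / N          # centering matrix
    G = K_RN @ Hc @ K_RN.T / N
    w0, T0 = np.linalg.eigh(G)                    # G = T0 Lambda0 T0^T
    keep = np.argsort(w0)[-M:]                    # M largest eigenvalues
    Tl = T0[:, keep] / np.sqrt(w0[keep])          # T Lambda^(-1/2)
    Chat = Tl.T @ ((C + C.T) / 2.0) @ Tl
    _, What = np.linalg.eigh(Chat)                # ascending eigenvalues
    W = Tl @ What                                 # minimizing eigenvectors
    b = -(W.T @ K_RN @ np.ones(N)) / N
    return W, b                                   # codes: sign(W.T K - b)
\end{verbatim}
The codes then follow from Equation~\eqref{eq8} as $B = sign(W^{T}K_{R \times N} - b)$.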
Note that, after obtaining the hashing representations of nodes, an off-the-shelf classifier is trained to predict the missing labels of unlabeled nodes.
\section{Experiments}
In the previous section, the proposed method incorporates simultaneously network structure information and node label information into the hashing representations of nodes. In this section, extensive node classification experiments are conducted to verify that the proposed method can improve the performance of node classification in structural networks.
\begin{table*}[t!]
\begin{center}
\caption{ Accuracy ($\%$) of Node Classification over Wiki. }
\label{tab2}
\scalebox{1}{\begin{tabular}{l|ccccccccc}
\hline \hline
\multirow{2}{*}{Methods} & \multicolumn{9}{c}{\% Labeled Nodes} \\ \cline{2-10}
& 10\% & 20\% & 30\% & 40\% & 50\% & 60\% & 70\% & 80\% & 90\% \\ \hline
DeepWalk & 46.60 & 54.48 & 59.05 & 62.70 & 64.66 & 65.95 & 66.98 & \underline{68.37} & \underline{68.78} \\
Line & 57.88 & 61.08 & 63.50 & 64.68 & \underline{66.29} & 66.91 & \underline{67.43} & 67.46 & 68.61 \\
GraRep & \underline{58.57} & \underline{61.91} & \underline{63.58} & 63.77 & 64.68 & 65.39 & 65.92 & 65.18 & 67.05 \\
node2vec & 55.94 & 59.67 & 61.11 & 64.21 & 65.08 & 65.58 & 66.76 & 67.19 & 68.73 \\
AIDW & 57.32 & 61.84 & 63.54 & \underline{64.90} & 65.58 & 66.54 & 65.59 & 66.58 & 68.02 \\
Dwns & 55.77 & 59.63 & 61.98 & 64.01 & 64.59 & \underline{66.99} & 66.45 & 67.55 & 67.51 \\ \hline
node2hash & 53.35 & 55.32 & 57.74 & 59.65 & 61.28 & 60.96 & 62.83 & 62.08 & 64.07 \\ \hline
\textbf{DKSH-1L} & 66.50 & 69.77 & 70.91 & 72.13 & 71.92 & 74.45 & 73.38 & 74.39 & 73.53 \\
\textbf{DKSH-2L} & 66.65 & 70.72 & 72.70 & 72.70 & 74.06 & 75.07 & 74.13 & 74.05 & 75.85 \\
\textbf{DKSH} & \textbf{69.05} & \textbf{71.20} & \textbf{73.41} & \textbf{74.35} & \textbf{74.38} & \textbf{75.16} & \textbf{74.85} & \textbf{75.30} & \textbf{77.10} \\ \hline \hline
\end{tabular}}
\end{center}
\end{table*}
\begin{table*}[t!]
\begin{center}
\caption{ Accuracy ($\%$) of Node Classification over Cora. }
\label{tab3}
\scalebox{1}{\begin{tabular}{l|ccccccccc}
\hline \hline
\multirow{2}{*}{Methods} & \multicolumn{9}{c}{\% Labeled Nodes} \\ \cline{2-10}
& 10\% & 20\% & 30\% & 40\% & 50\% & 60\% & 70\% & 80\% & 90\% \\ \hline
DeepWalk & 64.60 & 69.85 & 74.21 & 76.68 & 77.59 & 77.68 & 78.63 & 79.35 & 79.23 \\
Line & 66.06 & 70.86 & 72.25 & 73.94 & 74.03 & 74.65 & 75.12 & 75.30 & 75.76 \\
GraRep & \underline{\textbf{74.98}} & 77.48 & 78.57 & 79.38 & 79.53 & 79.68 & 79.75 & 80.89 & 80.74 \\
node2vec & 73.96 & \underline{78.04} & \underline{80.07} & \underline{81.62} & \underline{\textbf{82.16}} & 82.25 & 82.85 & 84.02 & \underline{84.91} \\
AIDW & 73.83 & 77.93 & 79.43 & 81.16 & 81.79 & \underline{\textbf{82.27}} & \underline{82.93} & \underline{84.11} & 83.69 \\
Dwns & 73.20 & 76.98 & 79.83 & 80.56 & 82.27 & 82.52 & 82.92 & 82.97 & 84.54 \\ \hline
node2hash & 55.99 & 56.16 & 63.87 & 67.51 & 70.24 & 70.20 & 71.34 & 72.83 & 73.43 \\ \hline
\textbf{DKSH-1L} & 71.55 & 75.32 & 76.64 & 76.74 & 78.63 & 78.60 & 79.80 & 79.93 & 81.40 \\
\textbf{DKSH-2L} & 72.90 & 77.83 & 79.40 & 79.57 & 80.99 & 81.14 & 81.16 & 80.22 & 82.73 \\
\textbf{DKSH} & 74.22 & \textbf{78.71} & \textbf{80.55} & \textbf{81.89} & 81.89 & 82.03 & \textbf{83.87} & \textbf{84.69} & \textbf{86.23} \\ \hline \hline
\end{tabular}}
\end{center}
\end{table*}
\begin{table*}[t!]
\begin{center}
\caption{ Accuracy ($\%$) of Node Classification over Citeseer. }
\label{tab4}
\scalebox{1}{\begin{tabular}{l|ccccccccc}
\hline \hline
\multirow{2}{*}{Methods} & \multicolumn{9}{c}{\% Labeled Nodes} \\ \cline{2-10}
& 10\% & 20\% & 30\% & 40\% & 50\% & 60\% & 70\% & 80\% & 90\% \\ \hline
DeepWalk & 45.53 & 50.98 & 53.79 & 55.25 & 56.05 & 56.84 & 57.36 & 58.15 & 59.11 \\
Line & 47.03 & 50.09 & 52.71 & 53.52 & 54.20 & 55.42 & 55.87 & 55.93 & 57.22 \\
GraRep & 50.60 & 53.56 & 54.63 & 55.44 & 55.20 & 55.07 & 56.04 & 55.48 & 56.39 \\
node2vec & \underline{50.78} & \underline{55.89} & \underline{57.93} & \underline{58.60} & 59.44 & 59.97 & 60.32 & 60.75 & 61.04 \\
AIDW & 50.77 & 54.82 & 56.96 & 58.04 & \underline{59.65} & \underline{60.03} & \underline{60.99} & \underline{61.18} & \underline{62.84} \\
Dwns & 50.00 & 53.74 & 57.37 & 58.59 & 59.00 & 59.53 & 59.62 & 59.51 & 60.18 \\ \hline
node2hash & 38.58 & 47.96 & 49.29 & 50.96 & 52.66 & 52.30 & 53.22 & 56.11 & 57.23 \\ \hline
\textbf{DKSH-1L} & \textbf{58.17} & 58.82 & 59.28 & 59.82 & 60.10 & 59.50 & 60.46 & 60.33 & 60.24 \\
\textbf{DKSH-2L} & 57.44 & 58.80 & 59.72 & 59.01 & 59.73 & 59.67 & 60.26 & 61.84 & 61.75 \\
\textbf{DKSH} & 57.97 & \textbf{59.81} & \textbf{59.78} & \textbf{59.96} & \textbf{60.59} & \textbf{60.89} & \textbf{61.79} & \textbf{62.84} & \textbf{63.25} \\ \hline \hline
\end{tabular}}
\end{center}
\end{table*}
\subsection{Experimental Setup}
\subsubsection{Datasets}
We conduct experiments on three real world benchmark datasets, which are popularly used in many previous works \cite{dai2019adversarial,zhao2020deepemlan}. Wiki \cite{sen2008collective} is a network with nodes as web pages and edges as the hyperlinks between web pages. The network has 2,405 nodes, 17,981 edges, and 17 different labels. Cora \cite{mccallum2000automating} is a network of citation relationships of scientific papers. The network has 2,708 nodes, 5,429 edges, and 7 different labels. Citeseer is also a scientific paper citation network constructed by \cite{mccallum2000automating}. The network has 3,312 nodes, 4,732 edges, and 6 different labels. We regard these three networks as undirected structural networks, and do some preprocessing on the original datasets by deleting nodes with zero degree.
\subsubsection{Metric}
Following previous works \cite{dai2017adversarial,dai2019adversarial}, we employ the popularly used \textbf{Accuracy} to evaluate the performance of node classification. In this paper, accuracy is computed in a micro-averaged fashion, aggregating the contributions of all classes.
\subsubsection{Baselines}
We compare our DKSH with the state-of-the-art baselines and its variants. For fair comparisons, all the selected baselines are widely used structural network embedding methods, which do not require node attribute information. Besides, among these baselines, node2hash is the only hashing-based method, like our DKSH. The details of the baselines are as follows:
\begin{itemize}
\item \textbf{DeepWalk} \cite{perozzi2014deepwalk}: DeepWalk is an unsupervised method, which uses local structure information obtained from truncated random walks to learn low-dimensional feature representations of nodes.
\item \textbf{Line} \cite{tang2015line}: Line uses the breadth-first strategy to sample the inputs, based on node neighbors. The method preserves both the first-order and second-order similarity in network embedding process.
\item \textbf{GraRep} \cite{cao2015grarep}: GraRep applies SVD technique to different k-step probability transition matrix to learn node representations, and finally obtains global representations through concatenating all k-step representations.
\item \textbf{node2vec} \cite{grover2016node2vec}: node2vec differs from DeepWalk by proposing more flexible method for sampling node sequences to strike a balance between local and global structure information.
\item \textbf{AIDW} \cite{dai2017adversarial}: AIDW is an inductive version of DeepWalk with GAN-based regularization methods. A prior distribution is imposed on node representations through adversarial learning to achieve a global smoothness in the distribution.
\item \textbf{Dwns} \cite{dai2019adversarial}: Dwns is also an inductive version of DeepWalk, which introduces a succinct and effective regularization technique, namely adversarial training method, in network embedding process.
\item \textbf{node2hash} \cite{wang2018feature}: node2hash uses the encoder-decoder framework, where the encoder is used to map the structural similarity of nodes into a feature space, and the decoder is used to generate the node representations through a single kernel hashing.
\end{itemize}
Besides, we also include two additional baselines, namely \textbf{DKSH-1L} and \textbf{DKSH-2L}, which are variants of our DKSH and respectively represent DKSH with a 1-layer MKL and a 2-layer MKL. Note that although graph neural network based methods \cite{tang544triple,zhang2019graph} are relevant to our DKSH, to the best of our knowledge nearly all of them require node attribute information; these methods are therefore limited in our setting, and we do not select them as baselines.
\subsubsection{Parameter Settings}
For DKSH and its variants, DKSH-1L and DKSH-2L, the window size $p$, walk length $l$, walks per node $\gamma$, number of kernels in each layer $T$, number of landmarks $R$ and regularization parameter $\lambda$ are respectively set to 50, 200, 10, 4, 256 and 0.0001. Different from DKSH-1L and DKSH-2L, DKSH adopts a 3-layer DKL. For each layer in these kernel models, we consider 4 elementary kernels, i.e., a linear kernel, an RBF kernel with $\gamma$ = 1, a sigmoid kernel with $\alpha = -1 \times 10^{-4}$ and $\beta = 1$, and a polynomial kernel with $\alpha = 1$, $\beta = 1$, and $\delta = 2$. Besides, the dimension of node representations is set to 128 for all methods, and the other parameters are set to their default values for the baselines.
\begin{figure*}[t!]
\centering
\includegraphics[width=12cm,height=4.2cm]{figure31.pdf}
\includegraphics[width=18cm,height=4.2cm]{figure32.pdf}
\caption{Parameter sensitivity analysis of our DKSH over Wiki, Cora and Citeseer.}
\label{fig3}
\end{figure*}
\subsection{Comparison of Node Classification Performance}
For node classification, a portion of the labeled nodes is randomly selected as training data, with the training ratio increasing in equal steps from 10\% to 90\%, and the remaining nodes are used for testing. Besides, for fair comparisons, all the methods adopt the support vector classifier in Liblinear \cite{fan2008liblinear}. Table~\ref{tab2} lists the results of node classification over Wiki. Table~\ref{tab3} lists the results of node classification over Cora. Table~\ref{tab4} lists the results of node classification over Citeseer. From these tables, it can be observed that:
\begin{itemize}
\item The proposed method DKSH consistently outperforms all the state-of-the-art baselines across all ratios of labeled nodes over Wiki and Citeseer. As shown in Table~\ref{tab2}, even when the training ratio is only 10\%, our DKSH performs better than these baselines with a 90\% training ratio. As shown in Table~\ref{tab3}, as the training ratio changes from 10\% to 90\%, our DKSH outperforms these baselines by about 1.5\%. Besides, over Cora, although the baselines can outperform our DKSH with training ratios 10\%, 50\% and 60\%, our DKSH is still competitive.
\item DKSH consistently outperforms DKSH-2L, and DKSH-2L mostly outperforms DKSH-1L on the three datasets across all ratios of labeled nodes. For example, DKSH achieves about 1\% gain in Accuracy on Wiki, about 3\% gain on Cora, and about 1.5\% gain on Citeseer with the ratio of labeled nodes being 90\%. Besides, DKSH-2L mostly gives about 1\% gain in Accuracy over DKSH-1L on Wiki, Cora and Citeseer. These results demonstrate that it is easier to capture the actual category features of a node in a more suitable Hilbert space.
\item Incorporating network structure information and node label information simultaneously into the network embedding, instead of network structure information alone, significantly improves node classification. For example, both node2hash and DKSH-1L adopt a single kernel hashing method. Compared with node2hash, DKSH-1L achieves more than 10\% gain in Accuracy on Wiki, more than 11\% gain on Cora, and about 5\% gain on Citeseer across all training ratios.
\item The hashing method does not necessarily lead to accuracy loss. In fact, it may avoid over-fitting in this paper. For example, both DKSH and node2hash use hashing methods to learn node representations. The results show that DKSH is better than nearly all the non-hashing methods, and node2hash has a competitive performance compared with DeepWalk and Line.
\end{itemize}
\subsection{Sensitivity Analysis}
In this subsection, we analyze the sensitivity of the parameters in our DKSH, i.e., the number of landmarks $R$, the dimension of node representations $M$, the window size $p$, the walk length $l$ and the walks per node $\gamma$, on the Wiki, Cora and Citeseer datasets, with the ratio of labeled nodes being 90\% and the representation dimension being 128 bits. More specifically, Figure \ref{fig3} (a) shows the effect of the parameter $R$ over the three datasets with values between 128 and 1,024. Figure \ref{fig3} (b) shows the effect of the parameter $M$ over the three datasets with values between 16 and 256. Figure \ref{fig3} (c) shows the effect of the parameter $p$ over the three datasets with values between 10 and 60. Figure \ref{fig3} (d) shows the effect of the parameter $l$ over the three datasets with values between 40 and 240. Figure \ref{fig3} (e) shows the effect of the parameter $\gamma$ over the three datasets with values between 1 and 20. Note that except for the parameter being tested, all the other parameters are set to their default values.
Figure~\ref{fig3} (a) shows the Accuracy of our DKSH w.r.t. the number of landmarks $R$. When the number of landmarks increases from 128 to 1,024, the Accuracy remains relatively stable over the three benchmark datasets, which means our DKSH is not sensitive to the number of landmarks $R$. However, the training time is positively correlated with $R$. Thus, we select a relatively small $R$, i.e., 256.
Figure~\ref{fig3} (b) shows the Accuracy of our DKSH w.r.t. the dimension of node representations $M$. DKSH achieves the best Accuracy over the three benchmark datasets when $M=128$. The Accuracy first improves as $M$ varies from 16 to 128, largely because more features are captured in a higher-dimensional node representation. However, when the dimension further increases to 256, the linear SVM classifier becomes powerless to classify these high-dimensional node representations.
Figure~\ref{fig3} (c), (d) and (e) show the Accuracy of our DKSH w.r.t. the window size $p$, the walk length $l$ and the walks per node $\gamma$. It can be found that DKSH is not sensitive to any of the three parameters, i.e., $p$, $l$ and $\gamma$. For instance, DKSH can achieve good performance on all the three datasets in the range of 10 to 60 for the parameter $p$, and also with $40 \leq l \leq 240$. Furthermore, DKSH can achieve high Accuracy with only one walk per node.
\section{Conclusions}
In this paper, we propose a novel Deep Kernel Supervised Hashing (DKSH) method to learn the hashing representations of nodes for node classification in structural networks. In DKSH, we designed a deep kernel hashing that maps highly nonlinear network structure information into a suitable Hilbert space, dealing with the linear inseparability problem, and defined a novel similarity matrix to simultaneously incorporate network structure information and node label information into the network embedding. The experimental results demonstrate the superior usefulness of the proposed method in node classification.
|
1,116,691,497,358 | arxiv | \section{Introduction}
In cool stars, which possess a convective envelope, magnetism is thought
to be constantly regenerated against ohmic decay by dynamo effect. For
Sun-like stars the interface layer between the inner radiative zone and the
outer convective envelope is generally thought to play a major role in the
dynamo processes \cite[see \eg][]{Charbonneau10}. Since fully-convective stars
-- either main sequence stars below 0.35~\hbox{${\rm M}_{\odot}$}\ (\ie with spectral type later
than $\sim$ M4) or young T Tauri stars -- do not possess such an interface
layer, generation of magnetic field in their interiors is often thought to rely
on a non-solar-type dynamo. However, the precise mechanism and the properties
of the resulting magnetic field have been a debated issue
\cite[][]{Durney93, Chabrier06, Dobler06, Browning08}.
Two main complementary approaches are successfully applied to study magnetic
fields close to the fully-convective transition. On the one hand, by modelling
Zeeman broadening of photospheric spectral lines it is possible to assess the
magnetic field averaged over the visible stellar disc \cite[\eg][]{Reiners06}.
This method is therefore able to probe magnetic fields regardless of their
complexity but provides very little information about the field geometry. On
the other hand, the Zeeman-Doppler imaging technique models the evolution of
polarization in spectral lines during at least one rotation period in order to
reconstruct a map of the large-scale component of the vector magnetic field on
the stellar photosphere.
Spectropolarimetric studies of a sample of M0--M4 dwarfs, conducted
with ESPaDOnS and NARVAL, have revealed for the first time a
strong change in large-scale magnetic topologies occurring close to the
fully-convective boundary. Stars more massive than 0.5~\hbox{${\rm M}_{\odot}$}\ exhibit
large-scale fields of moderate intensity featuring a significant toroidal
component and a strongly non-axisymmetric poloidal component, with evolution
happening on a timescale of less than 1~yr \cite[][D08]{Donati08b}. For those
in the range 0.25--0.50~\hbox{${\rm M}_{\odot}$}\ much stronger large-scale fields are observed,
which are dominated by the axial dipolar component and show only very limited
evolution over successive years \cite[][M08a,b]{Morin08a, Morin08b}.
Comparisons of these large-scale magnetic field measurements with X-ray
activity indices or with measurements of the total magnetic field (\ie at all
spatial scales) derived from the analysis of Zeeman broadening of FeH molecular
lines, suggest that fully-convective stars are much more efficient at
generating large-scale magnetic field than partly-convective ones
\cite[D08,][]{Reiners09b}.
\section{Surface magnetic fields of late M dwarfs}
A sample of 11 active M dwarfs with masses significantly below the
fully-convective boundary ($0.08<\hbox{$M_{\star}$}<0.21~\hbox{${\rm M}_{\odot}$}$ or spectral types M5--M8)
has been observed with the ESPaDOnS spectropolarimeter \cite[][hereafter
M10]{Morin10a}. Below 0.15~\hbox{${\rm M}_{\odot}$}, we observe two radically different categories
of large-scale magnetic fields: either a strong and steady almost dipolar field
(hereafter SD, similar to stars in the range 0.15--0.5~\hbox{${\rm M}_{\odot}$}); or a weaker
multipolar, non-axisymmetric field configuration undergoing dramatic evolution
on a timescale of at most 1~yr (hereafter WM). However the two groups of
objects cannot be separated in a mass-rotation diagram, see
Fig.~\ref{morin:fig1}. No object is observed to evolve from one type of
magnetism to the other during the survey (some objects were observed for 4
years). In terms of large-scale magnetic field values, a gap exists between
these two types of magnetism, with no object with $200<\hbox{$B_{\rm V}$}<900~G$ in this mass
range, see Fig.~\ref{morin:fig45}. Stars hosting either weak multipolar (WM) or strong dipolar (SD) fields both have very strong total magnetic fields (2--4~\hbox{$\rm kG$}).
No systematic correlation is found between the type of large-scale magnetic
topology and the total magnetic field \hbox{$B_{\rm I}$}\ (see Fig.~\ref{morin:fig23}). Hence,
the two different types of magnetic field configurations are only detected when
considering the large-scale component (probed by spectropolarimetry, and which
represents 15-30~\% of the total flux in the SD regime, but only a few percent
in the WM regime) and not the total magnetic flux derived from unpolarised
spectroscopy. This unexpected observation may be explained in several different ways: for instance, a parameter other than mass and rotation period (such as age) may be relevant; two dynamo modes may be possible; or stars may switch between two states in this mass range.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\textwidth,clip]{morin_fig1}
\caption{Mass--period diagram of fully-convective stars derived from
spectropolarimetric data and Zeeman-Doppler Imaging (ZDI). Symbol size
represents the reconstructed magnetic energy, the color ranges from blue for a
purely toroidal to red for a purely poloidal field, and the shape depicts the
degree of axisymmetry from a sharp star for non-axisymmetric to a regular
decagon for axisymmetric. For a few stars of the sample \cite{Morin10a} could
not perform a definite ZDI reconstruction, in these cases only an upper limit
of the rotation period is known and the magnetic flux is extrapolated, those
objects are depicted as empty symbols. The theoretical fully-convective limit
is depicted as a horizontal dashed line. Thin solid lines represent contours of
constant Rossby number Ro=0.01 (left) and 0.1 (right), as estimated in
\cite{Morin10a}.}
\label{morin:fig1}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.485\textwidth,clip]{morin_fig2}\hfill%
\includegraphics[width=0.485\textwidth,clip]{morin_fig3}\hfil\\
\caption[]{Average large-scale magnetic fluxes of fully-convective stars
derived from
spectropolarimetric data and Zeeman-Doppler Imaging (ZDI), as a function of
mass ({\bfseries left panel}) and mass $\times\,\hbox{$P_{\rm rot}$}^2$ ({\bfseries right
panel}). Symbols are similar
to those used in the mass--period diagram (see Fig.~\ref{morin:fig1}). For stars
in
the WM regime symbols corresponding to different epochs for a given star are
connected by a vertical grey line. The yellow region represents the domain
where bistability is observed and the orange one separates the two types of
magnetic fields identified (see text).}
\label{morin:fig23}
\end{figure}
\section{Weak and strong field dynamos: from the Earth to the stars}
In this section we briefly detail one of the hypotheses that could explain the
observation of two groups of late M dwarfs with very different magnetic
properties: the fact that two different dynamo modes could genuinely operate in
stars having very similar mass and rotation. We focus here on the weak
\textit{vs} strong field dynamo bistability, initially proposed for the
geodynamo. The underlying idea is that magnetic fields and rotation, taken separately, each tend to inhibit convection, but that if both effects are combined the impeding influences of the Lorentz and of the Coriolis forces may be reduced, allowing convection to set in at lower Rayleigh number and to develop on larger length scales \cite[][]{Chandra61}. This led \cite{Roberts78}
to conjecture that in a rapidly rotating system, for magnetic fields stronger
than a threshold value, the Lorentz force would enhance convection and hence
dynamo action, resulting in a runaway growth of the magnetic field. The
corresponding bifurcation diagram is depicted on Fig.~\ref{morin:fig45}. On the
weak-field branch the Lorentz force is balanced by viscous or inertial terms in the momentum equation; this force balance requires small spatial scales. On the strong field branch, however, the magnetic field strength is set by a balance between Lorentz and Coriolis forces, which requires larger spatial scales; this is the magnetostrophic regime. A similar bifurcation diagram, but based on the
fact that magnetic buoyancy would be negligible close to the dynamo onset has
been proposed for stars by \cite{WT2000}.
We now briefly discuss the identification between WM (SD) magnetism and
weak-field (strong-field) dynamo regime, the reader is referred to
\cite{Morin11} for a more detailed discussion. The usual control parameter in
the weak vs strong field dynamo scenario described above is the Rayleigh
number, which measures the energy input relative to forces opposing the motion.
Mass can be used as a good proxy for the available energy flux in M dwarfs,
Fig.~\ref{morin:fig23} can therefore be interpreted as a bifurcation diagram
for the amplitude of the large scale magnetic field versus a control parameter
measuring the energy input. In order to compare the driving of convection with
the impeding effect of rotation, we can use $\hbox{$M_{\star}$}\times\hbox{$P_{\rm rot}$}^2$ as a rough
proxy for the Rayleigh number (see Fig.~\ref{morin:fig23}) based on rotation
rather than diffusivities \cite[\eg][]{Christensen06}.
First, in the SD regime the magnetic field strength has to be compatible with a
Lorentz--Coriolis force balance. We note that this balance is valid at large spatial scales, for which the Coriolis term is predominant over inertial terms in
the momentum equation, in qualitative agreement with the observation that only
the large-scale component of the magnetic field exhibits a bimodal
distribution. This magnetostrophic force balance roughly corresponds to an
Elsasser number of order unity, \ie:
\begin{equation} \Lambda = \frac{B^2}{\rho\mu\eta\Omega} \sim 1, \end{equation}
where $B$ is the magnetic field strength, $\rho$ the mass density, $\mu$ the
magnetic permeability, $\eta $ the magnetic diffusivity and $\Omega$ the
rotation rate. With a few assumptions described in \cite{Morin11}, we find that
the order of magnitude of the expected magnetic field strength on the strong
field branch is set by:
\begin{equation} {B_{sf}} \sim 6\,
\left(\frac{\hbox{$M_{\star}$}}{\hbox{${\rm M}_{\odot}$}}\right)^{1/2} %
\left(\frac{\hbox{$R_{\star}$}}{\hbox{${\rm R}_{\odot}$}}\right)^{-1} \left(\frac{\hbox{$L_{\star}$}}{\hbox{$L_{\odot}$}}\right)^{1/6}
\left(\frac{\eta_\odot}{\eta_{\rm ref}}\right)^{1/2} %
\left(\frac{\hbox{$P_{\rm rot}$}}{1~\hbox{$\rm d$}}\right)^{-1/2}~\hbox{$\rm kG$}, \end{equation}
where $\eta_\odot$ is a reference value for the magnetic diffusivity in the
solar convection zone, and $\eta_{\rm ref} = 10^{11}~{\rm cm^2\,s^{-1}}$.
Taking stellar radius and luminosity for the stellar mass in the range
$0.08-0.35~\hbox{${\rm M}_{\odot}$}$ from \cite{Chabrier97} main sequence models, and $\eta_\odot$
in the range $10^{11}-3\times10^{12}~{\rm cm^2\,s^{-1}}$
\cite[\eg][]{Ruediger11}, we derive surface values in the strong field regime
in the range 2--50~\hbox{$\rm kG$}, compatible with the order of magnitude of measured \hbox{$B_{\rm V}$}\
values. More conclusively, the gap in terms of \hbox{$B_{\rm V}$}\ between the two branches depends on the ratio of inertia to Coriolis force in the momentum equation and can be estimated with:
\begin{equation} \frac{B_{wf}}{B_{sf}} = Ro^{1/2}, \end{equation}
which is of the order of $10^{-1}$ for stars of our sample in the bistable
domain, in good agreement with the typical ratio of large-scale magnetic fields
measured between the WM and SD groups of stars (see Fig.~\ref{morin:fig23}).
We note that according to the \cite{Chabrier97} main sequence models, the
product of the terms depending on stellar mass, radius and luminosity in the
expression of $B_{sf}$ is almost constant in the mid-to-late M dwarf regime.
The expected magnetic field strength on the strong field branch hence almost
scales with $\Omega^{1/2}$. This is not in contradiction with the fact that all the stars in our sample belong to the so-called saturated regime of the rotation--activity relation. Indeed $B_{sf} \propto \Omega^{1/2}$ (derived from
$\Lambda \sim 1$) should apply here to the large scale field alone, which is
only a fraction of the total magnetic field of the stars (between 15 and 30
\%). If a small scale dynamo operates, it does not need to follow the same
dependency. Finally, the weak dependency of the large-scale magnetic field on
stellar rotation predicted for stars in the strong-field regime cannot be ruled
out by existing data and should be further investigated.
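As a sanity check on these orders of magnitude, the scaling relations above are straightforward to evaluate numerically. The following minimal sketch is ours, not taken from the cited works; the stellar inputs are illustrative round numbers for a late M dwarf, not fits to any particular star.
\begin{verbatim}
# Sketch: strong-field estimate B_sf (kG) from the scaling relation
# above, and the branch ratio B_wf/B_sf = Ro^(1/2). Inputs are
# illustrative only: M, R, L in solar units, P_rot in days,
# magnetic diffusivities in cm^2/s.
def B_sf_kG(M, R, L, P_rot_d, eta_sun=1e11, eta_ref=1e11):
    return (6.0 * M**0.5 / R * L**(1.0 / 6.0)
            * (eta_sun / eta_ref)**0.5 * P_rot_d**-0.5)

B = B_sf_kG(M=0.1, R=0.12, L=1e-3, P_rot_d=1.0)   # ~5 kG
Ro = 0.01            # typical Rossby number in the bistable domain
print(B, Ro**0.5 * B)  # strong- and weak-branch large-scale fields
\end{verbatim}
With these round numbers the strong-field branch sits near 5~\hbox{$\rm kG$}, within the 2--50~\hbox{$\rm kG$}\ range quoted above, and the weak branch lies an order of magnitude below it, consistent with the measured gap.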
\begin{figure}[ht!]
\centering
\includegraphics[width=0.485\textwidth,clip]{morin_fig4}\hfill%
\includegraphics[width=0.40\textwidth,clip]{morin_fig5}\hfil\\
\caption{{\bfseries Left:} Total magnetic fluxes of fully-convective
stars measured from unpolarised spectra of FeH lines. The values are taken from
\cite{Reiners09} and \cite{Reiners10}, whenever 2MASS near infrared
luminosities \cite[][]{Cutri03} and Hipparcos parallaxes \cite[][]{ESA97} are
available to compute the stellar mass from the \cite{Delfosse00}
mass--luminosity relation. Whenever spectropolarimetric data are available the
properties of the magnetic topology are represented as symbols described in
Fig.~\ref{morin:fig23}. Else small blue symbols are used, upward (downward)
triangles represent lower (upper) limits. {\bfseries Right:} Anticipated
bifurcation diagram for the geodynamo \cite[adapted from][]{Roberts88}. The
magnetic field amplitude is plotted against the Rayleigh number. The
bifurcation sequence is characterised by two branches, referred to as weak and
strong field branches. The yellow and orange regions have the same meaning as
in Fig.~\ref{morin:fig23}. $\rm Ra_c$ is the critical Rayleigh number for the
onset of non-magnetic convection. The weak field regime sets in at $ \rm Ra_m$,
and the turning point associated with the runaway growth corresponds to ${\rm
Ra} = {\rm Ra_r}$.} \label{morin:fig45} \end{figure}
\section{Conclusions}
We present here the main results of the first spectropolarimetric analysis of a
sample of active late M dwarfs \cite[more throughly detailed in][]{Morin10a}.
In particular we report the co-existence of two radically different types of
magnetism -- strong and steady dipolar field (SD) as opposed to weaker
multipolar field evolving in time (WM) -- for stars with very similar masses
and rotation periods. One of the foreseen hypotheses to explain these
observations is the genuine existence of two types of dynamo in this parameter
regime, \ie bistability. We show that the weak \textit{vs} strong field dynamo
bistability is a promising framework. The order of magnitude of the observed
magnetic field in stars hosting a strong dipolar field, and more conclusively
the typical ratio of large-scale magnetic fields measured in the WM and SD
groups of stars are compatible with theoretical expectations. We argue that the
weak dependency of the magnetic field on stellar rotation predicted for stars
in the strong-field regime cannot be ruled out by existing data and should be
further investigated. We do not make any prediction on the extent of the
bistable domain in terms of stellar parameters mass and rotation period, this
issue shall be investigated by further theoretical work, and by surveys of
activity and magnetism in the ultracool dwarf regime.
A dynamo bistability
offers the possibility of hysteretic behaviour. Hence the magnetic properties
of a given object depend not only on its present stellar parameters but also on
their past evolution. For instance, for young objects episodes of strong
accretion can significantly modify their structure and hence the convective
energy available to sustain dynamo action \cite[][]{Baraffe09} initial
differences in rotation periods of young stars could also play a role. Because
stellar magnetic fields are central in most physical processes that control the
evolution of mass and rotation of young stars \cite[in particular
accretion-ejections processes and star-disc coupling,
\eg][]{Bouvier09,Gregory10}, the confirmation of stellar dynamo bistability
could have a huge impact on our understanding of formation and evolution of low
mass stars.
|
1,116,691,497,359 | arxiv | \section{Introduction}
\IEEEPARstart{N}{owadays}, both carrier frequency and direction of arrival (DOA) are needed in some applications, such as Cognitive Radio (CR), which aims at relieving spectral congestion \cite{Haykin2005, Yucek2009, Mishali2011a, Sun2013, Cohen2014}. The most important function of CRs is to autonomously exploit locally unused spectrum to provide new paths to spectrum access. Therefore, spectrum sensing is an essential part of CRs. The conventional spectrum opportunity only contains three dimensions of the spectrum space: frequency, time, and space. However, with the advancement in array processing technologies \cite{Krim1996, Schmidt1986, Roy1986}, a new dimension, DOA, also creates new spectrum opportunities. Jointly exploiting the frequency spectrum and the spatial spectrum would enhance the performance of CRs.
Recently, significant effort has been made towards joint estimation of carrier frequencies and their DOAs \cite{Lemma1998, Lemma2003}. An obvious drawback is that these methods require additional pairing between the carrier frequencies and the DOAs. Besides, both works assume that the signal is sampled at least at its Nyquist rate. The main challenge for CRs lies in wideband signal processing, since Nyquist-rate sampling is costly or even unattainable. The spectrum under monitoring ranges from 300 MHz to several GHz \cite{Haykin2005, Yucek2009, Mishali2011a, Sun2013, Cohen2014}, which leads to a high Nyquist sampling rate and a large amount of sampled data to process.
Fortunately, sub-Nyquist sampling technology can reconstruct a multiband signal from its sub-Nyquist samples \cite{Mishali2011,Mishali2010,Eldar2009,Mishali2009}. Lately, some joint DOA and carrier frequency estimation methods have been proposed at sub-Nyquist sampling rates. \cite{Ariananda2013} proposes a structure, i.e. a linear array employing multi-coset sampling at the output of every sensor. This method compresses the wide-sense stationary signal in both the time domain and the spatial domain. To simplify the hardware complexity, \cite{Kumar2014} uses an additional identical delayed channel at the output of every sensor, but there are ambiguities when pairing the frequencies with their corresponding DOAs in the underlying uniform linear array (ULA) scenario. To solve the pairing issue, \cite{ Kumar2015} proposes a structure with hardware complexity identical to that of \cite{Kumar2014}. However, those papers do not give a unified signal reception model. \cite{Stein2015} presents two joint DOA and carrier frequency recovery approaches for an L-shaped ULA scenario. In \cite{Liu2016}, we propose a new array receiver architecture associated with two sub-Nyquist-sampling-based methods for simultaneously estimating the frequencies and DOAs of multiple narrowband far-field signals impinging on a ULA, whose carrier frequencies spread across the whole wide spectrum. The architecture is complex because every sensor is followed by a multi-channel sub-Nyquist sampling receiver.
In this paper we consider the same scene as \cite{Liu2016}. To reduce the complexity of the receiver, we propose a simplified array receiver architecture. For this model, we propose a unified formula and methods for joint estimation of DOA and carrier frequency.
The following notations are used in the paper. ${\left( \cdot \right)^{\rm T}}$ and ${\left( \cdot \right)^{\rm H}}$ denote the transpose and Hermitian transpose, respectively. $E\left( \cdot \right)$ stands for the expectation operator. ${x_j}$ is the $j$th entry of a vector ${\bf{x}}$. ${{\bf{A}}_i}$ and ${A_{ij}}$ are the $i$th column and $(i,j)$th entry of a matrix ${\bf{A}}$, respectively. $ \otimes$ denotes the Kronecker product. ${{\bf{I}}_M}$ stands for an $M \times M$ identity matrix.
\section{Array Signal Model with Sub-Nyquist Sampling}
In \cite{Liu2016}, we proposed an array signal receiver architecture and the corresponding signal reception model, which introduce sub-Nyquist sampling technology. In this letter, on the one hand, the proposed architecture is a simplified form of the previous architecture; on the other hand, we will take advantage of the previous model when deriving the estimation algorithms. Therefore, we review the main conclusions of \cite{Liu2016} in this section.
Consider $K$ narrowband far-field signals impinging on a ULA composed of $M$ $(M>K)$ sensors. Our previous receiver architecture applies multi-coset sampling \cite{Mishali2009}, and every array sensor is followed by the same $P$ delay branches. All the ADCs are synchronized and sample at a sub-Nyquist sampling rate of ${f_s} = {f_N}/L$, where ${f_N} = 1/{T_N}$ is the Nyquist sampling rate. The constant set $C=[c_1,c_2,\cdots,c_P]$ is the sampling pattern, where $0 \le {c_1} < {c_2} < \cdots < {c_P} \le L - 1$. ${y_{mp}}\left[ n \right]$ denotes the sampled signal corresponding to the $m$th sensor, $p$th branch.
The matrix output of all branches of all sensors is given by
\begin{align}\label{Yf}
{\bf{Y}}\left( f \right) &= \left( {{\bf{A}} \otimes {\bf{B}}} \right)\overline {\bf{S}} \left( f \right) + \left( {{{\bf{I}}_M} \otimes {\bf{B}}} \right)\widehat {\bf{N}}\left( f \right)\\
&\buildrel \Delta \over= {\bf{G}}\overline {\bf{S}} \left( f \right)+{{\bf{I}}_{\bf{B}}}\widehat {\bf{N}}\left( f \right), f \in \mathcal{F} \buildrel \Delta \over = \left[ {0,\frac{1}{{LT}}} \right),\label{Yf2}
\end{align}
where ${B_{il}} = \frac{1}{{\sqrt L }}\exp \left( {j\frac{{2\pi }}{L}{c_i}l} \right)$, and ${{{A}}_{mk}} = \exp \left( { - j{\phi _k}\left( {m - 1} \right)} \right)$ is the $(m,k)$th element of the steering matrix ${\bf{A}}$, with spatial phase
\begin{align}\label{Phi}
{\phi _k} = \frac{2\pi d\sin \left( {{\theta _k}} \right)}{c/{f_k}},
\end{align}
where ${\theta _k}$ and ${f_k}$ are the DOA and the center frequency of ${s_k}\left( t \right)$, respectively.
$\overline {\bf{S}} \left( f \right) = {\left[ {\overline {\bf{S}} _1^{\rm{T}}\left( f \right),\overline {\bf{S}} _2^{\rm{T}}\left( f \right), \cdots ,\overline {\bf{S}} _K^{\rm{T}}\left( f \right)} \right]^{\rm{T}}}$, ${\overline {\bf{S}} _k}\left( f \right) = {\left[ {{S_{k1}}\left( f \right),{S_{k2}}\left( f \right), \cdots ,{S_{kL}}\left( f \right)} \right]^{\rm{T}}}$, ${S_{kl}}\left( f \right) = {S_k}\left( {f + \frac{{l - 1}}{{LT}}} \right)$, ${S_k}\left( f \right)$ is the Fourier transform of ${s_k}\left( t \right)$. ${\bf{s}}\left( t \right){\rm{ = }}{\left[ {{s_1}\left( t \right),{s_2}\left( t \right), \cdots ,{s_K}\left( t \right)} \right]^{\rm{T}}}$ is the vector of all signal values.
Because ${s_k}\left( t \right)$ is a narrowband signal, one and only one frequency band is occupied in ${\overline{{\bf{S}}}_k}\left( f \right)$. Further, for fixed $k$, ${\overline{{\bf{S}}}_{k}}\left( f \right)$ is a sparse vector of length $L$ in which one and only one index (marked as ${l_k}$) is activated.
${\bf{Y}}\left( f \right) = {\left[ {{\bf{Y}}_1^{\rm{T}}\left( f \right),{\bf{Y}}_2^{\rm{T}}\left( f \right), \cdots ,{\bf{Y}}_M^{\rm{T}}\left( f \right)} \right]^{\rm{T}}}$, whose entries are ${Y_{mp}}\left( f \right) = \sqrt L T_N{Y_{mp}}\left( {{e^{j2\pi fT}}} \right)$, where ${Y_{mp}}\left( {{e^{j2\pi fT}}} \right)$ is the discrete-time Fourier transform of the signal ${y_{mp}}\left[ n \right]$; the two differ only by the coefficient $\sqrt L T_N$.
$\widehat {\bf{N}}\left( f \right) = {\left[ {\widehat {\bf{N}}_1^{\rm{T}}\left( f \right), \cdots ,\widehat {\bf{N}}_M^{\rm{T}}\left( f \right)} \right]^{\rm{T}}}$, ${\widehat {\bf{N}} }_m\left( f \right) = \left[ {{N_{m1}}\left( f \right), \cdots ,{N_{mL}}\left( f \right)} \right]^{\rm T}$, ${N_{ml}}\left( f \right) = {N_m}\left({f + \frac{{l - 1}}{{LT}}}\right)$, where ${N_m}\left( f \right)$ is the Fourier transform of ${n_m}\left( t \right)$. ${\bf{n}}\left( t \right)= {\left[ {{n_1}\left( t \right), \cdots ,{n_M}\left( t \right)} \right]^{\rm{T}}} $ is the noise vector, which follows a zero-mean circular complex Gaussian distribution with covariance matrix ${\sigma}^2 \bf{I}_M$.
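For concreteness, the measurement matrix ${\bf{B}}$ appearing in (\ref{Yf}) is fully determined by the sampling pattern $C$. The following minimal sketch is ours and the names are illustrative; it assumes the spectral-slice index $l$ runs over $0, \ldots, L-1$ in the entry formula ${B_{il}} = \frac{1}{{\sqrt L }}\exp ( {j\frac{{2\pi }}{L}{c_i}l} )$.
\begin{verbatim}
# Sketch: P x L multi-coset measurement matrix B for a sampling
# pattern C = [c_1, ..., c_P] with 0 <= c_1 < ... < c_P <= L-1.
import numpy as np

def coset_matrix(C, L):
    C = np.asarray(C)[:, None]    # coset offsets, shape (P, 1)
    l = np.arange(L)[None, :]     # spectral-slice indices, (1, L)
    return np.exp(2j * np.pi * C * l / L) / np.sqrt(L)

B = coset_matrix([0, 1, 3, 7], L=13)   # example pattern, P = 4
\end{verbatim}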
\section{Proposed receiver architecture and joint DOA and frequency estimation algorithm}
\subsection{Proposed receiver architecture}
To largely decrease the hardware complexity, we design a simplified receiver architecture for joint frequency and DOA estimation. This architecture is built from the previous one; the main difference is that the simplified architecture only retains all branches of one array sensor and one common branch of all sensors. The proposed receiver architecture is shown in Fig.\ref{figArcPart}. Without loss of generality, we select all branches of the first sensor and the first branch of all sensors in Fig.\ref{figArcPart}. Namely, our output is ${\bf{W}}\left( f \right) = {\left[ {{Y_{11}}\left( f \right),{Y_{12}}\left( f \right), \cdots ,{Y_{1P}}\left( f \right),{Y_{21}}\left( f \right), \cdots ,{Y_{M1}}\left( f \right)} \right]^{\rm T}}$. We define a $(M + P - 1)\; \times MP$ selection matrix ${\bf{J}}$, where ${J_{ij}} = 1$ for $i = 1, \cdots ,P$ and $ j=i$, or for $i = P + 1, \cdots ,M + P - 1$ and $j={1 + iP - {P^2}}$; otherwise ${J_{ij}} = 0$.
We have ${\bf{W}}\left( f \right) = {\bf{JY}}\left( f \right)$. According to (\ref{Yf}), we have
\begin{align}\label{Ypf}
{\bf{W}}\left( f \right) = {\bf{H}}\overline {\bf{S}} \left( f \right) + {\bf{J}}{{\bf{I}}_{\bf{B}}}\widehat {\bf{N}}\left( f \right),f \in\mathcal{F},
\end{align}
where ${\bf{H}} = {\bf{J}}\left( {{\bf{A}} \otimes {\bf{B}}} \right) = {\bf{JG}}$. Combining this with ${{\bf{I}}_{MP}} = {{\bf{I}}_{\bf{B}}}{\bf{I}}_{\bf{B}}^{\rm H}$ from \cite{Liu2016}, we have
\begin{align}\label{Noise}
{\bf{J}}{{\bf{I}}_{\bf{B}}}\widehat {\bf{N}}\left( f \right){\left( {{\bf{J}}{{\bf{I}}_{\bf{B}}}\widehat {\bf{N}}\left( f \right)} \right)^{\rm H}}{\rm{ = }}{\sigma ^2}{{\bf{I}}_{M + P - 1}}.
\end{align}
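The selection matrix ${\bf{J}}$ is equally easy to build programmatically. The following is a minimal sketch of our own (the function name is illustrative); it keeps all $P$ branches of the first sensor and the first branch of the remaining sensors, matching the index rule above for sensor-major stacking.
\begin{verbatim}
# Sketch: (M+P-1) x MP selection matrix J for the simplified
# receiver (sensor-major stacking, P branches per sensor).
import numpy as np

def selection_matrix(M, P):
    J = np.zeros((M + P - 1, M * P))
    for i in range(P):            # branches 1..P of the first sensor
        J[i, i] = 1.0
    for m in range(1, M):         # first branch of sensors 2..M
        J[P + m - 1, m * P] = 1.0
    return J
\end{verbatim}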
\begin{figure}[!t]
\centering
\includegraphics[width=2.0in]{A2.eps}
\caption{Proposed receiver architecture.}
\label{figArcPart}
\end{figure}
\subsection{Algorithm Based on Individual Estimate}
\subsubsection{ Spatial Phase Estimate}
We denote the outputs of the 1st branch of all sensors as ${ {\bf{Q}} }\left( f \right) = {\left[ {\begin{array}{*{20}{c}}
{{{ {\bf{Y}} }_{11}}\left( f \right)}& \cdots &{{{ {\bf{Y}} }_{M1}}\left( f \right)}
\end{array}} \right]^{\rm T}}$. According to \cite{Liu2016}, we have the following equation.
\begin{align}\label{ASub}
{{\bf{Q}}}\left( f \right) = {\bf{A}}{{\bf{Z}}}\left( f \right) + \widehat {\bf{N}}_1\left( f \right),
\end{align}
where ${{\bf{Z}}}\left( f \right) = {\left[ {\begin{array}{*{20}{c}}
{\sum\limits_{l = 1}^L {{{{B}}_{1l}}{S_{1l}}\left( f \right)} }& \cdots &{\sum\limits_{l = 1}^L {{{{B}}_{1l}}{S_{Kl}}\left( f \right)} }
\end{array}} \right]^{\rm{T}}}$.
Because ${{\bf{S}}_{k}}\left( f \right)$ is a $1$-sparse vector of length $L$ whose activated index is $l_k$, we can simplify ${{\bf{Z}}}\left( f \right)$ as ${\bf{Z}}\left( f \right) = {\left[ {\begin{array}{*{20}{c}}
{{B_{1{l_1}}}{S_{1{l_1}}}\left( f \right)}& \cdots &{{B_{1{l_K}}}{S_{K{l_K}}}\left( f \right)}
\end{array}} \right]^{\rm{T}}}$. (\ref{ASub}) is a standard array reception model, so there are many existing methods to obtain $\phi$, such as MUSIC, ESPRIT, and so on; a minimal sketch of the MUSIC option is given after (\ref{ALS}). Further, we can get the least squares solution of ${ {{ {\bf{Z}} }} }\left( f \right)$,
\begin{align}\label{ALS}
{ {{ {\bf{Z}} }} }\left( f \right) = {\bf{A}} ^\dag { {\bf{Q}} }\left( f \right).
\end{align}
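The MUSIC option referred to above can be sketched as follows. This is a generic textbook implementation under our own naming, not code from \cite{Schmidt1986}; the crude peak picking in the last line is for illustration only.
\begin{verbatim}
# Sketch: MUSIC estimate of the K spatial phases phi_k from M x N
# snapshots Q of the subarray model, with steering entries
# A_mk = exp(-j * phi_k * (m-1)).
import numpy as np

def music_phases(Q, K, n_grid=2048):
    M, N = Q.shape
    Rxx = Q @ Q.conj().T / N              # sample covariance
    w, V = np.linalg.eigh(Rxx)            # ascending eigenvalues
    Un = V[:, :M - K]                     # noise subspace
    grid = np.linspace(-np.pi, np.pi, n_grid)
    m = np.arange(M)[:, None]
    A = np.exp(-1j * m * grid[None, :])   # steering vectors a(phi)
    P = 1.0 / np.sum(np.abs(Un.conj().T @ A)**2, axis=0)
    return grid[np.argsort(P)[-K:]]       # K largest peaks (crude)
\end{verbatim}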
\subsubsection{ Frequency Estimate}
According to \cite{Liu2016} section III part B, the output of all branches of the 1st sensor is
\begin{align}\label{BSub}
{{\bf{Y}}_1}\left( f \right) = {\bf{B}}{\overline {\bf{X}} _1}\left( f \right),
\end{align}
where ${\overline {\bf{X}} _1}\left( f \right) = {\left[ {\begin{array}{*{20}{c}}
{\sum\limits_{k = 1}^K {{{{A}}_{1k}}{S_{k1}}\left( f \right)} }& \cdots &{\sum\limits_{k = 1}^K {{{{A}}_{1k}}{S_{kL}}\left( f \right)} }
\end{array}} \right]^{\rm{T}}}$. Since each ${{\bf{S}}_{k}}\left( f \right)$ is a $1$-sparse vector of length $L$, ${\overline {\bf{X}} _1}\left( f \right)$ is a $K$-sparse vector of length $L$. We denote the support set of ${\overline {\bf{X}} _1}\left( f \right)$ as $\Omega$, which can be obtained by solving (\ref{BSub}) with the CTF algorithm. Then we have
\begin{align}
{\overline {\bf{Y}} _1}\left( f \right) = {\bf{B}}{\overline {\bf{X}} _1}\left( f \right) = {{\bf{B}}_\Omega }{\left( {{{\overline {\bf{X}} }_1}} \right)_\Omega }\left( f \right).
\end{align}
Further, we can get the least squares solution of ${\left( {{\overline {\bf{X}} _1}} \right)_\Omega }\left( f \right)$,
\begin{align}\label{BLS}
{\left( {{\overline {\bf{X}} _1}} \right)_\Omega }\left( f \right) = {\bf{B}}_\Omega ^\dag {\overline {\bf{Y}} _1}\left( f \right).
\end{align}
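Both least-squares steps (\ref{ALS}) and (\ref{BLS}) reduce to applying a pseudo-inverse on the detected support; a one-function sketch (our naming, with $\Omega$ passed as $0$-based column indices):
\begin{verbatim}
# Sketch: least-squares recovery on a known support,
# (X_1)_Omega(f) = pinv(B[:, Omega]) @ Y_1(f).
import numpy as np

def ls_on_support(B, Omega, Y1):
    return np.linalg.pinv(B[:, Omega]) @ Y1
\end{verbatim}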
\subsubsection{ Spatial Phase and Frequency matching algorithm}
We calculate the cross-correlation function of signal estimates ${{\bf{Z}}\left( f \right)}$ and ${\left( {{{\bf{X}}_1}} \right)_\Omega }\left( f \right)$. The absolute value of the cross-correlation matrix element has the following expression
\begin{align}\label{Rpm}
\left| {{R_{ij}}} \right|
&= \left| {E\left\{ {{B_{1{l_i}}}{S_{i{l_i}}}\left( f \right){{\left( {\sum\limits_{k = 1}^K {{A_{1k}}{S_{k{\Omega _j}}}\left( f \right)} } \right)}^{\rm H}}} \right\}} \right|\\\label{mid}
&= \left| {E\left\{ {{S_{i{l_i}}}\left( f \right){{\left( {{S_{i{\Omega _j}}}\left( f \right)} \right)}^{\rm H}}} \right\}} \right|\\
&= \left\{ {\begin{array}{*{20}{l}}
{ > 0,\ {\rm when}\ {l_i} = {\Omega _j}}\\
{ = 0,\ {\rm when}\ {l_i} \ne {\Omega _j}}
\end{array}} \right.,\quad 1 \le i \le K,\ 1 \le j \le c.
\end{align}
Equation (\ref{mid}) holds because the signals are uncorrelated and the magnitudes of both ${B_{1{l_i}}}$ and ${A_{1k}}$ are 1.
If every two signal frequencies lie in different frequency bands, we have $c = K$; otherwise $c < K$. According to (\ref{Rpm}), one element is dominant in absolute value in each row of ${\bf{R}}$. Further, the support index $\mathcal{S}$ of $\bf{H}$ is determined as follows:
\begin{align}\label{Supp}
\mathcal{S}_i = \left( {i - 1} \right)L + {\Omega_j}, j = \mathop {\arg \max }\limits_j \left| {{R_{ij}}} \right|,1\leq i \leq K.
\end{align}
With the support index $\mathcal{S}$ known, we obtain
\begin{align}\label{CS}
{\bf{W}}\left( f \right){\rm{ = }}{\bf{H}}\overline {\bf{S}} \left( f \right) = {{\bf{H}}_\mathcal{S} }{\overline {\bf{S}} _\mathcal{S} }\left( f \right) + {\bf{J}}{{\bf{I}}_{\bf{B}}}\widehat {\bf{N}}\left( f \right), f \in\mathcal{F}.
\end{align}
Then we have the least squares solution of ${\overline {\bf{S}} _\mathcal{S} }\left( f \right)$
\begin{align}\label{MLES}
\overline{\bf{S}}_\mathcal{S}\left( f \right){\rm{ = }}{\bf{H}}_\mathcal{S} ^\dag {\bf{W}}\left( f \right).
\end{align}
We can obtain the received signal's frequency ${\overline f _k}$ from $\overline{\bf{S}}_\mathcal{S}\left( f \right)$. Moreover, ${\overline f _k}$ is related to the original signal's frequency ${ f _k}$ by
\begin{align}\label{MatchFreDir}
{f_k} = \left( {{{{\cal S}_k}\% L} - 1} \right)\frac{{{f_N}}}{L} + \overline {{f}}_k.
\end{align}
We can calculate $\theta_k$ through (\ref{Phi}). We outline the main steps of this individual estimation method for partial channels, named algorithm JDFPI, in Table \ref{Alg4}; a code sketch of the pairing step follows the table.
\begin{table}[!t]
\renewcommand{\arraystretch}{1.0}
\caption{\textbf{Algorithm JDFPI}}\label{Alg4}
\label{table_example}
\centering
\begin{tabularx}{8.4cm}{lX}
\toprule
1)&According to (\ref{ASub}), obtain $\phi$ by applying MUSIC, ESPRIT, and so on;\\
2)&Compute ${ {{ {\bf{Z}} }} }\left( f \right)$ according to (\ref{ALS});\\
3)&Apply the CTF algorithm to solve (\ref{BSub}) to obtain $\Omega $;\\
4)&Compute ${\left( {{\overline {\bf{X}} _1}} \right)_\Omega }\left( f \right)$ according to (\ref{BLS});\\
5)&Determine the support index $\mathcal{S}$ according to (\ref{Supp}); \\
6)&Compute ${\overline {\bf{S}} _\mathcal{S} }\left( f \right)$ according to (\ref{CS}); \\
7)&Determine ${\overline f _k}$ through $\overline{\bf{S}}_\mathcal{S}\left( f \right)$ by applying MUSIC, ESPRIT, and so on; \\
8)&Acquire ${ f _k}$ according to (\ref{MatchFreDir}); \\
9)&Calculate $\theta_k$ through (\ref{Phi});\\
\bottomrule
\end{tabularx}
\end{table}
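As noted above, the pairing step 5) of JDFPI is simply a row-wise maximisation over the cross-correlation magnitudes in (\ref{Rpm})--(\ref{Supp}). A minimal sketch of our own (variable names are illustrative; $\Omega$ follows the $1$-based convention of (\ref{Supp})):
\begin{verbatim}
# Sketch: support matching. Z is K x N samples of Z(f); X_omega is
# c x N samples of (X_1)_Omega(f); Omega holds the c candidate
# slice indices (1-based, as in the text).
import numpy as np

def support_index(Z, X_omega, Omega, L):
    Omega = np.asarray(Omega)
    R = Z @ X_omega.conj().T / Z.shape[1]  # sample cross-correlation
    j = np.argmax(np.abs(R), axis=1)       # dominant column per row
    i = np.arange(1, Z.shape[0] + 1)       # i = 1..K
    return (i - 1) * L + Omega[j]          # S_i = (i-1)L + Omega_j
\end{verbatim}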
\subsection{Algorithm Based on Subspace Decomposition}
If we calculate the covariance matrix of ${\bf{W}}\left( f \right)$ and take advantage of subspace decomposition theory as in \cite{Schmidt1986}, we reach a similar conclusion:
\begin{align}\label{Orth}
{{\bf{a}}_l}\left( \phi \right) \bot {{\bf{U}}_N},
\end{align}
where ${{\bf{U}}_N}$ is the noise subspace. The difference is that ${{\bf{a}}_l}\left( \phi \right) = {\bf{J}}\left( {{\bf{a}}\left( \phi \right) \otimes {{\bf{B}}_l}} \right)$. Similarly, we can execute the steps of Algorithm JDFSD \cite{Liu2016}; it is worth pointing out that ${\bf{G}}$ and ${\bf{Y}}\left( f \right)$ in \cite{Liu2016} need to be replaced by ${\bf{H}}$ and ${\bf{W}}\left( f \right)$, respectively. We name this method the subspace-decomposition-based algorithm for partial channels (JDFSDPJ).
\subsection{Performance Analysis: Cram\'{e}r\text{-}Rao Bound}
Comparing the model (\ref{CS}) and model (11) in \cite{Liu2016} and noticing that (\ref{Noise}) holds, and making use of the conclusion of Section V equation (29) in \cite{Liu2016}, we have
\begin{align}\label{CRB4Sim}
{\rm{CRB}}_{sub}(sim) &= \frac{\sigma ^2}{{2T/L}}{\left( {\Re \left( {\left( {{{\bf{E}}^{\rm{H}}}{{\bf{P}}_{{{\bf{H}}_{\mathcal{S}}}}}{\bf{E}}} \right) \odot {\bf{R}}_{\overline {\bf{S}} }^{\rm{H}}} \right)} \right)^{ - 1}}\nonumber \\
&= \frac{{{\sigma ^2}}}{{2T}}{\left( {\Re \left( {\left( {{{\bf{E}}^{\rm{H}}}{{\bf{P}}_{{{\bf{H}}_{\mathcal{S}}}}}{\bf{E}}} \right) \odot {\bf{R}_{\bf{S}}^{\rm{H}}}} \right)} \right)^{ - 1}},
\end{align}
where ${{\bf{P}}_{{{\bf{H}}_{\mathcal{S}}}}} = {\bf{I}} - {{\bf{H}}_{\mathcal{S}}}{\bf{H}}_{\mathcal{S}}^\dag $, where ${\bf{H}}_{\mathcal{S}}^\dag = {\left( {{\bf{H}}_{\mathcal{S}}^{\rm{H}}{{\bf{H}}_{\mathcal{S}}}} \right)^{ - 1}}{\bf{H}}_{\mathcal{S}}^{\rm{H}}$, ${\bf{E}} = \left[ {{{\bf{E}}_1}, \cdots ,{{\bf{E}}_K}} \right]$, ${{\bf{E}}_i} = \frac{{d{{{\bf{H}}_{{\mathcal{S}}_i}}}}}{{d{\phi _i}}}$.
\section{Simulation}
In this section, we present numerical simulation results to illustrate the performance of the proposed algorithms. For the sake of comparison, we take JDFSD in \cite{Liu2016} as a representative of the full structure, as JDFSD and JDFTD have the same performance. We set the receiver structure as in \cite{Liu2016}, and take all branches of the 1st sensor and the 1st branch of all sensors as our simplified structure. For the same reason mentioned in \cite{Liu2016}, we will only give the phase estimation simulation results rather than the DOA estimation results in these simulations.
\subsection{Performance with noise}
Firstly, we show that our model can be solved by the proposed algorithms at different noise levels. In this subsection, the simulation scenario is the same as in section VI-A of \cite{Liu2016}.
Fig.\ref{figPS} and Fig.\ref{figFS} depict the RMSE versus SNR in terms of spatial phase and frequency estimation, respectively. Fig.\ref{figPS} shows that the phase estimation performance of algorithms JDFSDPJ and JDFPI improves with SNR, where JDFSDPJ achieves $\textrm{CRB}_{sub}(Sim)$. The phase estimation performance of JDFSDPJ is better than that of JDFPI because it jointly uses the information in the frequency domain and the spatial domain. We also observe that $\textrm{CRB}_{sub}(Sim)$ lies between $\textrm{CRB}_{sub}$ and $\textrm{CRB}_{Ny}$. It is obvious that $\textrm{CRB}_{sub}(Sim)$ is higher than $\textrm{CRB}_{sub}$; on the other hand, the simplified structure jointly uses information from the frequency domain and the spatial domain, which is why it still improves on $\textrm{CRB}_{Ny}$ although it takes far fewer samples than Nyquist sampling. Fig.\ref{figFS} demonstrates that the frequency estimation performances of JDFSDPJ and JDFPI achieve $\textrm{CRB}_{sub}(Sim)$, which is certainly higher than $\textrm{CRB}_{sub}$ because fewer branches are used.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{PhiVsSNR.eps}
\caption{RMSE of phase estimates versus SNR.}
\label{figPS}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{FreVsSNR.eps}
\caption{RMSE of frequency estimates versus SNR.}
\label{figFS}
\end{figure}
\subsection{Performance with various signal number}
In this subsection, we investigate the estimation performance when the signal number changes, as in section VI-C of \cite{Liu2016}.
Fig.\ref{figPK} shows that the phase (DOA) estimation performance of algorithm JDFSDPJ is only slightly influenced by the signal number and achieves $\textrm{CRB}_{sub}(Sim)$, whereas JDFPI is more strongly influenced by the signal number. This is because the former jointly uses the information from the frequency domain and the spatial domain; to some degree, it maintains the same robustness to the number of signals as JDFSD. As expected, the performance of JDFSDPJ is still worse than $\textrm{CRB}_{sub}$. Fig.\ref{figFK} shows that the frequency estimation performances of algorithms JDFSDPJ and JDFPI are not influenced by the signal number and can reach $\textrm{CRB}_{sub}(Sim)$.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{PhiVsK.eps}
\caption{RMSE of phase estimates versus number of source.}
\label{figPK}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{FreVsK.eps}
\caption{RMSE of frequency estimates versus number of source.}
\label{figFK}
\end{figure}
\section{Conclusions}
In this paper, we designed a simplified array receiver architecture by introducing sub-Nyquist sampling technology, and realized joint DOA and frequency estimation at a lower sampling rate. Although the estimation precision using partial channels is worse than that using full channels, the former has a lower equivalent sampling rate and hardware complexity, and increasing the sensing time will enhance its estimation performance. The simulations demonstrated that the joint algorithms can closely match the CRB across noise levels and source numbers.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
|
1,116,691,497,360 | arxiv | \section{Introduction}
The space of stability conditions $\Stab(\D)$ of a triangulated category $\D$ was introduced in \cite{B}. As a set it has a description as the pairs $(\A, Z)$ where $\A$ is the heart of a t-structure on $\D$, and $Z: K(\A) \isom K(\D) \to \C$ is a stability function on $\A$ known as the central charge. As the forgetful map $\Stab(\D) \to \Hom(K(\D), \C)$ remembering just the central charge is a local homeomorphism \cite[Prop 6.3]{B}, $\Stab(\D)$ has the structure of a complex manifold. It carries an action of the group of autoequivalences $\Aut(\D)$ and a free action of $\C$ for which $\Z \subset \C$ acts as the autoequivalence $[1]$, the shift functor of $\D$.
In this paper we compute a connected component $\Stab^0(\D)$ of the space of stability conditions of $\D = \D_{fd}(G A_2)$, the derived category of finite dimensional modules over the Ginzburg dg algebra of the $A_2$ quiver. This is a $\CY_3$ triangulated category generated (cf \cite[Sect 2]{KYZ}) by two objects $S$ and $T$ with
\[
\Hom(S,S) \isom \C \isom \Hom(T,T) \qquad \Ext^1(S,T) \isom \C
\]
We will call the heart $\A^0$ consisting of all modules supported in degree zero the standard heart. It is equivalent to the abelian category of finitely generated modules over the path algebra of the $A_2$ quiver, and its two simple objects are $S$ and $T$. We study the connected component $\Stab^0(\D)$ which contains stability conditions supported on the standard heart $\A^0$.
In section two we study the subquotient $\Aut^0(\D)$ of $\Aut(\D)$ of those autoequivalences preserving the connected component $\Stab^0(\D)$ modulo those which act trivially on it. We show that the set of hearts supporting a stability condition in $\Stab^0(\D)$ is an $\Aut^0(\D)$-torsor and deduce that
\begin{thm} \label{aut}
$\Aut^0(\D)$ is isomorphic to the braid group $\Br_3$ on three strings.
\end{thm}
In section three we show how to define central charges using periods of a meromorphic differential $\lambda$ on the universal family of framed elliptic curves $\E \to \widetilde{\M}$. Restricted to a fibre $E$, $\lambda$ has a single pole of order $6$ at the marked point $p$ and double zeroes at each of the half-periods. Using the framing $\{\alpha, \beta\}$ and the basis $\{[S], [T]\}$ of $K(\D)$ to identify the lattices $H_1(E \less p, \Z) \isom K(\D)$, we prove
\pagebreak
\begin{thm} \label{stab}
There is a biholomorphic map
\begin{center}
\begin{tikzpicture}[node distance=2cm, auto]
\node (A) {$\widetilde{\M}$};
\node (B) [right of=A] {$\Stab^0(\D)/ \C$};
\node (D) [below of=B] {$\Pb\Hom(K(\D),\C)$};
;
\draw[->] (A) to node {$f$} (B);
\draw[->] (A) to node [anchor=east] {$[\int_\alpha \lambda : \int_\beta \lambda] \: \:$} (D);
\draw[->] (B) to node {$[Z(S):Z(T)]$} (D);
\end{tikzpicture}
\end{center}
lifting the period map of $\lambda$. It is equivariant with respect to the actions of $\PSL(2,\Z)$ on the left by deck transformations and on the right by $\Aut^0(\D)/ \Z$ which are both determined by their induced actions on $K(\D)$.
\end{thm}
As a corollary we obtain a $\Br_3$-equivariant biholomorphism from the universal cover of the $\C^*$-bundle $L^*$ of non-zero holomorphic differentials on $\M$ to $\Stab^0(\D)$.
\begin{rmk} \label{painleve}
In \cite{vdPS} the authors list 9 families of rank two connections on $\Pb^1$ having at least one irregular singularity which have precisely a one-parameter family of isomonodromic deformations described by one of the Painlev\'{e} equations. To each such family we associate a quiver $Q$ as in \cite{GMN}, where $Q = A_2$ corresponds to the family whose isomonodromic deformations are given by solutions to the first Painlev\'{e} equation. It is anticipated that similar considerations to those of this paper will give a description of the space of numerical stability conditions of $\D_{fd}(GQ)$ as the universal cover of a $\C^*$-bundle of meromorphic differentials over a moduli space of elliptic curves. We intend to return to this in future work.
\end{rmk}
The author would like to thank his PhD supervisor Tom Bridgeland for suggesting the problem and for many helpful discussions, and the EPSRC for financial support.
\section{Autoequivalences}
In this section we prove Theorem \ref{aut}. We show that every heart supporting a stability condition in $\Stab^0(\D)$ is a translate of the standard heart $\A^0 = \Mod(\C A_2)$ by a composite of a spherical twist and the shift functor $[1]$. We deduce that every element of $\Aut^0(\D)$ is expressible in this way. The group of spherical twists $\Sph(\D)$ is a subgroup of $\Aut^0(\D)$ of index five, and we use a result of Seidel-Thomas that $\Sph(\D) \isom \Br_3$ to deduce that $\Aut^0(\D) \isom \Br_3$, the braid group on three strings.
\begin{defn}
An object $X \in \D$ is spherical if $\Hom_{\D}(X, X) \isom \C \oplus \C[-3]$. For $X$ spherical there is a twist functor $\Phi_X$ such that
\[
\Phi_X(Y) = \Cone(X \otimes \Hom(X,Y) \to Y)
\]
\end{defn}
There are two spherical objects $S$ and $T$ in $\D$ which are the simple objects in the standard heart $\A^0$. They form an $A_2$-collection \cite[Def 1.1]{ST} as $\Ext^1(S,T) \isom \C$.
\begin{thm} \cite[Thms 1.2, 1.3]{ST}
The spherical twists $\Phi_S, \Phi_T$ satisfy the braid relations
\[
\Phi_S \Phi_T \Phi_S = \Phi_T \Phi_S \Phi_T
\]
and generate a subgroup $\Sph(\D)$ of the group of autoequivalences $\Aut(\D)$ isomorphic to the braid group on three strings $\Br_3$.
\end{thm}
The braid group $\Br_3$ has the following presentation by generators and relations \cite[Sect 1.14]{KT}
\[
\langle \sigma_1, \sigma_2 \: | \: \sigma_1 \sigma_2 \sigma_1 = \sigma_2 \sigma_1 \sigma_2 \rangle
\]
Its centre is the infinite cyclic subgroup generated by the element $u = (\sigma_1 \sigma_2)^3$ \cite[Thm 1.24]{KT} giving us the short exact sequence
\[
1 \to \Z \to \Br_3 \to \PSL(2, \Z) \to 1
\]
where the quotient map sends the generators $\sigma_1, \sigma_2$ to
\[
\left ( \begin{array}{cc} 1 & 1 \\
0 & 1 \\
\end{array} \right )
\qquad \left ( \begin{array}{cc} 1 & 0 \\
-1 & 1\\
\end{array} \right )
\]
We note that the action of a spherical twist $\Phi_X$ on $K(\D)$ is given by the formula
\[
\Phi_X([Y]) = [Y] - \chi(X,Y) [X]
\]
and so the map $\Sph(\D) \to \PSL(2,\Z)$ sends a spherical twist to the matrix given by its action on the lattice $K(\D)$ with respect to the basis $\{[S], [T]\}$.
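As a direct check, assuming nothing beyond the two matrices displayed above, the braid relation and the action on $K(\D)$ of the element $(\sigma_1 \sigma_2)^3$ can be verified by matrix multiplication:
\[
\sigma_1 \sigma_2 \sigma_1 = \sigma_2 \sigma_1 \sigma_2 = \left ( \begin{array}{cc} 0 & 1 \\
-1 & 0 \\
\end{array} \right )
\qquad (\sigma_1 \sigma_2)^3 = \left ( \begin{array}{cc} -1 & 0 \\
0 & -1 \\
\end{array} \right )
\]
In particular $(\sigma_1 \sigma_2)^3$ acts as $-1$ on $K(\D)$, consistent with the identification $(\Phi_S \Phi_T)^3 = [-5]$ proved below, since the shift $[1]$ acts as $-1$ on $K(\D)$.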
We now study the combinatorial backbone of the space of stability conditions, namely (a connected component of) the \emph{exchange graph} of hearts of $\D$.
\begin{defn} We say $\A'$ is a simple tilt of $\A$ at $S$ if either
\begin{itemize}
\item $\A'$ is the left tilt of $\A$ with respect to the torsion pair
\[
\T = \langle S \rangle = \{ S^{\oplus n} \: | \: n \in \N_0\} \qquad \F = \{X \: | \: \Hom_{\A}(S,X)=0\}
\]
\item $\A'$ is the right tilt of $\A$ with respect to the torsion pair
\[
\T = \{X \: | \: \Hom_{\A}(X,S)=0 \} \qquad \F = \langle S \rangle = \{ S^{\oplus n} \: | \: n \in \N_0\}
\]
\end{itemize}
\end{defn}
The relevance of simple tilts is that they occur precisely at the codimension $1$ components of the boundary of the space of stability conditions $U(\A)$ supported on a given heart $\A$ by \cite[Lemma 5.5]{Br2}. Thus $\Stab(\D)$ is glued together from the $U(\A)$ according to the exchange graph.
\begin{defn}
The exchange graph $\EG(\D)$ of $\D$ has vertices the set of hearts $\A \subset \D$ and an edge between any two hearts related by a simple tilt. Define $\EG^0(\D)$ to be the connected component containing the standard heart $\A^0$.
\end{defn}
\noindent We compute the four simple tilts of the standard heart $\A^0$.
\begin{prop}
Denote by $E$ and $X$ the unique non-trivial extensions up to isomorphism of $S$ by $T$ and $T$ by $S[1]$ respectively. Let $(A,B)_C$ denote the abelian category generated by two simple objects $A$ and $B$ having a unique up to isomorphism non-trivial extension $C$ of $B$ by $A$, so that the standard heart $\A^0 = (T,S)_E$. Then
\begin{align*}
& R_S(\A^0) = (S[1], T)_{X[1]} & L_T(\A^0) &= (T[-1], S)_{X} \\
& R_T(\A^0) = (E, T[1])_S & L_S(\A^0) &= (S[-1], E)_T
\end{align*}
Moreover the tilted hearts are obtained from $\A^0$ by applying the following autoequivalences.
\begin{align*}
& R_S(\A^0) = (\Phi_T \Phi_S \Phi_T) [3] \;(\A^0) & L_T(\A^0) &= ((\Phi_T \Phi_S \Phi_T) [3])^{-1} \;(\A^0) \\
& R_T(\A^0) = (\Phi_S \Phi_T) [2] \; (\A^0) & L_S(\A^0) &= ((\Phi_S \Phi_T) [2])^{-1} \;(\A^0)
\end{align*}
\end{prop}
We will prove the statement about the left tilt at $T$, the rest being similar. The torsion pair in this case is
\[
\T = \langle T \rangle \qquad \F = \{X \in \A^0 \: | \: \Hom_{\A^0}(T,X)=0 \} = \langle S \rangle
\]
We will use the long exact sequence in cohomology with respect to the original t-structure $\A^0$, the groups being non-zero only in degrees $0$ and $1$.
\begin{lemma}
$T[-1]$ is simple in $L_T(\A^0)$.
\end{lemma}
\begin{proof}
Consider a short exact sequence in $L_T(\A^0)$
\[
0 \to X \to T[-1] \to Y \to 0
\]
giving a long exact sequence in $\A^0$.
\[
0 \to H^0(X) \to H^0(T[-1]) \to H^0(Y) \to H^1(X) \to H^1(T[-1]) \to H^1(Y) \to 0
\]
We have $H^0(T[-1]) = 0$ so $H^0(X) = 0$. Splitting the remaining 4-term exact sequence into two short exact sequences
\begin{align*}
0 &\to H^0(Y) \to H^1(X) \to Z \to 0 \\
0 &\to Z \to T \to H^1(Y) \to 0
\end{align*}
$Z$ is either $0$ or $T$ as $T$ is simple in $\A^0$. But there are no non-zero maps from $H^0(Y) \in \F$ to $H^1(X) \in \T$ so $H^1(X) \isom Z$ so $X$ is either $0$ or $T[-1]$ and so $T[-1]$ is simple.
\end{proof}
\begin{lemma}
$S$ is simple in $L_T(\A^0)$.
\end{lemma}
\begin{proof} As $H^1(S) =0$ we have as before
\begin{align*}
0 &\to H^0(X) \to S \to Z \to 0 \\
0 &\to Z \to H^0(Y) \to H^1(X) \to 0
\end{align*}
Thus as $S$ is simple in $\A^0$, $H^0(X)$ is either $0$ or $S$, and so $Z$ is either $S$ or $0$. Then as there are no non-zero maps from $H^0(Y) \in \F$ to $H^1(X) \in \T$, $H^1(X) = 0$ and so $S$ is simple in $L_T(\A^0)$.
\end{proof}
\noindent We remark that all four simple tilts of $\A^0$ are isomorphic to $\A^0$ so the above is the local structure of the exchange graph at any vertex of the connected component $\EG^0(\D)$.
\begin{defn}
Let $\Aut^0(\D)$ be the subquotient of $\Aut(\D)$ consisting of all autoequivalences preserving the connected component $\EG^0(\D)$ of the exchange graph modulo those acting trivially on it.
\end{defn}
\noindent We will see later that in fact $\Aut^0(\D)$ is the subquotient preserving the connected component $\Stab^0(\D)/ \C$ modulo those acting trivially on it.
\begin{prop}
The vertices of the connected component $\EG^0(\D)$ of the exchange graph are a torsor for $\Aut^0(\D)$.
\end{prop}
\begin{proof}
From the above computation every heart in $\EG^0(\D)$ can be obtained by applying an autoequivalence in $\langle \Phi_S, \Phi_T, [1]\rangle$ to the standard heart $\A^0$. Thus $\Aut^0(\D)$ acts transitively on $\EG^0(\D)$ and acts freely by definition.
\end{proof}
\begin{lemma} The centre of $\Sph(\D)$ is generated by $[-5]$.
\end{lemma}
\begin{proof}
As $\Sph(\D) \isom \Br_3$ the centre is generated by $\Phi = (\Phi_S \Phi_T)^3$. We compute $\Phi$ on $S$ and $T$
\begin{align*}
S \mapsto X\phantom{[-2]} \mapsto T[-1] \mapsto T[-3] \mapsto E[-3] \mapsto S[-3] \mapsto S[-5] \\
T \mapsto T[-2] \mapsto E[-2] \mapsto S[-2] \mapsto S[-4] \mapsto X[-4] \mapsto T[-5]
\end{align*}
Thus $\Phi = [-5]$ in $\Aut^0(\D)$.
\end{proof}
We note that $\Sph(\D)$ defines a subgroup of $\Aut^0(\D)$ isomorphic to $\Br_3$. The generators $\Phi_S$ and $\Phi_T$ are composites of two autoequivalences corresponding to simple tilts, e.g. $\Phi_S^{-1} = (\Phi_T \Phi_S \Phi_T [3]) (\Phi_S \Phi_T [2])$ and so preserve the connected component of the exchange graph. If an element of $\Sph(\D)$ acts trivially on $K(\D)$ then it belongs to the centre which we have just seen is generated by a non-trivial element of $\Aut^0(\D)$ so the only element of $\Sph(\D)$ acting trivially is the identity.
\begin{thm} The map $\Br_3 \to \Aut^0(\D)$ given by $(\sigma_1, \sigma_2) \mapsto (\Phi_S [1], \Phi_T [1])$ is an isomorphism.
\end{thm}
\begin{proof}
As the exchange graph is an $\Aut^0(\D)$-torsor we know that $\Aut^0(\D) =\langle \Phi_S, \Phi_T, [1]\rangle$. As the shift functor commutes with the spherical twists, we find that $((\Phi_S [1])(\Phi_T [1]))^3 = [-5][6] = [1]$ so $\Aut^0(\D) = \langle \Phi_S [1], \Phi_T [1] \rangle$. These two generators satisfy the braid relation as $\Phi_S, \Phi_T$ do.
Now consider a word $w$ in the generators $\Phi_S[1], \Phi_T[1]$ and their inverses which is equal to the identity of $\Aut^0(\D)$. As $[1]$ is in the centre of $\Aut^0(\D)$, we have $\Phi_S^{n_1} \ldots \Phi_T^{n_k} = [-1]^{\sum n_i}$ in $\Sph(\D)$. By the above lemma the centre of $\Sph(\D)$ is generated by $(\Phi_S \Phi_T)^3 = [-5]$, so the right hand side is equal to $[-5]^{(\sum_i n_i)/5}$. As the braid relation is homogeneous, every element of $\Sph(\D)$ has a well-defined word length in the generators $\Phi_S$ and $\Phi_T$. But applying the word length homomorphism gives $\sum_i n_i = \frac{6}{5} \sum_i n_i$ so $\sum_i n_i = 0$. Thus the relations satisfied by the generators $\Phi_S [1], \Phi_T [1]$ of $\Aut^0(\D)$ are precisely those satisfied by the generators $\Phi_S, \Phi_T$ of $\Sph(\D)$.
\end{proof}
To complete the picture we show that $\Sph(\D)$ is a normal subgroup of index $5$.
\begin{prop}
There is a short exact sequence
\[
1 \to \Sph(\D) \to \Aut^0(\D) \to \Z/5\Z \to 1
\]
where the quotient map $l$ is the modulo $5$ word length map in the generators $\Phi_S [1]$ and $\Phi_T [1]$
\end{prop}
\begin{proof}
$\Sph(\D)$ is in the kernel of $l$ as
\[
l(\Phi_S) = l(\Phi_S[1]) - l([1]) = 1-6 = 0
\]
Conversely the smallest power of $[1]$ in the kernel is $[5]= (\Phi_S \Phi_T)^{-3} \in \Sph(\D)$ and so as $\Aut^0(\D) = \langle \Phi_S, \Phi_T , [1] \rangle$ the kernel is contained in $\Sph(\D)$.
\end{proof}
\begin{rmk}
By Sabidussi's Theorem \cite[Thm 4]{Sab}, $\EG^0(\D)$ is isomorphic to the Cayley graph of the braid group $\Br_3$ with respect to the generators $\Delta = (\Phi_T \Phi_S \Phi_T) [3]$ and $\Sigma=(\Phi_S \Phi_T) [2]$ which give the simple tilted hearts. Indeed this gives an alternative presentation of $\Br_3$ \cite[Sect 1.14]{KT}
\[
\langle \Sigma, \Delta \: | \: \Sigma^3 = \Delta^2 \rangle
\]
The quotient of $\EG^0(\D)$ by $\Sph(\D)$ is the $A_2$ cluster exchange graph which is isomorphic to the Cayley graph of $\Z/{5\Z}$. This recovers a special case of a result of Keller and Nicolas \cite[Thm 5.6]{K}.
\end{rmk}
\section{Stability conditions}
In this section we prove Theorem \ref{stab}. We derive the Picard-Fuchs equations satisfied by the periods of the family of meromorphic differentials $\lambda$ on the fibres $E$ of the universal family of framed elliptic curves $\E \to \widetilde{\M}$. Identifying the lattices $H_1(E, \Z) \isom K(\D)$, the image in $\Pb\Hom(K(\D), \C)$ of a certain branch of the period map is a double of the Schwarz triangle with angles $(\pi, \pi/3, \pi/2)$. We show that this coincides with the image under the local homeomorphism $\bar{Z}: \Stab^0(\D) / \C \to \Pb \Hom(K(\D), \C)$ of a fundamental domain for the action of $\Aut^0(\D)/ \Z \isom \PSL(2, \Z)$ on $\Stab^0(\D) / \C$. We use our understanding of the exchange graph of $\D$ to lift the period map to our desired biholomorphism $f: \widetilde{\M} \to \Stab^0(\D) / \C$.
\begin{defn}
On an elliptic curve $y^2 = z^3 + az+ b$ define the meromorphic differential $\lambda = y \: dz$
\end{defn}
This has a pole of order $6$ at the point at infinity and double zeroes at each of the three other branch points of $y$. This is the divisor of the function $y^2$. It is the unique differential up to scale with this property as the above divisor has degree zero.
Define the coordinates $j$ and $u$ on $\widetilde{\M}$ by
\begin{equation} \label{u}
J = 1728/j \qquad j = 4u(1-u)
\end{equation}
where $J$ denotes the usual $J$-invariant. We note that the family of differentials $\lambda= \sqrt{z^3 -3z + (4u - 2)} \: dz$ satisfies $\del_u \lambda = 2 \omega$, so that $u$-derivatives of periods of $\lambda$ are proportional to periods of $\omega$, where $\omega = dz/y$ is the family of holomorphic differentials on $\widetilde{\M}$. Using this we show that the periods of $\lambda$ satisfy hypergeometric equations in $u$ and $j$.
\begin{defn}
A hypergeometric differential equation is a second order ordinary differential equation on $\Pb^1$ of the form
\[
w(1-w) f'' + (\gamma - (\alpha + \beta + 1) w )f' - \alpha \beta f = 0
\]
with $\alpha, \beta, \gamma \in \R$.
\end{defn}
It has regular singularities at $0$, $\infty$ and $1$ with exponents
\[
\lambda = 1 - \gamma \qquad \mu = \alpha - \beta \qquad \nu = \gamma - \alpha - \beta
\]
\begin{lemma}
The periods of $\lambda$ satisfy the hypergeometric equation in $j$ with exponents $(1,\frac{1}{3},\frac{1}{2})$
\end{lemma}
\begin{proof}
Suppose the periods of $\lambda$ satisfy the hypergeometric equation in $u$
\[
u(1-u) \del^2_u f + (\gamma - (\alpha + \beta + 1) u ) \del_u f - \alpha \beta f = 0
\]
Taking the derivative with respect to the independent variable $u$, we find that the periods of $\omega$ must satisfy
\[
u(1-u) \del^2_u f + (1-2u)\del_u f + (\gamma - (\alpha + \beta + 1) u )\del_u f -(\alpha +\beta+1) f - \alpha \beta f = 0
\]
which is hypergeometric of the form
\[
u(1-u) \del^2_u f + ((\gamma+1) - ((\alpha+1) + (\beta+1) + 1) u ) \del_u f - (\alpha+1) (\beta+1) f = 0
\]
It is well known that the periods of $\omega$ satisfy the hypergeometric equation in $j$ with exponents $(0, \frac{1}{3}, \frac{1}{2})$. Then by the quadratic transformation law for the change of variable given above \cite[Eq 2]{V}, they satisfy the hypergeometric equation in $u$ with exponents $(0, \frac{1}{3}, 0)$. By the above computation, the periods of $\lambda$ satisfy the hypergeometric equation in $u$ with exponents $(1, \frac{1}{3},1)$, and so reversing the change of variable gives the result.
\end{proof}
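Concretely, the exponent formulas above pin down the parameters of the hypergeometric equation satisfied by the periods of $\lambda$ in $j$: up to the usual ambiguity of swapping $\alpha$ and $\beta$ and changing the signs of the exponent differences, one consistent choice for the exponents $(1,\frac{1}{3},\frac{1}{2})$ is
\[
1 - \gamma = 1 \qquad \alpha - \beta = \tfrac{1}{3} \qquad \gamma - \alpha - \beta = \tfrac{1}{2}
\]
giving $\gamma = 0$, $\alpha = -\tfrac{1}{12}$ and $\beta = -\tfrac{5}{12}$.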
\begin{rmk}
The coordinate transformation (\ref{u}) defines a double cover $B \to M_{1,1}$ of the coarse moduli space of elliptic curves. There is a family of elliptic curves on $B$ whose total space is the complement of the three singular fibres of types $(I_1, I_1, II^*)$ over $u=0,1$ and $\infty$ respectively of a rational elliptic surface $\Sigma_u \to \Pb^1_u$. This is the smooth part of Hitchin's fibration of the moduli space of meromorphic $\SU(2)$-Higgs bundles on $\Pb^1_z$ with a single pole of order $4$ at $z =\infty$ whose leading term is nilpotent. The meromorphic differential $\lambda$ is the Seiberg-Witten differential of this integrable system, that is the exterior derivative of $\lambda$ defines a holomorphic symplectic form on $\Sigma$.
In fact $\Sigma$ is a hyperk\"{a}hler manifold \cite{W}, which was studied in \cite[Sect 9.3.3]{GMN}. In another complex structure $\Sigma$ is isomorphic to the moduli space of flat $\SL(2, \C)$-connections on $\Pb^1_z$ with a single pole at $z= \infty$ of Katz invariant $5/2$. This complex manifold was studied in \cite{S,vdPS} as the moduli space of initial conditions of the first Painlev\'{e} equation (cf.\ Remark \ref{painleve}). Its image under the Riemann-Hilbert map is an affine cubic surface which is isomorphic as a complex variety to the cluster variety of type $A_2$.
\end{rmk}
Now consider the moduli space of elliptic curves $\M \isom \Pb(2,3) \less \{ \circ\}$ where $\circ$ is the point corresponding to $j= 0$. We make branch cuts on $\M$ along the line $\Im(j) = 0$ between $\circ$ and each of the $\Z_2$ and $\Z_3$ orbifold points $\times$, $*$ at $j=1, \infty$. We deduce the image of this branch of the period map $p$ of $\lambda$ from the Schwarz triangle theorem.
\begin{thm} \cite[p 206]{N}
Suppose $f_1$, $f_2$ are linearly independent solutions to the hypergeometric equation with exponents $(\lambda, \mu, \nu)$. Suppose further that their ratio $s= f_1/ {f_2}$ restricted to the upper-half plane $\h \subset \Pb^1 \less \{0, \infty, 1\}$ is an injection. Then $s$ maps $\h$ biholomorphically onto the interior of a curvilinear triangle $\Delta_{\lambda, \mu, \nu}$ of angles $(\lambda \pi, \mu \pi, \nu \pi)$.
\end{thm}
The image is determined up to a M\"{o}bius map and so specified uniquely by the positions of the three vertices of the triangle $\Delta$. By the Schwarz reflection principle we have
\begin{cor}
The image $\lozenge = p(\M)$ is the double of the curvilinear triangle $\Delta_{1, \frac{1}{3}, \frac{1}{2}}$ along the edge connecting the image of the two orbifold points $\times$ and $*$.
\end{cor}
We now define a fundamental domain $V=V(\Abar^0)$ for the action of $\Aut^0(\D) / \Z$ on $\Stab^0(\D) / \C$ which maps bijectively under the local homeomorphism $\bar{Z}$ to $\lozenge$. Although the vertices of the quotient of the exchange graph $\overline{\EG^0}(\D) = \EG^0(\D) / {\Z[1]}$ are indeed an $\Aut^0(\D)/ \Z$-torsor, the notion of a projective stability condition $\sigmabar \in \Stab^0(\D)/\C$ being supported at a given vertex $\Abar$ of $\overline{\EG^0}(\D)$ is not a priori well-defined. This is because points of $\Stab^0(\D)$ in the same $\C$-orbit can be supported on different hearts, even modulo the shift functor. We \emph{define} $\sigmabar$ to be supported on $\Abar$ using the following width function.
\begin{defn}
Define the width $\varphi$ of a stability condition $\sigma = (Z, \A) \in \Stab(\D)$ by
\[
\varphi(\sigma) = \phi^+(\sigma) - \phi^-(\sigma)
\]
where $\phi^+(\sigma)$ and $\phi^-(\sigma)$ denote the maximal and minimal phases respectively of an object in $\A$.
\end{defn}
\noindent The width is the angle of the image under $Z$ of the cone $C(\A) \subset K(\A)$ generated by classes of objects in $\A$.
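For illustration (a hedged example, in the convention $Z(\cdot) = m\,e^{i\pi\phi}$ with phases $\phi \in (0,1]$ on the heart): for the heart $\A^0$ with simple objects $S$, $T$ and central charges $Z(S) = i$, $Z(T) = -1$, we have $\phi(S) = \frac{1}{2}$ and $\phi(T) = 1$, and every object of the heart has class in the cone spanned by $[S]$ and $[T]$, so
\[
\varphi(\sigma) = \phi^+(\sigma) - \phi^-(\sigma) = 1 - \tfrac{1}{2} = \tfrac{1}{2},
\]
which is the angle $\pi/2$ of the second quadrant $Z(C(\A^0))$, measured in units of $\pi$.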
\begin{defn}
We say that $\sigmabar \in \Stab^0(\D) / \C$ is supported on $\Abar$ if the width function is minimised on a lift $\A$ of $\Abar$.
\end{defn}
\noindent Note that $\sigmabar$ is supported on more than one $\Abar$ precisely when the width function is minimised on more than one such heart. We will write $V(\Abar) \subset \Stab^0(\D) / \C$ for the subset supported uniquely on $\Abar$, whose closure $\bar{V}(\Abar)$ is the subset supported on $\Abar$.
\begin{prop}
$V = V(\Abar^0)$ is the interior of a fundamental domain for the action of $\Aut^0(\D) / \Z$ on $\Stab^0(\D) / \C$.
\end{prop}
\begin{proof}
As the vertices of $\overline{\EG}^0(\D)$ are an $\Aut^0(\D)/ \Z$-torsor, every point in the set $T=\coprod_{\Abar} V(\Abar)$ belongs to a unique $V(\Abar)$. The points $\sigmabar$ in $\bar{V} \less V$ lie on the three codimension 1 walls pictured below, where $\sigmabar$ is also supported on some other $\Abar$, for $\A$ a simple tilt of $\A^0$. These walls of the $V(\Abar)$ are locally finite, as there is only one other wall intersecting $\bar{V}$, namely $\bar{V}(L_S(\Abar^0)) \cap \bar{V}(R_T(\Abar^0))$. Thus the closure satisfies $\bar{T} = \coprod_{\Abar} \bar{V}(\Abar)$. But $\bar{T}$ is then clearly open as well as closed, and so it is the entire connected component $\Stab^0(\D)/ \C$.
\end{proof}
\begin{rmk} The above proof shows that an autoequivalence $\Phi$ which preserves $\overline{\EG}^0(\D)$ preserves the connected component $\Stab^0(\D)/ \C$. Also if $\Phi$ acts trivially on $\overline{\EG}^0(\D)$ then $\Phi$ fixes the central charge $\bar{Z}$ and so $\Phi$ acts trivially on $\Stab^0(\D)/ \C$.
\end{rmk}
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}
\draw (-1.1,0) -- (1.1,0);
\draw (0,-0.1) -- (0,2.1);
\draw [->,thick](0,0) -- (0,1.5) node [anchor=east]{$S$};
\draw [->,thick] (0,0) -- (0.5,0) node [anchor = north]{$T$};
\draw (2.9,0) -- (5.1,0);
\draw (4,-0.1) -- (4,2.1);
\draw [->,thick](4,0) -- (4.25,1) node [anchor=west]{$E$};
\draw [->,thick](4,0) -- (3.75,1) node [anchor=east]{$S$};
\draw [->,thick] (4,0) -- (4.5,0) node [anchor = north]{$T$};
\draw [->,thick, dashed](4,0) -- (4,2) node [anchor=south]{$E+S$};
\draw (6.9,0) -- (9.1,0);
\draw (8,-0.1) -- (8,2.1);
\draw [->,thick](8,0) -- (8.25,1) node [anchor=west]{$E$};
\draw [->,thick](8,0) -- (7.75,1) node [anchor=east]{$T$};
\draw [->,thick] (8,0) -- (8.5,0) node [anchor = north]{$S[-1]$};
\draw [->,thick, dashed](8,0) -- (8,2) node [anchor=south]{$E+T$};
\end{tikzpicture}
\end{center}
\caption{Typical stability conditions on the three boundary components of $V(\bar{\A^0})$. The first, $\bar{V}(\Abar^0) \cap \bar{V}(R_S(\Abar^0)) = \bar{V}(\Abar^0) \cap \bar{V}(L_T(\Abar^0))$, occurs where the only stable objects are $S$ and $T$. The other two, $\bar{V}(\Abar^0) \cap \bar{V}(R_T(\Abar^0))$ and $\bar{V}(\Abar^0) \cap \bar{V}(L_S(\Abar^0))$, lie in the region where $S, T$ and $E$ are stable.}
\end{figure}
\noindent This means that $\Stab^0(\D)/ \C$ is glued together from the $V(\Abar)$ according to the quotient of the exchange graph $\overline{\EG}^0(\D)$ just as $\Stab^0(\D)$ is glued from the $U(\A)$ according to $\EG^0(\D)$.
\begin{prop} The image of $V$ under the map $\bar{Z}$ is $\lozenge$.
\end{prop}
\begin{proof} The boundary of $V$ consists of stability conditions supported on one of the three walls which we picture below, whose image under $\bar{Z}$ is the boundary of $\lozenge$.
\end{proof}
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}
\pgfmathparse{{2}}\global\let\r\pgfmathresult
\draw (0,0) circle (\r cm) ;
\pgfmathparse{{\r*tan(67.5)}}\global\let\rt\pgfmathresult
\pgfmathparse{(\rt - sqrt(\rt^2-\r^2))}\global\let\x\pgfmathresult
\begin{scope}
\clip (0,0) circle (\r cm) ;
\fill[lightgray] (0,0) circle (\r cm) ;
\fill[white] (\rt, \r) circle (\rt cm) ;
\fill[white] (\rt, -\r) circle (\rt cm) ;
\draw (0,0) circle (\r cm) ;
\draw [dotted](-\r,0) -- (\x,0);
\draw [densely dashed](\r,0) -- (\x,0);
\begin{scope}
\clip (\x, -\r) rectangle (\r, \r);
\draw [dotted](\rt, \r) circle (\rt cm) ;
\draw [dotted](\rt, -\r) circle (\rt cm) ;
\end{scope}
\begin{scope}
\clip (-\r, -\r) rectangle (\x, \r);
\draw [densely dashed](\rt, \r) circle (\rt cm) ;
\draw [densely dashed](\rt, -\r) circle (\rt cm) ;
\end{scope}
\end{scope}
\draw (0,0) circle (\r cm) ;
\fill[white] (0, \r) circle (\r*0.03 cm) ;
\draw (0, \r) circle (\r*0.03 cm) ;
\fill[white] (0, -\r) circle (\r*0.03 cm) ;
\draw (0, -\r) circle (\r*0.03 cm) ;
\draw (0,\r) node [anchor = south] {$S$} ;
\draw (0, -\r) node [anchor=north] {$T$} ;
\draw (\r,0) node [anchor = west] {$E$} ;
\draw (\x,0) node {$*$} ;
\draw (\r,\r) node {$\h^+$} ;
\begin{scope}
\clip (-2*\r-1, 0) circle (\r cm);
\fill[lightgray] (-3*\r-1,-\r) rectangle (-2*\r-1, \r);
\draw (-2*\r-1, 0) circle (\r cm);
\draw [densely dashed] (-2*\r-1,-\r) -- (-2*\r-1,\r);
\draw (-\r-1,\r) node {$\h^-$};
\draw (-2*\r-1, 0) node {$\times$} ;
\draw [dotted] (-3*\r-1,0) -- (-2*\r-1,0);
\end{scope}
\draw (-2*\r-1, 0) circle (\r cm);
\draw (-\r-1,\r) node {$\h^-$} ;
\fill[white] (-2*\r-1, \r) circle (\r*0.03 cm) ;
\draw (-2*\r-1, \r) circle (\r*0.03 cm) ;
\fill[white] (-2*\r-1, -\r) circle (\r*0.03 cm) ;
\draw (-2*\r-1, -\r) circle (\r*0.03 cm) ;
\draw (-2*\r-1,\r) node [anchor = south] {$S$} ;
\draw (-2*\r-1, -\r) node [anchor=north] {$T$} ;
\end{tikzpicture}
\end{center}
\caption{The fundamental domain $V(\bar{\A}^0) \isom \lozenge$ under the map $\bar{Z}: \Stab^0(\D)/\C \to \Pb^1$. We picture $\Pb^1 = \h^+ \cup \h^-$ as the union of two discs where the imaginary part of the coordinate $\bar{Z}$ is positive and negative respectively. They are glued along the line $\bar{Z} \in \R$, which is the image of all walls of marginal stability in $\Stab(\D)/ \C$. The region $\h^-$ where only two objects are stable contains the first wall passing through the image of $\times$. The region $\h^+$ contains the other two walls of $V(\bar{\A}^0)$ which meet at the image of $*$. We label points on the boundary by the object whose central charge vanishes there.}
\end{figure}
{
\begin{proof}[Proof of Theorem \ref{stab}]
Using the identification $V(\A) \isom \lozenge = p(\M)$, we can extend the branch of the period map to a map $f:\widetilde{\M} \to \Stab^0(\D) / \C$ by equivariance. We only have to check continuity on the boundary of $\M$, i.e.\ that the action of the monodromy on $H_1(E, \Z)$ on crossing one of the two branch cuts in either direction is identical to the action of the four simple tilts on $K(\D)$. But these both act by
\[
\left ( \begin{array}{cc} 0 & -1 \\
1 & 0 \\
\end{array} \right )
\qquad \left ( \begin{array}{cc} 0 & 1 \\
-1 & 1\\
\end{array} \right )
\]
and their inverses.
\end{proof}
}
\begin{figure}[h] \label{orb}
\begin{center}
\begin{tikzpicture}
\draw (-1.5,0) -- (1.5,0);
\draw (0,-0.1) -- (0,1.5);
\draw [->,thick](0,0) -- (0,1) node [anchor=east]{$T$};
\draw [->,thick] (0,0) -- (1,0) node [anchor = north]{$S$};
\draw (2.5,0) -- (5.5,0);
\draw (4,-0.1) -- (4,1.5);
\draw [->,thick](4,0) -- (4.5,0.86) node [anchor=west]{$E$};
\draw [->,thick](4,0) -- (3.5,0.86) node [anchor=east]{$S$};
\draw [->,thick] (4,0) -- (5,0) node [anchor = north]{$T$};
\end{tikzpicture}
\end{center}
\caption{The image of the $\Z/2$- and $\Z/3$-orbifold points $\times$ and $*$}
\end{figure}
We denote by $L^\times$ the total space of the $\C^*$-bundle of non-zero holomorphic differentials on $\widetilde{\M}$. It is isomorphic to the complement of the discriminant locus in the space $\C^2_{a,b}$ of cubic polynomials $z^3 + az + b$. The fundamental group of $L^\times$ is isomorphic to the braid group $\Br_3$, as the discriminant locus is the cone over the trefoil knot.
\begin{cor}
There is a biholomorphic map
\begin{center}
\begin{tikzpicture}[node distance=2cm, auto]
\node (A) {$\widetilde{L^\times}$};
\node (B) [right of=A] {$\Stab^0(\D)$};
\node (D) [below of=B] {$\Hom(K(\D),\C)$};
;
\draw[->] (A) to node {$F$} (B);
\draw[->] (A) to node [anchor=east] {$(\int_\alpha \lambda, \int_\beta \lambda) \: \:$} (D);
\draw[->] (B) to node {$(Z(S),Z(T))$} (D);
\end{tikzpicture}
\end{center}
lifting the periods of the differential $\lambda$. It is equivariant with respect to the actions of $\Br_3$ on the left by deck transformations and on the right by $\Aut(\D)$.
\end{cor}
\begin{proof}
We can lift the map $f: \widetilde{\M} \to \Stab^0(\D) / \C$ to the desired $F$ by equivariance with respect to the $\C$-actions on both sides. It is a bijection as both $\C$-actions are free, and holomorphic as it is locally given by the periods of $\lambda$. We know that the two $\Br_3$-actions agree on $K(\D)$ via their maps to $\PSL(2,\Z)$, and so define identical actions on the $\C^*$-bundle $L^\times$. Also the actions of the central subgroup $\Z \subset \Br_3$ are identical by construction, as it acts as $\Z \subset \C$. But given these data, the actions are determined by a group homomorphism $\PSL(2, \Z) \to \Z$ giving a lifting of the $\Br_3$-action on the $\C^*$-bundle $L^\times$, factoring through $\PSL(2, \Z)$, to the universal cover. As the only such homomorphism is the trivial one, the two $\Br_3$-actions are identical.
\end{proof}
\bibliographystyle{hplain}
\section{Introduction}
The creation and manipulation of quantum entanglement is central to developing powerful quantum technologies~\cite{QC_Preskill,AtomInt_Cronin}. In particular, precision measurements can greatly benefit from exploiting quantum entanglement~\cite{LIGO_Tse,Gradiometry_Hardman} because non-classical states may be engineered for greater sensitivity to a parameter of interest compared to their classical counterparts~\cite{QMMet_Pezze,FermiGasMet_Luisa}. This field has seen rapid progress on several frontiers~\cite{AdvQMet_Giovannetti} including, but not limited to, experimental demonstration of atomic clock timekeeping below the shot noise limit~\cite{EntangledClock_Shu}, extensions of quantum error correction into quantum metrology schemes~\cite{QECMet_Lukin}, and machine learning and optimization for complex state preparation~\cite{RLMatter_Chih,QMVariation_Kaubruegger,OptimalMet_Marciniak} and measurement schemes~\cite{MLBayes_Pezze}. Through this rapid progress, there is the possibility that we will soon use quantum mechanical devices to probe new fundamental physics via tabletop experiments~\cite{RedShift_Bothwell,NewPhys_Safronova}.
Many state-of-the-art atom interferometry schemes rely on the process of spin squeezing~\cite{SqueezedAtom_Wineland,SpinSqueezeRydberg_Gil}, where a set of quantum spins are correlated to prepare a non-classical state that is sensitive to $SU(2)$ rotations at a precision below the standard quantum limit (SQL)~\cite{Squeezing_Ma} of $\Delta\phi^2 \propto 1/N$, where $\Delta\phi^2$ is the mean square error and $N$ is the number of particles used in the measurement. One candidate for generating this entanglement is one-axis twisting (OAT), whereby many particles become entangled in a single degree of freedom under a non-linear Hamiltonian~\cite{Squeeze_Ueda,SqueezeNoise_Wineland}. Through entangling processes such as OAT, the SQL may be surpassed and a limit in precision of $\Delta\phi^2 \propto 1/N^2$ is achievable. This limit is a result of a Heisenberg uncertainty-like principle between the operator generating the unitary and the parameter one is measuring. This limit is aptly named Heisenberg-limited scaling (HLS)~\cite{Bayes_Holland} and is the ultimate limit for metrological systems~\cite{QSensing_Degen}.
Schemes using OAT provide below-SQL improvements for single-parameter measurements, such as the angle a dipole sweeps under rotation generated by a magnetic field. These improvements are realized by sacrificing the variance of quantum fluctuations in one direction in exchange for a reduction in the variance of fluctuations in the direction we wish to measure. This hints at a natural extension of OAT: one where multiple degrees of freedom are entangled and squeezed to provide below-SQL improvements for multiple parameters simultaneously.
In this paper, we introduce a novel method for squeezing and entangling two distinct degrees of freedom: the internal energy levels of an atomic ensemble and the collective atomic momentum. As a Gedanken experiment, we consider a collimated packet of atoms passing through a cavity. The cavity-mediated emission and absorption of photons induces a twisting of the collective internal and momentum degrees of freedom, while also rapidly creating entanglement between these two degrees of freedom. The states prepared by this system could have the potential for multiparameter sensing and estimation~\cite{MultiParam_Pezze} below the SQL, squeezed state Bragg interferometry~\cite{Bragg_Shankar}, or single parameter estimation benefiting from auxiliary measurements. By analyzing the Quantum Fisher Information Matrix (QFIM) of the system, we find that the maximum metrological gain in each individual degree of freedom scales proportionally to HLS. Here, we focus on the squeezing and correlation of the collective atomic internal energy state and momentum, but we emphasize that the general process could be realized with any system having the same structure in its couplings and interactions. To this point, we discuss possible platforms which might be made to generate similar forms of entanglement in the conclusion of this paper.
The structure of this paper is as follows. In Section~\ref{sec:DerivHamil}, we cast the Hamiltonian into a form that illustrates the entanglement generating process: atomic emission and absorption of photons and the resulting momentum recoil. From this form, we show that some features may be intuitively understood as a generalization of the OAT Hamiltonian, while other important features have no analog in OAT. In Section~\ref{sec:OpAlg}, we explore the structure of the system and Hamiltonian using an underlying Lie algebra, and use these to simplify the subsequent analysis of the dynamics. In Section~\ref{sec:DynAnalysis}, we use the quantum Fisher information matrix (QFIM) to discuss the results of a numerical simulation of the time dynamics. Lastly, in Section~\ref{sec:InterScheme} we show schematically two interferometry protocols that benefit from the form of entanglement generated by this scheme.
\section{Derivation of the Hamiltonian and System Dynamics}\label{sec:DerivHamil}
We consider the Gedanken experiment depicted in Fig.~\ref{fig1}(a), where a collimated packet of atoms passes through the center of the beam waist of a linear optical cavity, similar to a pulsed version of the set up proposed in~\cite{BeamLaser_Liu}. Each atom has a mass $m$, and two relevant internal energy levels labeled the excited and ground states $\ket{e}$ and $\ket{g}$, respectively. These energy levels are separated by the transition energy $\hbar\omega_a$. We assume that the cavity supports a single optical mode with corresponding frequency $\omega_{c}$, which is far detuned from the atomic transition by an amount $\Delta = \omega_a - \omega_c$. The interaction strength between the cavity photons and the $j$\textsuperscript{th} atom is taken to be $g(x_j)= \frac{g}{2} \cos(k \hat{x}_j)$. Furthermore, we assume $N$ atoms enter the cavity with uniform velocity, and spend a time $t$ inside the light-atom interaction volume. During this interaction time, the Hamiltonian is then:
\begin{equation}
\begin{aligned}
\hat{H}=&\sum_{j=1}^N \left(\frac{\hat{p}_j^2}{2m}+\frac{\hbar\omega_a}{2}\hat{\sigma}_j^z\right)+\hbar\omega_c\hat{a}_c^\dagger\hat{a}_c^{} \\
&+\frac{\hbar g}{2} \sum_{j=1}^N \cos(k\hat{x}_j) \left(\hat{a}_c^{}\hat{\sigma}^{+}_j+\hat{a}_c^{\dagger}\hat{\sigma}^{-}_j\right),
\end{aligned}
\label{eq:Hfull}
\end{equation}
where $\hat{\sigma}^z_j=\ket{e}_j\bra{e}_j-\ket{g}_j\bra{g}_j$, $\hat{\sigma}^+_j=(\hat{\sigma}^-_j)^{\dagger}=\ket{e}_j\bra{g}_j$ are Pauli matrices for the $j^{\text{th}}$ atom, $\hat{p}_j$ ($\hat{x}_j$) is the transverse momentum (position) operator for the $j^{\text{th}}$ atom parallel to the cavity axis, and $\hat{a}_c^{\dagger}$ ($\hat{a}_c)$ is the photon creation (annihilation) operator of the cavity mode.
The two relevant processes at play are the exchange of photons between different atoms and the atom's recoil due to the emission and absorption of photons. To simplify our study of these dynamics, we first take the interaction picture with $\hat{H}_0 = \sum_{j=1}^N \hbar \omega_a \hat{\sigma}^z_j / 2 + \hbar \omega_a \hat{a}_c^\dagger \hat{a}_c $. We assume the cavity is in the dispersive regime $|\Delta| \gg \sqrt{N} g, \kappa$, where $\kappa$ is the cavity decay rate, such that we can adiabatically eliminate the cavity degrees of freedom over a coarse-grained timescale~\cite{LindbladEq_Jager}. The resultant Hamiltonian becomes
\begin{equation}
{\hat{H}}=\sum_{j=1}^N \frac{\hat{p}_j^2}{2m} + \sum_{i,j=1}^N \frac{\hbar g^2}{4 \Delta} \cos(k\hat{x}_i) \cos(k\hat{x}_j) \hat{\sigma}^{+}_i \hat{\sigma}^{-}_j.
\label{eq:Helim}
\end{equation}
The photon exchange has now been abstracted to an excitation exchange between different atoms and a resultant change in momentum. We note that the operators $\sum_{j=1}^N \cos(k\hat{x}_j) \hat{\sigma}^{\pm}_j $ cause a change in an atom's momentum by $\pm \hbar k $ upon trading an excitation, as $\exp( \pm i k \hat{x}_j )$ are the momentum shift operators. Therefore, if the atomic ensemble is prepared such that the atoms are in motional states differing in their momentum by integer multiples of $\hbar k$, the atoms will never leave this manifold under purely Hamiltonian evolution. We consider atoms in a superposition of motional states of the form $\ket{n}_j \equiv \ket{ n \hbar k / 2}_j$ for odd integers $n$. Preparation of such a state could be accomplished with a diffraction grating~\cite{atomOptics_Cronin} or via Kapitza-Dirac pulses and a trapping potential~\cite{Multimode_Smerzi}.
Lastly, we assume that $ \hbar N g^2 / (4 \Delta) \ll (\hbar k)^2/m $, such that the lowest two momentum states are far detuned from the rest of the quadratic kinetic energy spectrum, as shown in Fig.~\ref{fig1}(b). Therefore, if the atoms start in the $\ket{\pm 1}_j$ states, they will remain in the subspace spanned by these two states. Under these conditions, the total kinetic energy remains fixed at $ N (\hbar k)^2 / (8 m) $. As a result, we can ignore the constant kinetic energy.
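Explicitly, the kinetic energy gap protecting this manifold is
\[
\frac{(3\hbar k/2)^2}{2m}-\frac{(\hbar k/2)^2}{2m}=\frac{(9-1)(\hbar k)^2}{8m}=\frac{(\hbar k)^2}{m},
\]
which is the scale that the collective interaction energy $\hbar N g^2/(4\Delta)$ must remain well below for the two-state truncation to hold.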
In this regime, the momentum now has a spin-$1/2$ algebraic structure and so the atom's momentum is effectively mapped onto a two-level system. We define $\hat{s}_{j}^{+}=(\hat{s}_{j}^{-})^{\dagger}=\ket{+1}_j\bra{-1}_j,$ and $\hat{s}_j^z = \ket{+1}_j\bra{+1}_j - \ket{-1}_j\bra{-1}_j $ such that we can cast the translation operator $\cos( k \hat{x}_j ) = [ \exp(ik\hat{x}_j) + \exp(-ik\hat{x}_j) ] / 2$ in terms of spin raising and lowering operators. We note that $ e^{+ik\hat{x}_j}=(e^{-ik\hat{x}_j})^{\dagger}=\hat{s}_{j}^{+}$ in this regime and therefore $2 \cos( k \hat{x}_j ) = (\hat{s}_{j}^{+} + \hat{s}_{j}^{-}) \equiv \hat{s}^x_j $, thus we can rewrite our Hamiltonian in terms of these operators. Our simplified Hamiltonian therefore becomes
\begin{align} \label{eq:Hphys}
\hat{H} &= \chi \sum_{i,j=1}^N\hat{s}_{i}^x\hat{s}_{j}^x\hat{\sigma}_{i}^+\hat{\sigma}_{j}^- ,
\end{align}
with $\chi = \hbar g^2/(16 \Delta)$. This non-linear Hamiltonian dictates how the atoms are to be entangled via cavity mediated interactions.
From Eq.~\ref{eq:Hphys}, we see that if the atoms enter the cavity in the same momentum state, with all atoms in the state $(\ket{+1}_j+\ket{-1}_j)/\sqrt{2}$, then the dynamics are generated by $\hat{H} \approx \chi\sum_{i,j=1}^N\hat{\sigma}_{i}^+\hat{\sigma}_{j}^-\propto (\hat{J}^z)^2$, where $\hat{J}^z = \sum_j^N \hat{\sigma}_j^z/2$, and one-axis twisting is recovered. This is because the momentum flip operator, $\hat{s}^x_j$, acts trivially on an atom in the state $(\ket{+1}_j+\ket{-1}_j)/\sqrt{2}$. Physically, this is the case in which all the atoms are in the same equal superposition of the $\pm\hbar k/2$ momentum states, so the recoil from emission and absorption of light does not affect the collective momentum, while the atoms' internal degrees of freedom remain free to evolve. With a starting state such as $\ket{+}^{\otimes N} = (1/\sqrt{2})^N (\ket{e}+\ket{g})^{\otimes N}$ for the internal atomic energies, the Hamiltonian induces standard OAT behavior, leading to an effective spin squeezing. This starting state and behavior are shown in Fig.~\ref{fig1}(c), where the red arrows on the left Bloch sphere represent the action of $(\hat{J}^z)^2$.
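The reduction to one-axis twisting can be made explicit through the standard rearrangement
\[
\sum_{i,j=1}^N \hat{\sigma}_{i}^+ \hat{\sigma}_{j}^- = \hat{J}^+ \hat{J}^- = \hat{J}^2 - (\hat{J}^z)^2 + \hat{J}^z, \qquad \hat{J}^\pm = \sum_{j=1}^N \hat{\sigma}^{\pm}_j = \hat{J}^x \pm i \hat{J}^y,
\]
where $\hat{J}^2$ is constant on the fully symmetric manifold and the linear $\hat{J}^z$ term only generates a rotation, leaving $(\hat{J}^z)^2$ as the non-trivial twisting term.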
We may also consider the case that the internal degrees of freedom do not affect the dynamics. This case is not physical, but rather provides an important intuition for the behavior of the system. Here, we take $\hat{H} \approx \chi \sum_{i,j=1}^N\hat{s}_{i}^x\hat{s}_{j}^x = 4 \chi (\hat{K}^x)^2$, where $\hat{K}^x = \sum_j^N \hat{s}_j^x/2$. Although unphysical, this limit sheds light on the approximate behavior of the atomic momentum: we expect the momentum states to experience OAT-like behavior through the non-linear rotation under $(\hat{K}^x)^2$. With a starting state of $\ket{+1}^{\otimes N}$ for the momentum degrees of freedom, we would expect operators orthogonal to $\hat{K}^x$, such as $\hat{K}^z = \sum \hat{s}^z_j / 2$, to undergo twisting-like behavior. This starting state and approximate behavior are shown in Fig.~\ref{fig1}(c), where the red arrow on the right Bloch sphere represents the action of $(\hat{K}^x)^2$.
For the full Hamiltonian we expect the state $\ket{\psi}_0 = \ket{+}^{\otimes N} \otimes \ket{+1}^{\otimes N} $ to experience the corresponding spin twisting-like behavior in both degrees of freedom, and to lead to interesting entanglement between the two. In the subsequent sections, we demonstrate mathematically that this state breaks an important symmetry typically found in OAT, and then we numerically show this leads to entanglement that has potential for metrological advantage.
\begin{figure}
\includegraphics[width=1\columnwidth]{Cavity_System_BlochSpheres.pdf}
\caption{\label{fig1} (a) Schematic of the proposed setup. Here, the momentum perpendicular to the cavity controls the interaction time. The initial momentum along the cavity axis selects the manifold of momentum states that the cavity couples to. (b) The spectrum of the kinetic energy versus the spectrum of momentum states. Here, we note the $\pm 3 \hbar k / 2$ states are far from the $\pm \hbar k / 2$ states, thus demonstrating that the lowest manifold of $4$ states can be considered isolated from the rest of the quadratic spectrum. (c) The two Bloch spheres for the collective two-level system. This picture is only valid when there is no entanglement between the two degrees of freedom, but it still provides a useful picture of the approximate behavior of the system. The blue cloud is the starting state, while the green dashed line represents the approximate distribution of the final state. The final state may not be fully represented on these Bloch spheres due to entanglement breaking the initial $SU(2)\otimes SU(2)$ symmetry needed to represent states on two collective Bloch spheres. (d) The four-level system, and black arrows representing each of the three unique $\mathfrak{su}(2)$ algebras acting on the system.}
\end{figure}
\section{The Operator Algebras}\label{sec:OpAlg}
In full, it is not immediately obvious how the dynamics evolve under Eq.~\eqref{eq:Hphys}. The $\hat{s}^x_j$ operators complicate the Hamiltonian compared to the usual OAT Hamiltonian, preventing us from using methods typically used to solve OAT models. However, we can use the symmetries of the system to recast the Hamiltonian as a member of an $\mathfrak{su}(4)$ algebra, yielding a clear picture of the full dynamics and allowing for efficient numerical simulation.
The operators appearing in the Hamiltonian are all Pauli operators which correspond to a single atom's internal or momentum observable. For the $j^{th}$ atom's internal state, the operators $\{\hat{\sigma}^x_j,\hat{\sigma}^y_j,\hat{\sigma}^z_j\}$ fully describe any possible observable, where $\hat{\sigma}^x_j = \sigma^+_j + \sigma^-_j$ and $\hat{\sigma}^y_j = i( \sigma^-_j - \sigma^+_j)$. Similarly, its momentum state is fully described by $\{\hat{s}^x_j,\hat{s}^y_j,\hat{s}^z_j\}$, where $\hat{s}^y_j = i ( \hat{s}_{j}^{-} - \hat{s}_{j}^{+} )$ is needed for the momentum operators to close under commutation. The total system is then described, in part, by the collective atomic and momentum operators, $\hat{J}^i = \sum_j^N \hat{\sigma}_j^i/2$ and $\hat{K}^i = \sum_j^N \hat{s}_j^i/2$ for $i=x,y,z$ respectively. These collective atomic and momentum operators each form an $\mathfrak{su}(2)$ algebra: $\mathfrak{J}=\{\hat{J}^z,\hat{J}^\pm\}$ and $\mathfrak{K}=\{\hat{K}^z,\hat{K}^\pm\}$. These two algebras allow us to fully describe any state which is separable in the two degrees of freedom, such as the state $\ket{\psi_0}$ which is represented on two composite Bloch spheres in Fig.~\ref{fig1}(c) in blue. Importantly, we note that the momentum operator $\hat{K}^z$ corresponds to the observable for the center of mass momentum, $\hat{P}_{\rm COM} = \hbar k \hat{K}^z$, which is intuitively proportional to the difference between the numbers of atoms in the $+1$ and $-1$ eigenstates.
We can further simplify our analysis by mapping particles into the Schwinger boson representation~\cite{Schwinger}. Here we use the simultaneous eigenstates of $\hat{J}^z$ and $\hat{K}^z$ as the basis for the new representation, but in general this could be done via the procedure shown in~\cite{SUNBos_Manu}.
First, we define
\begin{equation}
\begin{aligned}
\label{eq:SchwingState}
& \ket{\alpha,\beta,\gamma,\delta} = \\
& \mathcal{S}\left(\ket{e,+1}^{\otimes \alpha} \ket{g,-1}^{\otimes \beta} \ket{e,-1}^{\otimes \gamma} \ket{g,+1}^{\otimes \delta}\right),
\end{aligned}
\end{equation}
where $\alpha+\beta+\gamma+\delta=N$ is the total number of atoms and $\mathcal{S}$ is the symmeterization operator. Note that the symmetrizer is defined with the normalization factor, shown explicitly in Appendix~\ref{sec:Normalization}, so this representation is normalized. We can represent all the relevant operators in this formalism as well by associating the annihilation (creation) operators $\hat{a},\hat{b},\hat{c},\hat{d}$ ($\hat{a}^{\dagger},\hat{b}^{\dagger},\hat{c}^{\dagger},\hat{d}^{\dagger}$) to each of the four modes, such that $\hat{a} \ket{\alpha,\beta,\gamma,\delta} = \sqrt{\alpha} \ket{\alpha-1,\beta,\gamma,\delta} $
and similarly for the other three modes as shown in Appendix~\ref{sec:create_annihilate}. Now, the number of atoms in the excited state is simply $\alpha + \gamma$ for states of the form in Eq.~\eqref{eq:SchwingState}. Therefore, we define $\hat{n}_e \ket{\alpha,\beta,\gamma,\delta} = ( \hat{a}^{\dagger}\hat{a} + \hat{c}^{\dagger}\hat{c} ) \ket{\alpha,\beta,\gamma,\delta}$. By the same process, we can recover the ground state number operator to be $\hat{n}_g = \hat{b}^{\dagger}\hat{b} + \hat{d}^{\dagger}\hat{d}$, the $+1$ momentum state number operator to be $\hat{n}_{+1} = \hat{a}^{\dagger}\hat{a} + \hat{d}^{\dagger}\hat{d}$, and the $-1$ momentum state number operator to be $\hat{n}_{-1} = \hat{b}^{\dagger}\hat{b} + \hat{c}^{\dagger}\hat{c}$. Our collective atomic and momentum operators are simple to represent in the form
\begin{equation}
\begin{aligned}
\hat{J}^z & = \frac{1}{2} ( \hat{n}_e - \hat{n}_g ), \\
\hat{K}^z & = \frac{1}{2} ( \hat{n}_{+1} - \hat{n}_{-1} ),
\end{aligned}
\end{equation}
and $\hat{J}^{-} =\hat{a}\hat{d}^\dagger + \hat{c}\hat{b}^\dagger = ( \hat{J}^{+} )^{\dagger},
\hat{K}^{-} =\hat{a}\hat{c}^\dagger + \hat{d}\hat{b}^\dagger = ( \hat{K}^{+} )^{\dagger}$. Moreover, the Hamiltonian is also simply represented,
\begin{equation}
\hat{H} = \chi ( \hat{a}^{\dagger} \hat{b} + \hat{c}^{\dagger} \hat{d} ) ( \hat{a} \hat{b}^{\dagger} + \hat{c} \hat{d}^{\dagger} ).
\end{equation}
This is intuitively what should be expected because, for example, $\hat{a} \hat{b}^{\dagger}$ is collective emission where a single atom goes from the excited, +1 motional state to a ground, -1 motional state. The other terms can be similarly understood.
Lastly, we introduce the raising and lowering operators $\hat{E}^+ = \hat{a}^{\dagger} \hat{b} + \hat{c}^{\dagger} \hat{d} = (\hat{E}^-)^\dagger $, and we notice that $ [ \hat{E}^+, \hat{E}^- ] = 2 \hat{J}^z $ and
$[ \hat{J}^z, \hat{E}^\pm ] = \pm \hat{E}^\pm .$ Thus, we see that the set $\mathfrak{E} = \{\hat{J}^z,\hat{E}^\pm\}$ forms a third closed $\mathfrak{su}(2)$ algebra on the system which succinctly represents the entanglement generating processes due to absorption and emission. The three sub-algebras $\mathfrak{J},\mathfrak{K}$ and $\mathfrak{E}$ taken together are subalgebras of a complete $\mathfrak{su}(4)$ algebra, which generates an $SU(4)$ group that efficiently describes the dynamics of this system. The action of the three sub-algebras is represented schematically in Fig.~\ref{fig1}(d) for a single atom. In summary, within the full $\mathfrak{su}(4)$ describing our dynamics, we find that there exist three $SU(2)$ subgroups each generated by $\mathfrak{J},\mathfrak{K}$, or $\mathfrak{E}$, which matches the general structure of $SU(4)$~\cite{SUN_Yukawa}. Thus, the system can be considered as a collection of hybridized angular momenta.
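These relations follow from one-line computations with the mode operators; for instance,
\[
[\hat{E}^+, \hat{E}^-] = [\hat{a}^{\dagger}\hat{b} + \hat{c}^{\dagger}\hat{d}, \, \hat{b}^{\dagger}\hat{a} + \hat{d}^{\dagger}\hat{c}] = (\hat{a}^{\dagger}\hat{a} - \hat{b}^{\dagger}\hat{b}) + (\hat{c}^{\dagger}\hat{c} - \hat{d}^{\dagger}\hat{d}) = \hat{n}_e - \hat{n}_g = 2\hat{J}^z,
\]
since commutators between operators acting on disjoint modes vanish.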
We can take advantage of the commutation structure in $\mathfrak{E}$ to simplify the Hamiltonian even further,
\begin{equation} \begin{aligned} \label{eq:HSU4}
\hat{H} &= \chi \hat{E}^+ \hat{E}^- \\
&= \chi ( \hat{E}^2 - (\hat{J}^z)^2 + \hat{J}^z ),
\end{aligned}
\end{equation}
where $ \hat{E}^2 = \hat{E}^+ \hat{E}^- + (\hat{J}^z)^2 - \hat{J}^z $ is the quadratic Casimir operator~\cite{RepThrySU2_Rocek} for $\mathfrak{E}$. Now, Eq.~\eqref{eq:HSU4} looks like the familiar form of an OAT Hamiltonian, except for the important difference that $\hat{K}^y$ and $\hat{K}^z$ do not commute with $\hat{E}^2$. This means there exist states which are eigenstates of $\hat{K}^z$ that evolve non-trivially under the operator $\hat{E}^2$, such as the starting state discussed at the end of Section~\ref{sec:DerivHamil}. Furthermore, we can observe that the operator $\hat{E}^2$ has shells corresponding to each of its eigenvalues, similar to the shells typically defining eigenvalues for total angular momentum observables. The starting state $\ket{\psi_0}$ is a superposition over these shells and, with $\hat{E}^2$ contributing non-trivially to the dynamics, each of the three pseudo-angular momentum subgroups experiences a twisting under this Hamiltonian.
\section{Analysis of the Dynamics and Entanglement Generation}\label{sec:DynAnalysis}
Now we use the Schwinger boson representation introduced in Section~\ref{sec:OpAlg} to numerically simulate the system and explore the dynamics. For these simulations we assume the cavity decay rate $\kappa$ and other dissipative processes are negligible. The effects of decoherence are left to future work.
To simulate the system, we use the four annihilation/creation operators found in the previous section, and model the atomic system as a system of four harmonic oscillators. The Hilbert space of four harmonic oscillators has a dimensionality of $(N+1)^4$, containing all states with atom numbers between $0$ and $4N$. We may use either of the conditions $\hat{n}_e+\hat{n}_g = N$ or $\hat{n}_{+1}+\hat{n}_{-1} = N$ to project onto only the states with $N$ atoms. This corresponds to restricting to only those states which may be reached by the $SU(4)$ action, and the typical argument of placing $N$ indistinguishable atoms in four distinguishable states shows that the number of such states is $(N+1)(N+2)(N+3)/6$. This matches the dimensionality of the basis states with an $SU(4)$ symmetry, given in Ref.~\cite{SU4_Xu}, and is numerically more efficient than the initial $(N+1)^4$ scaling.
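As an orientation for how such a simulation can be set up, the following minimal sketch (not the code used to produce the figures in this paper) builds the four-mode representation with the open-source QuTiP library; the atom number $N$, the coupling $\chi$ and the time grid are illustrative placeholders, and for simplicity the sketch works in the full $(N+1)^4$ Fock space, relying on atom-number conservation instead of an explicit projection onto the $N$-atom sector:
\begin{verbatim}
import numpy as np
from qutip import destroy, qeye, tensor, basis, sesolve

N, chi = 4, 1.0                      # illustrative atom number and coupling

def mode(i):
    # annihilation operator acting on the i-th of the four bosonic modes
    ops = [qeye(N + 1)] * 4
    ops[i] = destroy(N + 1)
    return tensor(ops)

# mode ordering: a=|e,+1>, b=|g,-1>, c=|e,-1>, d=|g,+1> (Sec. III)
a, b, c, d = [mode(i) for i in range(4)]

Ep = a.dag()*b + c.dag()*d           # E^+ ;  H = chi * E^+ E^-
H = chi * Ep * Ep.dag()
Kz = 0.5*(a.dag()*a + d.dag()*d - b.dag()*b - c.dag()*c)

# |psi_0> = |+>^N (x) |+1>^N : N bosons in the single-particle mode (a+d)/sqrt(2)
vac = tensor([basis(N + 1, 0)] * 4)
psi0 = ((((a.dag() + d.dag()) / np.sqrt(2)) ** N) * vac).unit()

times = np.linspace(0.0, np.pi / (2 * chi), 101)
result = sesolve(H, psi0, times, e_ops=[Kz, Kz * Kz])
F_Kz = 4 * (np.array(result.expect[1]) - np.array(result.expect[0])**2)
\end{verbatim}
The last line evaluates the diagonal QFIM element $4[\langle (\hat{K}^z)^2\rangle - \langle \hat{K}^z\rangle^2]$ along the evolution; the remaining entries of the QFIM discussed below follow in the same way from symmetrised products of the collective operators.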
We use the starting state discussed in Section~\ref{sec:DerivHamil}, $\ket{\psi}_0 = \ket{+}^{\otimes N} \otimes \ket{+1}^{\otimes N}$. As noted in the end of Section~\ref{sec:OpAlg}, $\ket{\psi}_0$ is not an eigenstate of $\hat{E}^2$. From the discussion of this state and the picture in Fig.~\ref{fig1}(c), we expect this initial state to lead to twisting-like behavior and entanglement generation between the two degrees of freedom. The intuitive picture to understand this behavior is the following. When an atom emits light, its internal degree of freedom becomes entangled to that of the atom which absorbs the emitted light. At the same time, both these atom's momentum states must switch, causing their external degrees of freedom to become entangled similar to their internal ones.
To diagnose the amount of entanglement and the potential metrological use, we consider the case in which one wants to prepare states that will be used to estimate some phase, $\phi_j$, encoded by unitary evolution under some operator, $\hat{G}^j$, so that the state evolves according to $\exp(-i \phi_j \hat{G}^j )$. Specifically, we consider the cases that $\hat{G}^j$ is in either $\mathfrak{J}$ or $\mathfrak{K}$, and choose the indices $i,j$ so that if $i,j=1,2,3$ then $\hat{G}^i, \hat{G}^j = \hat{J}^x, \hat{J}^y,\hat{J}^z$ and if $i,j = 4,5,6$ then $\hat{G}^i, \hat{G}^j = \hat{K}^x, \hat{K}^y, \hat{K}^z$. In this scenario the QFIM serves as both an entanglement measure~\cite{Entangle_Pezze} and a measure of the potential metrological use of a state in quantum metrology~\cite{QFIMrev_Liu}. We use the form of the QFIM given in Ref.~\cite{QFIM_Meyer} for pure states, since in the present proof of concept we do not address decoherence. Under this condition, the matrix elements are given by
\begin{eqnarray} \label{eq:QFIM}
\mathcal{F}^{ij} = 4 \Big( \Big\langle \frac{\{\hat{G}^i,\hat{G}^j\}}{2} \Big\rangle - \langle \hat{G}^i \rangle \langle \hat{G}^j \rangle \Big),
\end{eqnarray}
where $\{\hat{G}^i,\hat{G}^j\} = \hat{G}^i \hat{G}^j + \hat{G}^j \hat{G}^i $ is the anti-commutator. For $i=j$, Eq.~\eqref{eq:QFIM} returns the fourfold variance, which captures the amount of squeezing and entanglement present. The condition for an entanglement witness to squeezing is $\mathcal{F}^{ii} / N^2 > 1/N$, which is equivalent to the condition given in Ref.~\cite{Entangle_Pezze}. If $\mathcal{F}^{ii} / N^2$ approaches a constant as $N$ grows, then the sufficient condition for entanglement is clearly met and the system's potential metrological use proportional to HLS is demonstrated. Meanwhile, for $i\neq j$, Eq.~\eqref{eq:QFIM} returns the fourfold covariance, thereby capturing the amount of quantum correlations between these observables. We observe that $[\hat{J}^i, \hat{K}^j] = 0$ for all $\hat{J}^i\in\mathfrak{J}, \hat{K}^j\in\mathfrak{K}$. As a result, the covariance of two operators on the internal state and the momentum, such as $\text{cov}(\hat{J}^x, \hat{K}^z)$, is non-zero only for pure states which are entangled. The off-diagonal elements of the QFIM with $i \in \{ 1,2,3 \}$ and $j \in \{ 4,5,6 \}$ therefore represent the covariance between the atomic and momentum operators, and act as an entanglement witness of quantum correlations between the two degrees of freedom. Thus, we use the sufficient condition that $\mathcal{F}^{ij} \neq 0$ as an entanglement witness for the two degrees of freedom as a pure state bipartite system. This is a modified version of the condition given in Ref.~\cite{Covar_Abascal}.
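As a baseline check of this witness, the unentangled initial state $\ket{\psi}_0$ is a product of coherent spin states, for which, e.g.,
\[
\mathcal{F}^{22} = 4 (\Delta \hat{J}^y)^2 = 4 \cdot \frac{N}{4} = N,
\]
so that $\mathcal{F}^{ii}/N^2 = 1/N$ sits exactly at the SQL boundary, and any excess above this value signals entanglement.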
In Fig.~\ref{fig2}, we show the quantity $\mathcal{F}^{ii}/N^2$ for the four operators of interest, and for four different numbers of atoms, $N$, each as a function of interaction time with the cavity, $t$. We observe that $\mathcal{F}^{ii} / N^2$ increases sharply before leveling off to a constant value over time. Because $\mathcal{F}^{ii} / N^2 > 1/N$, the entanglement witness condition is satisfied for each case. This condition is met in a short interaction time, demonstrating that entanglement is quickly generated in both the collective internal and momentum modes. Therefore we see that along with the spin squeezing in the internal atomic degrees of freedom, this platform also leads to an effective squeezing of the momentum degrees of freedom.
\begin{figure}
\includegraphics[width=1\columnwidth]{QFI_TimePlot.pdf}
\caption{\label{fig2} Four of the six diagonal elements of the QFIM, for four different atom numbers. The operators $\hat{J}^z$ and $\hat{K}^x$ are left out because they commute with the Hamiltonian and are therefore conserved quantities. We see that as the number of atoms grows, the behavior of the diagonal QFIM elements converges. For atom numbers of $N\approx50$ or more, a plateau with respect to time appears, centered around $\chi t = \pi / 4$. This is similar to the behavior found in OAT where the QFIM for $\hat{J}^x$ and $\hat{J}^y$ reaches a plateau~\cite{QMMet_Pezze} centered around the same time. As $N$ grows, the plateau exists almost everywhere in time. Here we only show even atom numbers, $N$, but we note that for odd atom numbers the behavior is the same except at $\chi t = \pi/2$, where the concavity is opposite from what is shown here.}
\end{figure}
To quantify the potential metrological use of this system, we fix the time at $\chi t = \pi / 4$ and show how the diagonal elements of the QFIM for $\hat{J}^x$ and $\hat{K}^z$ scale with atom number. The results are shown in Fig.~\ref{fig3}. We choose to examine these elements at a time $\chi t = \pi / 4$ because it is the center of the plateau in the QFIM's diagonal elements. We see that both the atomic and momentum degrees of freedom scale proportionally to $N^2$, i.e. with HLS. Similar behavior exists in OAT, but there it is restricted to a single degree of freedom. Our scheme provides a squeezing mechanism for momentum degrees of freedom, creating the possibility that spin-squeezing techniques used in Ramsey interferometry \cite{Clock_Lukin} might be generalized to Bragg interferometry or that the two might be performed simultaneously.
\begin{figure}
\includegraphics[width=1\columnwidth]{QFI_NumberPlot.PDF}
\caption{\label{fig3} The diagonal elements of the QFIM corresponding to $\hat{J}^x$ and $\hat{K}^z$ shown as a function of atom number, $N$. We fit $4 (\Delta\hat{J}^x)^2$ and $4 (\Delta\hat{K}^z)^2$ with second order polynomials $F_J(N)$ and $F_K(N)$ respectively. We fit for $N\geq4$, because the system shows anomalous behavior at the smallest atom numbers, $N=2$ and $N=3$. We find that $4 (\Delta\hat{J}^x)^2$ is fit with the function $F_J(N) \approx 0.366 N^2 + 0.793 N - 2.662 $, and $4 (\Delta\hat{K}^z)^2$ is fit with the function $F_K(N) \approx 0.356 N^2 + 0.599 N + 1.466$. Both of these demonstrate HLS.}
\end{figure}
Now, we study the behavior of the entanglement between the degrees of freedom, which has no analog in OAT. We study the entanglement via the fourfold covariance between the two operators $\hat{J}^x$ and $\hat{K}^z$, corresponding to an off-diagonal element of the QFIM. In Fig.~\ref{fig4}(a), we see that the system moves through a highly correlated state, with a high covariance between the two degrees of freedom, before it approaches an uncorrelated state for a moment in time at $\chi t = \pi/2$. At an interaction time of $\chi t = \pi$, the system returns to its original state. In Fig.~\ref{fig4}(b), we see that for interaction times of $\chi t \approx \pi / 4$ the correlations only scale linearly with $N$. Therefore, interaction times corresponding to the plateau prepare a system which is capable of quantum sensing for two parameters at the Heisenberg limit, with relatively little error introduced by the simultaneous measurement of the two parameters. This motivates the first half of the next section, where a schematic representation of two parameter interferometry is shown.
The time at which the system is maximally correlated is labeled $t_{\text{max}}$, and we find $\chi t_{\text{max}}$ decreases with number of atoms such that $\chi t_{\text{max}} \approx N^{\nu}$, where $\nu \approx -2/5$ is found from fitting. At this time, the maximum correlation scales proportionally to $N^2$, which is on the order of the squeezing for the two degrees of freedom. This motivates the second half of our next section, where a schematic representation of single parameter interferometry is shown. The parameter is estimated via an interaction with one degree of freedom, and an auxiliary measurement on the other degree of freedom.
\begin{figure}
\includegraphics[width=1\columnwidth]{Covar_TimeAndNumberPlot.pdf}
\caption{\label{fig4} Plots of $\mathcal{F}^{ij} = 4 \mathrm{cov}(\hat{J}^x,\hat{K}^z) $. Left - The off-diagonal element of the QFIM, $\mathcal{F}^{ij}$, normalized by $N^2$ for four different values of $N$. We see the covariance between $\hat{J}^x$ and $\hat{K}^z$ grows rapidly before decaying on longer time scales; then, in a collapse-revival-like effect at $\chi t \approx \pi$, the operators become correlated again before the system approaches the starting state. Right - The same off-diagonal element of the QFIM at two different times: $\chi t = \pi/4$, when the correlations are decreasing, and $\chi t = N^{-2/5}$, when the correlations are largest. We find that $\mathcal{F}^{ij}|_{\chi t = \pi / 4} \approx 4.103\cdot10^{-3} N^2 + 0.926 N $ and $\mathcal{F}^{ij}|_{\chi t = N^{-2/5}} \approx 0.1782 N^2 - 0.02721 N $.}
\end{figure}
\section{Interferometer Schemes}\label{sec:InterScheme}
To demonstrate a possible metrological use, we numerically explore two interferometry schemes. The first uses the system to detect two relative phases: one encoded in the atom's internal degree of freedom, and a second encoded in the momentum degree of freedom. The second scheme uses this system to detect a single parameter via auxiliary measurements. The version of the auxiliary measurement scheme presented here is the case that the collective internal degree of freedom accumulates phase and the momentum degree of freedom is measured. However, this process would work similarly if the roles were reversed.
For both schemes, we choose a new interaction picture for the Hamiltonian such that $\hat{J}^z$ is removed from Eq.~\eqref{eq:HSU4}. This has no effect on the physics described above, besides keeping the atomic ensemble polarized in $\hat{J}^x$ instead of precessing about $\hat{J}^z$. This matches what is often done in OAT, and the process is shown in more depth in Appendix~\ref{sec:InterHamil}.
\begin{figure}[h]
\mbox{
\Qcircuit @C=1em @R=0.7em {
\lstick{(\text{a})\ \ } & & & & \\
& \lstick{\ket{+}^{\otimes N}} & \multigate{1}{e^{-i \hat{H} \tau_2}} & \gate{e^{-i \theta_{\text{opt}} \hat{J}^x}} & \gate{\ e^{-i \phi_3 \hat{J}^z} \ } & \measureD{\hat{J}^x} \\
& \lstick{\ket{+1}^{\otimes N}} & \ghost{e^{-i \hat{H} \tau_2}} & \qw & \gate{\ e^{-i \phi_5 \hat{K}^y} \ } & \measureD{\hat{K}^z} \\ \\ \\
\lstick{(\text{b})\ \ } & & & & \\
& \lstick{\ket{+}^{\otimes N}} & \multigate{1}{e^{-i \hat{H} \tau_1}} & \gate{e^{-i \phi_1 \hat{J}^x}} & \multigate{1}{e^{i \hat{H} \tau_1}} & \qw \\
& \lstick{\ket{+1}^{\otimes N}} & \ghost{e^{-i \hat{H} \tau_1}} & \qw & \ghost{e^{i \hat{H} \tau_1}} & \measureD{\hat{K}^z}
} }
\caption{\label{fig5} A quantum circuit schematic of the two schemes. The two tracks represent the actions affecting either degree of freedom, with the top track representing the internal states of the atoms, and the bottom track representing the momentum. (a) The two parameter scheme. The interaction time for this two parameter scheme, $\tau_2$, is fixed at $\chi \tau_2 = \pi / 4$ to demonstrate metrological use on the plateau found in Section~\ref{sec:DynAnalysis}. (b) The auxiliary measurement scheme. Here, $\chi \tau_1 = N^{-2/5}$ is chosen such that the ensembles are maximally correlated. The time-reversed unitary could be achieved by changing the detuning on the cavity.}
\end{figure}
We start with the two parameter scheme. The relevant schematic representation is shown in Fig.~\ref{fig5}(a). Here, we first pass the atomic ensemble through the cavity for an interaction time $\chi \tau_2 = \pi / 4$ to prepare the probe state. We chose this time to show the metrological use for times near the plateau, when correlations between the degrees of freedom are decreasing with respect to interaction time. However, this multiparameter scheme could be used for any interaction time, albeit with slight differences due to varying correlation strengths. After the state preparation, a rotation generated by $\hat{J}^x$ is performed so that the maximum fluctuation is in $\hat{J}^z$, where the angle $\theta_\text{opt}$ is found numerically. For the momentum degree of freedom, it was found that the state is already prepared such that the maximal fluctuations are along $\hat{K}^y$ at this time. The signal is encoded in the system by the unitary
\begin{equation} \label{eq:Signal}
\hat{V} = \exp( - i \phi_3 \hat{J}^z - i \phi_5 \hat{K}^y ),
\end{equation}
where we assume for numerical convenience that the phases $\phi_3,\phi_5$ are small, at $\phi_3=\phi_5=\pi/16$. However, we found that these results hold for larger phases as well as for two phases which are not equal. After the unitary, we measure the observables $\hat{J}^x$ and $\hat{K}^z$ and carry out phase estimation for both phases simultaneously. To estimate the phase, we simulate a Bayesian inferencing scheme~\cite{Bayes_Holland} for two parameters with a flat prior, and to find the asymptotic behavior of this Bayesian inference, we numerically calculate the Classical Fisher Information (CFI) as a function of atom number. The exact algorithm for sampling and updating a probability distribution, as well as the explicit form of the CFI, are shown in Appendix~\ref{sec:Bayes}. Using the CFI, we have a useful set of inequalities from the Cram\'er-Rao Bound~\cite{StatDist_Caves} (CRB):
\begin{equation} \label{eq:CRB_inv}
\sigma_i^2 \geq \frac{1}{M I(\hat{G}^i)} \geq \frac{1}{M \mathcal{F}^{ii}}
\end{equation}
where $i=3,5$ corresponds to either $\phi_3$ or $\phi_5$, $\sigma_i^2$ is the variance of the probability distribution, $M$ is the number of measurements, $I(\hat{G}^i)$ is the CFI for a parameter encoded by the operator $\hat{G}^i = \hat{J}^z,\hat{K}^y$, and $\mathcal{F}^{ii}$ is the diagonal element of the QFIM for the corresponding operator. The first inequality is the classical CRB, and the second inequality is the quantum CRB. By inverting this bound we find the following: $ \mathcal{F}^{ii} \geq I(\hat{G}^i) \geq \frac{1}{M \sigma_i^2}$, so we can tell how close our resultant probability distribution from Bayesian inferencing is to saturating the CRB. In Fig.~\ref{fig6}, we see the results of this analysis for $M=5000$ measurements. This measurement scheme saturates the classical CRB for both parameters, and reaches a value of about $80\%$ of the quantum CRB. Moreover, it does this simultaneously for both parameters. We also note that, while not shown, as $\phi_3,\phi_5$ tend towards zero, the CFI exactly saturates the quantum CRB, but Bayesian inferencing takes asymptotically more measurements to saturate the classical CRB. This indicates that we are using the optimal simultaneous measurement for differential phase shifts, and a near-optimal simultaneous measurement for finite phase shifts.
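For orientation (the explicit expressions used in the simulation are given in Appendix~\ref{sec:Bayes}), the CFI for a projective measurement with outcome probabilities $P(m|\phi_i)$ takes the standard form
\[
I(\hat{G}^i) = \sum_{m} \frac{1}{P(m|\phi_i)} \left( \frac{\partial P(m|\phi_i)}{\partial \phi_i} \right)^{2},
\]
with the sum running over the outcomes of the measured observable.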
\begin{figure}
\includegraphics[width=1\columnwidth]{2Param_Plot.pdf}
\caption{\label{fig6} Left - A plot of the standard deviation corresponding to the final result of Bayesian inferencing for estimating the phases $\phi_3$ and $\phi_5$ with $M = 5000$ measurements, and $\phi_3=\phi_5=\pi/16$. Right - A plot of the quantities $\frac{1}{M \sigma_i^2}$ for $\sigma_i = \sigma_J, \sigma_K$, the CFI $I(\hat{G}^i)$ for $\hat{G}^i = \hat{J}^z, \hat{K}^y$ corresponding to these measurements, and the diagonal elements of the QFIM for these measurements. Note that because of the rotation generated by $\hat{J}^x$ prior to the interferometry, $\mathcal{F}^{33} = 4(\Delta \hat{J}^z)^2$ now scales with HLS. We see that the quantities $\frac{1}{M \sigma_i^2} = I(\hat{G}^i)$ saturate the classical CRB from the left half of Eq.~\eqref{eq:CRB_inv}, and nearly saturate the quantum CRB. By fitting the diagonal QFIM elements and $\frac{1}{M \sigma_i^2}$ we find the CFI scales as $I(\hat{J}^z) \approx 0.3184 N^2 + 0.9162 N $, $I(\hat{K}^y) \approx 0.2022 N^2 + 1.454 N $, while $\mathcal{F}^{33} = 4(\Delta \hat{J}^z)^2 \approx 0.3815 N^2 + 0.1577 N$ and $\mathcal{F}^{55} = 4(\Delta \hat{K}^y)^2 \approx 0.2512 N^2 + 0.8727 N$. This indicates that this measurement scheme scales at about $80\%$ of the theoretical maximum.}
\end{figure}
Now, we turn our attention to the auxiliary measurement scheme, shown in Fig.~\ref{fig5}(b). Here, the atomic ensemble first passes through the cavity for a time of $\chi \tau_1 = N^{-2/5}$, so that the observables $\hat{J}^x$ and $\hat{K}^z$ are well correlated. Then, the phase is encoded on either the internal degree of freedom or the momentum. By changing the detuning on the cavity, the unitary may be reversed, and a measurement on the non-interacting degree of freedom may be used to determine the phase. We simulate this scheme using a phase encoded on the atomic degree of freedom and a momentum measurement. To diagnose the metrological use, we consider the fidelity between the $\ket{+1}^{\otimes N}$ momentum state and the final momentum state. This is the same as measuring how far $\langle\hat{P}_{COM}\rangle$ is from $+N \hbar k / 2$. We consider this measurement outcome because, for values of $\phi_1$ near zero, the measurement outcome $+N / 2$ is the most likely outcome for a measurement of $\hat{K}^z$. As a result, the standard deviation of this fidelity may be used to calculate a lower bound for the CFI of this measurement scheme. The standard deviation of this fidelity is shown in Fig.~\ref{fig7}(a), while the inverted form of the standard deviation from Eq.~\eqref{eq:CRB_inv} is compared to the relevant QFIM diagonal element in Fig.~\ref{fig7}(b). Using the fidelity to represent only one of the possible measurement outcomes, the uncertainty scales as $1/\sigma_{Fid}^2\approx 0.1699 N^{2}$, and from this we see that these auxiliary measurements allow us to predict the real phase with an uncertainty that scales with at least $59\%$ of the quantum CRB. This demonstrates that the auxiliary measurement, while not optimal compared to a direct measurement, still recovers a large amount of information about the degree of freedom not being directly measured. This indicates the potential to use this system to prepare probe states usable for weak measurement or a similar scheme for improving interferometry on one degree of freedom.
\begin{figure}
\includegraphics[width=1\columnwidth]{AuxScheme_Plot.pdf}
\caption{\label{fig7} Left - The standard deviation of the final state fidelity, $\sigma_{Fid}$, with the $\ket{+1}^{\otimes N}$ momentum state. This is found by fitting the central peak with a Gaussian and offset. Right - The quantities $1/\sigma_{Fid}^2$ and the QFIM element corresponding to rotations about $\hat{J}^x$. We see that $1/\sigma_{Fid}^2 \approx 0.1699 N^2 + 0.1069 N$ and $\mathcal{F}^{ii}|_{\chi t = N^{-2/5}} \approx 0.2874 N^2 - 0.0577 N$, showing that this auxiliary measurement reaches about $0.6$ of the quantum CRB.}
\end{figure}
\section{Conclusion}\label{sec:Conc}
In this work, we have introduced a novel method which individually squeezes and entangles two degrees of freedom, and showed that there exists a non-trivial interplay between the atomic internal and momentum degrees of freedom. We have demonstrated that these extra degrees of freedom might create the opportunity for multi-parameter metrology at the Heisenberg limit in either degree of freedom, or for novel metrology schemes which benefit from the entangled degrees of freedom. The multiparameter and auxiliary schemes shown in the final section have the potential to be the basis for practical tools in matter wave interferometry. This form of entanglement generation and manipulation represents a possible new frontier for atom interferometry.
Future work could include adding decoherence in a numerical exploration~\cite{ImprovedMixedQM_Luisa}, and explorations of the multipartite entanglement~\cite{mulitpartite_Ren} that may be realized by this system. We also note that the physical system explored here might pose experimental challenges. Namely, the regime requiring $\Delta \gg \sqrt{N} g$ leads to the parameter $\chi$ being very small, thereby requiring long interaction times which are hard to achieve in atomic beam-cavity experiments. To explore the effects of the small $\chi$ and long interaction times compared to the decoherence time, one could simulate this system with full beam dynamics. It would also be interesting to explore the use of a moving optical lattice~\cite{Conveyor_Browaeys} to select the atomic transverse momentum and trap the atoms in the cavity longer. We additionally point out that the above discussion is centered on realizing Eq.~\ref{eq:Hphys}; however, the principles discussed here may be relevant to more complicated platforms. Specifically, we believe that coherently controlling a two-component Bose-Einstein condensate~\cite{Spinor_Lev} in order to select for interactions, or engineering an optical lattice to induce spin-momentum couplings in a Bose-Einstein condensate~\cite{spinMoment_Khamehchi}, might lead to a similar Lie algebraic structure and allow for controlled generation of metrologically useful entanglement.
\section*{Acknowledgments}
The authors thank John Cooper, Liang-Ying Chih, and Christopher Wilson for stimulating discussions.
This research was supported by the NSF PFC Grant No. 1734006 and the NSF Q-SEnSE Grant No. OMA 2016244. M.H. acknowledges support from a Visiting Fellowship from the University of Pisa, Italy. M. L. C. acknowledges support from the MIT-UNIPI program. A.S. acknowledges support from the European Union’s Horizon 2020 research and innovation program under Grant Agreements No. 817482 (Pasquans), by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651440, P. Z.), and by LASCEM via AFOSR No. 64896-PH-QC.
\section{Introduction}
The objective of the Super Charm-Tau factory at Novosibirsk is to study rare decays of $D$ mesons and $\tau$ leptons, so far unobserved $\tau$ decays, etc. The collider has an energy range of 2-7\,GeV in the center of mass system (CMS) and a luminosity of $10^{35}$\,cm$^{-2}$s$^{-1}$~\cite{a}. The broad physics program requires the development of a universal magnetic detector (Fig.~\ref{fig:detector}). The tracking system of the proposed Super Charm-Tau factory Detector (SCTD) consists of the Inner Tracker (IT) and the Drift Chamber (DC). The inner tracker is placed between the beam pipe and the drift chamber. It covers up to 98\,\% of the full solid angle. The inner tracker is a cylinder with a length of 60\,cm, an inner diameter of 5\,cm and an outer diameter of 40\,cm.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.4\textwidth]{Detector}
\caption{\label{fig:detector} Detector for the Super Charm-Tau factory in Novosibirsk.}
\end{figure}
As it is placed close to the beam pipe, the inner tracker has to handle a high particle flux. The main aims of the inner tracker are to detect secondary vertices from decays of short-lived particles, like $K_{s}$ and $\Lambda$, and to help the drift chamber to measure the momentum of charged particles owing to the increased lever arm. This especially applies to soft charged-particle tracks. Information from the IT can be processed either together with that of the drift chamber or alone. The cut-off value of the transverse momentum for pions penetrating into the IT is about 60\,MeV/c~\cite{b}, due to the material of the vacuum pipe. The candidates for the inner tracker are the Time Projection Chamber (TPC), the silicon strip detector and a cylindrical MPGD detector. The material budgets for these options are shown in Table~\ref{table:budget}. This paper describes the results of Monte Carlo studies of the transport properties of various gas mixtures proposed for the TPC.
\begin{table}[h]
\caption{Material budget for the different inner tracker options}
\centering
\begin{tabular}{ | l | c | c |}
\hline
Option/Subsystem & Material & Thickness (in X$_0$) \\ \hline
Vacuum pipe &1\,mm Be + 0.6\,mm Paraffin + 0.6\,mm Be & 0.9\,\% \\ \hline
TPC & 2 $\times$ (1\,mm G10 + 0.1\,mm Teflon + 15\,$\mu$m Cu) & 1.5\,\% \\ \hline
C-MPGD & 4 $\times$ (0.25\,mm Kapton + 40\,$\mu$m Cu) & 1.4\,\% \\ \hline
Si-strip & 4 $\times$ (0.32\,mm Si + 0.4\,mm carbon fiber) & 1.4\,\% \\ \hline
\end{tabular}
\label{table:budget}
\end{table}
\section{Time Projection Chamber as Inner Tracker}
The Time Projection Chamber is considered a good option for the inner tracker of the SCTD, as it is an ideal device for tracking charged particles in three-dimensional space and fulfills all the above requirements. In addition, by measuring dE/dx, a TPC makes it possible to identify particles by their mass. Several experiments, such as ALICE~\cite{c} and ILD~\cite{d}, are considering a TPC as tracker.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.4\textwidth]{TPC2}
\caption{\label{Figure:2} 3-D view of TPC from detector simulation}
\end{figure}
\par A TPC is a large cylindrical volume filled with gas, surrounding the interaction point and covering almost the full solid angle. The TPC for the SCT detector has a length of 600\,mm and inner and outer diameters of 50\,mm and 400\,mm, respectively. A uniform electric field of 500\,V/cm is applied along the Z-direction and a magnetic field of 1.5\,T is applied parallel to the electric field. Studies of two TPC options, with triple-GEM and $\mu$-RWELL readouts, are ongoing.
\section{Choice of the drift gas mixtures}
Although gas detectors have been used in experimental physics for several decades, there is no specific recipe for choosing the gas mixture. When choosing a mixture for a particular detector, the following considerations should be taken into account: ionization density, radiation thickness, electron drift velocity and diffusion, ion drift velocity, quenching properties and chemical stability of the gas mixture components. Among the most common are gas mixtures based on argon or neon (see Table~\ref{table:gases}). Helium is often used when a large radiation length is needed, but for the inner tracker the contribution of the gas mixture is small compared with the material the TPC is made of (see Table~\ref{table:budget}). Indeed, even for Ar-based mixtures the radiation thickness is less than 0.1\,\% of X$_0$.
\begin{table}[h]
\caption{Properties of gases commonly used in TPCs at normal temperature and pressure. Density $\rho$, radiation length X$_0$, total number of electron-ion pairs for MIPs N$_t$.}
\centering
\begin{tabular}{ | l | c | c | c | c |}
\hline
Gas & $\rho$ & X$_0$ & X$_0$ & N$_t$ \\
& [g/l] & [g/cm$^2$] & [m] & [1/cm] \\ \hline
He & 0.1785 & 94.32 & 5280 & 8 \\
Ne & 0.8999 & 28.94 & 322 & 40 \\
Ar & 1.784 & 19.55 & 110 & 97 \\ \hline
CH$_4$ & 0.717 & 46.22 & 645 & 54 \\
CO$_2$ & 1.977 & 36.2 & 183 & 100 \\
C$_2$H$_6$ & 1.36 & 45.47 & 335 & 112 \\
CF$_4$ & 3.93 & 36 & 90 & 120 \\ \hline
\end{tabular}
\label{table:gases}
\end{table}
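As a quick consistency check of Table~\ref{table:gases}, the radiation length in metres should equal X$_0$ in g/cm$^2$ divided by the density; the short Python sketch below reproduces the tabulated values (the ethane density of 1.36\,g/l used here is the value implied by its tabulated X$_0$ of 335\,m).
\begin{verbatim}
# Cross-check of the table: X0 [m] = X0 [g/cm^2] / density [g/cm^3] / 100.
rho_g_per_l = {"He": 0.1785, "Ne": 0.8999, "Ar": 1.784,
               "CH4": 0.717, "CO2": 1.977, "C2H6": 1.36, "CF4": 3.93}
x0_g_cm2 = {"He": 94.32, "Ne": 28.94, "Ar": 19.55,
            "CH4": 46.22, "CO2": 36.2, "C2H6": 45.47, "CF4": 36.0}

for gas, rho in rho_g_per_l.items():
    x0_m = x0_g_cm2[gas] / (rho * 1e-3) / 100.0
    print(gas, round(x0_m))   # ~5284, 322, 110, 645, 183, 334, 92 m
\end{verbatim}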
\subsection{Field distortion due to space charge }
One of the main limitations for operating a TPC in continuous read-out mode is the distortion of the uniform electric field due to the space charge of ions produced in the Ion Back-Flow (IBF) process. IBF appears in the triple GEM at the end-cap of the TPC: ions from the avalanches in the GEMs drift back into the TPC volume. With a regular triple-GEM cascade, the value of IBF depends on the fields inside the GEM holes and in the drift and transfer gaps, and is typically around 10\,\%. With special efforts, using a 4-GEM cascade with different hole diameters and with different voltages across the GEMs and transfer fields, an IBF value of 1\,\% can be obtained~\cite{c}. Since ions move much slower (about 1000 times) than electrons, they stay in the TPC volume longer and produce their own electric field. This field distorts the straight external electric field lines in the TPC. The results of a physics background simulation~\cite{f} were used for the space charge density estimation in the TPC. The electric field distortion calculation was based on the obtained values of the space charge density with an IBF of 1\,\% and a gain of 10000. The ion space-charge density was about $10^7\,|e|/cm^3$ ($|e|$ is the absolute value of the electron charge) near the beam pipe inside the TPC and decreased strongly, down to $10^4\,|e|/cm^3$, near the outer wall of the TPC. In Fig.~\ref{fig:distortion}, the map of the field line deviation from straight field lines is shown for a drift length of 30\,cm in Ar- and Ne-based gas mixtures. As drifting charges move along the field lines, the distortion of the field lines translates into a distortion of the reconstructed particle trajectories. To evaluate the distortion of the field lines due to the space charge, the COMSOL simulation package has been used~\cite{g}. Despite the fact that the space charge in the Ar-based mixtures is four times larger than in the Ne-based ones (due to the twice higher ionization and the two times lower ion drift velocity compared to neon), the deviations of the drift lines in Ar ($\sim1.5\,$mm) are comparable to the transverse diffusion and can be compensated in offline analysis. This makes Ar-based gas mixtures favorable as a working medium in the TPC due to their low cost and insensitivity to impurities.
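The relative size of the Ar and Ne space charge quoted above can be reproduced with a simple steady-state estimate, in which the ion density scales as production rate over removal rate; in the Python sketch below the gain, IBF and ion drift velocities are illustrative placeholders, while the primary ionization values are taken from Table~\ref{table:gases}.
\begin{verbatim}
# Illustrative steady-state space-charge comparison between Ar- and Ne-based
# mixtures: ion density ~ (primary ionization x gain x IBF) / ion velocity.
GAIN = 1e4    # assumed effective gas gain
IBF = 0.01    # ion back-flow fraction (1%)
N_PRIMARY = {"Ar": 97.0, "Ne": 40.0}  # ion pairs per cm for a MIP (Table 2)
V_ION = {"Ar": 1.0, "Ne": 2.0}        # relative ion drift velocities (assumed)

def relative_space_charge(gas):
    # Production rate over removal rate; common factors cancel in the ratio.
    return N_PRIMARY[gas] * GAIN * IBF / V_ION[gas]

print(relative_space_charge("Ar") / relative_space_charge("Ne"))  # ~4.85
\end{verbatim}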
\begin{figure}
\includegraphics[width=0.45\textwidth]{IBFAr}
\qquad
\includegraphics[width=0.45\textwidth]{IBFNe}
\caption{\label{fig:distortion} The deviation of the drift lines from straight trajectories for the full drift length (30\,cm) in Ar (left) and Ne (right) based gas mixtures. The r- and z-axes correspond to the directions along the TPC radius and the beam, respectively. Deviations are given in microns. }
\end{figure}
\begin{figure}
\includegraphics[width=0.45\textwidth]{ArDrift}
\qquad
\includegraphics[width=0.45\textwidth]{NeDrift}
\caption{\label{fig:velocity} Drift velocity of electrons in argon- and neon-based gas mixtures with a magnetic field of 1.5\,T applied parallel to the electric field.}
\end{figure}
\begin{figure}
\includegraphics[width=0.45\textwidth,trim=10 10 0 0,clip]{TranGraph1_5}
\qquad
\includegraphics[width=0.45\textwidth,origin=c,angle=0]{LongGraph1_5}
\caption{\label{fig:diff} Transverse and longitudinal diffusion as a function of the electric field, with a magnetic field of 1.5\,T applied parallel to the electric field.}
\end{figure}
\subsection{Diffusion and drift velocity}
Transport properties of electrons in various gas mixtures have been studied using the Garfield++ software package~\cite{h}. The drift velocity in argon-based gas mixtures is significantly higher than in neon-based ones, due to the Ramsauer effect~\cite{i}, as can be seen in Fig.~\ref{fig:velocity}. A high drift velocity means fewer overlapping events in the TPC, which simplifies event reconstruction. For example, for the Ar/CO$_2$ (90/10) gas mixture at an electric field of 500\,V/cm, tracks from about 6000 particles will be present simultaneously in the TPC (corresponding to 1500 bunch crossings). For comparison, with a gas mixture of Ne/CO$_2$ (90/10) at 500\,V/cm about 24000 tracks will be present in the TPC volume. In this regard, the mixture Ar/CH$_4$ (90/10) is of great interest, reaching a drift velocity of 5\,cm/$\mu$s at an electric field of only 125\,V/cm. That makes the design of the field cage much simpler. Another gas mixture, Ar/CH$_4$ (50/50), is interesting because it allows the drift velocity to be maximized, thereby minimizing event overlap. The price to pay for this is a significant complication of the field-cage design, since this mixture requires a high electric field strength of up to 1000\,V/cm.
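The quoted track counts follow from the maximum drift time multiplied by the rate at which tracks enter the chamber. The following Python sketch is a rough illustration; the bunch-crossing rate and the number of tracks per crossing are assumptions chosen to reproduce the numbers above, not values taken from this paper.
\begin{verbatim}
# Rough pile-up estimate: tracks present = drift time x track arrival rate.
L_DRIFT_CM = 30.0      # maximum drift length
F_BX_HZ = 50e6         # assumed bunch-crossing rate
TRACKS_PER_BX = 4.0    # assumed charged tracks entering the TPC per crossing

def overlapping_tracks(v_drift_cm_per_us):
    t_drift_s = L_DRIFT_CM / v_drift_cm_per_us * 1e-6
    return t_drift_s * F_BX_HZ * TRACKS_PER_BX

print(overlapping_tracks(1.0))    # Ar/CO2-like, ~1 cm/us  -> ~6000 tracks
print(overlapping_tracks(0.25))   # Ne/CO2-like, 4x slower -> ~24000 tracks
\end{verbatim}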
Fig.~\ref{fig:diff} shows the transverse and longitudinal diffusion of electrons in the gas mixtures. It is clear from the graph that at 125\,V/cm the Ar/CH$_4$ (90/10) gas mixture has the smallest transverse diffusion of all, which makes it even more attractive to use. At 500\,V/cm, Ar/CO$_2$ (90/10) has a slightly higher transverse diffusion than Ne/CO$_2$ in the same proportion. However, Ar/CO$_2$ (90/10) has the highest longitudinal diffusion, followed by Ar/CH$_4$ (90/10) at 500\,V/cm. The values of the transverse and longitudinal diffusion determine the charge cluster size in the $R$--$\phi$ and $Z$ directions, respectively. The spatial resolution, however, might be much smaller and depends on the size of the readout pads, the electronic noise and the method used to calculate the center of gravity of the induced charge.
\section{Conclusions}
The main goal of this work is to propose a gas mixture which can be used for the continuous operation of a TPC with MPGD readout. Simulations of the drift-line distortion due to the space charge show that it is possible to use an argon-based gas mixture for the TPC operation. Such mixtures are preferred in terms of ionization drift time and electric field strength. However, further study of the achievable spatial resolution for the selected gas mixtures is necessary.
\section{Methods} \label{sec:methods}
\subsection{Model description}
We use the isopycnic general circulation model BLOM (Bergen Layered Ocean
Model), which was originally developed from the Miami Isopycnic Coordinate
Ocean Model (MICOM\@) \citep{bentsen2013}.
We use the model within the framework of the Norwegian Earth System Model
(NorESM) \citep{kirkevaag2013}, but in this study the ocean is the only active
(online) component.
Hence this section describes only the ocean model; the other parts of the Earth
system (atmosphere, land and sea ice) do not run on-line in this study but are
data components that force the ocean.
\subsubsection{Seawater processes}
The biogeochemistry is part of BLOM and is referred to as the isopycnic HAMburg
Ocean Carbon Cycle (iHAMOCC\@).
It is an NPZD-type (nutrients--phytoplankton--zooplankton--detritus) model
extended to include Dissolved Organic Carbon (DOC\@).
The model iHAMOCC simulates the cycles of carbon, three major nutrients
(phosphate, nitrate and silica) and the trace nutrient iron, along with
phytoplankton, a zooplankton grazer, detritus or henceforth Particulate Organic
Carbon (\chem{POC}), as well as calcium carbonate (\chem{CaCO_3}) and biogenic silica
(\chem{bSiO_2}).
A constant Redfield ratio of \chem{P:C:N:O_2}~$=$~\(1:122:16:-172\) is used
\citep{takahashi1985}.
Further details on the ecosystem model can be found in
\citet{maier2005,schwinger2016}.
We continue with a description of what is most relevant for the sediment, namely
the particles in the seawater.
Lithogenic clay is added into the top layer of the ocean, as a refractory tracer
of unknown composition (assuming quartz for its density) as well as a direct
input of iron through the instant dissolution of the deposited dust.
Besides dust input, there is river input of dissolved components into the ocean,
as well as of \chem{POC} \citep{tjiputra2020}.
We use a parameterisation for organic carbon particles that linearly increase
their sinking speed with depth to account, in a crude way, for aggregation.
The other particles, clay, \chem{CaCO_3} and \chem{bSiO_2}, sink through the
water column with a homogeneous velocity.
There is only one size class for each of the particles, and each has a uniform
reactivity.
The remineralisation rates for \chem{POC} and \chem{bSiO_2} are constant, whereas the
dissolution of \chem{CaCO_3} depends on its saturation state (using first-order
dissolution kinetics).
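As an illustration of such saturation-state-dependent first-order kinetics, a common functional form is $v = k\,(1-\Omega)\,c$ for $\Omega<1$; the Python sketch below is a plausible reading of this parameterisation, with a placeholder rate constant rather than the value used in iHAMOCC.
\begin{verbatim}
def caco3_dissolution_rate(c_caco3, omega, k=5e-7):
    """First-order CaCO3 dissolution, active only when undersaturated.

    c_caco3 : CaCO3 concentration
    omega   : saturation state Omega = [Ca2+][CO3^2-] / K_sp
    k       : rate constant [1/s] (placeholder value)
    """
    return k * max(0.0, 1.0 - omega) * c_caco3

print(caco3_dissolution_rate(1.0, 0.7))  # dissolving: Omega < 1
print(caco3_dissolution_rate(1.0, 1.2))  # supersaturated: rate is zero
\end{verbatim}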
\subsubsection{Sediment processes} \label{sec:sediment}
The model's sediment module contains four
solid compounds and seven solutes (Table~\ref{tab:tracers}).
Each of these tracers has a counterpart in the seawater.
\begin{table}[h]
\begin{tabular}{lll}
\toprule
symbol & meaning & units \\
\midrule
\multicolumn{3}{l}{\textit{state variables of the solid fraction ($s$)}} \\
\chem{OC} & organic carbon & \si{\mol\phosphorus\per\cubic\deci\metre} \\
\chem{CaCO_3} & calcium carbonate & \si{\mol\carbon\per\cubic\deci\metre} \\
\chem{bSiO_2} & biogenic silica & \si{\mol\silicon\per\cubic\deci\metre} \\
\chem{clay} & lithogenic (from dust) & \si{\kilo\gram\per\cubic\metre} \\
\midrule
\multicolumn{3}{l}{\textit{state variables of the porewater solutes ($d$)}} \\
\chem{DIC} & dissolved inorganic carbon & \si{\mol\carbon\per\cubic\deci\metre} \\
$A_T$ & total alkalinity & \si{\equiv\per\cubic\deci\metre} \\
\chem{PO_4} & phosphate & \si{\mol\phosphorus\per\cubic\deci\metre} \\
\chem{O_2} & oxygen & \si{\mol\oxygen\per\cubic\deci\metre} \\
\chem{N_2} & molecular nitrogen & \si{\mole\nitrogen\per\cubic\deci\metre} \\
\chem{NO_3} & nitrate & \si{\mol\nitrogen\per\cubic\deci\metre} \\
\chem{Si(OH)_4} & silicic acid & \si{\mol\silicon\per\cubic\deci\metre} \\
\bottomrule
\end{tabular}
\caption{Biogeochemical tracers in the sediment model.}
\label{tab:tracers}
\end{table}
The three biogenic particles---organic carbon, \chem{CaCO_3} and
\chem{bSiO_2}---and lithogenic clay become part of the sediment by sinking from
the bottom seawater layer into the upper layer of the sediment.
In reality, sedimentation (also called ``rain'' or ``deposition'') creates new
sediment on top of existing sediment, but in the model the sediment is defined
on a fixed grid.
The layer interfaces of the sediment are defined at 0, 0.1, 0.4, 0.9, 1.6, 2.5, 3.6, 4.9, 6.4,
8.1, 10.0, 12.1, 14.4 and 16.9\;cm below the seawater--sediment interface.
These define the twelve active layers, and one burial layer that stores all the
particulate tracers that have been transported all the way down out of the
active sediment domain.
Typically, most of the sedimented material remineralises within the sediment;
only a fraction of the biogenic compounds is buried.
In the model, clay is considered inert and does not dissolve.
In a steady state, burial equals sedimentation minus dissolution.
For each biogenic solid compound $s$ there is at least one dissolved porewater
component $d$.
Within the sediment, these evolution equations hold:
\begin{subequations}
\begin{align}
\frac{\mathrm{d}c_s}{\mathrm{d}t} &= \mathcal{B} \frac{\partial^2 c_s}{\partial z^2}
- \frac{\partial}{\partial z}(w_s c_s)
- \frac{v_{s\rightarrow d} M_s}{\rho(1-\phi)} \label{eqn:solid} \\
\frac{\mathrm{d}c_d}{\mathrm{d}t} &= \frac{\partial}{\partial z}\left(\mathcal{D} \frac{\partial c_d}{\partial z}\right)
+ \frac{v_{s\rightarrow d}}{\phi} \label{eqn:dissolved} \,,
\end{align}
\label{eqn:both}
\end{subequations}
where $c_s$ is the weight fraction of compound $s$ and $c_d$ the porewater
concentration of solute $d$, $\mathcal{B}$ is the diffusion
coefficient for bioturbation, $w_s$ the vertical advection velocity of the solid
compounds, $v$ the reaction rate, $M_s$ the molecular weight of the solid
compound, $\rho$ the bulk sediment density, $\phi$ the porosity (the volume of
the porewater divided by the total volume, including the solid sediment) and
$\mathcal{D}$ the diffusion coefficient for porewater diffusion.
Sediment is displaced through bioturbation (Eq.~\ref{eqn:solid}, RHS, 1st term)
and vertical advection (2nd term) of solid constituents.
Each of the last terms of Eqs~\ref{eqn:solid} and~\ref{eqn:dissolved} denotes
decomposition of the solid components ($s\rightarrow d$ conversion).
Organic matter, usually referred to as ``organic carbon'', consists of many
organic compounds of which the major ones are modelled.
When organic matter remineralises, we assume that all those compounds end up in
dissolved phase simultaneously.
In other words, in addition to the principal dissolved components $d$ (phosphate
for organic matter) from Eq.~\ref{eqn:dissolved} there are other products $d'$
(e.g.\ nitrate) that are created according to:
\begin{equation}
\frac{\mathrm{d}c_{d'}}{\mathrm{d}t} = R \, \frac{\mathrm{d}c_{d}}{\mathrm{d}t} \,,
\label{eqn:dd}
\end{equation}
where $R$ is a stoichiometric expression relating $d'$ to $d$; e.g.\ for
nitrate, $R=16$ follows from the Redfield \chem{P:N} ratio of $1:16$ given
above.
Similarly, total alkalinity decreases as a consequence of organic carbon
remineralisation and increases with the dissolution of calcium carbonate.
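To illustrate the structure of Eqs~\ref{eqn:both}, the following minimal Python sketch integrates a single solid--solute pair with an explicit finite-difference scheme on the model's layer grid. It is a sketch only: the advection term is omitted, the $M_s/\rho$ conversion is absorbed into the rate constant, and the porewater diffusivity and reaction rate are placeholders (the bioturbation coefficient is the value used in this study, see Sec.~\ref{sec:discussion}).
\begin{verbatim}
import numpy as np

z_if = np.array([0.0, 0.1, 0.4, 0.9, 1.6, 2.5, 3.6, 4.9, 6.4,
                 8.1, 10.0, 12.1, 14.4]) * 1e-2   # layer interfaces [m]
dz = np.diff(z_if)                                # the 12 active layers
nz = dz.size

B = 1e-9     # bioturbation diffusivity [m2/s] (value used in this study)
D = 5e-10    # porewater diffusivity [m2/s] (placeholder)
k = 1e-9     # first-order decomposition rate [1/s] (placeholder)
dt = 300.0   # explicit time step [s], small enough for stability

phi = np.linspace(0.85, 0.62, nz)   # prescribed porosity profile
c_s = np.ones(nz)                   # solid compound (arbitrary units)
c_d = np.zeros(nz)                  # corresponding porewater solute

def diffuse(c, kappa):
    """d/dz(kappa dc/dz) with zero-flux boundaries, central differences."""
    flux = np.zeros(nz + 1)
    flux[1:-1] = -kappa * np.diff(c) / (0.5 * (dz[:-1] + dz[1:]))
    return -np.diff(flux) / dz

for _ in range(int(30 * 86400 / dt)):   # integrate one month
    v = k * c_s                         # reaction rate v_{s->d}
    c_s += dt * (diffuse(c_s, B) - v / (1.0 - phi))  # Eq. (1a), no advection
    c_d += dt * (diffuse(c_d, D) + v / phi)          # Eq. (1b)
\end{verbatim}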
The porosity $\phi$ is a prescribed function of depth, starting at 0.85 in the
upper layer decreasing to 0.62 in the twelfth layer at around 13\;cm depth.
The rest of the volume, $1-\phi$, is thus 0.15 at the surface and increases,
through compactification, to 0.38 at 13\;cm depth.
In a filled state, this is the volume fraction of the bulk solid matter.
The model simulates the advection with respect to the sediment--water
interface by shifting sediment vertically, depending on the ``filling'' state of
the sediment.
The total volume fraction occupied by solid material is given by the sum of the
components' volume fractions:
\begin{equation}
V_\mathrm{f} = \sum_s \frac{M_s}{\rho_s} \, c_s \,,
\end{equation}
where $s$ runs over the four types of particulate matter, $\rho_s$ is the
density of the solid component (thus $M_s/\rho_s$ the specific volume)
and $c_s$ the concentration.
The shifting procedure ensures that $V_\mathrm{f}$ approaches $1-\phi$.
If there is more deposition than remineralisation, sediment moves downwards
at all depths to fill up the layers and the surplus created in the bottom layer
of the active sediment (layer 12) is moved into the burial layer (layer~13).
If there is not enough deposition relative to remineralisation, sediment moves
upwards.
To this end, the active sediment's bottom layer will be overfilled from the
burial layer with enough clay to fill the whole sediment.
Then the layers above will be filled by redistribution (upward advection) from the
surplus in layer~12.
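A schematic version of the downward-shifting step might look as follows; this is a simplified Python sketch (volume fractions instead of concentrations, layer thicknesses ignored), not the actual iHAMOCC routine.
\begin{verbatim}
def shift_down(layers, target, burial):
    """Move surplus solid volume downward so each layer approaches 1 - phi.

    layers : list of dicts {compound: solid volume fraction}, top to bottom
    target : list of (1 - phi) values per layer
    burial : dict accumulating material pushed out of the deepest layer
    """
    for i, layer in enumerate(layers):
        vf = sum(layer.values())            # total solid volume V_f
        surplus = vf - target[i]
        if surplus <= 0.0:
            continue                        # under-filled: nothing to push down
        frac = surplus / vf                 # push this share of each compound
        for s in layer:
            moved = frac * layer[s]
            layer[s] -= moved
            if i + 1 < len(layers):
                layers[i + 1][s] = layers[i + 1].get(s, 0.0) + moved
            else:
                burial[s] = burial.get(s, 0.0) + moved
\end{verbatim}
The upward shift in the under-filled case works analogously, after first overfilling the bottom layer with clay from the burial layer as described above.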
\subsubsection{Sediment--seawater burst coupling} \label{sec:burst}
\begin{figure}
\centering
\includegraphics[width=.72\linewidth]{SED_flow-crop}
\caption{Flowchart of the burst coupling.
The circles are connectors, the rectangles are processes and the rhomboids
present input/output processing.}
\label{fig:flowchart}
\end{figure}
In order to simulate a near-equilibrium of sediment biogeochemistry that is in
agreement with the ocean circulation and biogeochemistry, we developed
the so-called ``burst-coupling'' method.
The model, BLOM/iHAMOCC, is adjusted such that we can choose to simulate the
sediment processes alone while being forced by a seasonal climatology of
bottom-seawater variables, or to couple the sediment with the rest of the ocean
model.
The variables include particle fluxes from the ocean onto the sediment and
tracer concentrations.
The fluxes define the sedimentation of biogenic and terrigenic particles.
Dissolved tracer concentrations are used to calculate gradients between the
porewater and overlying seawater for sediment--seawater fluxes.
The climatology of the bottom-seawater variables consists of \SI{12}{\month}
and may be an average over a certain number of years at the end of the preceding
coupled period.
Normally, a single iteration step for the sediment (a finite-difference
approximation of the differential equations of Sec.~\ref{sec:sediment}) is
performed for each water column biogeochemistry iteration, but in stand-alone
mode a great many sediment iterations are performed (with a much larger
timestep for an additional speed-up).
These modes can be alternated in an automated manner where the user defines the
coupled-mode and the decoupled-mode periods.
Figure~\ref{fig:flowchart} shows the flowchart for this process.
If there is a bottom-seawater forcing present, it can be read from file.
Otherwise, the simulation starts in coupled mode and a user-defined number of
years are accumulated during all or the last part of the simulation (at least
the last year) to derive a 12-month climatology.
Optionally, this climatology is copied from volatile memory to file.
Then the sediment is spun up decoupled.
The process may be iterated by continuing the coupled simulation (with the
spun-up sediment) and so on.
The model terminates within the coupled mode when it reaches the user-defined
number of coupled model years (standard NorESM configuration); this is not
presented in this flowchart.
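The alternation logic of Fig.~\ref{fig:flowchart} can be illustrated with the following toy Python sketch; the two `model' steps are arbitrary relaxation equations standing in for the expensive coupled step and the cheap sediment-only step, and the period lengths loosely follow \emph{BurstLong} (see Table~\ref{tab:simulations} below).
\begin{verbatim}
# Toy illustration of the burst-coupling alternation; all numbers are
# placeholders, not model physics.
def step_coupled(ocean, sediment):
    # one expensive coupled year
    return (ocean + 0.01 * (sediment - ocean),
            sediment + 0.001 * (ocean - sediment))

def step_sediment(sediment, forcing):
    # one cheap sediment-only year, forced by a stored climatology
    return sediment + 0.001 * (forcing - sediment)

ocean, sediment = 1.0, 0.0
for cycle in range(4):
    clim, nyr = 0.0, 50
    for year in range(250):                 # coupled period
        ocean, sediment = step_coupled(ocean, sediment)
        if year >= 250 - nyr:               # accumulate the forcing over
            clim += ocean / nyr             # the last 50 coupled years
    for _ in range(50_000):                 # decoupled sediment-only burst
        sediment = step_sediment(sediment, forcing=clim)
print(ocean, sediment)
\end{verbatim}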
\subsubsection{Other improvements}
Much of the code has been refactored and tidied up, which improved readability
and computational performance (about 20\,\%).
The details can be found in the commits, see
\url{https://hg.gnu.org.ua/hgweb/burst-coupling/} and related discussion in
Appendix~\ref{app:recommendations}.
\subsection{Simulations}
We use a Holocene (pre-industrial) climate state.
We first run the ocean model for 2000\;yr, which is when the seawater tracers
reach an approximate steady state (but still drift because of the unfilled
sediment).
Then we use the final state of this simulation as the initial conditions for the
three different simulations that are listed in Table~\ref{tab:simulations} and
explained below.
The time evolution of each of the simulations is presented in
Fig.~\ref{fig:simulations}.
\begin{table}[h]
\begin{tabular}{lll}
\toprule
Name & Duration & Iterations \\
\midrule
Coupled & 1000\;yr coupled & 1 \\
BurstShort & 50\;yr coupled, 400\;yr sediment, 50\;yr coupled & 2 \\
BurstLong & 250\;yr coupled, 50\;kyr sediment & 4 \\
\bottomrule
\addlinespace
\end{tabular}
\caption{Numerical simulations performed.
All the simulations are forked from a 2000\;yr coupled simulation, such that the
water column is in an approximate steady state to start with.}
\label{tab:simulations}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{burst-crop}
\caption{Burst coupling, a method to efficiently spin up a model consisting
of a computationally slow and a fast component, where, in order to reach a
steady state, the fast component needs to run many more model years than the
slow component.
In the depicted example, the coupled BLOM/iHAMOCC model that includes the
computationally slow seawater component, is spun up for 250\;yr, after which
the fast sediment component is spun up for a much longer time.
This may be repeated a couple of times, each time with a recent forcing.}
\label{fig:burst}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{simulations-crop}
\caption{The model-time evolution of the four simulations (neither depth nor
time are to scale).
The blue areas represent the seawater and the brown the area below
the seafloor (sediment and porewaters).
The small rectangles in the lower-right corner of the seawater in the burst-coupling
simulations represent the yearly averaged bottom-water climatology.
The double arrows represent the forcing of off-line bottom-water variables
onto the sediment.}
\label{fig:simulations}
\end{figure}
\begin{itemize}
\item
\emph{Coupled} is a 1000-yr simulation where the water column and the
sediment are coupled.
Its purpose is to serve as a baseline to which the other simulations can be compared.
Like the other simulations, it is preceded by a 2000-yr initial spin-up; so at
the end of \emph{Coupled}, the model has done 3000\;yr of coupled integration.
\item
\emph{BurstShort} has the same duration, but two intervals of it are sediment-only,
decoupled from the water column and forced by bottom water particle fluxes and
tracer concentrations calculated from the last 50\;yr of the preceding coupled
simulation interval.
This simulation tests if the method is basically working and if the
sediment---when forced by a bottom-water climatology---is evolving sufficiently
similar to \emph{Coupled}.
In other words, this shows how the water column forcing (in contrast to coupled
mode) is affecting the sediment.
\item
\emph{BurstLong} is a simulation of a total model duration of 151\,000\;yr.
During the last 50\;yr of an initial 250\;yr coupled spin-up, the
flux and tracer values are averaged for each calendar month over the
50\;yr.
Then we run only the sediment model, decoupled from the rest of the model for
50\,000\;yr and forced by this climatology.
This is iterated three times and finalised with another 250\;yr of coupled
integration (Fig.~\ref{fig:burst}).
By interleaving every 250\;yr of coupled simulation with 50\;kyr of
sediment-only, we manage to make the total resulting coupled simulation
1000\;yr.
Hence we can analyse the effect of the long sediment bursts on the water column
(compared to the 1000\;yr of \emph{Coupled}).
Additionally, the simulation may give insight into how long an ocean model needs
to spin up (without drifting because of sediment interaction).
\end{itemize}
\subsection{Observations and evaluation}
We use data from \citet{archer1996,data::archer1999} to evaluate the \chem{CaCO_3} and
\chem{bSiO_2} mass fractions in the sediment.
Furthermore, the burial fluxes from \citet{hayes2021} are used as a further
criterion for the model performance.
\section{Results} \label{sec:results}
\subsection{Coupled simulation}
In Figs~\ref{fig:ts:calcite:Coupled:Panama}--\ref{fig:ts:bSi:Coupled} we
present porewater and particle concentration
development over the 2000 or 3000\;yr spin-up of the coupled simulation.
The figures show that the sediment is not filled with (most) particles within a
couple of thousand years.
For instance, in the Gulf of Panama \chem{CaCO_3} appears to reach a dynamical
equilibrium after about 1000\;yr in the upper layers of the sediment (upper
2.5\;cm), but at higher depths (below 6.4\;cm) it will take notably longer than
2000\;yr for \chem{CaCO_3} to reach a steady state
(Fig.~\ref{fig:ts:calcite:Coupled:Panama}).
Only b\chem{SiO_2} appears to be in a steady state from the start, or at least
from about 500\;yr onward.
\begin{figure}[!hb]
\centering
\includegraphics[width=\linewidth]{figs/POC-sediment-timeseries-Coupled-PanamaGulf.png}
\caption{\chem{CaCO_3} timeseries at different depths in the sediment of
\emph{Coupled} in the Gulf of Panama (80\degree\,W, 5\degree\,N).}
\label{fig:ts:calcite:Coupled:Panama}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/CaCO3-sediment-timeseries-Coupled-ManopS.png}
\caption{\chem{CaCO_3} timeseries at different depths in the sediment of
\emph{Coupled} at MANOP site~S (140\degree\,W, 11.33\degree\,N).}
\label{fig:ts:calcite:Coupled:MANOP}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/Ter-sediment-timeseries-Coupled-ManopS.png}
\caption{Clay timeseries at different depths in the sediment of
\emph{Coupled} at MANOP site~S (140\degree\,W, 11.33\degree\,N).}
\label{fig:ts:clay:Coupled}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/bSi-sediment-timeseries-Coupled-PanamaGulf.png}
\caption{b\chem{SiO_2} timeseries at different depths in the sediment of
\emph{Coupled} in the Gulf of Panama (80\degree\,W, 5\degree\,N).}
\label{fig:ts:bSi:Coupled}
\end{figure}
This alone warrants a relatively long spin-up of at least the sediment.
Even if one is not interested in the sediment, it should be realised that the
interaction of the seawater with a changing (non-steady state) sediment will
lead to a drift in an ocean- or climate model.
Of course a priori reasoning is not enough to conclude if such a drift is
significant.
\subsection{Global spin-up evolution}
\begin{figure}[!hb]
\centering
\includegraphics[width=\linewidth]{figs/CaCO3-timeseries-Coupled+Once+Long.png}
\caption{Modelled \chem{CaCO_3} timeseries in the sediment (global average,
upper 14.4\;cm): black is 1000\;yr of coupled, blue includes one
sediment-only period of 50\,000\;yr, red includes three such burst periods.}
\label{fig:ts:calcite-alt}
\end{figure}
We present the timeseries of \chem{CaCO_3} (sediment average) of the
\emph{BurstLong} simulation (Fig.~\ref{fig:ts:calcite-alt}).
It shows the 1000\;yr of coupled simulation, interleaved with the three
50\,000\;yr periods of decoupled sediment simulation.
The figure shows that at least two sediment-only burst
periods are needed to get the sediment in a steady state, and subsequent bursts
have less and less influence.
The bursts have a significant effect for the other particles as well, as can be
seen in Figs~\ref{fig:ts:oc-alt}--\ref{fig:ts:clay-alt}.
However, after each burst there is a relaxation towards the mass fractions of
the coupled simulation (\emph{Coupled}).
The relative change over the whole simulation time is much smaller for
organic carbon (Fig.~\ref{fig:ts:oc-alt}) and \chem{bSiO_2}
(Fig.~\ref{fig:ts:opal-alt}) than for \chem{CaCO_3}, and the deviation caused
by the bursts may simply be contingent upon the particular forcing that is
integrated.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/POC-timeseries-Coupled+Once+Long.png}
\caption{Modelled \chem{OC} timeseries in the sediment (global average,
upper 14.4\;cm): black is 1000\;yr of coupled, blue includes one
sediment-only period of 50\,000\;yr, red includes three such burst periods.
See Fig.~\ref{fig:ts:oc-inter} for the timeseries including the 50\,000\;yr
decoupled bursts (albeit compressed).}
\label{fig:ts:oc-alt}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/20731_burstlongrs.ssso12.pdf}
\caption{Modelled \chem{OC} timeseries in the sediment (global average,
upper 14.4\;cm) from \emph{BurstLong}. The time axis of the 50\,000\;yr
burst periods are compressed by a factor of~1000.}
\label{fig:ts:oc-inter}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/opal-timeseries-Coupled+Once+Long.png}
\caption{Modelled \chem{bSiO_2} timeseries in the sediment (global average,
upper 14.4\;cm): black is 1000\;yr of coupled, blue includes one
sediment-only period of 50\,000\;yr, red includes three such burst periods.}
\label{fig:ts:opal-alt}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/clay-timeseries-Coupled+Once+Long.png}
\caption{Modelled \chem{clay} timeseries in the sediment (global average,
upper 14.4\;cm): black is 1000\;yr of coupled, blue includes one
sediment-only period of 50\,000\;yr, red includes three such burst periods.}
\label{fig:ts:clay-alt}
\end{figure}
Fig.~\ref{fig:ts:calcite-50kyr} shows a 50\;kyr burst, again from
\emph{BurstLong}, where the sediment content of \chem{CaCO_3} approaches a
steady state.
In the subsequent figure (Fig.~\ref{fig:ts:oc-50kyr}), organic carbon is
plotted for the same stand-alone sediment simulation.
In addition, a region in the eastern Central Pacific is plotted where there
is a high flux of organic carbon (as well as biogenic silica).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/20B19_burstlong.sssc12.pdf}
\caption{Modelled \chem{CaCO_3} timeseries in the sediment (average over
upper 14.4\;cm) of the first `burst' of 50\,000\;yr.}
\label{fig:ts:calcite-50kyr}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/20B19_burstlong.ssso12.pdf}
\caption{Modelled organic carbon (\si{\mole\phosphorus\per\cubic\metre}) timeseries in the sediment (average over
upper 14.4\;cm) of the first `burst' of 50\,000\;yr.
In addition to a global average the figure also shows the timeseries of the
high particle flux region just west of Central America.}
\label{fig:ts:oc-50kyr}
\end{figure}
As expected, during the 50-kyr sediment-only simulations immediately after years
250, 500 and 750 of the coupled simulation, much more burial occurs than during
any of the 250-yr periods.
We present the average molar concentration timeseries of organic matter in
Fig.~\ref{fig:ts:oc}.
Organic carbon changes only slightly during the 1000\;yr coupled + 150\;kyr
sediment-only simulation, from
\SIrange{33.5}{36.5}{\mole\phosphorus\per\cubic\metre}.
During coupled simulation, organic carbon increases (but the second derivative
is negative).
During sediment-only simulation, organic carbon decreases.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/20B19_burstlong.ssso12.pdf}
\caption{Modelled organic carbon (\si{\mole\phosphorus\per\cubic\metre})
timeseries in the sediment (average over upper 14.4\;cm) of the full
1000+150\,000\;yr simulation.
The time axis during the decoupled periods, every 250\;yr, is compressed such
that one year on the x-axis represents 1000\;yr of model time.}
\label{fig:ts:oc}
\end{figure}
\subsubsection{Effect on water column}
Figs~\ref{fig:timeseries:jPiC_bot}--\ref{fig:timeseries:PO4_1000m} show
globally averaged quantities in the water column from \emph{Coupled} and
\emph{BurstLong}.
In all the figures, from year 250 onward, where the first decoupled period
takes place, the behaviour of \emph{BurstLong} is accelerated in both magnitude
and phase.
Of course the effect is much more moderate than for the tracers in the sediment,
but the 50\,000\;yr spin-up of the sediment clearly has an effect on the water
column tracers.
It is a small effect compared to the absolute value ($<1\%$) but large compared
to centennial variability ($>20\%$).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/CaCO3-flux-bottom-BurstLong-vs-Coupled.png}
\caption{\chem{CaCO_3} global flux at the ocean bottom (sedimentation);
against time (yr).
The black line is the final 1000\;yr of \emph{Coupled}; the red line
represents \emph{BurstLong}.}
\label{fig:timeseries:jPiC_bot}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/O2-1000m-BurstLong-vs-Coupled.png}
\caption{\chem{O_2} concentration at 1000\;m depth; global average;
against time (yr).
The red line is the final 1000\;yr of \emph{Coupled}; the black line
represents \emph{BurstLong}.}
\label{fig:timeseries:O2_1000m}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/PO4-1000m-BurstLong-vs-Coupled.png}
\caption{\chem{PO_4^{3-}} concentration at 1000\;m depth; global average;
against time (yr).
The red line is the final 1000\;yr of \emph{Coupled}; the black line
represents the burst coupling with decoupled 50\;kyr periods every 250\;yr.}
\label{fig:timeseries:PO4_1000m}
\end{figure}
\subsection{Verification of the method}
\emph{BurstShort} would measure the effect of forcings on the sediment compared
to \emph{Coupled}.
We have not finished this analysis.
Instead we will show the results of a `synchronous' simulation, named
\emph{BasicTest}.
That simulation takes only 51\;yr, of which 50\;yr are forced by the respective
stored bottom-water variables of \emph{Coupled} (see
Fig.~\ref{fig:simulations}).
As such we can track the deviation from the coupled simulation more precisely,
based on about the best representation of the bottom-water forcing that we can
get while still using a full stored year.
\begin{figure}[!hb]
\centering
\includegraphics[width=\linewidth]{figs/20C09_couplext.sssc12.pdf}
\caption{\chem{CaCO_3} molar concentration in the sediment of the last
51\;yr of the simulations \emph{Coupled} and \emph{BasicTest}.}
\label{fig:ts:calcite}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/20C09_couplext.ssssil.pdf}
\caption{b\chem{SiO_2} molar concentration in the sediment of the last
51\;yr of the simulations \emph{Coupled} and \emph{BasicTest}.}
\label{fig:ts:opal}
\end{figure}
Figs~\ref{fig:ts:calcite} and~\ref{fig:ts:opal} present the timeseries of the
globally averaged sediment concentration of \chem{CaCO_3} and b\chem{SiO_2} of
\emph{Coupled} together with \emph{BasicTest}.
At the start of the simulations the concentrations coincide for both tracers.
Then both tracers in \emph{BasicTest} start to deviate from \emph{Coupled}.
After 50\;yr, calcium carbonate has deviated by about 0.01\,\%, whereas biogenic
silica has deviated by about~1.4\,\%.
The water column concentrations did not change during the stand-alone sediment
simulation and continued spinning up only in the coupled periods (not further
presented).
\subsection{Evaluation}
This report will not show a full evaluation of the model that, amongst other
things, would compare the 151\,000\;yr simulation (\emph{BurstLong}) against
observations.
We have shown in the previous two subsections that our method works.
For a more complete evaluation of BLOM/iHAMOCC we refer to the analysis in the
recent coupled CMIP6 NorESM simulations \citep{tjiputra2020}.
One of the goals of our study was to show whether a spun-up sediment has an
effect on the ocean state, and we have indications that it does.
\subsubsection{Pelagic zone} \label{sec:pelagic}
BLOM/iHAMOCC has been evaluated by \citet{tjiputra2020}, but this is mostly
limited to the upper part of the water column.
Moreover, they had spun up the model on the order of 1000\;yr to a quasi steady
state.
As expected, most variables in the seawater from our \emph{BurstLong}
simulation are similar to those in \citet{tjiputra2020}.
The deepest part of the ocean and especially the sediment were not in a steady
state or not analysed.
Hence we will here evaluate a few features of the spun-up model, mostly using
the average of the final part of \emph{BurstLong}.
\subsubsection{Benthic zone and seabed} \label{sec:benthic}
Figure~\ref{fig:maps:particles} presents the particle composition of the
sediment according to our model.
\begin{figure}[!hb]
\centering
\includegraphics[width=\linewidth]{figs/particles-sediment.pdf}
\caption{Particle composition of the sediment, average over depth and the
last 1000\;yr of a 251\;kyr burst-coupled simulation.
The calcium carbonate and biogenic silica data---plotted as discs on the
same colour scale---are from \citet{archer1996} (revised dataset at
\citet{data::archer1999}).}
\label{fig:maps:particles}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.49\linewidth]{figs/CaCO3-sediment-burstlong-1000yr.png}
\includegraphics[width=.49\linewidth]{figs/CaCO3-sediment-data.png}
\caption{Mass fraction of calcium carbonate according to our model, and from
coretop data \citep{archer1996}.}
\label{fig:maps:CaCO3}
\end{figure}
In Figure~\ref{fig:maps:CaCO3}(b) observations by \citet{archer1996} are
presented.
The model underestimates \chem{CaCO_3} concentrations at most places,
especially in the equatorial East Pacific.
The northern Indian Ocean is reasonably well represented, but not the southern
part.
The low \chem{CaCO_3} fraction in the Southern Ocean, however, is well
represented.
This was expected, as the sediment there likely contains opal rather than
calcium carbonate.
The simulation shows high concentrations of lithogenic material in especially
the Southern and Arctic Oceans.
Even though the opal belt is reproduced by the model
(Section~\ref{sec:pelagic} and \citet{tjiputra2020}), the
sedimentary biogenic silica fraction is nonetheless underestimated.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/opal-sediment.pdf}
\caption{\chem{bSiO_2} fraction of the sediment, average over full model
active sediment depth (0--15\;cm) and the
last 1000\;yr of a 251\;kyr burst-coupled simulation.
The biogenic silica data---plotted as discs on the same colour scale---are
from \citet{archer1996} (revised dataset at \citet{data::archer1999}).
\textbf{Left:} relative to total particulate; \textbf{right:} relative to
particles excluding \chem{CaCO_3}.}
\label{fig:maps:opal}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/CaCO3-sediment-profiles.png}
\caption{Profiles of \chem{CaCO_3} in the eastern equatorial Pacific Ocean,
central North Pacific, BATS station and Weddell Sea.
The vertical axis is the sediment depth (mm); the horizontal mass fraction
of \chem{CaCO_3}.
The black line is after 1000\;yr of normal BLOM/iHAMOCC simulation
(\emph{Coupled});
the red line is also 1000\;yr, but every 250\;yr the sediment is decoupled
to run 100\;kyr stand-alone (\emph{BurstLong}).}
\label{fig:profiles:calcite}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/bSi-sediment-profiles.png}
\caption{Profiles of b\chem{SiO_2} in the eastern equatorial Pacific Ocean,
central North Pacific, BATS station and Weddell Sea.
The vertical axis is the sediment depth (only upper 30\;mm); the horizontal
mass fraction of b\chem{SiO_2}.
The black line is after 1000\;yr of normal BLOM/iHAMOCC simulation
(\emph{Coupled}); the red line is also 1000\;yr, but every 250\;yr the
sediment is decoupled to run 100\;kyr stand-alone (\emph{BurstLong}).}
\label{fig:profiles:bsi}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/cfbSi-sediment-profiles.png}
\caption{Profiles of the \textbf{calcium carbonate free} mass fraction of
b\chem{SiO_2} in the eastern equatorial Pacific Ocean, central North
Pacific, BATS station and Weddell Sea.
The vertical axis is the sediment depth (only upper 30\;mm); the horizontal
mass fraction of b\chem{SiO_2}.
The black line is after 1000\;yr of normal BLOM/iHAMOCC simulation
(\emph{Coupled}); the red line is also 1000\;yr, but every 250\;yr the
sediment is decoupled to run 100\;kyr stand-alone (\emph{BurstLong}).}
\label{fig:profiles:cfbsi}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{figs/POC-seawater-profile-Pacific.png}
\caption{Profiles of the \chem{POC} concentration in the Pacific Ocean seawater.
From left to right: the Patton Escarpment, MANOP Site S, and the Gulf of
Panama.
The vertical axis is the water column depth (m).
The black line is after 1000\;yr of normal BLOM/iHAMOCC simulation
(\emph{Coupled}); the red line is after 1672\;yr; and the green line is after
1000\;yr plus an additional 1000\;yr of burst-coupled simulation, interrupted
by decoupled sediment simulations run 50\;kyr stand-alone
(\emph{BurstLong}).
The crosses are measurements from \citet{lam2011}.}
\label{fig:seawaterprofiles:poc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{figs/POC-sediment-profile-Pacific.png}
\caption{Profiles of the mass fraction of \chem{POC} in the Pacific Ocean.
From left to right: the Patton Escarpment, MANOP Site S, and the Gulf of
Panama.
The vertical axis is the sediment depth (mm).
The black line is after 1000\;yr of normal BLOM/iHAMOCC simulation
(\emph{Coupled}); the red line is after 1672\;yr; and the green line is after
1000\;yr plus an additional 1000\;yr of burst-coupled simulation, interrupted
by decoupled sediment simulations run 50\;kyr stand-alone
(\emph{BurstLong}).}
\label{fig:profiles:poc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{figs/O2-seawater-profile-Pacific.png}
\caption{Profiles of the \chem{O_2} concentration in the
Pacific Ocean seawater.
From left to right: the Patton Escarpment, MANOP Site S, and the Gulf of
Panama.
The vertical axis is the water column depth in m.
The black line is after 1000\;yr of normal BLOM/iHAMOCC simulation
(\emph{Coupled}); the red line is after 1672\;yr; and the green line is after
1000\;yr plus an additional 1000\;yr of burst-coupled simulation, interrupted
by decoupled sediment simulations run 50\;kyr stand-alone
(\emph{BurstLong}).
\label{fig:seawaterprofiles:o2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{figs/O2-sediment-profile-Pacific.png}
\caption{Profiles of the porewater concentration of \chem{O_2} in the
Pacific Ocean.
From left to right: the Patton Escarpment, MANOP Site S, and the Gulf of
Panama.
The vertical axis is the sediment depth in mm.
The black line is after 1000\;yr of normal BLOM/iHAMOCC simulation
(\emph{Coupled}); the red line is after 1672\;yr; and the green line is after
1000\;yr plus an additional 1000\;yr of burst-coupled simulation, interrupted
by decoupled sediment simulations run 50\;kyr stand-alone
(\emph{BurstLong}).}
\label{fig:profiles:o2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/O2-sediment-map-zavg.png}
\caption{The depth-averaged porewater concentration of \chem{O_2} in the
World Ocean.
Yearly average at the end of \emph{BurstLong}.}
\label{fig:map:o2}
\end{figure}
The \chem{POC} at the Patton Escarpment (Fig.~\ref{fig:profiles:poc}, left panel) has
the right asymptotic order of magnitude, 0.8\,\% compared with 1.4\,\% from
observations \citep[p.~231]{book::sarmiento2006}, but the shape is wrong
(starting below 0.5\,\% at the surface instead of 1.7\,\% in observations).
The seawater profiles (Fig.~\ref{fig:seawaterprofiles:poc}) show some
consistency in that the highest concentration near the bottom are in the right
panel (Panama, where we expect high organic particle loads), consistent with the
about 6\,\% in the sediment.
Of course, percentages cannot be directly compared with concentrations; it
depends also on the other particles' sedimentation.
The zero value in the middle panel of the sediment profiles is inconsistent with
the particles in the seawater (the latter compares well with observations).
The oxygen profiles in the seawater and in the sediment are shown in
Fig.~\ref{fig:seawaterprofiles:o2} and Fig.~\ref{fig:profiles:o2}, respectively.
Oxygen concentrations are practically zero in the Gulf of Panama (see also the
map of Fig.~\ref{fig:map:o2}).
The sediment profiles have the right basic shape
\citep[p.~231]{book::sarmiento2006}, but they do not become zero at the Patton
Escarpment (continental margin, left panel of Fig.~\ref{fig:profiles:o2}).
This may be because the resolution of the model is too low to resolve the
continental margin.
The depth-averaged oxygen concentration in the sediment shows the hypoxic/anoxic
areas around the continents.
The burial rates from near the end of the \SI{251}{\kilo\year} simulation of
\emph{BurstLong} are given in Table~\ref{tab:burial}.
\begin{table}
\begin{tabular}{lSlll}
\toprule
Type of particle & \multicolumn{4}{l}{Global burial rate} \\
& Tmol/yr & $\%_{>200m}$ & Tg/yr (element) & Tg/yr (compound) \\
\midrule
Organic carbon & 28 C & 63 & 337 C & 843 \chem{C_6H_{12}O_6} \\
Calcium carbonate & 26 C & 91 & 318 C & 2649 \chem{CaCO_3} \\
Biogenic silica & 23 Si & 29 & 644 Si & 1375 \chem{SiO_2} \\
Clay mineral & 1.15 & 84 & 1.15 & 1.15 \\
\bottomrule
\addlinespace
\end{tabular}
\caption{The burial rates after the \SI{251}{\kilo\year} simulation of
\emph{BurstLong}.}
\label{tab:burial}
\end{table}
Estimates of burial have a large variance.
For calcium carbonate the burial ranges typically from
\SIrange{100}{150}{\tera\gram\carbon\per\year} \citep{cartapanis2018}.
The model yields a burial that is more than two times higher than their upper
limit.
Then again, earlier estimates of \chem{CaCO_3} burial were higher and much
closer to our result \citep{milliman1993,milliman1996}.
In the model, when calcium carbonate is undersaturated, it dissolves following a
first-order dissolution.
The dissolution rate constant may be adjusted, and a higher-rate dissolution
parameterisation could be considered as well \citep[e.g.][]{subhas2015}.
Recent estimates of biogenic silica burial are
\sisetup{separate-uncertainty}%
\SI{202(115)}{\tera\gram\silicon\per\year} \citep{treguer2013}, whereas our
model yields a factor of three higher burial rate.
Also our carbon burial is overestimated by a factor of three.
Compare iHAMOCC's \SI{337}{\tera\gram\carbon\per\year} with
\SI{112}{\tera\gram\carbon\per\year} \citep{dunne2007} and
\SI{156}{\tera\gram\carbon\per\year} \citep{burdige2007}.
The coupled simulation (\emph{Coupled}) shows a relatively continuous rise of
particle content, because the sediment is filling up.
The burial is close to zero at the beginning of the simulation, because no
particles are present yet in the bottom sediment layer.
After burst coupling (\emph{BurstLong}), the sediment is spun up and fluxes are much
higher and without any obvious trend.
\subsection{Runtime efficiency}
Using the burst-coupling method we reached a model preformance increase per
model year of about a factor of~19.
This is on top of an about 20\,\% performance gain because of the refactoring
done to the code before and during implementation of the burst-coupling method.
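As a back-of-the-envelope illustration of where this factor comes from, one can ask what relative cost per sediment-only year is implied if the factor of~19 is read as the reduction of the average cost per model year over \emph{BurstLong}; this interpretation is our assumption, and the sketch below only carries out the arithmetic.
\begin{verbatim}
# Implied relative cost of a sediment-only model year, if the factor 19 is
# read as the reduction of the average cost per model year of BurstLong
# (1000 coupled yr + 150,000 sediment-only yr) relative to coupled mode.
coupled_yr, sediment_yr, speedup = 1000.0, 150_000.0, 19.0

total_yr = coupled_yr + sediment_yr
# total cost = coupled_yr * 1 + sediment_yr * c (coupled cost = 1 per year);
# speedup = total_yr / total cost  ->  solve for c:
c = (total_yr / speedup - coupled_yr) / sediment_yr
print(c)   # ~0.046: a sediment-only year would cost ~5% of a coupled year
\end{verbatim}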
\section{Discussion and conclusion} \label{sec:discussion}
A complete steady state is only reached for \chem{CaCO_3} and \chem{POC} far
beyond 100\,000\;yr, suggesting that (1) models that are similar in the relevant
respects need such a long spin-up and (2) the real ocean needs more than
100\,000\;yr to equilibrate.
In reality this time may be shorter.
One issue with determining the equilibration time is the choice of the
coupling/decoupling procedure.
Instead of one parameter (time), it introduces three independent parameters,
namely coupled time, decoupled time and the number of iterations (or total
integration time).\footnote{One could deviate from our model scheme by allowing
variable coupled or decoupled periods, allowing for instance to change the
coupling state as soon as the relevant ocean component approaches a steady
state.
One might also use a climatology based on a different period than the last
50\;yr of a coupled simulation; this choice should take the extent of
interannual variability into account.}
The 50\,000\;yr of stand-alone sediment integration is very long, chosen
for the practical reason that the sediment module integrates very fast, so it
can be spun up to a near steady state at little cost.
A consequence of our model setup is that the total integration time may not be a
good measure of how long an ocean model needs to equilibrate.
Moreover, it appears that the full ocean system in \emph{BurstLong} is not yet
fully equilibrated and needs at least a couple more iterations.
Judging from the previous paragraph, this is due to too little coupled
integration time.
Our model underestimates the sediment calcium carbonate mass fraction,
especially in the equatorial Pacific (Fig~\ref{fig:maps:particles}
or~\ref{fig:maps:CaCO3}).
The general distribution patterns of calcium carbonate are realistic; they
relate to the topography in the same way as the observations.
The biogenic silica is underestimated in the opal belt and overestimated in the
equatorial Pacific.
It seems that in the model diatoms dominate in the equatorial Pacific, whereas
in reality coccolithophores are important there.
Also, carbon burial is likely strongly overestimated, thereby suppressing the
calcium carbonate mass fraction.
Overall burial of all particles in iHAMOCC could be increased by increasing the
bioturbation coefficient, which is not well known.
This would speed up the burial process
\citep[e.g.][pp.~41--44]{book::boudreau1997}.
It is not known whether the bioturbation coefficient of
\SI{1e-9}{\metre\squared\per\second} that we used is realistic.
\section{Code availability}
The sediment burst-coupling code may be obtained at
\url{https://puszcza.gnu.org.ua/projects/burst-coupling/}.
For this study, different revisions of the code from
\texttt{105:fe270f4aa7db} to \texttt{121:dad56c57eb86} were used.
The repository also contains a
\href{https://hg.gnu.org.ua/hgweb/burst-coupling/file/c2feaf25645d/howto_sediment.pdf}{manual}
for using burst coupling with BLOM/iHAMOCC\@.
The mathematical variable names correspond to the Fortran variables as described
in Table~\ref{tab:variables} in the appendix.
Analysis scripts can be found at \url{https://hg.sr.ht/~marco/sediment-analysis}.
A short manual for using these scripts can be found in Appendix~\ref{app:post}.
Model output is available on request.
\section{Authors' contributions}
The model and the simulations were designed by MvH, JS, JT and CH\@.
MvH designed the burst coupling code structure with contributions from JS, JT,
CH, MB and AG\@.
The manuscript was prepared by MvH with close collaboration and major
contributions from CH, JS, and JT\@.
The authors declare that they have no conflict of interest.
\begin{acknowledgements}
We would like to thank Mats Bentsen for the useful discussions about the model
design.
We thank Alok Gupta and Anne Mor\'ee for helping with the model setup.
This study was supported by the project ``Overturning circulation and its
implications for the global carbon cycle in coupled models'' (ORGANIC, The
Research Council of Norway, grant No.~239965).
This work was also supported through project CRESCENDO (Coordinated
Research in Earth Systems and Climate: Experiments,
Knowledge, Dissemination and Outreach; Horizon 2020 European
Union's Framework Programme for Research and Innovation, grant
No.~641816, European Commission).
The authors wish to acknowledge the use of
\href{https://ferret.pmel.noaa.gov/Ferret/}{Ferret}, a product of
\href{http://www.noaa.gov/}{NOAA}'s Pacific Marine Environmental Laboratory.
The plots in this paper were created by the Ferret visualisation library
\href{http://www.nongnu.org/complot/}{ComPlot} \citep{vanhulten2017:complot} and
\href{http://www.gnuplot.info/}{gnuplot}.
\end{acknowledgements}
\DeclareRobustCommand{\DutchName}[4]{#1,~#3~#4}
\section{Introduction}
In a recent paper \cite{Gurdogan:2015csr}, a certain limit is applied to the $\gamma_i$-deformation
of $\mathcal{N}=4$ SYM theory \cite{Frolov:2005dj}, and the authors claim
that the resulting non-unitary
theory is conformal in the planar limit where the number $N$ of
colors is sent to infinity.
In this letter, we point out that
the Lagrangian given in \cite{Gurdogan:2015csr} is incomplete and does not define a
CFT -- not even in the planar limit.
We complete the theory by the missing couplings that are required for
renormalizability. They are quartic and have double-trace color structures and
non-trivial $\beta$-functions \footnote{Quartic double-trace couplings emerged earlier in other
contexts, e.g.\ in the effective potential
of the worldvolume theory of interacting D3-branes \cite{Tseytlin:1999ii},
in non-supersymmetric orbifold theories \cite{Dymarsky:2005uh} where
they were related to tachyons in the dual string theory \cite{Dymarsky:2005nc}.}.
Although double-trace couplings are apparently subleading
in the large-$N$ expansion, they must not be neglected in the planar limit
\cite{Sieg:2005kd}.
Planar multi-point correlation functions for infinitely many
composite single-trace operators
depend on these couplings and are hence sensitive to the $\beta$-functions.
We show this explicitly by determining to two-loop order
the planar anomalous dimensions of several single-trace operators composed of
two scalar fields.
Moreover, we give an example for a planar four-point correlation function of
further operators that depends on one of the double-trace couplings.
Allowing for complex coupling constants in this model, whose single-trace
part is already non-unitary, we find (one-loop) fix-points.
\section{The proposed theory}
In \cite{Gurdogan:2015csr}, the authors propose to apply the following
limit to the $\gamma_i$-deformation
\begin{equation}\label{limit}
\gamma_i\to i\infty\col\quad \lambda\to0 \quad\text{with}\quad
\xi_i=\sqrt{\lambda}\e^{-\frac{i}{2}\gamma_i}=\text{const.}
\end{equation}
and focus on the special case of only a single non-vanishing coupling constant
$\xi=\xi_3$.
They claim that in the planar limit the Lagrangian of the resulting theory of two interacting complex
scalars is given by equation (1) in their paper,
which in our notation and conventions reads
\begin{equation}
\begin{aligned}\label{L}
\mathcal{L}
&=\tr\Big(-\partial^\mu\bar\phi_1\partial_\mu\phi^1
-\partial^\mu\bar\phi_2\partial_\mu\phi^2
+\frac{\xi^2}{N}\bar\phi_1\bar\phi_{2}\phi^1\phi^{2}\Big)
\pnt
\end{aligned}
\end{equation}
This Lagrangian follows immediately when applying the limit \eqref{limit} to the single-trace part of the action of the $\gamma_i$-deformation as written e.g.\ in \cite{Fokken:2013aea}.
\section{Running double-trace couplings}
The Lagrangian \eqref{L} is incomplete since certain divergences in
correlation functions of composite (single-trace) operators cannot be
absorbed by renormalizing these operators -- not even in the planar limit.
These divergences have to be canceled by counter terms for quartic double-trace
couplings.
At one-loop order, the counter terms are determined
by contracting two copies of the quartic scalar single-trace vertex of \eqref{L}.
For fields that transform in the adjoint representation of the global $SU(N)$ group,
the only terms that have to be added to \eqref{L} read \footnote{For $U(N)$ fields,
further couplings have to be considered \cite{Fokken:2013aea}.}
\begin{equation}
\begin{aligned}
\label{doubletracecouplings}
\mathcal{L}_{\text{dt}}&=
-\frac{\xi^2}{N^2}\bigg[\sum_{1\le i\le j\le 2}(Q^{ij}_{ij}+\delta Q^{ij}_{ij})\tr(\bar\phi_i\bar\phi_j)\tr(\phi^i\phi^j)\\
&\hphantom{{}={}-\frac{\xi^2}{N^2}\bigg[}
+(\tilde Q+\delta\tilde Q)\tr(\bar\phi_1\phi_2)\tr(\phi^1\bar\phi^2)
\bigg]
\col
\end{aligned}
\end{equation}
where $Q^{ii}_{ii}$, $Q^{12}_{12}$ and $\tilde Q$ with $i=1,2$ denote the respective
tree-level couplings. The counter terms
$\delta Q^{ii}_{ii}$, $\delta Q^{12}_{12}$ and $\delta\tilde Q$
are easily determined in $D=4-2\varepsilon$ dimensions
and lead to the following one-loop $\beta$-functions
in the planar limit
\begin{equation}\label{betafunctions}
\begin{aligned}
\beta_{Q^{ii}_{ii}}&=2\varepsilon\delta Q^{ii}_{ii}
=(1+4(Q^{ii}_{ii})^2)\frac{\xi^2}{(4\pi)^2}
\col\\
\beta_{Q^{12}_{12}}&=2\varepsilon\delta Q^{12}_{12}=2(1-Q^{12}_{12})^2\frac{\xi^2}{(4\pi)^2}\col\\
\beta_{\tilde Q}&=2\varepsilon\delta\tilde Q=2(1-\tilde Q)^2\frac{\xi^2}{(4\pi)^2}
\pnt
\end{aligned}
\end{equation}
The $\beta$-functions \eqref{betafunctions} are non-vanishing for
$Q^{ii}_{ii}=Q^{12}_{12}=\tilde Q=0$, showing that the theory given by
the single-trace Lagrangian \eqref{L} alone is neither complete nor
conformal.
The double-trace couplings
\eqref{doubletracecouplings} cannot even be neglected in the planar limit: their counter terms
are required to renormalize certain planar Feynman diagrams. This is exemplified
below.
The $\beta$-functions for $Q^{12}_{12}$ and $\tilde Q$
have a (one-loop) fix-point at $Q^{12}_{12}=1$ and $\tilde Q=1$, respectively,
such that their running can be avoided.
The $\beta$-functions for $Q^{ii}_{ii}$, however, do not have fix-points for real tree-level couplings $Q^{ii}_{ii}$.
In fact, the $\beta$-functions for $Q^{ii}_{ii}$
follow immediately when applying the limit
\eqref{limit} to our result \cite{Fokken:2013aea}, which we have
obtained together with J.\ Fokken and which shows that
the $\gamma_i$-deformation is not a CFT.
In contrast to the $\gamma_i$-deformation, a theory with the single-trace
Lagrangian \eqref{L} is not unitary.
Hence,
let us be bold and even allow complex coupling constants.
In this case, we can choose the imaginary values $Q^{ii}_{ii}=\pm\frac{i}{2}$
that are their one-loop fix-point values.
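As a quick cross-check (not part of the original derivation, and assuming SymPy is available), the fix-point structure of the $\beta$-functions \eqref{betafunctions} can be verified with a few lines of Python:

\begin{verbatim}
import sympy as sp

Q, xi = sp.symbols('Q xi')
g = xi**2 / (4*sp.pi)**2          # common loop factor xi^2/(4 pi)^2

beta_Qii = (1 + 4*Q**2) * g       # beta-function for Q^{ii}_{ii}
beta_Q12 = 2 * (1 - Q)**2 * g     # beta-function for Q^{12}_{12} and Qtilde

print(sp.solve(sp.Eq(beta_Qii, 0), Q))   # [-I/2, I/2]: purely imaginary
print(sp.solve(sp.Eq(beta_Q12, 0), Q))   # [1]: the real fix-point
\end{verbatim}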
\section{Effect on the planar spectrum}
As we have already mentioned before,
the double-trace couplings must not be neglected in the planar limit
since their counter terms are required to renormalize planar correlation functions.
Together with J.\ Fokken, we
have already pointed this out in our calculation \cite{Fokken:2014soa} of the anomalous dimensions of the composite operators $\mathcal{O}^{ii}=\tr(\phi^i\phi^i)$
in the $\gamma_i$-deformation.
Here, we determine to two-loop order
the planar anomalous dimensions of all composite operators (states) built from
two scalar fields
that receive contributions from the double-trace couplings and counter terms
\eqref{doubletracecouplings}.
The affected states are $\mathcal{O}^{ii}=\tr(\phi^i\phi^i)$, $\mathcal{O}^{12}=\tr(\phi^1\phi^2)$, $\tilde{\mathcal{O}}=\tr(\bar\phi_1\phi^2)$
and their complex conjugates.
The results for their anomalous dimensions read
\begin{equation}
\begin{aligned}\label{gammas}
\gamma^\varrho_{\mathcal{O}^{ii}}
&=4Q^{ii}_{ii}\frac{\xi^2}{(4\pi)^2}-2\frac{\xi^4}{(4\pi)^4}-2\varrho\beta_{Q^{ii}_{ii}}\frac{\xi^2}{(4\pi)^2}\col\\
\gamma^\varrho_{\mathcal{O}^{12}}&=2(Q^{12}_{12}-1)\frac{\xi^2}{(4\pi)^2}-\varrho\beta_{Q^{12}_{12}}\frac{\xi^2}{(4\pi)^2}\col\\
\gamma^\varrho_{\tilde{\mathcal{O}}}&=2(\tilde Q-1)\frac{\xi^2}{(4\pi)^2}-\varrho\beta_{\tilde Q}\frac{\xi^2}{(4\pi)^2}
\col
\end{aligned}
\end{equation}
where the first line also follows
from \cite{Fokken:2014soa} in the limit \eqref{limit}.
The parameter $\varrho$ captures the scheme dependence that starts
at two-loop order and vanishes whenever the respective $\beta$-function
is zero.
The scheme labeled by $\varrho$ is defined by applying minimal subtraction
using $\xi_\varrho=\xi\e^{\frac{\varrho}{2}\varepsilon}$ as coupling constant.
For example, we have $\varrho=0$ in the DR scheme but $\varrho=-\gamma_{\text{E}}+\log4\pi$ in the $\overline{\text{DR}}$ scheme \footnote{In the present case, the DR scheme coincides with the MS scheme.}.
At the fix-point values $Q^{ii}_{ii}=\pm\frac{i}{2}$, $Q^{12}_{12}=\tilde Q=1$,
the scheme dependence vanishes as expected, and we find that only the first of
the above anomalous dimensions is non-vanishing but complex and reads
\begin{equation}
\gamma^\varrho_{\mathcal{O}^{ii}}=\pm2i\frac{\xi^2}{(4\pi)^2}-2\frac{\xi^4}{(4\pi)^4}
\pnt
\end{equation}
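The vanishing of the scheme dependence can likewise be checked symbolically; the following sketch (ours, again assuming SymPy) confirms that the $\varrho$-term drops out of the first line of \eqref{gammas} at the fix-point $Q^{ii}_{ii}=+\frac{i}{2}$:

\begin{verbatim}
import sympy as sp

xi, rho = sp.symbols('xi rho', real=True)
g = xi**2 / (4*sp.pi)**2
Qii = sp.I/2                             # one-loop fix-point value
beta = (1 + 4*Qii**2) * g                # vanishes at the fix-point
gamma = 4*Qii*g - 2*g**2 - 2*rho*beta*g  # first anomalous dimension
print(sp.simplify(gamma))                # 2i g - 2 g^2, no rho left
\end{verbatim}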
\section{Effect on planar multi-point correlation functions}
Not only the two-point functions and hence anomalous dimensions \eqref{gammas}
depend on the double-trace couplings \eqref{doubletracecouplings} in the
planar limit. Also, planar higher-point functions and hence also OPEs
of operators built from more than two fields are sensitive to these couplings.
As an example, we consider the four-point correlation function
of the following operators each built from three scalars
\begin{equation}
\label{forpointfunction}
\begin{aligned}
&\tr(\phi^1(\phi^2)^2)(x_1)\col\qquad
\tr(\phi^1\bar\phi_2^2)(x_2)\col\\
&\tr(\bar\phi_1(\phi^2)^2)(x_3)\col\qquad
\tr(\bar\phi_1\bar\phi_2^2)(x_4)\pnt
\end{aligned}
\end{equation}
At order $\mathcal{O}(\xi^4)$ and when double-trace couplings are disregarded,
the diagrams contain two quartic scalar single-trace
vertices. There is one such diagram that is planar and
contains a UV divergence
that cannot be absorbed by renormalizing the operators \eqref{forpointfunction}.
It contains a loop formed by the two direct connections of these
two vertices and is shown in figure \ref{wrapdiag}.
\begin{figure}[htbp]
\centering
\subfigure[]{%
\label{wrapdiag}%
\settoheight{\eqoff}{$+$}%
\setlength{\eqoff}{0.5\eqoff}%
\addtolength{\eqoff}{-5.5\unit}%
\raisebox{\eqoff}{%
\begin{pspicture}(-1,-1)(14,18.5)
\ulinsert{1}{16}%
\olvertex{1}{13}%
\dlinsert{1}{10}%
\psbezier[doubleline=true](4,16)(5.25,16)(5.25,15)(6.5,15)
\psbezier[doubleline=true](6.5,15)(7.75,15)(7.75,16)(9,16)
\psbezier[doubleline=true](4,13)(5.25,13)(5.25,12)(6.5,12)
\psbezier[doubleline=true](6.5,12)(7.75,12)(7.75,13)(9,13)
\urinsert{12}{16}%
\orvertex{12}{13}%
\drinsert{12}{10}%
\ulinsert{1}{6}%
\olvertex{1}{3}%
\dlinsert{1}{0}%
\psbezier[doubleline=true](4,3)(5.25,3)(5.25,4)(6.5,4)
\psbezier[doubleline=true](6.5,4)(7.75,4)(7.75,3)(9,3)
\psbezier[doubleline=true](4,0)(5.25,0)(5.25,1)(6.5,1)
\psbezier[doubleline=true](6.5,1)(7.75,1)(7.75,0)(9,0)
\urinsert{12}{6}%
\orvertex{12}{3}%
\drinsert{12}{0}%
\psset{doubleline=true,linearc=\linearc}
\psline(4,10)(5,10)(5,9)
\psline(5,7)(5,6)(4,6)
\psline(6,8)(7,8)
\psline(8,9)(8,10)(9,10)
\psline(8,7)(8,6)(9,6)
\psbezier(4,8)(-0.5,8)(-0.5,10)(-0.5,13)
\psbezier(-0.5,13)(-0.5,18)(0.5,18)(6.5,18)
\psbezier(6.5,18)(13.5,18)(13.5,18)(13.5,13)
\psbezier(13.5,13)(13.5,10)(13.5,8)(9,8)
\fourvertex{5}{8}{0}
\fourvertex{8}{8}{0}
\end{pspicture}%
}%
}
\subfigure[]{%
\label{ctdiag}%
\settoheight{\eqoff}{$+$}%
\setlength{\eqoff}{0.5\eqoff}%
\addtolength{\eqoff}{-5.5\unit}%
\raisebox{\eqoff}{%
\begin{pspicture}(-1,-1)(14,18.5)
\ulinsert{1}{16}%
\olvertex{1}{13}%
\dlinsert{1}{10}%
\psbezier[doubleline=true](4,16)(5.25,16)(5.25,15)(6.5,15)
\psbezier[doubleline=true](6.5,15)(7.75,15)(7.75,16)(9,16)
\psbezier[doubleline=true](4,13)(5.25,13)(5.25,12)(6.5,12)
\psbezier[doubleline=true](6.5,12)(7.75,12)(7.75,13)(9,13)
\urinsert{12}{16}%
\orvertex{12}{13}%
\drinsert{12}{10}%
\ulinsert{1}{6}%
\olvertex{1}{3}%
\dlinsert{1}{0}%
\psbezier[doubleline=true](4,3)(5.25,3)(5.25,4)(6.5,4)
\psbezier[doubleline=true](6.5,4)(7.75,4)(7.75,3)(9,3)
\psbezier[doubleline=true](4,0)(5.25,0)(5.25,1)(6.5,1)
\psbezier[doubleline=true](6.5,1)(7.75,1)(7.75,0)(9,0)
\urinsert{12}{6}%
\orvertex{12}{3}%
\drinsert{12}{0}%
\psset{doubleline=true,linearc=\linearc}
\psline(4,10)(5,10)(5,9)
\psline(5,7)(5,6)(4,6)
\psline(8,9)(8,10)(9,10)
\psline(8,7)(8,6)(9,6)
\psbezier(5,9)(5,8.25)(8,8.25)(8,9)
\psbezier(5,7)(5,7.75)(8,7.75)(8,7)
\end{pspicture}%
}%
}
\caption{The planar four-point function of the operators \eqref{forpointfunction} receives a UV divergent contribution associated with a one-loop vertex correction. The divergence is canceled by the second diagram that contains the counter term of the respective double-trace coupling \eqref{doubletracecouplings}.
The four-point function is shown in double-line notation and the operators are depicted in red.}
\label{UVdivergent diagram}
\end{figure}
Its UV divergence is associated with the one-loop correction of a quartic
scalar double-trace vertex and has to be canceled by a respective counter term diagram
that also contributes in the planar limit. The latter diagram is shown in figure \ref{ctdiag}
and contains the respective counter
term coupling from the double-trace Lagrangian
\eqref{doubletracecouplings} \footnote{For non-vanishing tree-level double-trace couplings, also a third diagram with two double-trace couplings exists. It does not affect the argument.}.
The result is therefore sensitive
to the $\beta$-function of that coupling. This is another
indication that one must not neglect the double-trace couplings
-- not even in the planar limit.
Diagrams similar to those in figure \ref{UVdivergent diagram} exist for
all correlation functions of composite single-trace operators
in which the total charge of a subset of these operators matches the charge
of a single-trace factor in a double-trace coupling.
Likewise, a composite operator
$\mathcal{O}^{ii}$, $\mathcal{O}^{12}$, $\tilde{\mathcal{O}}$ or a complex
conjugate thereof can occur in the
OPE of two single-trace operators whose total charge matches its charge.
Via the anomalous dimension for that operator taken from \eqref{gammas},
such an OPE is then sensitive to the respective
$\beta$-function.
The planar correlation functions of the field theory are hence widely affected by the
double-trace couplings \eqref{doubletracecouplings}.
\section{Conclusions}
In this letter, we have explicitly shown that the model proposed in
\cite{Gurdogan:2015csr} is neither complete nor conformal -- not even in the planar
limit.
We have shown how to complete the model in the planar limit by including
the required double-trace couplings and their counter terms.
Admitting complex coupling constants, we could find conformal fix-points
for all induced couplings at one-loop order.
It would be very interesting to determine whether the fix-points
persist at higher loop orders and at finite $N$, i.e.\ when terms beyond the
planar limit that are subleading in the large-$N$ expansion are taken
into account.
Finally, the model opens the possibility for very interesting studies
concerning integrability in a simplified setup which
are of high relevance for integrability in $\mathcal{N}=4$ SYM theory
and in its deformations.
For instance, the very interesting question whether integrability is connected to
conformal invariance can be investigated by studying the spectrum of
the operators $\mathcal{O}^{ii}$ and $\mathcal{O}^{12}$ or
$\tilde{\mathcal{O}}$ at the conformal
fix-points and away from them. In particular, it should be analyzed
whether the different behaviors (non-vanishing and vanishing) of their
anomalous dimensions \eqref{gammas} at the fix-points can be reproduced in
the integrability-based approach.
This analysis should also be extended to both classes of
multi-point functions: those that are sensitive and those that are not
sensitive to the breakdown of conformal invariance.
\section{Acknowledgment}
We thank M.\ Staudacher and K.\ Zarembo for reading the manuscript and
for very useful remarks.
This research is supported in part by SFB 647 \emph{``Raum-Zeit-Materie. Analytische und Geometrische Strukturen''} and in part by DFF-FNU through
grant number DFF-4002-00037.
\section{Introduction}
There is growing scientific interest in understanding the functional and structural organization of the human brain from large-scale multimodal brain imaging data. In medical imaging analysis, a popular way to approach this task is to explore brain regional connections (i.e., brain networks) measured from brain imaging signals. The topological patterns of brain networks are closely related to brain functional organization~\cite{bullmore2012economy}, and the breakdown of connections between relevant brain regions is intimately associated with the progression of neurodegenerative diseases~\cite{jao2015functional,repovs2011brain} and with normal brain development~\cite{zhang2018multimodal}. However, patterns of focal damage in brain networks differ across modalities, making the mining of multimodal network changes difficult.
Deep learning methods have been successfully applied to extract biological information from neuroimaging data~\cite{ronneberger2015u,suk2013deep}. Most prior brain network analyses represent the graph structure as a grid-like image to enable convolutional computation~\cite{plis2018reading,deshpande2015fully,wang2017structural}.
More recently, deep graph convolutional networks (GCNs) have been introduced to brain network research~\cite{arslan2018graph,kawahara2017brainnetcnn,ktena2018metric}. These studies perform localized convolutional operations at either graph nodes or edges. They can be categorized into graph spectral convolution~\cite{arslan2018graph,ktena2018metric} and graph spatial convolution~\cite{hamilton2017inductive}. The former approach is suitable for node-centric problems defined on fixed-size neighborhood graphs. For graph-centric problems, the spectral method requires a group-wise graph structure before approximating the spectral graph convolution. Therefore, its performance depends to a large extent on the predefined network basis. Moreover, the existing framework~\cite{kawahara2017brainnetcnn} is designed for a single modality and lacks a well-defined $k$-hop convolutional operator on each node. This makes multimodal brain network fusion intractable in the node domain and thus makes it difficult to draw brain saliency maps.
In this paper, we propose a novel GCN model for multimodal brain network analysis. Two naturally coherent brain network modalities, i.e., functional and structural brain networks, are considered. The structural network acts as the anatomical skeleton constraining brain functional activities and, in return, consistent functional activities reshape the structural network in the long term~\cite{bullmore2012economy}. Hence, we argue for the existence of a high-level dependency, namely network communication~\cite{avena2018communication}, across them. It is deciphered by a deep encoding-decoding graph network in our model. Meanwhile, the obtained node features aid representation learning of the brain network structure in a supervised manner. The contributions can be summarized as four-fold. (1) This is the first paper using deep graph learning to model brain function evolving from its structural basis. (2) We propose an end-to-end automatic brain network representation framework based on the intrinsic graph topology.
(3) We model the cross-modality relationship through a deep graph encoding-decoding process based on the proposed multi-stage graph convolutional kernel. (4) We draw graph saliency maps subject to the supervised tasks, enabling phenotypic and disease-related biomarker detection.
\section{Methodology}
\textbf{Multimodal Brain Network Data.} A brain network uses a graph structure to describe interconnections between brain regions and is a weighted graph $G=\{V,E,X\}$, where $V=\{v_i\}^N_{i=1}$ is the node set indicating brain regions, $E=\{\epsilon_{i,j}\}$ is the edge set and $X=\{x_{i,j}\}$ is the set of corresponding edge weights. For a given subject, we have a pair of networks $\{G^f,G^d\}$, where $G^f=\{V,E^f,X^f\}$ represents the functional brain network and $G^d=\{V,E^d,X^d\}$ is the structural brain network. These two networks share the same set of nodes, i.e., they use an identical definition of brain regions, but differ in network topology and edge weights. An edge weight $x_{i,j}^f$ in $G^f$ is the correlation of fMRI signals between nodes $v_i$ and $v_j$, while a structural edge weight $x_{i,j}^d$ in $G^d$ is the probability of fiber tractography between them.
\vspace{-1em}
\subsection{Multi-Stage Graph Convolution Kernel}
\vspace{-0.5em}
A brain structural network can be interpreted as a freeway network where biological information such as brain functional signals flows from node to node. In the brain network, a node is affected by its neighboring nodes, and this influence is negatively correlated with the shortest network distance~\cite{stam2016relation}. To encode these node-to-node patterns, we adopt the spatial graph convolution kernel, which gives node embedding features with respect to the local graph topology. It defines a way to aggregate node features in a given size of neighborhood, e.g., 1-hop connections.
Given a target node $v_i$ and its neighbourhood graph topology $G_{\mathcal{N}(v_i)}$, the graph convolution kernel first collects node features $h_{v_i}$ of its immediate neighbours:
\vspace{-0.5em}
\begin{equation}\label{Eq:AGG}
\vspace{-0.5em}
AGG(h_{v_i})=\sum_{v_j\in \mathcal{N}(v_i)}h_{v_j}\cdot x_{i,j},
\end{equation}
and then updates the node feature as:
\vspace{-0.1em}
\begin{equation}\label{Eq:update}
h'_{v_i}=\sigma(AGG(h_{v_i})\cdot w).
\vspace{-0.5em}
\end{equation}
Here, $\sigma$ is a non-linear activation and $w\in\mathbb{R}^{F\times F'}$ is a learnable weight matrix of a fully-connected layer (FC). Previous research shows that a $k$-hop convolution kernel can be decomposed into $k$ 1-hop convolutions~\cite{kipf2016semi}. Therefore, we stack several 1-hop convolutions to increase the size of the effective receptive field on graphs.
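For concreteness, a minimal NumPy sketch of the aggregation and update steps in Eqs.~\ref{Eq:AGG} and \ref{Eq:update} is given below; it is illustrative only, and the variable names are ours rather than part of the proposed framework.

\begin{verbatim}
import numpy as np

def one_hop_conv(H, X, W, act=np.tanh):
    """One 1-hop spatial graph convolution (Eqs. 1-2).
    H: (N, F) node features; X: (N, N) edge weights;
    W: (F, F') weight matrix of the FC layer."""
    agg = X @ H          # AGG(h_i) = sum_j x_{ij} h_j over the neighbourhood
    return act(agg @ W)  # h'_i = sigma(AGG(h_i) w)

# Stacking k such layers yields an effective k-hop receptive field:
# H1 = one_hop_conv(H0, X, W1); H2 = one_hop_conv(H1, X, W2); ...
\end{verbatim}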
A potential problem with Eq.~\ref{Eq:AGG} is its poor generalization of the local aggregation, i.e., the aggregation weight is fixed to be $x_{i,j}$. Though these predefined values reflect the brain's biological profile, they might not be optimal for brain network encoding, especially for the cross-modality learning pursued by our research. For example, brain regions that are interconnected with large weights in the brain structural network are not guaranteed to be strongly connected in the brain functional network as well~\cite{osmanliouglu2019system}. Besides, compared with brain structural networks, brain functional networks exhibit more dynamic and fluctuating edge connections. Therefore, a dynamic adjustment of the aggregation weights during graph learning is favored. To this end, we adopt the idea of the graph attention network (GAT)~\cite{velickovic2017graph}. Given each pair of node features, their dynamic edge weights are learned by a single-layer feedforward neural network, i.e., $X^{ATT}=\{x^{ATT}_{i,j}\}=\{f_{att}(h_{v_i},h_{v_j})\}$. More specifically, we first increase the expressive power of the node features by using a shared linear transformation, $\Tilde{h}_{v_i}=h_{v_i}\cdot w$, where $w\in \mathbb{R}^{F\times F'}$ is a learned parameter. Then, we use a single-layer feedforward neural network to derive the edge weight:
\begin{equation}\label{Eq:feature}
\vspace{-0.5em}
\Tilde{x}_{i,j}=\sigma(a^T[\Tilde{h}_{v_i}\oplus \Tilde{h}_{v_j}]),
\end{equation}
where $\oplus$ is the concatenation operator and $a\in \mathbb{R}^{2F'}$ is a parameter of the feedforward network. To assure generalization of Eq.~\ref{Eq:feature} across different nodes, a softmax layer is appended to normalize over the neighbourhood,
\begin{equation}
\vspace{-0.8em}
x^{ATT}_{i,j}=\frac{exp(\sigma(a^T[\Tilde{h}_{v_i}\oplus \Tilde{h}_{v_j}]))}{\sum_{k\in\mathcal{N}(v_i)}exp(\sigma(a^T[\Tilde{h}_{v_i}\oplus \Tilde{h}_{v_k}]))}.
\end{equation}
\begin{figure}[t]
\vspace{-1em}
\center
\includegraphics[width=0.95\linewidth]{images/MGCK_horizental.pdf}
\caption{Multi-stage graph convolution kernel (MGCK). Three aggregation mechanisms are dynamically combined, including the graph attention weight $x^{ATT}_{i,j}$ (A), the original edge weight $x_{i,j}$ (B), and the binary weight $\delta(x_{i,j})$ (C).}
\label{fig:MGCK}
\vspace{-1em}
\end{figure}
Compared with $x_{i,j}$, $x^{ATT}_{i,j}$ is associated with the node order and thus is asymmetric on edge $\epsilon_{i,j}$. Besides, it is free of local network topology. In addition to the graph attention based aggregation (Fig.~\ref{fig:MGCK}, A), we also propose a binary symmetric aggregation defined with a threshold function $\delta(x_{i,j})$ (Fig.~\ref{fig:MGCK}, C). $\delta(x_{i,j})$ thresholds an edge by a given threshold value $\gamma$, e.g., the aggregation weight will be 1 if $x_{i,j}>\gamma$, and 0 otherwise. We set $\gamma=0$ empirically in this study. This process follows the assumption that two brain regions are highly interactive in the functional brain network as long as they are structurally connected~\cite{stam2016relation}. To integrate all of the aggregation mechanisms, we design a multi-stage graph convolution kernel (MGCK). Eq.~\ref{Eq:AGG} is thus updated as:
\begin{equation}\label{Eq:AGG_mod}
\vspace{-0.5em}
\begin{split}
AGG(h_{v_i})=&\sum_{v_j\in \mathcal{N}(v_i)}h_{v_j}\cdot (x_{i,j}+\alpha)\cdot(x^{ATT}_{i,j}+\beta \delta(x_{i,j}))\\
=&\sum_{v_j\in \mathcal{N}(v_i)}h_{v_j}\cdot (x_{i,j}x^{ATT}_{i,j}+\beta x_{i,j}+\alpha x^{ATT}_{i,j}+\alpha\beta \delta),
\end{split}
\end{equation}
where $\alpha$ and $\beta$ are learnable parameters balancing the different aggregation mechanisms. In the above equation, we have 4 different aggregation weights. $x_{i,j}x^{ATT}_{i,j}$ and $x_{i,j}$ are the pre-defined network connections with and without attention weights, $x^{ATT}_{i,j}$ is the attention aggregation alone, and $\delta$ gives the threshold connections. In the end, we introduce multi-head learning~\cite{vaswani2017attention} to stabilize the aggregation in MGCK. $K$ independent multi-stage aggregations are conducted and the aggregated features are concatenated before being fed to a FC layer. Accordingly, Eq.~\ref{Eq:update} is updated as:
\begin{equation}\label{Eq:multihead}
\vspace{-0.3em}
\hat{h}_{v_i}=\oplus^K_{k=1}[\sigma(AGG^k(h_{v_i})\cdot w)].
\end{equation}
Previous research indicates that graph convolutional networks perform poorly with deep architectures, partly because gradients vanish during back-propagation through many layers. To address this problem, residual blocks for GCNs~\cite{li2019deepgcns} were proposed, inspired by the success of ResNet~\cite{he2016deep} on image data. We add a residual connection after MGCK,
\begin{equation}\label{Eq:residule}
\vspace{-0.5em}
h'_{v_i}=\mathcal{F}(\hat{h}_{v_i},\hat{w})+w_m h_{v_i}.
\end{equation}
$\mathcal{F}$ is a FC layer parameterized by $\hat{w}$. The parameter $w_m$ is designed to match the feature dimensions.
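Putting Eqs.~\ref{Eq:feature}--\ref{Eq:residule} together, one MGCK layer could be sketched in NumPy as follows. This is a simplified illustration under our own naming, and it assumes every node has at least one neighbour so that the softmax normalization is well defined.

\begin{verbatim}
import numpy as np

def mgck_head(H, X, W, a, alpha, beta, act=np.tanh, gamma=0.0):
    """One attention head of MGCK (Eqs. 3-5)."""
    Ht = H @ W                                    # shared linear map (Eq. 3)
    Fp = Ht.shape[1]
    logits = act((Ht @ a[:Fp])[:, None] + (Ht @ a[Fp:])[None, :])
    nbr = X > gamma                               # delta(x_{ij}) as a mask
    logits = np.where(nbr, logits, -np.inf)
    logits -= logits.max(axis=1, keepdims=True)   # stabilise the softmax
    E = np.exp(logits) * nbr
    X_att = E / E.sum(axis=1, keepdims=True)      # Eq. (4)
    A = (X + alpha) * (X_att + beta * nbr)        # combined weights (Eq. 5)
    return act(A @ Ht)

def mgck(H, X, heads, W_hat, W_m):
    """K-head MGCK (Eq. 6) with the residual connection of Eq. (7).
    heads: list of (W, a, alpha, beta) tuples, one per head."""
    out = np.concatenate([mgck_head(H, X, *p) for p in heads], axis=1)
    return out @ W_hat + H @ W_m                  # F(., w_hat) + w_m h
\end{verbatim}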
\begin{figure*}[t]
\vspace{-0.5em}
\center
\includegraphics[width=1\linewidth]{images/Pipeline.pdf}
\caption{Pipeline of DMBN. The structural network is fed into two independent encoding-decoding networks to generate the cross-modality encoding of the positive and negative functional connections. Meanwhile, the node features from these two networks are combined and serve as the multimodal graph representations for the supervised learning tasks via an MLP network. During this process, a brain saliency map is derived.}
\label{fig:pipeline}
\vspace{-1.2em}
\end{figure*}
\vspace{-1em}
\subsection{Deep Multimodal Brain Networks (DMBN)}
\vspace{-0.5em}
We show the pipeline of DMBN in Fig.~\ref{fig:pipeline}. It generates multimodal graph node representations for different learning tasks. There are two parts in DMBN. The first part is cross-modality learning via an encoding-decoding network. Here, we construct the brain functional network from the brain structural network. The brain functional network contains both positive and negative connections. These two types of brain functional connectivity yield distinct relationships with the brain structural network~\cite{honey2009predicting,schwarz2011negative}. Hence, we separate their encoding into two independent encoding networks. For each graph encoder, we use several MGCK layers to aggregate node features from diverse ranges of the neighborhood in the structural network. The generated node features are then fed into the decoding networks to reconstruct the positive and negative connections, respectively.
Specifically, for each undirected edge $e_{i,j}$, we define the reconstructed links as:
\begin{equation}\label{eq:embedding}
\vspace{-0.8em}
\hat{x}_{i,j}=\frac{1}{1+\exp(-h_{v_i}^T\cdot \Theta\cdot h_{v_j})},
\end{equation}
where $h_{v_i}$ is a node feature vector in the network embedding space and $\Theta$ is a learnable layer weight. Eq.~\ref{eq:embedding} maps the deep node embeddings $\{h_{v_i}\}$ to a connection matrix $\{\hat{x}_{i,j}\}$ whose elements range from 0 to 1, consistent with the functional connections.
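In matrix form, the decoder of Eq.~\ref{eq:embedding} is a bilinear score followed by a logistic function; a minimal sketch (for undirected edges, $\Theta$ may be taken symmetric so that $\hat{x}_{i,j}=\hat{x}_{j,i}$):

\begin{verbatim}
import numpy as np

def decode_links(H, Theta):
    """Inner-product decoder (Eq. 8): x_hat = sigmoid(h_i^T Theta h_j)."""
    scores = H @ Theta @ H.T        # all pairwise bilinear scores at once
    return 1.0 / (1.0 + np.exp(-scores))
\end{verbatim}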
The second part of our model is supervised learning. The node embedding features ($h_v$) from the positive and negative encoding networks are concatenated node-wise and processed by an MLP. Since our tasks are graph-level learning, a global pooling is applied before the last FC layer to remove the effect of node order. Along with the supervised learning tasks, it is important to understand the key brain regions closely associated with the tasks. Inspired by classic activation maps~\cite{arslan2018graph}, a graph localization strategy is carried out by learning contribution scores of graph nodes. As shown in Fig.~\ref{fig:pipeline}, suppose the final node feature matrix consists of $F$ channels for $N$ nodes; a global mean pooling generates a channel-wise vector treated as the network feature. Therefore, each channel has a corresponding weight, $w_i$, learned by the last FC layer. To obtain the node-wise importance score, we map it back via an inner product between node features and channel weights, i.e., $h_v\cdot W^T$. In the end, we rank the top-$k$ nodes for each subject and conduct group voting to obtain the group-wise saliency map.
There are 3 loss terms in DMBN controlling the brain network reconstruction and supervised learning tasks (Eq.~\ref{Eq:TotalLoss}). The reconstruction loss consists of the global and local decoding losses to preserve different levels of graph topology.
\begin{equation}\label{Eq:TotalLoss}
\vspace{-0.5em}
L_{all}=\mu_1 L_{global}+\mu_2 L_{local}+L_{pred},
\vspace{-0.1em}
\end{equation}
\vspace{-1em}
\paragraph{\textbf{1) Global Decoding Loss.}} This term evaluates the averaged performance of edge reconstruction in the target network.
\begin{equation}
\vspace{-1em}
\begin{split}
\mathcal{L}_{global}=&\frac{1}{|E|}\sum_{e_{i,j}} a_{i,j}(\hat{x}^{f+}_{i,j}-\hat{x}^{f-}_{i,j}-x^f_{i,j})^2,
\end{split}
\end{equation}
where $a_{i,j}$ is an additional penalty on the edge reconstruction. Here, we set it to $e^{|x^f_{i,j}|}$, which gives higher weights to stronger connections in the brain functional network. $\hat{x}^{f+}$ and $\hat{x}^{f-}$ denote the decoded network connections from the positive and negative encoding flows.
\vspace{-1em}
\paragraph{\textbf{2) Local Decoding Loss.}} The cross-modality reconstruction of brain networks is challenging, hence we do not expect a full recovery of all edges but rather the reconstruction of local graph structure on important connections, e.g., edges with strong connections in both structural and functional networks. We adopt the first-order proximity~\cite{wang2016structural} to capture the local structure.
The loss function is defined as:
\begin{equation}\label{eq:localloss}
\vspace{-1em}
\begin{split}
\mathcal{L}_{local}=\sum_{i=1}^n \frac{1}{|\mathcal{N}^d_i|}\sum_{j\in\mathcal{N}^d_i}e^{\delta(x^d_{i,j})}||h^{f}_{v_i}-h^{f}_{v_j}||^2_2 ,
\end{split}
\end{equation}
where $|\mathcal{N}^d_i|$ is the number of neighbouring nodes of $v_i$ in the brain structural network and $\delta(x^d_{i,j})$ is a threshold function which favors strong connections.
Eq.~\ref{eq:localloss} generalizes Laplacian Eigenmaps~\cite{belkin2003laplacian} and drives nodes with similar embedding features together.
\vspace{-1em}
\paragraph{\textbf{3) Supervised Loss.}} The loss function for prediction is defined as:
\begin{equation}\label{Eq:PredLoss}
\vspace{-0.8em}
\begin{split}
L_{pred}=-\frac{1}{K}\sum_{i=1}^K y_i\cdot \log(f_{pred}(h_{v_i})),
\end{split}
\end{equation}
where $K$ is the number of subjects and $f_{pred}$ is a function learned by the MLP network.
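A compact sketch of the combined objective in Eq.~\ref{Eq:TotalLoss} is given below; the names are illustrative, with $Y$ holding one-hot labels and $\hat{Y}$ the softmax outputs of the MLP.

\begin{verbatim}
import numpy as np

def total_loss(Xf, Xp, Xn, Hf, Xd, Y, Y_hat, mu1=1.0, mu2=0.5):
    """Sketch of Eq. (9): mu1*L_global + mu2*L_local + L_pred."""
    a = np.exp(np.abs(Xf))                       # penalty a_{ij} in Eq. (10)
    L_global = np.mean(a * (Xp - Xn - Xf)**2)    # Eq. (10)

    nbr = (Xd > 0).astype(float)                 # delta(x^d_{ij})
    d2 = ((Hf[:, None, :] - Hf[None, :, :])**2).sum(-1)  # ||h_i - h_j||^2
    deg = np.clip(nbr.sum(1), 1.0, None)         # |N^d_i|
    L_local = np.sum(np.e * (nbr * d2).sum(1) / deg)     # Eq. (11)

    L_pred = -np.mean(np.sum(Y * np.log(Y_hat + 1e-12), axis=1))  # Eq. (12)
    return mu1 * L_global + mu2 * L_local + L_pred
\end{verbatim}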
\section{Experiment}
\subsection{Gender Prediction}\label{sec:genderprediction}
\vspace{-0.5em}
\paragraph{\textbf{Dataset.}} The data are from the WU-Minn HCP 1200 Subjects Data Release~\cite{van2013wu}. We include 746 healthy subjects (339 males, 407 females), each with high-quality resting-state fMRI and dMRI data. The functional network is processed using the CONN toolbox~\cite{whitfield2012conn} and structural connectivity is measured using the FSL toolbox~\cite{jenkinson2012fsl}. Here we try to predict gender based on the multimodal brain network topology. Previous research has shown a strong relationship between gender and brain connectivity patterns~\cite{ruigrok2014meta}.
\vspace{-1em}
\paragraph{\textbf{Experiment Setup.}} We select 5 state-of-the-art baseline models for comparison, where 3 of them, i.e., tBNE~\cite{cao2017tbne}, MK-SVM~\cite{dyrba2015multimodal} and mCCA + ICA~\cite{sui2011discriminating}, are traditional machine learning algorithms, while the other two, i.e., BrainNetCNN~\cite{kawahara2017brainnetcnn} and Brain-Cheby~\cite{ktena2018metric}, use deep models. In addition, 5 variant models of DMBN are tested in the experiments as an ablation study. We apply 5-fold cross-validation to all methods. In our model setting, the positive connection encoding has 5 cascaded MGCK layers and the negative connection encoding has 4 MGCK layers. In each encoding, each MGCK layer has a feature dimension of 128 and 4-head learning. We report the statistical results with three evaluation metrics: accuracy, precision, and F1 score. Besides, we use a grid search to choose the hyperparameters $\mu_1$ and $\mu_2$. Based on empirical knowledge, we set the search range for $\mu_1$ as [10, 1, 0.1, 0.01] and for $\mu_2$ as [5, 1, 0.5, 0.1]. The best result appears at $\mu_1=1$ and $\mu_2=0.5$. Details can be found in Supplementary Fig.~1.
\begin{table}[t]
\vspace{-1em}
\centering
\scriptsize
\caption{Performance of gender prediction in the HCP data.}
\begin{adjustbox}{width=.8\textwidth,center}
\begin{tabular}{c|ccc|ccc}
\multirow{2}{*}{Method} & \multicolumn{3}{c|}{HCP (Gender)} & \multicolumn{3}{c}{PPMI (Disease)} \\ \cline{2-7}
&$Acc$ &$Prec$ &$F1-Score$ &$Acc$ &$Prec$ &$F1-Score$\\ \hline
tBNE~\cite{cao2017tbne} &0.543 &0.497 &0.503 &0.580 &0.597 &0.530 \\
MK-SVM~\cite{dyrba2015multimodal} &0.481 &0.438 &0.524 &0.587 &0.487 &0.568 \\
mCCA + ICA~\cite{sui2011discriminating} &0.680 &0.703 &0.691 &0.640 &0.660 &0.622\\
Brain-Cheby~\cite{ktena2018metric} &0.739 &0.740 &0.739 &0.635 &0.622 &0.628\\
BrainNetCNN~\cite{kawahara2017brainnetcnn} &0.734 &0.775 &0.684 &0.673 &0.695 &0.778\\ \hline
w/o Recon$^+$ &0.738 &0.692 &0.767 &0.688 &0.727 &0.786\\
w/o TAGG\&Recon$^+$ &0.699 &0.696 &0.738 &- &- &-\\
w/o AAGG\&Recon$^+$ &0.681 &0.689 &0.735 &- &- &- \\\hline
DMBN w/o Global &0.784 &0.798 &0.799 &- &- &- \\
DMBN w/o Local &0.793 &0.814 &0.824 &- &- &- \\
DMBN &\textbf{0.819}* &\textbf{0.836}* &\textbf{0.845}* &\textbf{0.728}* &\textbf{0.859}* &\textbf{0.735} \\\hline
\end{tabular}%
\end{adjustbox}
\label{tab:genderprediction}
\begin{tablenotes}
\small\item $\mathbf{^*}$ stands for significance. $\mathbf{^+}$ indicates the variant model using a single modality.
\end{tablenotes}
\vspace{-1em}
\end{table}
\vspace{-1em}
\vspace{-0.2em}
\paragraph{\textbf{Results.}} As shown in Tab.~\ref{tab:genderprediction} (HCP), our model achieves the highest accuracy ($Acc=81.9\%$) in gender prediction among all the methods and significantly outperforms the others, with at least 8\% and 10\% increases in accuracy and F1 score, respectively. Generally, deep models are superior to the traditional node embedding method (tBNE). We notice that, when we remove the cross-modality learning, i.e., the variant methods denoted by w/o Recon, the performance drops significantly. Though they are still comparable to the other baselines, the training process is unstable with a high variance. The cross-modality learning enables effective node-level learning and consequently benefits further graph-level learning. In addition, the 10 most important brain regions affecting gender prediction are shown in Supplementary Fig.~2. These regions spread over cortical areas including the frontal and orbital gyrus, precentral gyrus, and insular gyrus, as well as subcortical areas such as the basal ganglia. All these regions play vital roles in regulating cognitive functioning, motor and emotion control, which plausibly underlie the gender differences~\cite{rijpkema2012normal,ruigrok2014meta}.
\vspace{-1em}
\paragraph{\textbf{Ablation Analysis.}} We explore the influence of each element in our model (Tab.~\ref{tab:genderprediction}). We first remove the decoding network, which reduces our model to single-modality learning (w/o Recon). Under such a configuration, our model is still comparable to the baselines. However, the decreased performance suggests that cross-modality learning is indispensable for an informative network representation. Based on this setting, we further evaluate the role of each aggregation mechanism in MGCK. We remove the threshold aggregation weight (w/o TAGG$\&$Recon) and the graph attention aggregation (w/o AAGG$\&$Recon), respectively. Both cause a significant decrease in performance. In addition to the single-modality learning, we also validate the importance of the different reconstruction losses in multimodal learning. Removing the local (DMBN w/o Local) or global (DMBN w/o Global) loss results in a drop of around 3\% in prediction accuracy. Meanwhile, the global reconstruction loss carries a larger weight than the local reconstruction loss. Since the global loss considers all of the edges in the functional network, it contains relatively richer information than the local loss, which focuses on the direct edges in the structural network. Nevertheless, they are complementary to each other.
\vspace{-1em}
\paragraph{\textbf{Cross-Modality Learning.}}
To validate the efficacy of cross-modality learning, we turn off the prediction tasks, i.e., we keep only the reconstruction losses during training. Results are shown in Fig.~\ref{fig:AE_gender}. We present the predicted functional networks of a randomly selected sample and the group average of the whole testing data. From the sparse structural networks, the corresponding functional connections are correctly predicted and the major patterns of the local network connections are captured. To further assess the accuracy, we conduct a statistical analysis on the edges. Both direct and indirect edges in the target functional network are highly correlated with the predicted edges (Spearman correlation, overall $r_S=0.83$ with $p<10^{-4}$), with the direct edges ($r_S=0.84$) slightly higher than the indirect edges ($r_S=0.82$). We also demonstrate the robustness of our model to different sparsity levels of the brain structural networks; results are shown in Supplementary Fig.~4.
\begin{figure*}[!t]
\vspace{-1em}
\center
\includegraphics[width=1\linewidth]{images/ae_reconstruction_gender.pdf}
\caption{The cross-modality learning results. The functional network is predicted (middle) from its structural network counterpart (left). We present the group averaged result and an individual sample. The statistical evaluation (Spearman correlation, $r_S$) of the reconstructed functional networks is conducted (right), and the predicted edge weights are significantly correlated with the ground truth data, $r_S=0.83$.}
\label{fig:AE_gender}
\vspace{-1em}
\end{figure*}
\vspace{-1em}
\subsection{Disease Classification}\label{sec:diseaseprediction}
\vspace{-0.5em}
In addition to gender prediction in healthy subjects, we retest our model on disease classification. In this experiment, we include 323 subjects from the Parkinson's Progression Markers Initiative (PPMI)~\cite{marek2011parkinson}, 224 of whom are patients with Parkinson's disease (PD). We follow the experimental setting of the gender prediction task. $\mu_1=0.5$ and $\mu_2=0.5$ are used according to the grid search.
\vspace{-1em}
\paragraph{\textbf{Classification Results.}} We consider the same state-of-the-art baseline methods for comparison. The results are shown in Tab.~\ref{tab:genderprediction} (PPMI). Our model achieves the best prediction performance among all models (improving accuracy by 5\% over BrainNetCNN and by 9\% over Brain-Cheby and the other baselines). Moreover, it again shows that adding the cross-modality reconstruction does improve the performance. We locate the 10 key regions associated with the PD classification via the saliency map; see Supplementary Fig.~3. Most of the salient regions are located in subcortical structures, such as the bilateral hippocampus and basal ganglia. These structures are conventionally regarded as biomarkers of PD in medical imaging analysis~\cite{obeso2000pathophysiology,camicioli2003parkinson}.
\vspace{-1mm}
\section{Conclusion}
\vspace{-1mm}
We propose a novel multimodal brain network fusion framework based on a deep graph model. The cross-modality network embedding is generated by an encoding-decoding network and is additionally supervised by the prediction tasks. Eventually, the learned node features contribute to a brain saliency map for detecting disease-related biomarkers. In the future, we plan to extend our model to other learning tasks such as brain cortical parcellation and cognitive activity prediction.
\subsubsection*{Acknowledgments} This work was supported in part by NIH (RF1AG051710 and R01EB025032). We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
\bibliographystyle{splncs03}
\vspace{-1em}
\section{Introduction}
\label{section:Introduction}
The Maximum Entropy (MaxEnt) formalism provides an alternative to more conventional statistical mechanics approaches by offering a convenient jumping off point for thinking about problems that are not within the purview of equilibrium statistical mechanics. For non-equilibrium systems, the corresponding idea is called the Principle of Maximum Caliber (MaxCal), in which we attempt to determine the probability that a particle travels on a particular space-time path rather than its probability of occupying a certain energy level \cite{Jaynes}. One way to think about these information-theoretic approaches is that the probability distributions generated by MaxEnt and MaxCal are the least-biased inferences about the distributions that can be made on the limited information that is in hand; i.e., by maximizing the Shannon entropy over the collection of possible
outcomes for the system of interest \cite{Shannon}. MaxEnt has been used for several decades to model everything from neural firing \cite{Bialek:neurons} to climate change \cite{Karmeshu} and many applications between. MaxCal has been subjected to a few applications including analysis of the dynamics of a two-state system \cite{Gerhard, Wuetal} and to examine the origin of phenomenological transport laws including Fick's law of diffusion \cite{Seitaridou} and fluctuation theorems \cite{RDewar}.
The MaxCal approach is relatively unexplored and has been set forth as a very general method of analyzing nonequilibrium systems \cite{Jaynes, Jaynes2, Jaynes3}.
By applying MaxCal to experimentally realizable systems, we hope that we can not only make useful predictions of the behavior of concrete experimental systems, but also help identify and highlight the outstanding theoretical questions that impede progress towards practical use of MaxCal. One such system that we find appealing is the dynamics of a two-state system, partly because of its simplicity and because of the opportunity to manipulate such systems experimentally. To that end, in this paper, we analyze the dynamics of a two-state system, cognizant that it is only one concrete application of the MaxCal approach and that much remains to be done to demonstrate MaxCal's usefulness more generally.
MaxEnt as used in equilibrium statistical mechanics and MaxCal exploit almost entirely the same theoretical prescription, except that MaxEnt as used in equilibrium statistical mechanics identifies the \textit{entropy}-maximizing probability distribution of particle occupancy over energy states, whereas MaxCal identifies the \textit{caliber}-maximizing probability distribution of particle occupancy over possible space-time paths (``microtrajectories''). Caliber is an entropy-like quantity that has the same form as the Shannon entropy, ${\cal C} = -\sum_{\Gamma} p_{\Gamma} \ln p_{\Gamma}+\sum_{i}\lambda_{i} \langle A_{i} \rangle_{\Gamma}$. In this equation, ${\cal C}$ is the caliber, $p_{\Gamma}$ denotes the probability of a given microtrajectory $\Gamma$, and each $\langle A_{i}\rangle_{\Gamma}$ is a macroscopic observable which serves as a constraint within
the information-theoretic approach. For convenience, we will denote an average over the microstates $\langle A_{i} \rangle_{\Gamma}$ as $\langle A_i\rangle$. The constants $\lambda_{i}$ are Lagrange multipliers corresponding to those constraints. Maximizing the caliber yields a full probability distribution for all the allowed microtrajectories.
In this paper, we analyze the dynamics of the two-state system using a MaxCal model and derive new insights into MaxCal and the discretized two-state system in the process. The organization of this paper is as follows. Background on MaxCal is presented in Section \ref{section:Background}. In Section \ref{section:StatisticalDynamics}, we show that the dynamics of a two-state system are mathematically equivalent to the 1-D Ising model, and that there are different but equivalent ways of picking the MaxCal constraints. In Section \ref{section:Test}, we use simulation to investigate the relationship between the MaxCal model for the two-state system proposed here and a previously proposed MaxCal model for the two-state system \cite{Gerhard, Wuetal}.
\section{Background}
\label{section:Background}
As noted above, caliber is an entropy-like quantity that increases as the relative probabilities of the microtrajectories become less predictable. As a result of maximizing the caliber, MaxCal asserts that our best guess as to the probability of observing a particular microtrajectory $\Gamma$ is
\begin{equation}
p_{\Gamma}=\frac{1}{Z}\exp(\sum_{i} \lambda_i A_{i,\Gamma}),
\label{eq:MaxCalProbGeneral}
\end{equation}
where
\begin{equation}
Z=\sum_{\Gamma}\exp\left(\sum_i \lambda_i A_{i,\Gamma}\right)
\label{eq:MaxCalPartFnctnGeneral}
\end{equation}
is a normalization factor dubbed by Jaynes as the ``partition functional'' and
the $A_{i,\Gamma}$ are values of macroscopic observables $A_i$ along the microtrajectory labeled $\Gamma$. $\lambda_i$ are the corresponding Lagrange multipliers that can be calculated from $\langle A_{i}\rangle=\frac{\partial\ln Z}{\partial\lambda_i}$ \cite{Jaynes}.
A distinguishing feature of MaxCal-derived formulas of nonequilibrium statistical mechanics as compared to more conventional nonequilibrium statistical mechanics formulas, e.g. the Jarzynski equality \cite{Jarzynski}, is that MaxCal solves an inverse problem, in which we infer microscopic properties based on macroscopic observations.
The MaxCal distribution described in Equation \ref{eq:MaxCalProbGeneral} is typically generated on
the basis of knowing only a small number of macroscopic observables $\langle A_i\rangle$ and predicts the probability of traveling any one of the many possible microtrajectories $\Gamma$. For example, in the two-state systems simulated in this paper which each have $10^5$ time steps, only two macroscopic observables are used to generate predictions for the probability of traversing the entirety of the $2^{10^5}$ possible microtrajectories. However, since the average probability of observing a particular microtrajectory is typically very small, it is difficult to obtain accurate experimental estimates of $p_{\Gamma}$ itself to which we can compare MaxCal predictions. Instead, to test the accuracy of a MaxCal model, we can predict higher order moments of the joint probability distribution $p(A_1,A_2,...,A_n)$. These higher moments are more robust than individual microtrajectory probabilities $p_{\Gamma}$ to the depth of our experimental investigation of the system. Again, note that these higher moments are {\it not} used in any way to determine the probability distribution $p(A_1,...,A_n)$ itself. Rather, they are predictions about more nuanced features of the two-state system than those simple average quantities used to establish the distribution given in Equation \ref{eq:MaxCalProbGeneral}.
\section{Statistical Dynamics of the Two-State System Using Caliber}
\label{section:StatisticalDynamics}
The dynamics of the two-state system can be completely characterized by rate equations in terms of the constants $k_A$ and $k_B$, where $k_A$ and $k_B$ are the intrinsic rate constants for transition from state A to state B and vice versa \cite{Gardiner}. Indeed, one of the interesting outcomes of the present paper is a simple relation between the conventional rate constants and the Lagrange multipliers that are central to the MaxCal description of the same system.
To make the dynamics of the two-state system amenable to simple MaxCal analysis, let the MaxCal trajectory $\Gamma$ be a string of $1$'s and $-1$'s describing the state of the system as a function of time. More explicitly, suppose that our two-state system is a particle jumping between wells on a double-well potential energy landscape, examples of which are shown in Figure \ref{fig:Data}. Let $\sigma_{\Gamma}(t)$ represent the system's state at time $t$. If the particle is in the left potential well at time $t$, then we say it is in state $A$ and that $\sigma_{\Gamma}(t)=1$; if the particle is in the right potential well at time $t$, then we say it is in state $B$ and $\sigma_{\Gamma}(t)=-1$. Thus, a particle's trajectory $\Gamma$ is equivalent to a series of $N$ $1$'s and $-1$'s, for which the particle's state is sampled every $\Delta t$ and the total time spent observing the particles of interest is $N \Delta t$.
\begin{figure}[h]
\centering
\includegraphics[width=8.6cm]{DataExamples2}
\caption{\textbf{Representative simulated trajectories on two different two-well potential energy landscapes.} A simulation of a particle diffusing on a double Gaussian potential surface generates trajectories of the particle diffusing on the energy landscape according to Equation (\ref{eqSimulation}). The traces show the raw data. States are assigned after boxcar filtering and threshold finding. (A) Potential well number 1. See Table \ref{table:parameters2}. The state of lower potential energy is more frequently populated; there is a high barrier, with infrequent transitions. (B) Potential well number 8. See Table \ref{table:parameters2}. The upper and lower states are populated roughly equally.}
\label{fig:Data}
\end{figure}
This discretized picture of the two-state process will lead to accurate calculations only if our time step $\Delta t$ is small enough to capture nearly all state transitions \cite{Hawkes1}. In other words, in the language of rate constants, we require that the average waiting time in both states is much larger than the time step $\Delta t$. For Markov systems, the waiting time in a particular state $t$ follows an exponential distribution with a mean of $\langle t\rangle=\frac{1}{k}$ \cite{Gardiner}. Thus, here, we require $\Delta t<<\frac{1}{k_A},~\frac{1}{k_B}$, or identically $k_A\Delta t,~k_B\Delta t<<1$ as described previously \cite{Wuetal}. If this heuristic condition is satisfied, missed transitions are negligible. For the two-well potential landscapes considered here, waiting times range from $0.5$ to $5$ s, whereas $\Delta t=1$ ms.
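To make the discretization concrete, the following Python sketch (ours, not part of the original analysis) generates a discretized telegraph trajectory for given rate constants; it is faithful only in the regime $k_A\Delta t,~k_B\Delta t<<1$ discussed above.

\begin{verbatim}
import numpy as np

def telegraph(kA, kB, dt, N, seed=0):
    """Discretised two-state trajectory sigma(t) in {+1, -1}.
    Per-step flip probabilities are kA*dt (A->B) and kB*dt (B->A);
    the discretisation is faithful only when kA*dt, kB*dt << 1."""
    rng = np.random.default_rng(seed)
    sigma, s = np.empty(N, dtype=int), 1        # start in state A (= +1)
    for n in range(N):
        sigma[n] = s
        if rng.random() < (kA * dt if s == 1 else kB * dt):
            s = -s
    return sigma

traj = telegraph(kA=2.0, kB=0.5, dt=1e-3, N=100_000)
n_switch = np.sum(traj[1:] != traj[:-1])        # observed number of switches
\end{verbatim}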
\subsection{The dynamics of the two-state system is equivalent to the one-dimensional Ising model}
\label{section:IsingModel}
There are many different ways that one can imagine characterizing the trajectories realized by the two-state system as it sweeps out a telegraph signal \cite{Gardiner}. The particular choice that is made has implications in turn for the nature of the MaxCal constraints used to describe the system.
However, it is still unclear to us what general criteria must be used when choosing constraints $\langle A_i\rangle$ for some
specific new problem.
For the two-state system considered here, we focus on two distinct constraints: the average number of times that the particle switches states over the course of a microtrajectory ($\langle N_s\rangle$), and the ``aggregate state'' of the particle ($\langle i\rangle$). We define the aggregate state mathematically using the state variable $\sigma(t)$, which was defined earlier in Section \ref{section:StatisticalDynamics}, as $i_{\Gamma}=\sum_{t=1}^N \sigma_{\Gamma}(t)$, so that
\begin{equation}
\langle i\rangle=\sum_{\Gamma} p_{\Gamma} \left(\sum_{t=1}^N \sigma_{\Gamma}(t)\right),
\label{eq:TotalState}
\end{equation}
where we obtain the average by summing over {\it all} the possible microtrajectories
of the system (labeled by $\Gamma$), each weighted by its probability $p_{\Gamma}$.
Intuitively, the ``average state'' of the particle ($\frac{\langle i\rangle}{N}$) corresponds to a measure of where the particle is most frequently found. If the particle is frequently found in state A, then $\langle i\rangle\sim N$; if the particle is frequently found in state B, then $\langle i\rangle\sim -N$; and if the particle spends its time equally between the two states, then $\langle i\rangle\sim 0$. In terms of experimental quantities, the average state is analogous to the average current if we think of the telegraph signal as the current passing through an ion channel in a single-channel measurement \cite{PBoC}.
With these two constraints and the MaxCal formalism described above, we find that
the discretized version of the partition functional takes the simple form
\begin{equation}
Z=\sum_{\Gamma}\exp(\alpha N_{s,\Gamma} +h i_{\Gamma}),
\label{eqZ1}
\end{equation}
where $\alpha$ and $h$ are the Lagrange multipliers
associated with our two constraints. The probability of observing a particular microtrajectory is accordingly
\begin{equation}
p_{\Gamma}=\frac{1}{Z}\exp(\alpha N_{s,\Gamma} +h i_{\Gamma}).
\label{eqProbTraj}
\end{equation}
More precisely, this expression provides the probability of observing a particular microtrajectory $\Gamma$ with aggregate state $i_{\Gamma}$ and number of switches $N_{s,\Gamma}$, where the subscript $\Gamma$ labels the trajectory of interest.
To cast this expression in a more useful way,
we notice that $\langle N_s\rangle $ can be expressed in terms of $\sigma_{\Gamma}(t)$ using the quantity $\left(\frac{\sigma_{\Gamma}(t+\Delta t)-\sigma_{\Gamma}(t)}{2}\right)^2$, which compares the state of the system at two
subsequent instants in time and is $1$ when the particle switches states and $0$ otherwise. Then, $\langle N_s\rangle=\sum_{\Gamma}p_{\Gamma}\left(\sum_{t} \left(\frac{\sigma_{\Gamma}(t+\Delta t)-\sigma_{\Gamma}(t)}{2}\right)^2\right)$ and, combining this fact with Equation \ref{eq:TotalState}, Equation \ref{eqZ1} can be rewritten as
\begin{eqnarray}
Z &=& \sum_{\Gamma} \exp(\frac{\alpha}{2} \left(N-\sum_{t=\Delta t}^{(N-1)\Delta t} \sigma_{\Gamma}(t+\Delta t)\sigma_{\Gamma}(t) \right)\nonumber \\
&& +h \sum_{t=\Delta t}^{N\Delta t} \sigma_{\Gamma}(t) ) \\
&=& \exp\left(\frac{\alpha}{2}N\right) \sum_{\Gamma} \exp(-\frac{\alpha}{2} \sum_{t=\Delta t}^{(N-1)\Delta t}\sigma_{\Gamma}(t+\Delta t)\sigma_{\Gamma}(t) \nonumber \\
&& +h \sum_{t=\Delta t}^{N\Delta t} \sigma_{\Gamma}(t) ).
\end{eqnarray}
Let $J=-\frac{\alpha}{2}$. Then
\begin{equation}
Z=\exp(-JN)\sum_{\Gamma}\exp(J\sum_t \sigma_{\Gamma}(t)\sigma_{\Gamma}(t+\Delta t)+h\sum_t\sigma_{\Gamma}(t)).
\label{eqZ2}
\end{equation}
The partition function in Equation \ref{eqZ2} looks like the partition function for the 1-D Ising model. In this context, a spin at position $x$ corresponds to the particle's state at time $t$; the lattice spacing $\Delta x$ corresponds to the time step $\Delta t$; $h$ is analogous to the applied magnetic field; and $J$ is the analog of the coupling constant between spins. In the Ising model, the magnetic field drives spins to prefer one direction over another and the coupling constant drives neighboring spins to align. In the two-state system analogy considered here, the ``magnetic field'' drives the particle to favor one of the two potential wells, and the ``coupling constant'' recognizes that the particle's state at time $t+\Delta t$ depends on its state at time $t$ and thus is responsible for the
temporal correlations of the system. This analogy will be discussed further in Section \ref{section:MarkovRelation}.
The most convenient scheme for determining the partition function in Equation \ref{eqZ2} analytically
is to exploit periodic boundary conditions. Though it is natural in the context of spins to talk about periodic boundary conditions, such conditions in our dynamical problem correspond nonsensically to time running in circles. Nevertheless, the mathematical convenience of periodic boundary conditions makes them an attractive mathematical option. (The errors incurred in making this simplifying assumption are discussed later.) Under the assumption of periodic boundary conditions, the 1-D Ising model partition function can be evaluated analytically using transfer matrices as $Z=e^{-JN}(\lambda_{+}^N+\lambda_{-}^N)$ where $\lambda_{+}$ and $\lambda_{-}$ are the eigenvalues of the transfer matrix $T=\left( \begin{smallmatrix} e^{h+J}&e^{-J}\\ e^{-J}&e^{-h+J} \end{smallmatrix} \right) \label{eqMatrix}$ \cite{Pathria}.
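Numerically, $\ln Z$ is conveniently evaluated from the transfer matrix; the sketch below (ours) factors out the largest eigenvalue so that the evaluation remains stable for large $N$.

\begin{verbatim}
import numpy as np

def logZ(h, J, N):
    """ln Z = -J*N + ln(lam_+^N + lam_-^N) from the transfer matrix."""
    T = np.array([[np.exp(h + J), np.exp(-J)],
                  [np.exp(-J),    np.exp(-h + J)]])
    lam = np.linalg.eigvalsh(T)       # T is symmetric; ascending order
    lmin, lmax = lam[0], lam[1]
    # factor out lam_+^N to avoid overflow; |lam_-/lam_+| < 1 always
    return -J*N + N*np.log(lmax) + np.log1p((lmin/lmax)**N)
\end{verbatim}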
As usual in maximum entropy problems, the values of the Lagrange multipliers are found by
evaluating partial derivatives of $Z$ \cite{Jaynes}. In
particular, we need
\begin{equation}
\frac{\partial \ln Z}{\partial h}=\langle i\rangle
\label{eq:LM_h,i}
\end{equation}
and
\begin{equation}
\frac{\partial\ln Z}{\partial J}=\frac{\partial\ln Z}{\partial\alpha}\frac{d\alpha}{dJ}=-2\langle N_s\rangle.
\label{eq:LM_J,Ns}
\end{equation}
Equations \ref{eq:LM_h,i} and \ref{eq:LM_J,Ns} are transcendental and can be solved numerically.
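In practice, Equations \ref{eq:LM_h,i} and \ref{eq:LM_J,Ns} can be solved with a standard root finder. The sketch below builds on the \texttt{logZ} function from the previous sketch and assumes SciPy is available; the derivatives of $\ln Z$ are taken numerically.

\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def dlogZ(h, J, N, eps=1e-6):
    """Central-difference estimates of d lnZ/dh and d lnZ/dJ."""
    dh = (logZ(h+eps, J, N) - logZ(h-eps, J, N)) / (2*eps)
    dJ = (logZ(h, J+eps, N) - logZ(h, J-eps, N)) / (2*eps)
    return dh, dJ          # equal <i> and -2<N_s>, respectively

def fit_multipliers(i_avg, Ns_avg, N, x0=(0.0, 1.0)):
    def residuals(p):
        dh, dJ = dlogZ(p[0], p[1], N)
        return [dh - i_avg, dJ + 2*Ns_avg]
    return fsolve(residuals, x0)

h, J = fit_multipliers(i_avg=0.0, Ns_avg=100.0, N=100_000)
\end{verbatim}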
\subsection{Relation to MaxCal partition functional with Markov constraints}
\label{section:MarkovRelation}
Even though Equations \ref{eq:LM_h,i} and \ref{eq:LM_J,Ns} are transcendental, we can find closed-form expressions for $h$ and $J$ in terms of the rate constants $k_A$, $k_B$, and the time step $\Delta t$ by relating a previously used two-state MaxCal partition functional to the one used here. In an earlier MaxCal analysis of the two-state system, the constraints $N_{aa},~N_{ab},~N_{ba},$ and $N_{bb}$ were used instead of $N_s$ and $i$ \cite{Gerhard,Wuetal}. The quantity $N_{xy}$ reports the number of time instants of length
$\Delta t$ out of a total of $N$ such time steps
where the particle moved from state $x$ to state $y$. The MaxCal partition function using this alternative set of constraints is
\begin{equation}
Z=\sum_{\Gamma}\gamma_{aa}^{N_{aa,\Gamma}}\gamma_{ab}^{N_{ab,\Gamma}}\gamma_{ba}^{N_{ba,\Gamma}}\gamma_{bb}^{N_{bb,\Gamma}},
\label{eqZold}
\end{equation}
where each $\Gamma$ is a different microtrajectory.
Note that the four constraints $\langle N_{aa}\rangle,~\langle N_{ab}\rangle,~\langle N_{ba}\rangle,$ and $\langle N_{bb}\rangle$ are not linearly independent, since the particle can only start in state A or B and move to state A or B in each of the $N$ time steps. This constraint is captured by the relation
\begin{equation}
N_{\Gamma}=N_{aa,\Gamma}+N_{ab,\Gamma}+N_{ba,\Gamma}+N_{bb,\Gamma}.
\label{eq:constraintRelation1}
\end{equation}
To relate these constraints to $N_{s,\Gamma}$ and $i_{\Gamma}$, note that the total number of times that the particle switches states is the sum of the number of times it switches from state $A$ to state $B$ and vice versa,
\begin{equation}
N_{s,\Gamma}=N_{ab,\Gamma}+N_{ba,\Gamma}.
\end{equation}
Additionally, the quantity $i_{\Gamma}$ is the difference between the number of time steps that begin in state $A$ (those of type $aa$ or $ab$) and the number that begin in state $B$ (those of type $bb$ or $ba$),
\begin{equation}
i_{\Gamma}=\left(1\right)\left(N_{aa,\Gamma}+N_{ab,\Gamma}\right)+\left(-1\right)\left(N_{bb,\Gamma}+N_{ba,\Gamma}\right).
\end{equation}
Under the assumption of periodic boundary conditions used to calculate the partition function in Section \ref{section:IsingModel}, the particle must start and end in the same state, and therefore switch from state B to state A just as many times as it switches from state A to state B
resulting in the constraint
\begin{equation}
N_{ab,\Gamma}=N_{ba,\Gamma}.
\label{eq:constraintRelation4}
\end{equation}
Equations \ref{eq:constraintRelation1}--\ref{eq:constraintRelation4} above allow us
to solve for the $N_{xy}$ quantities in terms of $N_s$ and $i$, namely,
\begin{equation}
N_{aa,\Gamma}=\frac{N+i_{\Gamma}-N_{s,\Gamma}}{2},
\label{eqNaa}
\end{equation}
\begin{equation}
N_{bb,\Gamma}=\frac{N-i_{\Gamma}-N_{s,\Gamma}}{2},
\label{eqNbb}
\end{equation}
and
\begin{equation}
N_{ab,\Gamma}=N_{ba,\Gamma}=\frac{N_{s,\Gamma}}{2}.
\label{eqNab}
\end{equation}
If we substitute Equations \ref{eqNaa}, \ref{eqNbb}, and \ref{eqNab} into Equation \ref{eqZold}, we find that
\begin{eqnarray}
Z&=&\sum_{\Gamma} \gamma_{aa}^{N_{aa,\Gamma}}\gamma_{ab}^{N_{ab,\Gamma}}\gamma_{ba}^{N_{ba,\Gamma}}\gamma_{bb}^{N_{bb,\Gamma}} \\
&=&\exp(\frac{N\ln(\gamma_{aa}\gamma_{bb})}{2}) \times \sum_{\Gamma} \exp(\frac{\ln\gamma_{aa}}{2} (i_{\Gamma}-N_{s,\Gamma}) \nonumber \\
&& -\frac{\ln\gamma_{bb}}{2} (i_{\Gamma}+N_{s,\Gamma})+\frac{\ln(\gamma_{ba}\gamma_{ab})}{2} N_{s,\Gamma}) \\
&=& \exp(\frac{N\ln(\gamma_{aa}\gamma_{bb})}{2}) \times \nonumber \\
&& \sum_{\Gamma} \exp(\frac{i_{\Gamma}}{2}\ln(\frac{\gamma_{aa}}{\gamma_{bb}})+\frac{N_{s,\Gamma}}{2}\ln(\frac{\gamma_{ab}\gamma_{ba}}{\gamma_{aa}\gamma_{bb}})).
\label{eqZ3}
\end{eqnarray}
Therefore, the probability of the particle moving along a particular microtrajectory $\Gamma$ is
\begin{equation}
p_{\Gamma} = \frac{\exp(\frac{i_{\Gamma}}{2}\ln(\frac{\gamma_{aa}}{\gamma_{bb}})+\frac{N_{s,\Gamma}}{2}\ln(\frac{\gamma_{ab}\gamma_{ba}}{\gamma_{aa}\gamma_{bb}}))}{\sum_{\Gamma'} \exp(\frac{i_{\Gamma'}}{2}\ln(\frac{\gamma_{aa}}{\gamma_{bb}})+\frac{N_{s,\Gamma'}}{2}\ln(\frac{\gamma_{ab}\gamma_{ba}}{\gamma_{aa}\gamma_{bb}}))}.
\end{equation}
To recapitulate, we have two different MaxCal formulations of the two-state system dynamics, derived from two apparently different sets of constraints. Their partition functions are given by Equation \ref{eqZ1} and Equation \ref{eqZ3}. Both models accurately describe the two-state system, as shown in Section \ref{section:Test} and Reference \cite{Wuetal}. We can impose
the consistency condition that both models make the same predictions for $p_{\Gamma}$, and hence they must have the same Lagrange multiplier coefficients for $N_{s,\Gamma}$ and $i_{\Gamma}$. As a result, we can express the equivalence of the two formulations through the relations
\begin{equation}
-2J=\frac{1}{2}\ln(\frac{\gamma_{ab}\gamma_{ba}}{\gamma_{aa}\gamma_{bb}})
\label{eqAlpha}
\end{equation}
and
\begin{equation}
h=\frac{1}{2}\ln(\frac{\gamma_{aa}}{\gamma_{bb}}).
\label{eqMu}
\end{equation}
Recall that these statistical weights take the values \cite{Wuetal}
\begin{equation}
\gamma_{ab}=k_A\Delta t,~\gamma_{ba}=k_B\Delta t,~\gamma_{aa}=1-k_A\Delta t,~\gamma_{bb}=1-k_B\Delta t.
\label{eq:MarkovLMsRates}
\end{equation}
These statistical weights are Markovian in nature: $\gamma_{ab}$ is the probability that the particle will transition from state A to state B in the time interval $\Delta t$, $\gamma_{ba}$ is the probability that the particle will transition from state B to state A in the time interval $\Delta t$, and so on. The alternative interpretation is that according to Equation \ref{eqAlpha}, the ``coupling constant'' increases as dwell times increase; Equation \ref{eqMu} suggests that the ``magnetic field'' increases as state A is favored over state B and vice versa.
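For concreteness, Equations \ref{eqAlpha} and \ref{eqMu} together with Equation \ref{eq:MarkovLMsRates} reduce to a few lines of code; the rate constants and time step below are assumed values chosen only for illustration.
\begin{verbatim}
# Sketch: h and J from assumed rate constants via Eqs. (eqAlpha)-(eqMu)
import numpy as np

k_A, k_B, dt = 40.0, 60.0, 1.0e-4    # hypothetical rates (1/s), step (s)
g_ab, g_ba = k_A*dt, k_B*dt          # switching weights
g_aa, g_bb = 1 - k_A*dt, 1 - k_B*dt  # dwelling weights

J = -0.25*np.log(g_ab*g_ba/(g_aa*g_bb))  # from -2J = (1/2) ln(...)
h = 0.5*np.log(g_aa/g_bb)
print("J =", J, " h =", h)
\end{verbatim}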
Note that the calculations in \cite{Wuetal} used exact boundary conditions, whereas Equation \ref{eqZ2} exploited the analytic convenience of periodic boundary conditions. As a result, any difference between the predictions of the two MaxCal partition functionals is inherited from the difference in boundary conditions. In this way, we can measure to what extent potentially false assumptions about boundary conditions affect the validity of moment predictions. As we will see later, the particular choice of boundary conditions is practically irrelevant for the systems considered here because the particles are observed for many time steps.
\section{Testing the MaxCal model via numerical simulation}
\label{section:Test}
We wish to test our MaxCal model by comparing theoretical predictions to data obtained from simulations or experimental measurements of a two-state system. In an earlier paper, we described the use of a dual-well optical trap that could be used to sculpt different two-state trajectories by tuning the properties of the adjacent wells \cite{Wuetal}. In this paper, we continue along similar lines, but instead use simulated data for this system, like that shown in Figure \ref{fig:Data}. Trajectories were simulated according to the Langevin equation subject to an external potential \cite{GillespieSim},
\begin{equation}
\frac{dV(t)}{dt}=-\frac{\gamma}{m}V(t)-\frac{1}{m}\frac{dU}{dx}+\frac{\sqrt{2\gamma k_B T}}{m}\Gamma(t)
\label{eqSimulation}
\end{equation}
where $\Gamma(t)$ is unit-variance Gaussian white noise and $U(x)$ is a potential of the form
\begin{equation}
U(x)=-E_{1}\exp(-\frac{(x-\mu_1)^2}{\sigma_1^2})-E_{2}\exp(-\frac{(x-\mu_2)^2}{\sigma_2^2}),
\end{equation}
meant to mimic the two-well landscape imposed by the optical traps.
The parameters $\mu_1$ and $\mu_2$ give the x-coordinates at which each well is centered; the parameters $\sigma_1$ and $\sigma_2$ refer to the width of each potential well; and $E_1$ and $E_2$ refer to the depth of each well. In simulations, the parameters $\mu_1$, $\mu_2$, $\sigma_1$, and $\sigma_2$ were kept constant. However, fifteen different combinations of $E_1$ and $E_2$ were used to simulate fifteen different double potential wells. See Table \ref{table:parameters} and Table \ref{table:parameters2} for a listing of $E_1$, $E_2$, and $\Delta E=E_1-E_2$ for each of the relevant energy landscapes.
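As an illustration, a minimal Euler--Maruyama integration of Equation \ref{eqSimulation} is sketched below; the friction coefficient $\gamma$ and the integration step are assumed values (they are not specified in the text), while the well geometry follows Table \ref{table:parameters} and the depths correspond to potential well \#1 of Table \ref{table:parameters2}.
\begin{verbatim}
# Sketch: Euler-Maruyama integration of the Langevin equation; gamma
# and dt are assumed for illustration, not taken from the text.
import numpy as np

kBT = 4.11e-21                   # J, at T = 300 K
m = 0.55e-15                     # kg (0.55 pg)
gamma = 1.0e-8                   # kg/s, hypothetical friction
dt = 1.0e-9                      # s, integration step (assumed)
mu1, mu2 = -270e-9, 270e-9       # m
s1, s2 = 200e-9, 200e-9          # m
E1, E2 = 1.9009*kBT, 4.4851*kBT  # depths of potential well #1

def dU_dx(x):
    # derivative of U(x) = -E1 exp(-(x-mu1)^2/s1^2)
    #                      -E2 exp(-(x-mu2)^2/s2^2)
    return (2*E1*(x - mu1)/s1**2*np.exp(-(x - mu1)**2/s1**2)
            + 2*E2*(x - mu2)/s2**2*np.exp(-(x - mu2)**2/s2**2))

rng = np.random.default_rng(0)
x, v = mu1, 0.0
traj = np.empty(10**6)
for n in range(traj.size):
    noise = np.sqrt(2*gamma*kBT*dt)/m*rng.standard_normal()
    v += dt*(-gamma/m*v - dU_dx(x)/m) + noise
    x += dt*v
    traj[n] = x        # threshold traj at x = 0 to assign states
\end{verbatim}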
\begin{table}[ht]
\centering
\begin{tabular}{c c}
\hline\hline
Parameter & Value \\ [0.5ex]
\hline
$\mu_1$ & $-270~nm$ \\
$\mu_2$ & $270~nm$ \\
$\sigma_1$ & $200~nm$ \\
$\sigma_2$ & $200~nm$ \\
$m$ & $0.55~pg$ \\
\hline
\end{tabular}
\caption{Parameters governing the geometry of the potential wells for
the dynamics simulation. $\mu_i$ refers to the center of the $i^{th}$ well
and $\sigma_i$ is the RMS width of the potential. The width and mean position of the well were the same for all simulated potential surfaces. $m$ is the mass of the particle.}
\label{table:parameters}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{c c c c}
\hline\hline
Potential & $\Delta E$ & $E_{1}$ & $E_{2}$ \\ [0.5ex]
Well \# & ($k_B T$) & ($k_B T$) & ($k_B T$) \\ [0.5ex]
\hline
1 & -2.5841 & 1.9009 & 4.4851 \\
2 & -2.2803 & 2.0607 & 4.3409 \\
3 & -1.9764 & 2.2243 & 4.2007 \\
4 & -1.6724 & 2.3918 & 4.0642 \\
5 & -1.3684 & 2.5629 & 3.9313 \\
6 & -1.0643 & 2.7375 & 3.8019 \\
7 & -0.7602 & 2.9156 & 3.6758 \\
8 & -0.4562 & 3.0969 & 3.5531 \\
9 & -0.1521 & 3.2814 & 3.4335 \\
10 & 0.1521 & 3.4690 & 3.3169 \\
11 & 0.4562 & 3.6595 & 3.2034 \\
12 & 0.7603 & 3.8529 & 3.0927 \\
13 & 1.0643 & 4.0492 & 2.9848 \\
14 & 1.3684 & 4.2481 & 2.8797 \\
15 & 1.6724 & 4.4496 & 2.7772 \\
\hline
\end{tabular}
\caption{Parameters governing the depths of the potential wells for the different
simulations. The relative depth of the two wells varied between each simulated potential surface.}
\label{table:parameters2}
\end{table}
The simulations give us explicit realizations of the trajectory distribution that we can compare to theoretical predictions of our MaxCal model via partial derivatives of the partition function. For example, we can compute the fluctuations in the mean state as
\begin{eqnarray}
\frac{\partial^2\ln Z}{\partial h^2}&=&\frac{1}{Z}\frac{\partial^2 Z}{\partial h^2}-(\frac{\partial\ln Z}{\partial h})^2\\
&=&\sum_{\Gamma}p_{\Gamma}(\sum_{t}\sigma_{\Gamma}(t))^2-\langle i\rangle^2\\
&=&\langle i^2\rangle-\langle i\rangle^2.
\label{eqVarI}
\end{eqnarray}
Similarly, we can compute other correlation functions of interest such
as
\begin{equation}
\frac{\partial^2\ln Z}{\partial J\partial h}=-2(\langle N_s i\rangle-\langle N_s\rangle \langle i\rangle)
\label{eqCovariance}
\end{equation}
and
\begin{equation}
\frac{\partial^2\ln Z}{\partial J^2} = 4(\langle N_s^2\rangle-\langle N_s\rangle^2).
\label{eqVarNs}
\end{equation}
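In practice, these moments can be evaluated by finite differencing $\ln Z$; the sketch below does so for illustrative values of $h$, $J$, and $N$ (not fits to any particular potential well).
\begin{verbatim}
# Sketch: second moments from finite differences of ln Z; h, J, N
# are illustrative values.
import numpy as np

N, h, J = 100000, 0.05, 2.0

def ln_Z(h, J):
    lam_p = np.exp(J)*np.cosh(h) \
            + np.sqrt(np.exp(2*J)*np.sinh(h)**2 + np.exp(-2*J))
    return -J*N + N*np.log(lam_p)  # lam_m^N negligible for N >> 1

d = 1e-3
d2_hh = (ln_Z(h+d, J) - 2*ln_Z(h, J) + ln_Z(h-d, J))/d**2
d2_JJ = (ln_Z(h, J+d) - 2*ln_Z(h, J) + ln_Z(h, J-d))/d**2
d2_hJ = (ln_Z(h+d, J+d) - ln_Z(h+d, J-d)
         - ln_Z(h-d, J+d) + ln_Z(h-d, J-d))/(4*d**2)

var_i = d2_hh          # <i^2> - <i>^2
cov_Ns_i = -0.5*d2_hJ  # <N_s i> - <N_s><i>
var_Ns = 0.25*d2_JJ    # <N_s^2> - <N_s>^2
print(var_i, cov_Ns_i, var_Ns)
\end{verbatim}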
\begin{figure}
\includegraphics[width=8.6cm]{VarianceI_v3}
\caption{\textbf{Comparison between the value of the variance of $i$ obtained analytically and the variance obtained by ``measuring'' the properties of simulated trajectories.}
The graph shows the variance for each of the simulated potential
surfaces characterized in Table~\ref{table:parameters2}, with each data point
corresponding to a particular potential well. The MaxCal results are shown
for the cases when periodic boundary conditions and exact boundary
conditions are used. The best fit line to the periodic boundary condition data is $y=1.034 x-2.5\times 10^{-4}$ with an $R^2=0.9945$; the best fit line to the exact boundary condition data is $y=0.9724 x-5.8\times 10^{-5}$ with an $R^2=0.9923$.}
\label{fig:vari}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=8.6cm]{CovarianceNsI_v3}
\caption{\textbf{Comparison between computed and simulated statistical
correlations. }
The graph shows the statistical quantity labeled above the graph
for each of the simulated potential
surfaces characterized in Table~\ref{table:parameters2}, with each data point
corresponding to a particular potential well. The MaxCal results are shown
for the cases when periodic boundary conditions and exact boundary
conditions are used. The best fit line to the periodic boundary condition data is $y=1.0063 x-1.5\times 10^{-7}$ with an $R^2=0.9957$; the best fit line to the exact boundary condition data is $y=0.9625 x-1.2\times 10^{-7}$ with an $R^2=0.9955$.}
\label{fig:covariance}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=8.6cm]{VarianceNs_v3}
\caption{\textbf{Comparison between the value of the variance of $N_s$ obtained analytically and the variance obtained by ``measuring'' the properties of simulated trajectories.} The graph shows the variance for each of the simulated potential
surfaces characterized in Table~\ref{table:parameters2}, with each data point
corresponding to a particular potential well. The MaxCal results are shown
for the cases of the partition functional used here (Equation \ref{eqZ1}) and of the Markov partition functional used in a previous analysis. The best fit line to the periodic boundary condition data is $y=0.9297 x+2.9\times 10^{-10}$ with an $R^2=0.9626$; the best fit line to the exact boundary condition data is $y=1.0342 x-4.2\times 10^{-11}$ with an $R^2=0.9537$.}
\label{fig:varNs}
\end{figure}
To test our MaxCal model, we compared moments computed from the simulated data described previously with moments predicted by the MaxCal model as
revealed in Equations \ref{eqVarI}, \ref{eqCovariance}, and \ref{eqVarNs}, for
example. Figures \ref{fig:vari}, \ref{fig:covariance}, and \ref{fig:varNs} demonstrate good agreement between simulated data and the moments predicted from the MaxCal model via Equations \ref{eqVarI}, \ref{eqCovariance}, and \ref{eqVarNs}, respectively. The calculations from this MaxCal model are also compared to the calculations from the MaxCal model described in Section \ref{section:MarkovRelation} \cite{Wuetal}. In these graphs, each data point corresponds to a different potential well setup and error bars correspond to standard error. Uncertainty in calculated values of moments stems from uncertainty in the value of the constraints taken from simulated data. See Appendix I for details.
From Figures \ref{fig:vari}-\ref{fig:varNs}, there is good agreement between the two MaxCal models, which we have already argued are equivalent aside from a difference in boundary conditions. The difference between the moment predictions in Figures \ref{fig:vari}-\ref{fig:varNs} is barely perceptible because $N=10^{5}\gg1$, but the difference due to boundary conditions becomes significant when there are very few time steps \cite{Gerhard}. For most cases of interest, the long-time limit applies: boundary conditions matter less and less as the total sampling time $N\Delta t$ grows relative to the time step $\Delta t$, just as surface terms become irrelevant in the thermodynamic limit of equilibrium statistical mechanics \cite{Pathria}.
This indifference to boundary conditions can also be seen by testing Equations \ref{eqAlpha} and \ref{eqMu} derived for Lagrange multipliers in Section \ref{section:MarkovRelation}. Recall that those equations hold only if periodic boundary conditions and exact boundary conditions lead to equivalent results, and therefore any deviation from the equations is due entirely to a difference in boundary conditions. Figures \ref{fig:Alpha} and \ref{fig:Mu} demonstrate that Equations \ref{eqAlpha} and \ref{eqMu} agree well with the data from the fifteen different potential well setups described above, although Figure \ref{fig:Alpha} shows a systematic deviation due to a difference in boundary conditions. We calculated rates $k_A$ and $k_B$ from Equation \ref{eq:MarkovLMsRates} and the values of the Lagrange multipliers derived using $\frac{\partial \ln Z}{\partial \ln \gamma_{aa}}=\langle N_{aa}\rangle$, $\frac{\partial \ln Z}{\partial \ln \gamma_{bb}}=\langle N_{bb}\rangle$, $\frac{\partial \ln Z}{\partial\ln \gamma_{ab}}=\langle N_{ab}\rangle$, and $\frac{\partial\ln Z}{\partial\ln \gamma_{ba}}=\langle N_{ba}\rangle$. See Appendix II for more details.
\begin{figure}[h]
\centering
\includegraphics[width=8.6cm]{LM_J_Prediction_v2}
\caption{\textbf{The Lagrange multiplier $J$.} Comparison between predicted value of $J$ from Equation (\ref{eqAlpha}) and its calculated value using MaxCal. The dashed line shows where the data points would fall if there were perfect agreement between the two MaxCal models. The best fit line is given by $y=0.8944 x+0.4134$ with an $R^2=0.9884$. Deviations from the proposed relationships in Equations \ref{eqAlpha} and \ref{eqMu} can be ascribed to the difference in boundary conditions, based on the analysis in Appendix II.}
\label{fig:Alpha}
\end{figure}
\begin{figure}
\includegraphics[width=8.6cm]{LM_h_Prediction_v2}
\caption{\textbf{The Lagrange multiplier $h$.} Comparison between predicted value of $h$ from Equation (\ref{eqMu}) and its calculated values using MaxCal. The dashed line shows where the data points would fall if there were perfect agreement between the two MaxCal models. The best fit line is given by $y=0.9154 x$ with an $R^2=0.9996$.}
\label{fig:Mu}
\end{figure}
The Ising model can also be used for predicting the autocorrelation function for the different states, so that MaxCal model calculations can be compared directly to predictions from the master equations for the two-state (i.e. random telegraph) problem \cite{Gardiner}. In the language of the Ising model, the autocorrelation function for being in the state $A$ is
\begin{equation}
P(A,\tau)=\frac{1}{N}\langle\sum_t (\frac{\sigma(t)+1}{2})(\frac{\sigma(t+\tau)+1}{2})\rangle.
\end{equation}
The quantity $\frac{\sigma(t)+1}{2}$ is $1$ when the particle is in state A at time $t$ and $0$ if the particle is in state B; thus, the quantity $(\frac{\sigma(t)+1}{2})(\frac{\sigma(t+\tau)+1}{2})$ acts as a counter that is $1$ when the particle is in state A at times $t$ and $t+\tau$, and $0$ otherwise. Expansion of this expression gives
\begin{equation}
P(A,\tau)=\frac{1}{4}(1+2\frac{\langle i\rangle}{N}+\langle\sigma(t)\sigma(t+\tau)\rangle).
\label{eqProbA}
\end{equation}
Similarly, we can write the corresponding autocorrelation function for the other state as
\begin{equation}
P(B,\tau)=\frac{1}{4}(1-2\frac{\langle i\rangle}{N}+\langle\sigma(t)\sigma(t+\tau)\rangle).
\label{eqProbB}
\end{equation}
Here $J$ and $h$ are time independent, so that $\langle\sigma(t)\sigma(t+\tau)\rangle$ is the same for any $t$. We can evaluate $\langle\sigma(t)\sigma(t+\tau)\rangle$ by using standard transfer matrix techniques \cite{Pathria}, yielding the result
\begin{eqnarray}
\langle\sigma(t)\sigma(t+\tau)\rangle&=&\sum_{\Gamma}\sigma(t)\sigma(t+\tau)p_{\Gamma} \nonumber \\
&=&\frac{tr\left[D^{t}\left(S^{-1}PS\right)D^{\tau}\left(S^{-1}PS\right)D^{N-t-\tau}\right]}{\lambda_{+}^{N}+\lambda_{-}^{N}},
\end{eqnarray}
where $T=SDS^{-1}$ is the eigendecomposition of the transfer matrix in Equation (\ref{eqMatrix}), with $S$ the matrix of eigenvectors of $T$, and $P$ is the Pauli spin matrix $\left( \begin{smallmatrix} 1&0\\ 0&-1 \end{smallmatrix} \right)$. We find that
\begin{equation}
S^{-1} P S= \left( \begin{smallmatrix} x&-x-1\\ x-1&-x \end{smallmatrix} \right) ,
\end{equation}
where
\begin{equation}
x=\frac{e^{2J}(e^{2h}-1)}{\sqrt{4e^{2h}+e^{4J}(e^{2h}-1)^2}}.
\end{equation}
Recall that $D=\left( \begin{smallmatrix} \lambda_{+}&0\\ 0&\lambda_{-} \end{smallmatrix} \right)$, which makes exponentiation trivial, resulting in
\begin{equation}
D^n=\left( \begin{smallmatrix} \lambda_{+}^n&0\\ 0&\lambda_{-}^n \end{smallmatrix} \right).
\end{equation}
Therefore, matrix multiplication gives
\begin{eqnarray}
\langle\sigma(t)\sigma(t+\tau)\rangle&=&\frac{e^{4J}(e^{2h}-1)^2}{4e^{2h}+e^{4J}(e^{2h}-1)^2}+ \nonumber \\
&&\frac{4e^{2h}}{4e^{2h}+e^{4J}(e^{2h}-1)^2}\times \nonumber \\
&&\frac{\lambda_{+}^\tau \lambda_{-}^{N-\tau}+\lambda_{-}^\tau \lambda_{+}^{N-\tau}}{\lambda_{+}^N+\lambda_{-}^N},
\end{eqnarray}
which we can use to directly calculate $P(A,\tau)$ and $P(B,\tau)$; as a consistency check, at $\tau=0$ this expression gives $\langle\sigma(t)^{2}\rangle=1$, as it must.
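The resulting closed-form correlation function is simple to evaluate; the sketch below computes $\langle\sigma(t)\sigma(t+\tau)\rangle$, $P(A,\tau)$, and $P(B,\tau)$ for illustrative values of $h$, $J$, and $N$.
\begin{verbatim}
# Sketch: evaluating the closed-form spin-spin correlation and
# P(A,tau); h, J, N are illustrative values.
import numpy as np

N, h, J = 100000, 0.05, 2.0
root = np.sqrt(np.exp(2*J)*np.sinh(h)**2 + np.exp(-2*J))
lam_p = np.exp(J)*np.cosh(h) + root
lam_m = np.exp(J)*np.cosh(h) - root

Q = 4*np.exp(2*h) + np.exp(4*J)*(np.exp(2*h) - 1)**2
x = np.exp(2*J)*(np.exp(2*h) - 1)/np.sqrt(Q)  # equals <i>/N for N >> 1

tau = np.arange(0, 2001)
r = lam_m/lam_p
corr = x**2 + (4*np.exp(2*h)/Q)*(r**tau + r**(N - tau))/(1 + r**N)
P_A = 0.25*(1 + 2*x + corr)   # Eq. (eqProbA), with <i>/N = x
P_B = 0.25*(1 - 2*x + corr)   # Eq. (eqProbB)
print(P_A[0], P_A[-1])        # decays from P(A,0) toward ((1+x)/2)^2
\end{verbatim}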
A comparison between simulated and theoretical values for $P(A,t)$ is shown in Figure \ref{fig:Correlation}. Equations \ref{eqProbA} and \ref{eqProbB} are quite accurate, although they are not identical to the probabilities that one would calculate using master equations. The reason for this is that our discrete MaxCal model has a third implicit parameter in addition to $k_A$ and $k_B$: $\Delta t$, the time scale associated with the discretization of the trajectories. The exact nature of the dependence on $\Delta t$ was made explicit in Section \ref{section:MarkovRelation}.
\begin{figure}[htbp]
\begin{center}%
\includegraphics[width=8.6cm]{Autocorrelation1A_v2}
\end{center}
\caption{\textbf{Time dependence of the correlation function.} The graph of Equation \ref{eqProbA} is shown as a function of time in units of $\Delta t$ for potential well number one.}
\label{fig:Correlation}
\end{figure}
There are benefits and drawbacks to using the MaxCal analysis presented here instead of a more traditional master equation analysis. For example, it is straightforward to calculate the joint probability distribution of $N_s$ and $i$, $p(N_s,i)$, using the MaxCal model. Indeed, some quantities we do not know how to compute at all within the master equation treatment. The ease of these calculations in the MaxCal setting stems from the fact that the moments of the probability distribution $p(N_s,i)$ are simply partial derivatives of the partition function $Z$ in our MaxCal model, whereas in the master equation formalism one must construct $p(N_s,i)$ from $P(A,t)$ and $P(B,t)$.
It is straightforward to calculate autocorrelations using the master equation formalism via a linear differential equation; the same quantity can be calculated from MaxCal using transfer matrices.
\section{Maxwell-like relations for dynamic processes}
Both in classical equilibrium thermodynamics and statistical mechanics
and in the thermodynamics of small departures from equilibrium, there are
broad classes of relations such as the Maxwell relations and the Onsager
relations that illustrate the mathematical linkage of quantities that at first
blush might seem unrelated. In some cases, the simplest way to
explain these relations is that they reflect equality of certain second-order
mixed partial derivatives. The maximum caliber formalism admits similar
relations \cite{Jaynes2,Gerhard}. In the language of trajectories, what these relations tell us is
that if we have measured properties such as
$N_s$ and $i$ on one two-well landscape, we can predict what the dynamics
will be like on a ``nearby'' landscape without doing any further
measurements.
In this MaxCal model, Maxwell-like relations fall out as the equality of mixed partial derivatives of $\ln Z$. For example, the statement that $\frac{\partial^2\ln Z}{\partial h\partial J}=\frac{\partial^2\ln Z}{\partial J\partial h}$ leads to
\begin{equation}
\frac{\partial \langle N_s\rangle}{\partial h}=-\frac{1}{2}\frac{\partial^2\ln Z}{\partial h\partial J}=-\frac{1}{2}\frac{\partial \langle i\rangle}{\partial J}.
\label{eqOnsager}
\end{equation}
Other results can be obtained from higher-order mixed partial derivatives; for example, equating the two orderings of $\frac{\partial^3\ln Z}{\partial J\partial h^2}$ yields $\frac{\partial^2 \langle N_s\rangle}{\partial h^2}=-\frac{1}{2}\frac{\partial^2\langle i\rangle}{\partial J\partial h}$. It remains to be seen how to turn such relations into generally useful
predictive tools for the design and interpretation of experiments.
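As a sanity check, Equation \ref{eqOnsager} can be verified numerically by finite differences of $\ln Z$; the parameter values in the sketch below are again illustrative.
\begin{verbatim}
# Sketch: verifying d<N_s>/dh = -(1/2) d<i>/dJ by finite differences
import numpy as np

N, d = 100000, 1e-3

def ln_Z(h, J):
    lam_p = np.exp(J)*np.cosh(h) \
            + np.sqrt(np.exp(2*J)*np.sinh(h)**2 + np.exp(-2*J))
    return -J*N + N*np.log(lam_p)

def Ns(h, J):     # <N_s> = -(1/2) d lnZ/dJ
    return -0.5*(ln_Z(h, J+d) - ln_Z(h, J-d))/(2*d)

def i_avg(h, J):  # <i> = d lnZ/dh
    return (ln_Z(h+d, J) - ln_Z(h-d, J))/(2*d)

h, J = 0.05, 2.0
lhs = (Ns(h+d, J) - Ns(h-d, J))/(2*d)
rhs = -0.5*(i_avg(h, J+d) - i_avg(h, J-d))/(2*d)
print(lhs, rhs)   # the two numbers agree
\end{verbatim}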
\section{Conclusion}
In summary, we have studied a single colloidal particle undergoing a two-state process with stationary rates, though the problem with time-varying rates is also of great interest. This classic problem is studied using the theory of maximum caliber. By measuring relatively short trajectories ($\sim$ one hour), we are able to find the full probability distribution of trajectories from a MaxCal model using the average frequency of state-switching and the average state as constraints, and confirm that the predicted trajectory distribution agrees with simulated data. Additionally, we show that the MaxCal analysis built on the ``Markov-like'' constraints used earlier for the two-state system \cite{Wuetal} differs from the analysis presented here only through the choice of boundary conditions. The mapping of the Ising-like MaxCal model onto the Markov-like MaxCal model allows us to assign a physical interpretation to the Lagrange multipliers.
Mapping two-state dynamics onto a one-dimensional Ising model yields unexpected insights into two-state dynamics, even though the master equations that describe two-state kinetics have been used for decades. First, we show that the dynamics of the two-state system can be mapped onto a 1-D Ising model whose ``coupling constant'' and ``magnetic field'' can be written explicitly in terms of $k_A$, $k_B$, and $\Delta t$. Then we derive several relationships that hold true specifically for discretized two-state systems, i.e. the expressions depend on the time step $\Delta t$ as well as the rates $k_A$ and $k_B$. We derive and verify expressions for the moments of the joint probability distribution $p(N_s,i)$, which to the best of our knowledge have not been derived previously using the traditional master equation formalism or anything else. Next we derive and verify an expression for the autocorrelation functions $P(A,t)$ and $P(B,t)$ in a discretized system. It is possible that these new formulas will be of practical value to those who study systems that can be modeled as two-state systems, e.g. ion channels and ligand-receptor complexes.
\section{Acknowledgements}
We are grateful to Ken Dill and Kingshuk Ghosh for years of
discussion on the MaxCal formalism. SM acknowledges the support of a Caltech Student Undergraduate Research Fellowship. DW acknowledges the
support of a NIH UCLA-Caltech MD-PhD fellowship.
This work was also supported by the NIH Director's Pioneer award.
Metamaterials, which consist of artificial elements (so-called metaatoms or metamolecules) usually
arranged in a periodic pattern, have been playing an increasingly important r\^{o}le in
applications in which they emulate physical properties that otherwise cannot be achieved with
naturally occurring materials. The broad available choice of particular geometries and material
parameters of the constituents of metamaterials facilitates their use for the implementation of key
functionalities, including, \textit{inter alia}, phase engineering \cite{aman07prb,tcr14sr,yp18oe},
light focusing \cite{vgbc15josab, gzcql10oe, cz10oe}, and local field enhancement \cite{igmha13prb,
pegwc11jo, jwzxs14oe, baj09prb}. These functionalities are beginning to impact a series of research
fields by finding applications in bio-sensing \cite{apswg09nm, kymem16nm, cjxsn13acs, ngva15oe},
development of efficient absorbers \cite{nsjd08prl, rmcf12oe, sqlf13n}, electromagnetic cloaking
\cite{djbs06s, dyygs12nc}, and imaging beyond the diffraction limit
\cite{zshyycx07oe,dbcn10oe,ss12njp,gm15prl}. Among these physical properties of metamaterials,
local field enhancement is particularly relevant to nonlinear optics, as in this case the optical
response of a metamaterial-based device depends nonlinearly on the externally applied optical field
and thus can be widely tuned.
In many applications, the two-dimensional (2D) counterpart of metamaterials, the so-called
metasurfaces, can provide the required functionality, especially in the case of devices with planar
configuration. In addition, metasurfaces have the advantage of requiring much less laborious
fabrication processes. Moreover, in many applications pertaining to nonlinear optics, especially
those related to surface science and sensing, achieving the phase-matching of the interacting waves
is not a prerequisite condition, and therefore the constraints imposed on metasurfaces in order to
attain optimal energy conversion in nonlinear processes can be greatly relaxed \cite{jmjma15prb,
ad11ome, cga14prb, egy16nc,gst17nrm}.
Broadly speaking, there are two classes of optical metasurfaces: plasmonic metasurfaces based on
metallic particles \cite{fllfc16sa, ya11prb} and dielectric metasurfaces \cite{yidj14nc,
jimed15acs} relying on Mie resonances of dielectric particles. In the case of plasmonic
metasurfaces, the local field can be dramatically enhanced at plasmon-resonance frequencies
\cite{m07book,zmm05pr,vme10rmp,psl18jo}; however, this effect is usually accompanied by a
relatively large optical loss \cite{j15nn}. On the other hand, dielectric metasurfaces are
characterized by much smaller optical losses but usually provide reduced optical field enhancement.
A promising alternative to plasmonic and all-dielectric metasurfaces is provided by graphene
metasurfaces, as the (plasmon) resonance frequency of graphene nanostructures lies in the terahertz
domain, namely where optical losses of graphene are relatively small. Equally important, the
plasmonic nature of these resonances ensures that strong field enhancement can be achieved in
graphene metasurfaces, too. In addition, the corresponding resonance wavelength is much larger than
the size of graphene resonators, which means that a large number of such resonators can be packed
inside a domain with size comparable to that of the operating wavelength. Consequently, the optical
response of graphene metasurfaces can be highly isotropic, when the geometry of graphene unit cell
is symmetric. In fact, patterned graphene has already been employed in the design of terahertz
devices, such as perfect absorbers, filters, and tunable reflectors
\cite{tone16pre,toge18prb,vte18pra,fdf11nl,smt12nm,ybb19nanophot}. In this context, a particularly
appealing physical property of graphene is the tunability of its dielectric constant, a unique
functionality that is highly relevant to the design of active photonic devices.
In this paper, we propose a powerful and versatile homogenization approach for graphene
metasurfaces, and subsequently use it to demonstrate that the effective second-order susceptibility
of such metasurfaces can be dramatically increased due to the field-enhancement effect at plasmon
resonances. The novelty of the homogenization method used in this study consists in its ability to
describe not only metasurfaces containing linear and isotropic materials, like the standard
field-average methods, but also those made of anisotropic and nonlinear optical media. In addition,
we find that when a so-called double-resonance phenomenon occurs in a graphene metasurface
\cite{yymp17ptrsa}, the second-harmonic generation (SHG) can be further enhanced, leading to an
overall increase in SHG of more than three orders of magnitude as compared to the SHG of a graphene
sheet placed on the same substrate.
The paper is organized as follows: In the next section, the configurations of the graphene
metasurfaces investigated in this work are described, as well as their material parameters. In
Section~\ref{Theory}, an improved homogenization approach for retrieving the effective linear and
nonlinear properties of graphene metasurfaces is presented. Then, using this homogenization method,
the geometrical parameters of the graphene metasurfaces are optimized so as to achieve plasmon
resonances at both the fundamental frequency (FF) and second-harmonic (SH). In
Section~\ref{ResDisc}, the linear and nonlinear optical spectra of the graphene metasurfaces are
calculated and a comparison of the effective second-order susceptibility of graphene metasurfaces
with the second-order susceptibility of a graphene sheet placed on the same substrate is provided.
Finally, the main conclusions are outlined in Section~\ref{Concl}.
\section{Physical configuration and material parameters of graphene metasurfaces}\label{Conf}
In this section, we present the configuration of the one-dimensional (1D) and 2D graphene
metasurfaces studied in this work and describe the properties of the linear and nonlinear optical
constants of graphene. Thus, the two generic nonlinear graphene-based metasurfaces, a 1D
metasurface consisting of a periodic arrangement of graphene ribbons and a 2D metasurface
consisting of a rectangular array of graphene rectangular patches, are schematically illustrated in
Figs.~\ref{schematic}(a) and \ref{schematic}(b), respectively. The period of the 1D metasurface is
$P_{x}=\SI{100}{\nano\meter}$ and the width of the nanoribbons is $w$, whereas in the case of the
2D metasurface the periods along the $x$- and $y$-axis are $P_{x}=P_{y}=\SI{100}{\nano\meter}$ and
the length of the graphene patches along the $y$-axis is fixed at $w_{y}=\SI{30}{\nano\meter}$. The
width of the graphene nanoribbons and the length of the graphene patches along the $x$-axis,
$w_{x}$, are free parameters that will be optimized so as to achieve a double-resonance effect. In
both cases the graphene nanostructures are placed onto a silica substrate with
$n_{\mathrm{SiO_2}}=1.4$ and are illuminated by a normally incident, $x$-polarized plane wave with
field amplitude $E_{0}=\SI{1}{\volt\per\meter}$ (wave intensity
$I_{0}=\SI{4.43e12}{\watt\per\square\meter}$). This choice of the wave polarization ensures that
graphene plasmons exist in both metasurfaces.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{Fig1.jpg}
\caption{(a) Schematics of a 1D graphene metasurface, with the period $P_{x}$ and width of graphene
ribbons, $w$. (b) Schematics of a 2D graphene metasurface, with periods $P_{x}$ and $P_{y}$, and
side-length of the graphene patches of $w_{x}$ and $w_{y}$. The two graphene metasurfaces are
illuminated by an $x$-polarized plane wave normally incident onto the metasurfaces.}
\label{schematic}
\end{figure}
Due to its metallic characteristics in the terahertz and infrared spectral regions, graphene
supports surface-plasmon polaritons (SPPs), which are collective oscillations of free electrons. In
the case of finite-size graphene nanostructures, the resonance frequency of SPPs is geometry
dependent. Therefore, by properly choosing the size and shape of these graphene nanostructures, one
can achieve a double-resonance phenomenon, namely SPPs existing at both the FF and SH. When this
occurs, the optical near-fields at the FF and SH are strongly enhanced, which leads to a marked
increase of the intensity of the SHG. Under these circumstances, one expects that the graphene
metasurface can be viewed as a homogeneous sheet of nonlinear material with strongly enhanced
effective second-order susceptibility.
Before we analyze in more detail the linear and nonlinear optical properties of the two graphene
metasurfaces, we briefly summarize the optical properties of the main optical constants of
graphene. Since graphene is a 2D semimetal, a surface optical conductivity, $\sigma_{s}$, is
generally used to describe its main linear physical properties at optical frequencies. Based on
Kubo's formula derived within the random-phase approximation, $\sigma_{s}$ can be expressed as the
sum of the intra-band ($\sigma_{\mathrm{intra}}$) and inter-band ($\sigma_{\mathrm{inter}}$)
contributions, $\sigma_{s}=\sigma_{\mathrm{intra}}+\sigma_{\mathrm{inter}}$. The intra-band part is
given by:
\begin{equation}
\label{eq:intra_band_conduct}
\sigma_{\mathrm{intra}}=\frac{e^{2}k_{B}T\tau}{\pi\hbar^{2}(1-i\omega\tau)}\left[
\frac{\mu_{c}}{k_{B}T}+2\ln(e^{-\frac{\mu_{c}}{k_{B}T}}+1) \right],
\end{equation}
where $\mu_{c}$ is the chemical potential, $\tau$ is the relaxation time, $T$ is the temperature,
$e$ is the electron charge, $k_{B}$ is the Boltzmann constant, and $\hbar$ is the reduced Planck's
constant. Throughout our analysis, we use $\mu_{c}=\SI{0.6}{\electronvolt}$,
$\tau=\SI{0.25}{\pico\second}$, and $T=\SI{300}{\kelvin}$. Moreover, if $\mu_{c}\gg k_{B}T$, which
usually holds at room temperature, the inter-band part can be approximated as:
\begin{equation}
\label{eq:inter_band_conduct} \sigma_{\mathrm{inter}}= \frac{i e^{2}}{4\pi\hbar}\ln
\left[\frac{2|\mu_{c}|-(\omega+i\tau^{-1})\hbar}{2|\mu_{c}|+(\omega+i\tau^{-1})\hbar}\right].
\end{equation}
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{Fig2.jpg}
\caption{Relative electric permittivity of a graphene sheet with
$h_{\mathrm{eff}}=\SI{0.3}{\nano\meter}$.} \label{epsilon}
\end{figure}
If we assume that the effective thickness of graphene is $h_{\mathrm{eff}}$, the relative
electric permittivity can be calculated from the conductivity through the relation:
\begin{equation}
\label{eq:relation_epsilon_conduct}
\epsilon_{\mathrm{gr}}(\omega)=1+\frac{i\sigma_{s}}{\epsilon_{0}\omega h_{\mathrm{eff}}}.
\end{equation}
The relative electric permittivity of graphene is depicted in Fig.~\ref{epsilon}, where
$h_{\mathrm{eff}}=\SI{0.3}{\nano\meter}$ has been used.
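For reference, the sketch below evaluates Equations \ref{eq:intra_band_conduct}--\ref{eq:relation_epsilon_conduct} with the parameter values quoted above; an $e^{-i\omega t}$ time convention is assumed.
\begin{verbatim}
# Sketch: graphene surface conductivity and permittivity from the
# Kubo formulas; parameters follow the text.
import numpy as np

q = 1.602176634e-19      # electron charge, C
hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J/K
eps0 = 8.8541878128e-12  # F/m
c0 = 2.99792458e8        # m/s

mu_c = 0.6*q             # chemical potential, J
tau = 0.25e-12           # relaxation time, s
T = 300.0                # temperature, K
h_eff = 0.3e-9           # effective thickness, m

def sigma_s(w):
    intra = (q**2*kB*T*tau/(np.pi*hbar**2*(1 - 1j*w*tau))
             *(mu_c/(kB*T) + 2*np.log(np.exp(-mu_c/(kB*T)) + 1)))
    inter = (1j*q**2/(4*np.pi*hbar)
             *np.log((2*mu_c - (w + 1j/tau)*hbar)
                     /(2*mu_c + (w + 1j/tau)*hbar)))
    return intra + inter

lam = np.linspace(4e-6, 12e-6, 400)  # wavelength, m
w = 2*np.pi*c0/lam
eps_gr = 1 + 1j*sigma_s(w)/(eps0*w*h_eff)
\end{verbatim}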
Similar to the case of three-dimensional (3D) bulk optical media, the nonlinear optical properties
of 2D materials are generally determined by the symmetry properties of their atomic lattice and are
quantified by (bulk) nonlinear susceptibility tensors, $\bm{\chi}^{(n)}(\Omega;\omega)$, where
$\omega$ and $\Omega$ are the frequencies at the FF and higher-harmonic, respectively, and $n$ is
the order of the nonlinear optical process, or, equivalently, by surface nonlinear optical
conductivities, $\bm{\sigma}_{s}^{(n)}(\Omega;\omega)$. These two physical quantities are related
\textit{via} the following relation:
\begin{equation}\label{chisig}
\bm{\chi}^{(n)}(\Omega;\omega)=\frac{i}{\epsilon_{0}\Omega
h_{\mathrm{eff}}}\bm{\sigma}_{s}^{(n)}(\Omega;\omega).
\end{equation}
Free-standing graphene is a centrosymmetric material and therefore second-order nonlinear optical
processes and, in particular, SHG, are forbidden. If a graphene sheet, however, is placed onto a
homogeneous substrate the inversion symmetry is broken and (dipole) SHG is allowed. In particular,
such an optical configuration is characterized by a surface second-order nonlinear optical
conductivity tensor, $\bm{\sigma}_{s}^{(2)}(\Omega;\omega)$, where $\Omega=2\omega$. Symmetry
considerations based on the fact that graphene belongs to the $\mathcal{D}_{\mathrm{6h}}$ symmetry
group lead to the conclusion that this tensor has three independent nonzero components,
$\sigma_{s,\perp\perp\perp}^{(2)}$,
$\sigma_{s,\parallel\parallel\perp}^{(2)}=\sigma_{s,\parallel\perp\parallel}^{(2)}$, and
$\sigma_{s,\perp\parallel\parallel}^{(2)}$, where the symbols $\perp$ and $\parallel$ refer to the
directions perpendicular onto and parallel to the plane of graphene, respectively. The values of
these parameters used in this paper are
$\sigma_{s,\perp\perp\perp}^{(2)}=\SI[output-complex-root=\text{\ensuremath{i}}]{-9.71ie-16}{\ampere\meter\per\square\volt}$,
$\sigma_{s,\parallel\parallel\perp}^{(2)}=\sigma_{s,\parallel\perp\parallel}^{(2)}=\SI[output-complex-root=\text{\ensuremath{i}}]{-2.56ie-16}{\ampere\meter\per\square\volt}$,
and
$\sigma_{s,\perp\parallel\parallel}^{(2)}=\SI[output-complex-root=\text{\ensuremath{i}}]{-2.09ie-16}{\ampere\meter\per\square\volt}$,\cite{yjdja14prb,jm10prb}
and correspond to graphene placed on a silica substrate. Note that similar to the case of surface
nonlinear second-order susceptibility of noble metals, the dominant component of the surface
nonlinear second-order conductivity (susceptibility) is the $\sigma_{s,\perp\perp\perp}^{(2)}$
($\chi_{s,\perp\perp\perp}^{(2)}$) component.
\section{Theory of linear and nonlinear homogenization}\label{Theory}
In this section, we describe a theoretical method we recently introduced \cite{jp19ol} for the
homogenization of the linear and nonlinear optical response of graphene metasurfaces. In
particular, we present an approach for extracting the effective linear and nonlinear optical
coefficients of a homogenized layer of material, which in the far-field has the same linear and
nonlinear optical response as that of the graphene metasurface. To be more specific, we use this
method to compute the effective electric permittivity of the two generic graphene metasurfaces, as
well as the effective surface second-order susceptibility of graphene metasurfaces, when they are
optimized to achieve maximum nonlinearity enhancement. Note that although the homogenized
metasurfaces can be characterized by effective surface quantities, such as linear and nonlinear
surface conductivities \cite{hkd11awpl,tks18acsp}, in this work we consider that the homogenized
metasurfaces have a finite thickness, $h_{\mathrm{eff}}$, and thus are described by bulk effective
permittivities and nonlinear susceptibilities.
\begin{figure}[!b]
\centering
\includegraphics[width=\columnwidth]{Fig3.jpg}
\caption{(a) Effective relative permittivity of homogenized graphene-nanoribbon metasurface with
$w=\SI{57.5}{\nano\meter}$. In insets, the spatial profile of $\vert E_{x}\vert$, calculated at the
resonance wavelength $\lambda=\SI{6.74}{\micro\meter}$ (top panel) and at
$\lambda=\SI{4}{\micro\meter}$ (bottom panel). (b) The same as in (a), but calculated for the 2D
graphene metasurface with $w_{x}=\SI{42.5}{\nano\meter}$. The resonance wavelength for the 2D
graphene metasurface is $\lambda=\SI{6.93}{\micro\meter}$.}\label{ribb_epsilon_eff}
\end{figure}
In order to develop a general homogenization method, we extend the traditional field-averaged
method to include nonlinear optical effects and anisotropic 2D materials. Thus, the constitutive
relation of a linear anisotropic material is expressed as:
\begin{equation}\label{const}
D_{i}=\sum_{j}\epsilon_{ij}E_{j},
\end{equation}
where $\mathbf{D}$ and $\mathbf{E}$ are the electric displacement and electric field, respectively,
and the subscripts $i,j = x,y,z$. Then, we introduce the averaged fields, defined as:
\begin{subequations}\label{avfields}
\begin{align}
\overline{\textbf{D}}_{\mathrm{eff}}(\omega)&=\frac{1}{V}\int_{V}\textbf{D}(\textbf{r},\omega)d\textbf{r},\label{Df} \\
\overline{\textbf{E}}_{\mathrm{eff}}(\omega)&=\frac{1}{V}\int_{V}\textbf{E}(\textbf{r},\omega)d\textbf{r},\label{Ef}
\end{align}
\end{subequations}
where $V$ is the volume of the unit cell of the (1D or 2D) metasurface. More specifically, the
integration domains for the 1D and 2D metasurfaces are $V=[0,P_{x}]\times[0,h_{\mathrm{eff}}]$ and
$V=[0,P_{x}]\times[0,P_{y}]\times[0,h_{\mathrm{eff}}]$, respectively. Using Eqs.~\eqref{const} and
\eqref{avfields}, the effective electric permittivity tensor of the metasurface, defined by the
constitutive relation
$\overline{D}_{i,\mathrm{eff}}=\sum_{j}\overline{\epsilon}_{ij,\mathrm{eff}}\overline{E}_{j,\mathrm{eff}}$,
can be written as:
\begin{equation}\label{epseff}
\overline{\bm{\epsilon}}_{ij,\mathrm{eff}}(\omega)=\frac{\displaystyle
\int_{V}\textbf{D}_{i}(\textbf{r},\omega)d\textbf{r}}{\displaystyle
\int_{V}\textbf{E}_{j}(\textbf{r},\omega)d\textbf{r}}= \frac{\displaystyle
\int_{V}\epsilon(\mathbf{r})\textbf{E}_{i}(\textbf{r},\omega)d\textbf{r}}{\displaystyle
\int_{V}\textbf{E}_{j}(\textbf{r},\omega)d\textbf{r}},
\end{equation}
where $\epsilon(\mathbf{r})=\epsilon_{0}$ if $\mathbf{r}$ is in air and
$\epsilon(\mathbf{r})=\epsilon_{0}\epsilon_{\mathrm{gr}}$ if $\mathbf{r}$ is in graphene ($\epsilon_{\mathrm{gr}}$ being the relative permittivity of graphene). The formula above has
been derived for metasurfaces made of isotropic optical materials, but it can be easily extended
to anisotropic ones.
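On a discretized unit cell, Equation \ref{epseff} reduces to two sums over the exported near-field. The sketch below illustrates this for the $xx$ component; the grid size, the graphene permittivity value, and the field array (a random placeholder standing in for solver output) are all assumptions made only for illustration.
\begin{verbatim}
# Sketch of Eq. (epseff) on a uniform grid over one unit cell. E_x
# would come from a full-wave solver; a placeholder is used here.
import numpy as np

nx, nz = 200, 6                       # P_x x h_eff grid (1D case)
eps_map = np.ones((nx, nz), complex)  # relative permittivity: air
eps_map[40:155, 2:4] = 500.0 + 50.0j  # graphene region (illustrative)

rng = np.random.default_rng(1)
E_x = (rng.standard_normal((nx, nz))
       + 1j*rng.standard_normal((nx, nz)))  # placeholder field

# on a uniform grid the cell volumes cancel, so plain sums suffice;
# using the relative map yields the relative effective permittivity
eps_eff_xx = np.sum(eps_map*E_x)/np.sum(E_x)
print(eps_eff_xx)
\end{verbatim}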
In order to assess the validity of our homogenization method, we have calculated the effective
permittivity given by Eq.~\eqref{epseff} and then compared the optical response of the homogenized
metasurfaces, \textit{i.e.}, the absorption, $A$, transmittance, $T$, and reflectance, $R$, with
that of the two graphene metasurfaces. The optical near-fields needed to calculate
$\overline{\bm{\epsilon}}_{ij,\mathrm{eff}}(\omega)$, as well as the absorption, transmittance, and
reflectance of the two graphene metasurfaces, were computed using an in-house developed code
\cite{wp16prb,ytgp18josab}.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{Fig4.jpg}
\caption{Linear response comparison of absorption, $A$, reflectance, $R$, and transmittance, $T$,
calculated for the two graphene metasurfaces whose effective permittivities are presented in
Fig.~\ref{ribb_epsilon_eff} (depicted with solid curves) and $A$, $R$, and $T$ corresponding to
their homogenized counterparts (depicted with dotted curves).}\label{homo_test}
\end{figure}
The effective permittivities of the homogenized metasurfaces, $\epsilon_{xx,\mathrm{eff}}(\omega)$,
retrieved using the algorithm just described, are presented in Fig.~\ref{ribb_epsilon_eff}. The 1D
and 2D metasurfaces considered here were optimized for maximum nonlinear response using an approach
that will be described in the next section, the corresponding values of the geometrical parameters
being $w=\SI{57.5}{\nano\meter}$ and $w_{x}=\SI{42.5}{\nano\meter}$, respectively. In contrast to
the intrinsic permittivity of a homogeneous graphene sheet shown in Fig.~\ref{epsilon}, the
effective permittivities of the homogenized metasurfaces exhibit an evident Lorentzian resonant
response around a wavelength of about \SI{6.8}{\micro\meter}, which is reminiscent of the linear
optical response of an optical medium containing Lorentz-type resonators.
The field profiles presented in the insets of Figs.~\ref{ribb_epsilon_eff}(a) and
\ref{ribb_epsilon_eff}(b) suggest that at resonance the optical near-field is strongly enhanced,
which is one of the main physical properties of SPPs. Moreover, Fig.~\ref{ribb_epsilon_eff} shows
that in addition to this main resonance, few other higher-order resonances exist at smaller
wavelengths. These higher-order resonances correspond to the excitation of higher-order plasmon
modes in the graphene nanoribbons or graphene patches. Interestingly enough, although graphene has
metallic characteristics in the frequency range considered in our calculations, near the resonance
$\mathfrak{Re}(\overline{\bm{\epsilon}}_{xx,\mathrm{eff}})>0$, which means that the homogenized
metasurfaces behave as a dielectric around this frequency.
The main aim of a homogenization theory is to reduce a patterned metasurface to a homogeneous sheet
characterized by certain effective optical constants. A reliable way to assess the validity of this
procedure is to compare the optical response of the homogenized metasurface and the original one,
as quantified by physical quantities such as absorption, reflectance, and transmittance. We
performed this analysis for the two graphene metasurfaces whose effective permittivities are
presented in Fig.~\ref{ribb_epsilon_eff}, the corresponding results being summarized in
Fig.~\ref{homo_test}. This comparison clearly demonstrates that the linear response of the
homogenized sheets perfectly agrees with that of the original graphene metasurfaces, thus proving
the accuracy of the proposed linear homogenization approach. This is explained by the fact that the
wavelengths considered in our computations, including those at which the graphene metasurfaces are
strongly resonant, are much larger than the characteristic size of the graphene constituents of the
metasurfaces, so that the two optical structures operate deep in the metasurface regime.
We now extend the homogenization method to the nonlinear regime, and use SHG as an illustrative
nonlinear optical process. Thus, this nonlinear optical interaction is determined by the following
nonlinear polarization:
\begin{equation}
\label{eq:Polarization}
\textbf{P}(\Omega;\textbf{r})=\epsilon_{0}\bm{\chi}^{(2)}(\Omega;\textbf{r}):
\mathbf{E}(\omega;\textbf{r})\mathbf{E}(\omega;\textbf{r}),
\end{equation}
where $\Omega=2\omega$ and
$\bm{\chi}^{(2)}(\Omega;\textbf{r})=\bm{\chi}_{\mathrm{gr}}^{(2)}(\Omega)$ if $\mathbf{r}$ is in
graphene and $\bm{\chi}^{(2)}(\Omega;\textbf{r})=0$ if $\mathbf{r}$ is in air. Based on
Eq.~\eqref{eq:Polarization}, the components of the SH polarization can be evaluated as:
\begin{equation}\label{eq:Polarization_i}
P_{i}=\epsilon_{0}\sum_{jk}\chi_{ijk}^{(2)}E_{j}E_{k}\equiv\sum_{jk}q_{ijk},
\end{equation}
where we have introduced the auxiliary quantities,
$q_{ijk}=\epsilon_{0}\chi_{ijk}^{(2)}E_{j}E_{k}$. The averaged value of these auxiliary quantities
are:
\begin{equation}\label{eq:q_ijk}
\overline{q}_{ijk}(\Omega)=\frac{\epsilon_{0}}{V}\int_{V}\chi_{ijk}^{(2)}(\Omega;\textbf{r})
E_{j}(\omega;\textbf{r})E_{k}(\omega;\textbf{r})d\textbf{r}.
\end{equation}
\begin{figure}[!b]
\centering
\includegraphics[width=\columnwidth]{Fig5.jpg}
\caption{(a) Absorption spectra of the 1D graphene metasurface presented in
Fig.~\ref{schematic}(a), calculated for the optimum width, $w=\SI{57.5}{\nano\meter}$, for which a
double-resonance phenomenon occurs, and for $w=\SI{85}{\nano\meter}$. (b) Dispersion map of
absorption. Dashed curves indicate the plasmon bands, whereas the green curve indicates the
half-wavelength of the fundamental plasmon band. The vertical line indicates that there is a
double-resonance effect for $w=\SI{57.5}{\nano\meter}$.}\label{FF_spec_ribb}
\end{figure}
Similarly to Eq.~\eqref{eq:Polarization}, the nonlinear SH polarization in the homogenized
metasurfaces can be written as:
\begin{equation}
\label{eq:Polarizationeff}
\overline{\textbf{P}}_{\mathrm{eff}}(\Omega)=\epsilon_{0}\bm{\chi}_{\mathrm{eff}}^{(2)}(\Omega):
\overline{\mathbf{E}}_{\mathrm{eff}}(\omega)\overline{\mathbf{E}}_{\mathrm{eff}}(\omega),
\end{equation}
where $\bm{\chi}_{\mathrm{eff}}^{(2)}(\Omega)$ is the effective second-order susceptibility of the
homogenized metasurface.
The homogenized metasurface and the original one will have the same nonlinear optical response in
the far-field if the averaged nonlinear polarization in Eq.~\eqref{eq:Polarization} is
\textit{termwise equal} to the effective nonlinear polarization described by
Eq.~\eqref{eq:Polarizationeff}. Using this condition, the effective second-order susceptibility of
the homogenized metasurface can be evaluated as:
\begin{equation}\label{eq:chi_eff}
\bm{\chi}_{\mathrm{eff},ijk}^{(2)}(\Omega)= \frac{\displaystyle
\overline{q}_{ijk}(\Omega)}{\displaystyle \epsilon_{0}\,\overline{E}_{\mathrm{eff},j}(\omega)
\overline{E}_{\mathrm{eff},k}(\omega)}.
\end{equation}
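The nonlinear retrieval proceeds analogously; the sketch below evaluates Equations \ref{eq:q_ijk} and \ref{eq:chi_eff} for the dominant $zzz$ component, with the susceptibility value (obtained from Equation \ref{chisig} at an assumed SH wavelength) and the fundamental-frequency field array serving as placeholders for solver output.
\begin{verbatim}
# Sketch of Eqs. (eq:q_ijk) and (eq:chi_eff) for the zzz component
# on a uniform grid; chi2 value and FF field are placeholders.
import numpy as np

eps0 = 8.8541878128e-12
Omega = 2*np.pi*2.99792458e8/3.4e-6  # SH frequency (assumed 3.4 um)
h_eff = 0.3e-9

nx, nz = 200, 6
chi2_map = np.zeros((nx, nz), complex)
# chi^(2)_zzz from Eq. (chisig), sigma_zzz = -9.71i x 10^-16 A m/V^2
chi2_map[40:155, 2:4] = 1j*(-9.71e-16j)/(eps0*Omega*h_eff)

rng = np.random.default_rng(2)
E_z = (rng.standard_normal((nx, nz))
       + 1j*rng.standard_normal((nx, nz)))  # placeholder FF field

q_zzz = eps0*np.mean(chi2_map*E_z*E_z)   # Eq. (eq:q_ijk)
E_z_avg = np.mean(E_z)                   # Eq. (Ef)
chi2_eff_zzz = q_zzz/(eps0*E_z_avg**2)   # Eq. (eq:chi_eff)
print(chi2_eff_zzz)
\end{verbatim}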
\section{Results and Discussion}\label{ResDisc}
In this section, we describe our approach to optimize the nonlinear optical response of graphene
metasurfaces and quantify the nonlinearity enhancement of the optimized metasurfaces. In
particular, we calculate the effective second-order susceptibility of the graphene metasurfaces and
compare it to the second-order susceptibility of a graphene sheet placed onto the same silica
substrate.
\begin{figure}[!b]
\centering
\includegraphics[width=\columnwidth]{Fig6.jpg}
\caption{(a) Absorption spectra of the 2D graphene metasurface presented in
Fig.~\ref{schematic}(b), calculated for the optimum side-length, $w_{x}=\SI{42.5}{\nano\meter}$,
for which a double-resonance phenomenon occurs, and for $w_{x}=\SI{60}{\nano\meter}$. (b)
Dispersion map of absorption. Dashed curves indicate the plasmon bands, whereas the green curve
indicates the half-wavelength of the fundamental plasmon band. The vertical line shows that there
is a double-resonance effect for $w_{x}=\SI{42.5}{\nano\meter}$.}\label{FF_spec_rect_x}
\end{figure}
\subsection{Linear optical response of 1D and 2D graphene metasurfaces}
One effective approach to achieving a significant enhancement of the SHG in graphene metasurfaces is
to engineer their geometrical parameters so that plasmons exist at both the FF and SH. Under these
conditions, the incoming light couples efficiently into the metasurface, as plasmons exist
at the FF, which leads to a strong enhancement of the optical near-field at the FF and, as
per Eq.~\eqref{eq:Polarization}, of the nonlinear polarization. Moreover, if plasmons exist at the
SH, too, the nonlinear sources radiate efficiently into the continuum, the graphene
metasurface behaving under these conditions as an efficient nanoantenna.
One particularly useful tool for optimizing the linear and nonlinear optical response of graphene
metasurfaces is the dispersion map of the absorption, namely the dependence of the optical
absorption spectra on a certain parameter. Because the optical absorption increases when plasmons
are excited in the structure, the absorption dispersion map provides valuable information about the
frequency dispersion of the plasmon modes. The corresponding absorption spectra have been
calculated using a computational method \cite{wp16prb,ytgp18josab} that rigorously incorporates
both the frequency dispersion and nonlinearity of graphene.
We begin our analysis with the 1D graphene metasurface presented in Fig.~\ref{schematic}(a). Thus,
we show in Fig.~\ref{FF_spec_ribb}(a) the linear absorption spectra determined for the optimum
width of the graphene nanoribbons, $w=\SI{57.5}{\nano\meter}$ (we will explain later how this
value was determined) and for some other arbitrary value, $w=\SI{85}{\nano\meter}$. Moreover, the
dispersion map of the optical absorption corresponding to this metasurface is plotted in
Fig.~\ref{FF_spec_ribb}(b). It can be seen in Fig.~\ref{FF_spec_ribb}(a) that the absorption
spectra present a series of plasmon resonances, whose amplitude decreases as the resonance
wavelength decreases. These resonances appear in the absorption map as a series of
geometry-dependent plasmon bands, indicated with dashed curves, with the resonance wavelength
increasing with the increase of the width of the nanoribbons. Importantly,
Fig.~\ref{FF_spec_ribb}(b) suggests that for $w=\SI{57.5}{\nano\meter}$ the nanoribbons support a
(fundamental) plasmon at the FF and a second-order plasmon at the SH, namely the metasurface
possesses a double-resonance feature.
\begin{figure}[!b]
\centering
\includegraphics[width=\columnwidth]{Fig7.jpg}
\caption{The same as in Fig.~\ref{FF_spec_rect_x}(b) but determined for a $y$-polarized incident
plane wave.}\label{FF_spec_y}
\end{figure}
Similar conclusions can be drawn in the case of the 2D graphene metasurface. Thus, similarly to the
data summarized in Fig.~\ref{FF_spec_ribb}, we present in Figs.~\ref{FF_spec_rect_x}(a) and
\ref{FF_spec_rect_x}(b) two linear absorption spectra determined for the optimum side-length of the
graphene patches, $w_x=\SI{42.5}{\nano\meter}$, and for an arbitrary value,
$w_{x}=\SI{60}{\nano\meter}$, as well as the corresponding dispersion map of the optical
absorption, respectively. It can be seen that in the 2D case, too, the resonance wavelength of the
plasmon bands increases with $w_{x}$ and that the double-resonance phenomenon also occurs in 2D
graphene metasurfaces. To be more specific, if $w_x=\SI{42.5}{\nano\meter}$, plasmon resonances exist
at both the FF of $\lambda_{\mathrm{FF}}=\SI{6.93}{\micro\meter}$, which is a fundamental plasmon,
and at the SH of $\lambda_{\mathrm{SH}}=\lambda_{\mathrm{FF}}/2=\SI{3.47}{\micro\meter}$. Note
that, as illustrated in Fig.~\ref{schematic}(b), the 2D graphene metasurface is normally
illuminated by an $x$-polarized plane wave.
The 2D graphene metasurface is anisotropic and therefore the optical absorption spectra depend
the polarization of the incident light. This idea is validated by the dispersion map of the optical
absorption shown in Fig.~\ref{FF_spec_y}, which has been determined for a normally incident,
$y$-polarized incident plane wave. Thus, for this wave polarization the wavelength of the
fundamental-plasmon band increases with $w_{x}$, whereas the wavelengths of the higher-order plasmon
bands decrease with $w_{x}$.
It can also be seen that when $w_{x}$ varies, the plasmon bands are more dispersive for
$x$-polarized incident waves as compared to those in the case of $y$-polarized waves. This finding
is explained by the fact that the wavelength of the plasmon resonance is primarily determined by
the size of the patch along the direction of the electric field. More importantly, however, the
results in Fig.~\ref{FF_spec_y} suggest that the double-resonance effect does not occur for
$y$-polarized incident plane waves. In our analysis, we have only considered $x$- and $y$-polarized
incident plane waves, chiefly because the conclusions for other polarizations can be derived from
the results corresponding to the linear superposition of these two primary polarizations.
\begin{figure}[!t]
\centering\includegraphics[width=\columnwidth]{Fig8.jpg} \caption{(a) Normalized SHG intensity
spectra, $I_{\mathrm{SHG}}$, of the 1D graphene metasurface presented in Fig.~\ref{schematic}(a),
calculated for the optimum width, $w=\SI{57.5}{\nano\meter}$, and for $w=\SI{85}{\nano\meter}$. (b)
Dispersion map of $I_{\mathrm{SHG}}$. The dashed curve indicates the fundamental-plasmon band. The
inset shows the dependence of $I_{\mathrm{SHG}}$ \textit{vs.} $w$, determined for the case when the
wavelengths of the FF and fundamental plasmon are the same.}\label{SH_spec_ribb}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{Fig9.jpg}
\caption{(a) Normalized SHG intensity spectra, $I_{\mathrm{SHG}}$, of the 2D graphene metasurface
presented in Fig.~\ref{schematic}(b), calculated for the optimum side-length,
$w_{x}=\SI{42.5}{\nano\meter}$ and for $w_{x}=\SI{60}{\nano\meter}$. (b) Dispersion map of
$I_{\mathrm{SHG}}$. Dashed curve indicates the fundamental-plasmon band. The inset shows the
variation of $I_{\mathrm{SHG}}$ with $w_{x}$, computed for the case when the wavelengths of the FF
and fundamental plasmon are the same.}\label{SH_spec_rect}
\end{figure}
\subsection{Nonlinear optical response of 1D and 2D graphene metasurfaces}
We now turn our attention to SHG in 1D and 2D graphene metasurfaces and investigate the influence
of plasmon excitation at the FF and SH on the nonlinear optical response of the two graphene
metasurfaces. To this end, we used a generalized-source FDTD numerical method \cite{ytgp18josab}
to rigorously compute the SHG in the graphene metasurfaces. Since we want to compare the SHG
intensity corresponding to different values of the width of the nanoribbons and rectangular
patches, we normalize the SHG intensity to the area of the graphene structure contained in a unit
cell (note that the periods $P_{x}$ and $P_{y}$ are not changed, so that the area of the unit cell
does not vary). More specifically, the normalized SHG intensity spectra, $I_{\mathrm{SHG}}$, were
calculated as follows: in the 1D case we computed the SHG power per unit length and then divided
the result by the corresponding area of the graphene nanoribbon. In the 2D case, we computed the
SHG power corresponding to the unit cell with area $P_{x}\times P_{y}$ and divided the result by the
area of the graphene patch, $w_{x}\times w_{y}$. Note that the normalized SHG intensity represents
the sum of the SHG signals emitted in the transmission and reflection directions.
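The normalization procedure is straightforward to express programmatically; the following Python sketch is illustrative only (the function names, and the assumption that the raw SHG powers are already available from the FDTD solver, are ours).
\begin{verbatim}
# Illustrative normalization of the SHG intensity (hypothetical helpers;
# the raw SHG powers are assumed to come from the FDTD solver).
def normalized_shg_1d(shg_power_per_length, w):
    """1D metasurface: SHG power per unit length divided by the
    nanoribbon area per unit length, i.e., by the ribbon width w."""
    return shg_power_per_length / w

def normalized_shg_2d(shg_power_unit_cell, w_x, w_y):
    """2D metasurface: SHG power of one unit cell divided by the
    area of the graphene patch it contains."""
    return shg_power_unit_cell / (w_x * w_y)
\end{verbatim}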
The results of these calculations are presented in Fig.~\ref{SH_spec_ribb} and
Fig.~\ref{SH_spec_rect} and correspond to the 1D and 2D metasurfaces, respectively. As
Eq.~\eqref{eq:Polarization} shows, the nonlinear polarization is proportional to the square of the
optical near-field at the FF, and therefore the SHG intensity is proportional to the fourth
power of the FF field amplitude. As a result, the resonance peaks of the normalized SHG intensity spectra and
the plasmon bands of the corresponding dispersion maps of the normalized SHG intensity should be
observed at exactly half the wavelength of the resonance peaks of the linear optical absorption spectra
and the corresponding plasmon bands of the dispersion maps of the linear optical absorption. This
prediction is fully validated by comparing Fig.~\ref{FF_spec_ribb} with Fig.~\ref{SH_spec_ribb},
which correspond to the 1D graphene metasurface, and Fig.~\ref{FF_spec_rect_x} with
Fig.~\ref{SH_spec_rect}, which correspond to the 2D graphene metasurface.
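As a simple consistency check of this half-wavelength relation, the extracted peak positions of the two types of spectra can be compared numerically; the short Python sketch below is illustrative, with the peak lists as placeholders rather than values from this work.
\begin{verbatim}
# SHG peaks are expected at half the wavelength of the FF plasmon peaks.
def peaks_at_half_wavelength(ff_peaks_nm, shg_peaks_nm, tol_nm=5.0):
    """True if every SHG peak lies within tol_nm of half an FF peak."""
    return all(any(abs(s - f / 2.0) < tol_nm for f in ff_peaks_nm)
               for s in shg_peaks_nm)
\end{verbatim}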
\begin{figure}[!b]
\centering
\includegraphics[width=\columnwidth]{Fig10.jpg}
\caption{Wavelength dependence of the three independent components of the effective second-order
susceptibility, $\bm{\chi}_{\mathrm{eff}}^{(2)}$, of the 1D graphene metasurface.
}\label{susceptiblity_333}
\end{figure}
Importantly, the insets in Fig.~\ref{SH_spec_ribb}(b) and Fig.~\ref{SH_spec_rect}(b) demonstrate
the SHG enhancement due to the double-resonance mechanism. Indeed, it can be inferred from these
plots that for the 1D graphene metasurface maximum SHG intensity is achieved for a width of the
graphene nanoribbons of $w=\SI{57.5}{\nano\meter}$, whereas in the case of the 2D graphene
metasurface the optimum value of the side-length of the graphene patch that leads to maximum SHG
intensity is $w_{x}=\SI{42.5}{\nano\meter}$. This clearly proves that in addition to
plasmon-enhanced SHG, the double-resonance mechanism can be employed to achieve further significant
enhancement of the nonlinear optical response of graphene metasurfaces.
\subsection{Enhancement of the effective second-harmonic susceptibility of 1D and 2D graphene metasurfaces}
A suitable physical quantity that measures the enhancement of the nonlinear optical response of a
nonlinear optical system is the nonlinear susceptibility. Therefore, we have used the
homogenization method described in Sec.~\ref{Theory} to calculate the effective second-order
susceptibility of the two graphene metasurfaces. In particular, we retrieved the three independent
components of this nonlinear susceptibility, $\chi_{\mathrm{eff},zzz}^{(2)}$,
$\chi_{\mathrm{eff},xxz}^{(2)}$, and $\chi_{\mathrm{eff},zxx}^{(2)}$. The results of these
calculations are summarized in Fig.~\ref{susceptiblity_333} and Fig.~\ref{susceptiblity_333_2}, and
correspond to the 1D and 2D metasurfaces, respectively.
\begin{figure}[!b]
\centering
\includegraphics[width=\columnwidth]{Fig11.jpg}
\caption{The same as in Fig.~\ref{susceptiblity_333}, but corresponding to the 2D graphene
metasurface.}\label{susceptiblity_333_2}
\end{figure}
One important conclusion that can be inferred from the data presented in these figures is that,
similar to the case of the effective permittivity of the homogenized graphene metasurfaces, all
components of the effective second-order susceptibilities show a resonant behavior around the
plasmon resonance wavelengths (both fundamental and higher-order), which means that the
enhancement of the nonlinearity of the graphene metasurfaces can be traced to the excitation of
graphene SPPs. The maximum enhancement occurs when the fundamental plasmon is excited. Moreover,
the spectra of these components of the second-order susceptibilities are similar to those of a
nonlinear optical medium containing resonators of Lorentzian nature, which suggests that the
graphene nanostructures that constitute the building blocks of the two metasurfaces can be viewed
as metaatoms responsible for the effective nonlinear optical response of these optical
nanostructures. Since the size of these metaatoms is much smaller than the resonance wavelength at
the SH, one can conclude that the nonlinear graphene gratings investigated in this study operate in
the metasurface regime, too.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{Fig12.jpg}
\caption{(a) Wavelength dependence of the enhancement factor of the dominant component of the
effective second-order susceptibility, determined for the optimized 1D graphene metasurface for
several values of the angle of incidence, $\theta$. (b) The same as in (a), but determined for the
2D optimized graphene metasurface.}\label{enhancement}
\end{figure}
In order to further analyze the characteristics of the magnitude of the enhancement of the
nonlinear optical response of the two graphene metasurfaces, we also calculated the enhancement
factors $\eta_{zzz}=\vert\chi_{\mathrm{eff},zzz}^{(2)}/\chi_{\mathrm{gr},zzz}^{(2)}\vert$,
$\eta_{xxz}=\vert\chi_{\mathrm{eff},xxz}^{(2)}/\chi_{\mathrm{gr},xxz}^{(2)}\vert$, and
$\eta_{zxx}=\vert\chi_{\mathrm{eff},zxx}^{(2)}/\chi_{\mathrm{gr},zxx}^{(2)}\vert$ for several
different values of the angle of incidence, $\theta$. In these definitions,
$\bm{\chi}_{\mathrm{gr}}^{(2)}$ is the surface second-order susceptibility of a uniform graphene
sheet placed on top of a silica substrate.
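Computationally, the enhancement factors amount to a pointwise ratio of the retrieved and reference susceptibilities; a minimal Python sketch follows (array inputs sampled on a common wavelength grid are our assumption).
\begin{verbatim}
import numpy as np

def enhancement_factor(chi_eff, chi_gr):
    """eta = |chi_eff / chi_gr|, evaluated pointwise; chi_eff and
    chi_gr are complex arrays sampled on the same wavelength grid."""
    return np.abs(np.asarray(chi_eff) / np.asarray(chi_gr))
\end{verbatim}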
The results of this analysis are summarized in Fig.~\ref{enhancement}, where we show the data
corresponding to the enhancement $\eta_{zzz}$ of the dominant component of
$\bm{\chi}_{\mathrm{eff}}^{(2)}$ of the 1D and 2D metasurfaces. This figure demonstrates a
remarkable enhancement of the second-order nonlinearity of the two metasurfaces, especially near
the plasmon resonance. In particular, the dominant component $\chi_{\mathrm{eff},zzz}^{(2)}$ of the
homogenized graphene metasurfaces is larger by more than three orders of magnitude than the
corresponding component $\chi_{\mathrm{gr},zzz}^{(2)}$ of a graphene sheet placed on the same
silica substrate. It can also be observed that $\eta_{zzz}$ only slightly decreases as the angle of
incidence increases, which further proves that the graphene elements of the metasurfaces behave as
true metaatoms. We also stress that despite the fact that the nonlinear optical losses are enhanced
as well around plasmon resonances, one expects that this is not a particularly detrimental effect
as the graphene metasurfaces investigated in this work are not meant to be employed in applications
where large propagation distances are required.
\section{Conclusion}\label{Concl}
In summary, in this study we investigated the optical response of one- and two-dimensional graphene
metasurfaces and their homogenized counterparts. In particular, using a recently developed
homogenization technique, we retrieved the effective permittivity and effective second-order
susceptibility of the homogenized metasurfaces and compared the values of several physical
quantities characterizing the original and homogenized metasurfaces, such as the optical
absorption, transmittance, and reflectance. Our analysis revealed that for metasurfaces whose
graphene constituents have characteristic size of a few tens of nanometers there is an excellent
agreement between the predictions of the homogenization method and the results obtained by
rigorously solving the Maxwell equations. This was explained by the fact that the characteristic
size of graphene resonators is much smaller than their resonance wavelength.
Our theoretical analysis of the two types of homogenized graphene metasurfaces showed that their
nonlinear response can be greatly enhanced when surface plasmons are excited in their graphene
constituents. Additional nonlinearity enhancement is achieved when plasmons exist at both the
fundamental frequency and the second harmonic, the overall effect of this double resonance being
an enhancement of the effective second-order susceptibility of the graphene metasurfaces by more
than three orders of magnitude. Moreover, it should be noted that this double-resonance phenomenon
could also be observed in other more complex configurations, \textit{e.g.} when plasmons are
excited in different plasmonic materials, such as metasurfaces containing coupled metallic-graphene
nanostructures. Equally important, the proposed homogenization method can be readily extended to
other cases, too, such as three-dimensional configurations or incident waves with arbitrary
polarization and angle of incidence, which further underscores the importance of the results
reported in this study.
\section*{Acknowledgments}\label{Ack}
This work was supported by the European Research Council (ERC), Grant Agreement No.
ERC-2014-CoG-648328, China Scholarship Council (CSC), and University College London (UCL)
(201506250086). The authors acknowledge the use of the UCL Legion High Performance Computing
Facility (Legion@UCL) and associated support services in the completion of this work.
|
1,116,691,497,368 | arxiv | \section{Introduction}
\label{sec:intro}
The introduction of the convolutional neural network (CNN) has brought unprecedented performance increases for classification problems in many different domains, including RGB, RGBD, and hyperspectral images~\cite{SGuptaECCV2014,RGirshickTPAMI2016,KHeCVPR2016,HLeeIGARSS2016,HLeeTIP2017}. Such performance increases were made possible by the ability of CNNs to learn and express the deep and wide connection between the input and the output using a huge number of parameters. To learn such a huge set of parameters, having a large scale dataset has become a significant requirement. When the given dataset is too small to train a network, one may consider using a larger external dataset to better learn the large set of parameters. For instance, Girshick et al. \cite{RGirshickTPAMI2016} introduced a domain adaptation approach where the network is trained on a large scale source domain (the ImageNet dataset~\cite{JDengCVPR2009}) and then finetuned on a target domain (an object detection dataset~\cite{MEveringhamIJCV2010}).
When applying CNNs to the hyperspectral image classification problem, we face a similar issue, as no large scale hyperspectral dataset is available. A typical hyperspectral image classification dataset only contains between 10k and 100k pixels, of which only a very small portion is used for training. In order to tackle this data scarcity issue, we need a way to make use of multiple hyperspectral datasets (domains).
There are several challenges that arise when devising an approach which can be applied to multiple hyperspectral domains. First of all, the hyperspectral datasets cover different wavelength ranges and have different numbers of spectral reflectance bands. Furthermore, applying a domain adaptation approach \cite{RGirshickTPAMI2016} is infeasible, as no large scale auxiliary dataset for hyperspectral image classification is available.
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[width=\linewidth,trim=0mm 60mm 7mm 7mm]{concept.pdf}}
\end{center}
\caption{{\bf Cross-Domain CNN.} While the non-shared portions of the CNN (``dataset-specific hyperspectral analysis layers'' and ``dataset-specific classification layers'') are handling the dataset-specific classification tasks, the ``shared convolutional layers'' generate common information which applies to all the datasets. Note that both shared and non-shared portions of the CNN are optimized in an end-to-end fashion.}
\label{fig:concept}
\end{figure}
Therefore, we have developed a novel cross-domain CNN architecture which simultaneously performs network learning and classification on multiple hyperspectral datasets. The architecture consists of three components: dataset-specific hyperspectral analysis layers, shared convolutional layers, and dataset-specific classification layers. In the front-end portion, ``dataset-specific hyperspectral analysis layers'' are present to analyze the spatial-spectral information. The back-end is built with the ``dataset-specific classification layers'' which performs the classification for different dataset. These two components are connected by the ``shared convolutional layers'' which learns the common information across the domains. All three components are being optimized in an end-to-end fashion. The information acquired from multiple datasets is fed through the layers concurrently during training, which leads to the better learning of the shared convolutional layers via dataset augmentation. The overall CNN architecture is depicted in Figure \ref{fig:concept}.
In this paper, we have used the three mostly used hyperspectral datasets (Indian Pines, Salinas, and University of Pavia) to demonstrate the effectiveness of having a cross-domain CNN. The experimental results show that our novel architecture outperforms the baseline networks (i.e., only one dataset used to train the network) by about 1.5\% to 3\% consistently in terms of classification accuracy.
The contributions of our approach are listed as below.
\begin{enumerate}
\item First attempt to learn a CNN with multiple hyperspectral datasets.
\item A novel cross-domain CNN optimized in an end-to-end fashion.
\item Consistent classification accuracy increase on all datasets.
\end{enumerate}
\section{The Proposed Method}
\label{sec:proposed_method}
\subsection{Architecture}
Figure~\ref{fig:flowchart} shows the proposed cross-domain CNN architecture.\\
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[width=\linewidth,trim=7mm 115mm 7mm 0mm]{flowchart_v2.pdf}}
\end{center}
\caption{{\bf Architecture.} \textcolor{myred}{Red}, \textcolor{myblue}{blue}, \textcolor{mygreen}{green} blocks indicate the non-shared portion of the CNN while the black ones are shared across all the streams for different datasets. Detailed specifications on the architecture are denoted on the last (\textcolor{mygreen}{green}) stream. {\bf B} and {\bf C} indicate the number of spectral bands and the number of classes, respectively, which are different for all the datasets.}
\label{fig:flowchart}
\end{figure}
\noindent{\bf Backbone CNN architecture.} For each dataset-specific stream in the proposed architecture, we have used a modified version of the 9-layered hyperspectral image classification CNN introduced by Lee and Kwon~\cite{HLeeIGARSS2016,HLeeTIP2017}. The modification consists of adding a batch normalization (BN) layer~\cite{SIoffeICML2015} after each convolutional layer while removing all the local response normalization layers. The BN layer computes the mean and the variance of the samples in each mini-batch and performs normalization, that is, it normalizes the samples in each mini-batch to zero mean and unit variance. By introducing BN, we can bypass the process of data normalization for both training and testing. In addition, bias terms are no longer required.
The backbone CNN is a fully convolutional network and contains the sequentially connected multi-scale filter bank, one convolutional layer, two residual modules and three convolutional layers. Each multi-scale filter bank~\cite{CSzegedyCVPR2015} consists of 1$\times$1, 3$\times$3, and 5$\times$5 filters to analyze the spatial-spectral characteristics. Each residual module~\cite{KHeCVPR2016} includes two convolutional layers. The residual modules allow ease of learning of the deep network. To the best of our knowledge, this architecture is the first attempt to go deeper than three layers by adopting the residual modules within the domain of hyperspectral image classification.\\
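To make the layer layout concrete, a minimal PyTorch-style sketch of one dataset-specific stream and the shared block is given below. This is our own illustration, not the original implementation: the kernel sizes of the layers after the filter bank and the exact arrangement inside the residual modules are assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

def conv_bn(in_ch, out_ch, k):
    # Convolution without bias, followed by BN (BN makes bias redundant).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True))

class Residual(nn.Module):
    # Residual module containing two convolutional layers.
    def __init__(self, ch=128):
        super().__init__()
        self.body = nn.Sequential(conv_bn(ch, ch, 1), conv_bn(ch, ch, 1))
    def forward(self, x):
        return x + self.body(x)

class Stream(nn.Module):
    # One dataset-specific stream; `shared` is a single module instance
    # reused by all streams of the cross-domain model.
    def __init__(self, bands, classes, shared):
        super().__init__()
        # Dataset-specific multi-scale filter bank (1x1, 3x3, 5x5).
        self.bank = nn.ModuleList([conv_bn(bands, 128, k) for k in (1, 3, 5)])
        self.shared = shared
        # Dataset-specific classification layers (kernel sizes assumed).
        self.head = nn.Sequential(conv_bn(128, 128, 1),
                                  conv_bn(128, 128, 1),
                                  nn.Conv2d(128, classes, 1))
    def forward(self, x):
        x = torch.cat([b(x) for b in self.bank], dim=1)
        return self.head(self.shared(x))

# Shared part: the second convolutional layer plus two residual modules.
shared = nn.Sequential(conv_bn(3 * 128, 128, 1), Residual(), Residual())
streams = {name: Stream(bands, classes, shared)
           for name, bands, classes in [("indian_pines", 200, 8),
                                         ("salinas", 204, 16),
                                         ("pavia", 103, 9)]}
\end{verbatim}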
\begin{table}
\caption{{\bf Specifications for different hyperspectral datasets.} Reduced bands are acquired by removing the bands which correspond to water absorption. Note that datasets acquired using the same type of sensor can have different numbers of reduced bands.}
\label{tab:domains}
\setlength{\tabcolsep}{2.3pt}
\renewcommand{\arraystretch}{1.4}
{\footnotesize
\begin{tabular}{l|c|c|c|c}
\specialrule{.15em}{.05em}{.05em}
Dataset & Sensor & Range & Bands & Reduced Bands \\\specialrule{.15em}{.05em}{.05em}
Indian Pines & \multirow{2}{*}{AVIRIS} & \multirow{2}{*}{0.4$\mu$m $\sim$ 2.5$\mu$m} & \multirow{2}{*}{224} & 200\\\cline{1-1}\cline{5-5}
Salinas & & & & 204 \\\hline
University of Pavia & ROSIS & 0.43$\mu$m $\sim$ 0.86$\mu$m & 115 & 103 \\\specialrule{.15em}{.05em}{.05em}
\end{tabular}
}
\end{table}
\noindent{\bf Cross-domain CNN architecture.} To perform an RGB-to-RGB domain adaptation process using CNNs~\cite{RGirshickTPAMI2016,MOquabCVPR2014}, we can follow the traditional approach of replacing the classification layers to fit the target dataset. However, when the source and target domains are hyperspectral (where each dataset carries its unique characteristics, as shown in Table \ref{tab:domains}), simply replacing the classification layers is not sufficient, because the spectral characteristics of the inputs differ as well. Because of this, we have devised dataset-specific layers not only in the latter portion of the network, but also before the shared convolutional layers, to adaptively take in the different datasets.
The multi-scale filter bank in each dataset-specific stream, which is responsible for analyzing the spatial-spectral characteristics, is assigned as the ``dataset-specific hyperspectral analysis layers''. The last three convolutional layers function as the ``dataset-specific classification layers''. The remaining layers in the middle of the architecture which consist of the second convolutional layer and two residual modules are assigned as the cross-domain ``shared convolution layers''.
\subsection{Optimization}
Each layer consists of 128 convolutional filters. The first, second, and ninth convolutional layers are initialized according to a Gaussian distribution with mean 0 and standard deviation 0.01. The remaining convolutional layers are initialized with a zero-mean Gaussian with standard deviation 0.005. To provide a richer set of samples and to avoid over-fitting, the training samples are augmented eight-fold by mirroring each sample across the vertical, horizontal, and two diagonal axes, as sketched below.
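One common way to realize this eight-fold augmentation is to generate the eight symmetries of the square patch, as in the following Python sketch (our illustration).
\begin{verbatim}
import numpy as np

def augment8(patch):
    """patch: array of shape (bands, H, W) with H == W. Returns the
    8 spatial symmetries of the square patch (rotations combined
    with mirroring), realizing the eight-fold augmentation."""
    out = []
    for k in range(4):
        r = np.rot90(patch, k, axes=(1, 2))
        out.append(r)
        out.append(r[:, :, ::-1])
    return out
\end{verbatim}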
We have used stochastic gradient descent (SGD) to train the network with a batch size of 10 samples. The training process was initiated with the learning rate of 0.001 and iterated 100k times with a step size of 40k. We set the momentum, gamma, and weight decay as 0.9, 0.1, and 0.0005, respectively.
When learning the ``shared convolutional layers'', we multiply the base learning rate by $1/N$ (where $N$ is the number of domains involved in the training process), because the weight updates in these layers are affected by all $N$ domain-specific streams when back-propagation takes place at each iteration. In our case, $N=3$ as we have used 3 different datasets.
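Continuing the hypothetical sketch above, this per-layer learning-rate scaling can be expressed as follows (PyTorch-style; the variable names refer to our earlier sketch).
\begin{verbatim}
import torch

# Shared layers receive base_lr / N because all N dataset streams
# back-propagate into them at every iteration.
base_lr, N = 0.001, 3
param_groups = [{"params": shared.parameters(), "lr": base_lr / N}]
for s in streams.values():
    param_groups.append({"params": s.bank.parameters(), "lr": base_lr})
    param_groups.append({"params": s.head.parameters(), "lr": base_lr})
optimizer = torch.optim.SGD(param_groups, lr=base_lr,
                            momentum=0.9, weight_decay=0.0005)
# Learning rate multiplied by gamma = 0.1 every 40k iterations.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                            step_size=40000, gamma=0.1)
\end{verbatim}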
\section{Evaluation}
\label{sec:eval}
\begin{table*}
\caption{Selected classes for evaluation and the numbers of training and test samples.}
\label{tab:dataset_info}
\begin{tabular}{ccc}
\begin{subfigure}{0.29\textwidth}
\caption{Indian Pines}
\setlength{\tabcolsep}{9.5pt}
\renewcommand{\arraystretch}{1.1}
{\footnotesize
\begin{tabular}{lcc}
\specialrule{.15em}{.05em}{.05em}
Class & Training & Test \\\specialrule{.15em}{.05em}{.05em}
Corn-notill & 200 & 1228 \\
Corn-mintill & 200 & 630 \\
Grass-pasture & 200 & 283 \\
Hay-windrowed & 200 & 278 \\
Soybean-notill & 200 & 772 \\
Soybean-mintill & 200 & 2255 \\
Soybean-clean & 200 & 393 \\
Woods & 200 & 1065 \\\specialrule{.15em}{.05em}{.05em}
Total & 1600 & 6904 \\\specialrule{.15em}{.05em}{.05em}
\end{tabular}
}
\vspace{8.3em}
\end{subfigure}
&
\begin{subfigure}{0.37\textwidth}
\caption{Salinas}
\setlength{\tabcolsep}{9.5pt}
\renewcommand{\arraystretch}{1.1}
{\footnotesize
\begin{tabular}{lcc}
\specialrule{.15em}{.05em}{.05em}
Class & Training & Test \\\specialrule{.15em}{.05em}{.05em}
Brocooli green weeds 1 & 200 & 1809 \\
Brocooli green weeds 2 & 200 & 3526 \\
Fallow & 200 & 1776 \\
Fallow rough plow & 200 & 1194 \\
Fallow smooth & 200 & 2478 \\
Stubble & 200 & 3759 \\
Celery & 200 & 3379 \\
Grapes untrained & 200 & 11071 \\
Soil vineyard develop & 200 & 6003 \\
Corn senesced green weeds & 200 & 3078 \\
Lettuce romaines, 4 wk & 200 & 868 \\
Lettuce romaines, 5 wk & 200 & 1727 \\
Lettuce romaines, 6 wk & 200 & 716 \\
Lettuce romaines, 7 wk & 200 & 870 \\
Vineyard untrained & 200 & 7068 \\
Vineyard vertical trellis & 200 & 1607 \\\specialrule{.15em}{.05em}{.05em}
Total & 3200 & 50929 \\\specialrule{.15em}{.05em}{.05em}
\end{tabular}
}
\end{subfigure}
&
\begin{subfigure}{0.28\textwidth}
\caption{University of Pavia}
\setlength{\tabcolsep}{9.5pt}
\renewcommand{\arraystretch}{1.1}
{\footnotesize
\begin{tabular}{lcc}
\specialrule{.15em}{.05em}{.05em}
Class & Training & Test \\\specialrule{.15em}{.05em}{.05em}
Asphalt & 200 & 6431 \\
Meadows & 200 & 18449 \\
Gravel & 200 & 1899 \\
Trees & 200 & 2864 \\
Sheets & 200 & 1145 \\
Bare soils & 200 & 4829 \\
Bitumen & 200 & 1130 \\
Bricks & 200 & 2482 \\
Shadows & 200 & 747 \\\specialrule{.15em}{.05em}{.05em}
Total & 1800 & 40976 \\\specialrule{.15em}{.05em}{.05em}
\end{tabular}
}
\vspace{7.5em}
\end{subfigure}
\end{tabular}
\end{table*}
\subsection{Evaluation Settings}
We have used three hyperspectral datasets (Indian Pines, Salinas, and University of Pavia) for the experiments. The Indian Pines dataset includes 145$\times$145 pixels and 200 spectral reflectance bands which cover the range from 0.4 to 2.5 $\mu$m, with a spatial resolution of 20 m. The Indian Pines dataset has 16 classes, but only the 8 classes with relatively large numbers of samples are used. The Salinas dataset contains 16 classes with 512$\times$217 pixels and 204 spectral bands, and a high spatial resolution of 3.7 m. The Salinas dataset shares its spectral characteristics with the Indian Pines dataset, as the same sensor (AVIRIS) was used for the data acquisition. The University of Pavia dataset, which was acquired using the ROSIS sensor, provides 9 classes and 610$\times$340 pixels with 103 spectral bands which cover the spectral range from 0.43 to 0.86 $\mu$m, with a spatial resolution of 1.3 m. For the Salinas and University of Pavia datasets, we use all available classes because neither dataset contains classes with a relatively small number of samples. Each dataset has been randomly partitioned into train and test sets according to Table~\ref{tab:dataset_info}.
\subsection{Performances}
\noindent{\bf Classification accuracy.} As shown in Table~\ref{tab:accuracy}, the proposed cross-domain CNN outperforms the individual networks on Indian Pines, Salinas, and University of Pavia by 1.5\%, 1.5\%, and 3.2\%, respectively. Note that the individual networks have been trained separately, without any layers shared across datasets. We observe that training the cross-domain CNN on a bigger and richer dataset (i.e., using all three datasets to learn one single network) helped boost the performance.\\
\begin{table}
\caption{Classification accuracy.}
\label{tab:accuracy}
\setlength{\tabcolsep}{6.0pt}
\renewcommand{\arraystretch}{1.4}
{\footnotesize
\begin{tabular}{l|c|c|c}
\specialrule{.15em}{.05em}{.05em}
Dataset & Individual CNN & Cross-Domain CNN & Gain \\\specialrule{.15em}{.05em}{.05em}
Indian Pines & .907 & {\bf .922} & +.015 \\
Salinas & .893 & {\bf .908} & +.015 \\
University of Pavia & .921 & {\bf .953} & +.032 \\\specialrule{.15em}{.05em}{.05em}
\end{tabular}
}
\end{table}
\noindent{\bf Training loss analysis.}
As shown in Figure~\ref{fig:loss}, the training loss evolution of the cross-domain CNN does not differ much from that of the CNNs trained separately on the individual datasets. However, we observe that the cross-domain CNN shows better performance in terms of classification accuracy. This indicates that the performance gain achieved by the cross-domain CNN comes from reduced overfitting: as we have used a larger dataset (i.e., the combined set of all three) to train one network, the discrepancies between the training and the test sets are decreased.\\
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[width=\linewidth,trim=5mm 130mm 5mm 105mm]{loss.pdf}}
\end{center}
\caption{Comparing training loss evolution: Cross-domain CNN vs. Individual CNNs.}
\label{fig:loss}
\end{figure}
\noindent{\bf Test time.} We have evaluated the computation time of our cross-domain CNN when three images (one from each dataset) are fed in for testing. It takes 0.5875 seconds for processing all three images when tested on NVIDIA TITAN XP GPU.\\
\section{Conclusion}
\label{conclusion}
In this paper, we have introduced a novel cross-domain CNN which can concurrently learn and perform the hyperspectral image classification for multiple datasets. As the shared portion of our network is being trained using multiple hyperspectral datasets, the proposed approach is more effective in optimizing the high capacity CNN than the cases where only a single dataset (domain) is being used. Our approach is the first attempt to exploit multiple hyperspectral datasets for training a CNN in an end-to-end fashion. We have experimentally demonstrated that using the shared layers across the domains brings notable classification accuracy improvements when compared to the individually trained cases.
\bibliographystyle{IEEE}
|
1,116,691,497,369 | arxiv | \section{Introduction}
Local search is one of the most widely used approaches to solving hard
optimization problems. The basic idea of local search is that one
tries to iteratively improve a current solution by searching for
better solutions in its ($k$-)neighborhood (i.e., within distance $k$
from it). Any optimization algorithm can be followed by a local search
phase, thus the problem of finding a better solution locally is of great
practical interest. However, a brute force search of a
$k$-neighborhood is not feasible for large $k$, thus it is natural to study
the complexity of searching the $k$-neighborhood.
The constraint satisfaction problem (CSP) provides a framework in which it is
possible to express, in a natural way, many combinatorial problems encountered in
artificial intelligence and computer science. A constraint satisfaction problem is
represented by a set of variables, a domain of values for each variable, and a set
of constraints between variables. The basic aim in a constraint satisfaction problem
is then to find an assignment of values to the variables that satisfies the
constraints. Boolean CSP (when all variables have domain $\{0,1\}$), aka generalized
satisfiability, is a natural generalization of {\sc Sat} where constraints are given
by arbitrary relations, not necessarily by clauses. Local search methods for {\sc
Sat} and CSP are very extensively studied (see,
e.g.,~\cite{Dantsin02:local,Gu00:book,Hirsch00:local,Hoos06:handbook}).
Complexity classifications for various versions of CSP (and, in particular, for
versions of Boolean CSP) have recently attracted massive attention from researchers,
and one of the most popular directions here is to characterise restrictions on the
type of constraints that lead to problems with lower complexity in comparison with
the general case (see~\cite{Cohen06:handbook,Creignou01:book}). Such classifications
are sometimes called Schaefer-style because the first classification of this type
was obtained by T.J.~Schaefer in his seminal work~\cite{Schaefer78:complexity}. A
local-search related Schaefer-style classification for Boolean {\sc Max CSP} was
obtained in~\cite{Chapdelaine05:local}, in the context of local search complexity
classes such as PLS.
The hardness of searching the $k$-neighborhood (for any optimisation problem) can be
studied very naturally in the framework of parameterized
complexity~\cite{Downey99:book,Flum06:book}, as suggested
in~\cite{Fellows01:frontiers}; such a study for the traveling salesman problem (TSP)
was recently performed in~\cite{Marx08:TSP}. Parameterized complexity studies
hardness in finer detail than classical complexity. Consider, for example, two
standard NP-complete problems {\sc Minimum Vertex Cover} and {\sc Maximum Clique}. Both have
the natural parameter $k$: the size of the required vertex cover/clique. Both
problems can be solved in $n^{O(k)}$ time by complete enumeration. Notice that the
degree of the polynomial grows with $k$, so the algorithm becomes useless for large
graphs, even if $k$ is as small as 10. However, {\sc Minimum Vertex Cover} can be solved
in time $O(2^k\cdot n^2)$~\cite{Downey99:book,Flum06:book}. In other words, for
every fixed cover size there is a polynomial-time (in this case, quadratic)
algorithm solving the problem where the degree of the polynomial is independent of
the parameter. Problems with this property are called fixed-parameter tractable. The
notion of W[1]-hardness in parameterized complexity is analogous to NP-completeness
in classical complexity. Problems that are shown to be W[1]-hard, such as {\sc Maximum
Clique}~\cite{Downey99:book,Flum06:book}, are very unlikely to be fixed-parameter
tractable. A Schaefer-style classification of the basic Boolean CSP with respect to
parameterized complexity (where the parameter is the required Hamming weight of the solution)
was obtained in~\cite{Marx05:CSP}.
In this paper, we give a Schaefer-style complexity classification for
the following problem: given a collection of Boolean constraints, and
a solution to it, the question is whether there is a better (i.e.,
with smaller Hamming weight) solution within a given (Hamming)
distance $k$ from the initial solution. We obtain classification
results both for classical (Theorem~\ref{thm:poly}) and for
parameterized complexity (Theorem~\ref{thm:fpt}). However, we would
like to point out that it makes much more sense to study this problem
in the parameterized setting. Intuitively, if we are able to decide in
polynomial time whether there is a better solution within distance
$k$, then this seems to be almost as powerful as finding the best
solution (although there are technicalities such as whether there is a
feasible solution at all). Our classification confirms this intuition:
searching the $k$-neighborhood is polynomial-time solvable only in
cases where finding the optimum is also polynomial-time solvable. On
the other hand, there are cases (for example, Horn constraints or
\textsc{1-in-3 Sat}) where the problem of finding the optimum is NP-hard,
but searching the $k$-neighborhood is fixed-parameter tractable. This
suggests evidence that parameterized complexity is the right setting
for studying local search.
The paper is organized as follows. Section~\ref{sec:preliminaries}
reviews basic notions of parameterized complexity and Boolean CSP.
Section~\ref{sect:fpt} presents the classificiation with respect to
fixed-parameter tractability, while Section~\ref{sect:poly} deals with
polynomial-time solvability. The proofs omitted from
Section~\ref{sect:poly} can be found in the appendix.
\section{Preliminaries}\label{sec:preliminaries}
\textbf{Boolean CSP.} A {\em formula} $\phi$ is a pair $(V,C)$
consisting of a set $V$ of {\em variables} and a set $C$ of {\em
constraints.} Each constraint $c_i\in C$ is a pair $\langle
\overline s_i, R_i
\rangle$, where $\overline s_i=( x_{i,1},\dots, x_{i,r_i})$ is an $r_i$-tuple of
variables (the {\em constraint scope}) and $R_i\subseteq \{0,1\}^{r_i}$ is an $r_i$-ary Boolean
relation (the {\em constraint relation}). A function $f: V\to \{0,1\}$ is a {\em satisfying
assignment} of $\phi$ if $(f(x_{i,1}),\dots, f(x_{i,r_i}))$ is
in $R_i$ for every $c_i \in C$. Let $\Gamma$ be a set of Boolean
relations. A formula is a {\em $\Gamma$-formula} if every constraint
relation $R_i$ is in $\Gamma$. In this paper, $\Gamma$ is always a
finite set.
The {\em (Hamming) weight} $w(f)$ of
assignment $f$
is the number of variables $x$ with $f(x)=1$. The {\em distance} $\dist(f_1,f_2)$ of
assignments $f_1,f_2$ is the number of variables $x$ with
$f_1(x)\neq f_2(x)$.
We recall various standard definitions concerning Boolean constraints (cf.~\cite{Creignou01:book}):
\begin{itemize}
\item $R$ is {\em 0-valid} if $(0,\dots,0)\in R$.
\item $R$ is {\em 1-valid} if $(1,\dots,1)\in R$.
\item $R$ is {\em Horn} or {\em weakly negative} if it can be
expressed as a conjunction of clauses such that each clause
contains at most one positive literal. It is known that $R$ is Horn
if and only if it is {\em min-closed:} if
$(a_1,\dots,a_r)\in R$ and $(b_1,\dots,b_r)\in R$, then
$(\min(a_1,b_1),\dots, \min(a_r,b_r))\in R$.
\item $R$ is {\em affine} if it can be expressed as a conjunction
of constraints of the form $x_1 + x_2 +\dots +x_t=b$, where $b\in
\{0,1\}$ and addition is modulo 2. The number of tuples in an affine
relation is always an integer power of 2.
\item $R$ is {\em width-2 affine} if it can be expressed as a conjunction
of constraints of the form $x=y$ and $x\neq y$.
\item $R$ is {\em IHS-B$-$} (or {\em implicative hitting set bounded}) if
it can be represented by a conjunction of clauses of the form $(x)$, $(x \to y)$ and
$(\neg x_1 \vee \ldots \vee \neg x_n)$, $n\ge 1$.
\item The relation $R_{\textsc{$p$-in-$q$}}$ (for $1 \le p \le q$) has arity $q$ and
$R_{\textsc{$p$-in-$q$}}(x_1,\dots,x_q)$ is true if and only if exactly
$p$ of the variables $x_1$, $\dots$, $x_q$ have value $1$.
\end{itemize}
The following definition is new in this paper. It plays a crucial role
in characterizing the fixed-parameter tractable cases for local search.
\begin{definition}
Let $R$ be a Boolean relation and $(a_1,\dots, a_r)\in R$. A set
$S\subseteq \{1,\dots,r\}$ is a {\em flip set} of $(a_1,\dots,a_r)$
(with respect to $R$) if $(b_1,\dots,b_r)\in R$ where $b_i=1-a_i$ for
$i\in S$ and $b_i=a_i$ for $i\not\in S$. We say that $R$ is {\em flip
separable} if whenever some $(a_1,\dots,a_r)\in R$ has two flip sets
$S_1,S_2$ with $S_1\subset S_2$, then $S_2\setminus S_1$ is also a
flip set for $(a_1,\dots,a_r)$.
\end{definition}
It is easy to see that $R_{\textsc{1-in-3}}$ is flip separable: every flip set has
size exactly 2, hence $S_1\subset S_2$ is not possible.
Moreover, $R_{\textsc{$p$-in-$q$}}$ is also flip separable for every
$p\le q$.
Affine constraints are also flip separable: to see this,
it is sufficient to verify the definition only for the constraint $x_1+\dots
+x_r=0$.
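The definition can also be verified mechanically for small relations. The following brute-force Python sketch (exponential in the arity, so intended for illustration only) tests flip separability and confirms it for $R_{\textsc{1-in-3}}$.
\begin{verbatim}
from itertools import combinations

def flip(t, S):
    return tuple(1 - v if i in S else v for i, v in enumerate(t))

def flip_sets(t, R):
    idx = range(len(t))
    return [set(S) for r in range(1, len(t) + 1)
            for S in combinations(idx, r) if flip(t, S) in R]

def is_flip_separable(R):
    for t in R:
        fs = flip_sets(t, R)
        for S1 in fs:
            for S2 in fs:
                if S1 < S2 and (S2 - S1) not in fs:
                    return False
    return True

R_1in3 = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
assert is_flip_separable(R_1in3)
\end{verbatim}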
The basic problem in CSP is to decide if a formula has a satisfying assignment:
\begin{center}
\fbox{
\parbox{0.9\linewidth}{
\textsc{CSP($\Gamma$)}\\[5pt]
\begin{tabular}{rl}
{\em Input:} & A $\Gamma$-formula $\phi$.\\[3pt]
{\em Question:} & Does $\phi$ have a satisfying assignment?
\end{tabular}
}}
\end{center}
Schaefer completely characterized the complexity of \textsc{CSP($\Gamma$)} for every
finite set $\Gamma$ of Boolean relations \cite{Schaefer78:complexity}. In
particular, every such problem is either in \PTIME\ or \NP-complete, and there is a
very clear description of the boundary between the two cases.
Optimization versions of Boolean CSP were investigated in
\cite{Creignou01:book,Crescenzi02:Hamming}. A straightforward way to obtain an
optimization problem is to relax the requirement that every constraint is satisfied,
and ask for an assignment maximizing the number of satisfied constraints. Another
possibility is to ask for a solution with minimum/maximum weight. In this paper, we
investigate the problem of minimizing the weight. As we do not consider the
approximability of the problem, we define here only the decision version:
\begin{center}
\fbox{
\parbox{0.9\linewidth}{
\textsc{Min-Ones($\Gamma$)}\\[5pt]
\begin{tabular}{rl}
{\em Input:} &A $\Gamma$-formula $\phi$ and an integer $W$.\\[3pt]
{\em Question:} &Does $\phi$ have a satisfying assignment $f$ with
$w(f)\le W$?
\end{tabular}
}}
\end{center}
The characterization of the approximability of finding a minimum
weight satisfying assignment for a $\Gamma$-formula can be found in
\cite{Creignou01:book}. Here we state only the classification of
polynomial-time solvable and NP-hard cases:
\begin{theorem}[\cite{Creignou01:book}]
Let $\Gamma$ be a finite set of Boolean relations. \textsc{Min-Ones($\Gamma$)} is
solvable in polynomial time if one the following holds, and \NP-complete otherwise:
\begin{itemize}
\item Every $R\in \Gamma$ is 0-valid.
\item Every $R\in \Gamma$ is Horn.
\item Every $R\in \Gamma$ is width-2 affine.
\end{itemize}
\end{theorem}
A Schaefer-style characterization of the approximability of finding two satisfying
assignments to a formula with a largest distance between them was obtained
in~\cite{Crescenzi02:Hamming}, motivated by the blocks world problem from KR, while
a Schaefer-style classification of the problem of deciding whether a given
satisfying assignment to a given CSP instance is component-wise minimal was
presented in~\cite{Kirousis03:minimal}, motivated by the circumscription formalism
from AI.
The main focus of the paper is the local search version of minimizing weight:
\begin{center}
\fbox{
\parbox{0.9\linewidth}{
\prob{\Gamma}\\[5pt]
\begin{tabular}{rp{0.85\linewidth}}
{\em Input:} & A $\Gamma$-formula $\phi$, a satisfying assignment $f$,
and an integer $k$.\\[3pt]
{\em Question:} & Does $\phi$ have a
satisfying assignment $f'$ with $w(f')<w(f)$ and $\dist(f,f')\le k$?
\end{tabular}
}}
\end{center}
LS in the above problem stands for both ``local search'' and ``lighter solution.''
Observe that the satisfying assignments of an $(x \vee y)$-formula correspond to the
vertex covers of the graph where the variables are the vertices and the edges are
the constraints. Thus $\prob{\{x\vee y\}}$ is the problem of reducing the size of a
(given) vertex cover by including and excluding a total of at most $k$ vertices. As
we shall see (Lemma~\ref{lem:orhard}), this problem is W[1]-hard, even for bipartite
graphs. This might be of independent interest.
\textbf{Parameterized complexity.}
In a {\em parameterized problem,} each instance contains an integer $k$
called the {\em parameter}. A parameterized
problem is \textit{fixed-param\-eter tractable (FPT)} if it can be
solved by
an algorithm with running time $f(k)\cdot n^c$, where $n$ is the
length of the input, $f$ is an
arbitrary (computable) function depending only on $k$, and
$c$ is a constant independent of $k$.
Many parameterized versions of NP-complete problems are known to be FPT. On the
other hand, analogously to NP-completeness in classical complexity, the theory of
W[1]-hardness can be used to give strong evidence that certain
problems are unlikely to be fixed-parameter tractable.
We omit the somewhat technical
definition of the complexity class W[1], see \cite{Downey99:book,Flum06:book} for
details. Here it will be sufficient to know that there are many
problems, including \textsc{Maximum Clique}, that were proved to be
W[1]-hard.
To prove that a parameterized problem is W[1]-hard, we have to
present a parameterized reduction from a known W[1]-hard
problem. A {\em parameterized reduction} from problem $L_1$ to problem
$L_2$ is a function that transforms
a problem instance $x$ of $L_1$ with parameter $k$ into a problem instance $x'$
of $L_2$ with parameter $k'$ in such a way that
\begin{itemize}
\item $x'$ is a yes-instance of $L_2$ if and only if $x$ is a
yes-instance of $L_1$,
\item $k'$ can be bounded by a function of $k$, and
\item the transformation can be computed in time $f(k)\cdot |x|^c$ for
some constant $c$ and function $f(k)$.
\end{itemize}
It is easy to see that if there is a parameterized reduction from
$L_1$ to $L_2$, and $L_2$ is FPT, then it
follows that $L_1$ is FPT as well.
\section{Characterizing fixed-parameter tractability}\label{sect:fpt}
In this section, we completely characterize those finite sets $\Gamma$ of
Boolean relations for which \prob{\Gamma} is
fixed-parameter tractable.
\begin{theorem}\label{thm:fpt}
Let $\Gamma$ be a finite set of Boolean relations. The problem \prob{\Gamma} is in
\textup{FPT} if every relation in $\Gamma$ is Horn or every relation in $\Gamma$ is flip
separable. In all other cases, \prob{\Gamma} is \textup{W[1]}-hard.
\end{theorem}
First we handle the fixed-parameter tractable cases
(Lemmas~\ref{horn-fpt} and~\ref{lem:flip-separ-FPT}).
\begin{lemma}\label{horn-fpt}
If every $R\in \Gamma$ is Horn, then
\prob{\Gamma} is \textup{FPT}.
\end{lemma}
\begin{proof}
Observe that if there is a solution $f'$, then we can assume that
$f'(x)\le f(x)$ for every variable $x$: by defining $f''(x):=\min
\{f(x),f'(x)\}$, we get that $f''$ is also satisfying (as every
$R\in \Gamma$ is min-closed) and $\dist(f'',f)\le \dist(f',f)$.
Thus we can restrict our search to solutions that can be obtained
from $f$ by changing some 1's to 0's, but every 0 remains
unchanged.
Since $w(f')<w(f)$, there is a variable $x$ with $f(x)=1$ and $f'(x)=0$. For
every variable $x$ with $f(x)=1$, we try to find a solution $f'$ with
$f'(x)=0$ using a simple bounded-height search tree algorithm. For a
particular $x$, we proceed as follows. We start with initial
assignment $f$. Change the value of $x$ to $0$. If
there is a constraint $\langle (x_1,\dots,x_r),R\rangle$ that is not
satisfied by the new assignment, then we select one of the variables
$x_1$, $\dots$, $x_r$ that has value 1, and change it to 0. Thus at
this point we branch into at most $r$ directions. If the assignment is
still not satisfying, then we branch again on the variables of some
unsatisfied constraint. The branching factor of the resulting search
tree is at most $r_\text{max}$, where $r_\text{max}$ is the maximum
arity of the relations in $\Gamma$. By the observation above, if there
is a solution, then we find a solution on the first $k$ levels of the
search tree. Therefore, we can stop the search on the $k$-th level,
implying that we visit at most $r_\text{max}^{k+1}$ nodes of the search
tree. The work to be done at each node is polynomial in the size $n$
of the input, hence the total running time is $r_\text{max}^{k+1} \cdot
n^{O(1)}$.
\qed\end{proof}
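For concreteness, the bounded-height search tree of this proof can be implemented as in the following Python sketch (illustrative and unoptimized; relations are given explicitly as sets of tuples).
\begin{verbatim}
def ls_horn(constraints, f, k):
    """constraints: list of (scope, relation), scope a tuple of
    variables, relation a set of 0/1 tuples; f: dict var -> 0/1.
    Looks for a satisfying assignment obtained from f by turning at
    most k ones into zeros (sound when all relations are Horn)."""
    def unsatisfied(g):
        for scope, rel in constraints:
            if tuple(g[v] for v in scope) not in rel:
                return scope
        return None

    def branch(g, budget):
        bad = unsatisfied(g)
        if bad is None:
            return g
        if budget <= 0:
            return None
        for v in bad:
            if g[v] == 1:               # only 1 -> 0 flips are needed
                g2 = dict(g); g2[v] = 0
                sol = branch(g2, budget - 1)
                if sol is not None:
                    return sol
        return None

    for x, val in f.items():
        if val == 1:        # guess a variable with f(x)=1 and f'(x)=0
            g = dict(f); g[x] = 0
            sol = branch(g, k - 1)
            if sol is not None:
                return sol
    return None
\end{verbatim}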
If every $R\in \Gamma$ is not only Horn, but IHS-B$-$ (which is a
subset of Horn), then the algorithm of Lemma~\ref{horn-fpt} actually runs in
polynomial time:
\begin{corollary}\label{lem:cor-ihbs}
If every $R\in \Gamma$ is IHS-B$-$, then
\prob{\Gamma} is in \PTIME.
\end{corollary}
\begin{proof}
We can assume that every constraint is either $(x)$, $(x\to y)$, or
$(\bar x_1 \vee \dots \vee \bar x_r)$. If a constraint $(\bar x_1
\vee \dots \vee \bar x_r)$ is satisfied in the initial assignment
$f$, then it remains satisfied after changing some 1's to 0.
Observe that if a constraint $(x)$ or $(x\to y)$ is not satisfied,
then at most one variable has the value 1. Thus there is no
branching involved in the algorithm of Lemma~\ref{horn-fpt}, making
it a polynomial-time algorithm. \qed\end{proof}
For flip separable relations, we give a very similar branching
algorithm. However, in this case the correctness of the algorithm
requires a nontrivial argument.
\begin{lemma}\label{lem:flip-separ-FPT}
If every $R\in \Gamma$ is flip separable, then
\prob{\Gamma} is FPT.
\end{lemma}
\begin{proof}
Let $(\phi, f,k)$ be an instance of $\prob{\Gamma}$.
If $w(f')<w(f)$ for some assignment $f'$, there is a variable $x$ with $f(x)=1$ and
$f'(x)=0$. For every variable $x$ with $f(x)=1$, we try to find a
solution $f'$ with $f'(x)=0$ using a simple bounded-height search
tree algorithm. For each such $x$, we proceed as follows. We
start with the initial assignment $f$ and set the value of $x$ to $0$.
Iteratively do the following: (a) if there is a constraint in $\phi$
that is not satisfied by the current assignment and such that the value of
some variable in it has not been flipped yet (on this branch),
then we select one of such
variables, and flip its
value; (b) if there is no such constraint but the current assignment is not satisfying, then
we move to the next branch; (c) if
every constraint is satisfied, then either we found a required solution or else
we move to the next branch. If a required solution is not found
on the first $k$ levels of the search tree then the algorithm
reports that there is no required solution.
Assume that $(\phi,f,k)$ is a yes-instance.
We claim that if $f'$ is a required solution with minimal distance from $f$,
then some branch of the algorithm finds it.
Let $X$ be the set of variables on which $f$ and $f'$ differ, so $|X|\le k$.
We now show that on the first $k$ levels of the
search tree, the algorithm finds some satisfying assignment $f_0$
(possibly heavier than $f$) that
differs from $f$ only on a subset $X_0\subseteq X$ of variables. To
see this, assume that at some node of the search tree, the current
assignment differs from the initial assignment only on a subset of
$X$; we show that this remains true for at least one child of the
node. If we branch on the variables $(x_1,\dots,x_r)$ of an
unsatisfied constraint, then at least one of its variables, say
$x_i$, has a value different from $f'$ (as $f'$ is a satisfying
assignment). It follows that $x_i\in X$: otherwise the current value
of $x_i$ is $f(x_i)$ (since so far we changed variables only in $X$)
and $f(x_i)=f'(x_i)$ (by the definition of $X$), contradicting the
fact that current value of $x_i$ is different from $f(x_i)$. Thus if
we change variable $x_i$, it remains true that only variables
from $X$ are changed. Since $|X|\le k$, this branch of the
algorithm has to find some satisfying assignment $f_0$.
If $w(f_0)<w(f)$, then, by the choice of $f'$, we must have $f_0=f'$.
Otherwise, let $X_0\subseteq X$ be
the set of variables where $f$ and $f_0$ differ and let $f''$ be the
assignment that differs from $f$ exactly on the variables
$X\setminus X_0$. From the fact that every constraint is flip
separable, it follows that $f''$ is a satisfying assignment. We
claim that $w(f'')<w(f)$. Indeed, if changing the values of the
variables in $X$ decreases the weight and changing the values in
$X_0$ does not decrease the weight, then the set $X\setminus X_0$ has to
decrease the weight. This contradicts the assumption that $f'$ is a
solution whose distance from $f$ is minimal: $f''$ is a solution
with distance $|X\setminus X_0|<|X|$. Thus it is sufficient to
investigate only the first $k$ levels of the search tree. As in the proof of
Lemma~\ref{horn-fpt}, the branching factor of the tree is at
most $r_\text{max}$, and the algorithm runs in time
$r_\text{max}^{k+1} \cdot n^{O(1)}$.
\qed\end{proof}
All the hardness proofs in this section are based on the following
lemma:
\begin{lemma}\label{lem:orhard}
\prob{\{x\vee y\}} is \textup{W[1]}-hard.
\end{lemma}
\begin{proof}
The proof is by reduction from a variant of \textsc{Maximum Clique}: given
a graph $G(V,E)$ with a distinguished vertex $x$ and an integer $t$, we
have to decide whether $G$ has a clique of size $t$ that contains
$x$. It is easy to see that this problem is W[1]-hard.
Furthermore, it can be assumed that $t$
is odd. Let $n$ be the number of vertices of $G$ and let $m$ be the
number of edges. We construct a formula $\phi$ on $m+n(t-1)/2-1$
variables and a satisfying assignment $f$ such that $G$ has a clique of size $t$
containing $x$ if
and only if $\phi$ has a satisfying assignment $f'$ with $w(f')<w(f)$
and distance at most $k:=t(t-1)-1$ from $f$.
Let $d=(t-1)/2$ (note that $t$ is odd). The formula $\phi$ has $d$ variables $v_1$, $\dots$, $v_d$
for each vertex $v\neq x$ of $G$ and a variable $u_e$ for each edge $e$
of $G$. The distinguished vertex $x$ has only $d-1$ variables $x_1$,
$\dots$, $x_{d-1}$. If a vertex $v$ is the endpoint of an edge $e$,
then for every $1 \le i \le d$ (or $1 \le i \le d-1$, if $v=x$), we add the constraint $u_e\vee
v_{i}$. Thus each variable $u_e$
is in $2d-1$ or $2d$ constraints
(depending on whether $x$ is the endpoint of $e$ or not).
Set $f(u_e)=1$ for every $e\in E$ and $f(v_{i})=0$ for every $v\in V$,
$1\le i \le d$. Clearly, $f$ is a satisfying assignment.
Assume that $G$ has a clique $K$ of size $t$ that includes
$x$. Set $f'(v_i)=1$ for every $v\in K$ ($1 \le i \le d$) and set
$f'(u_e)=0$ for every edge $e$ in $K$; let $f'$ be the same
as $f$ on every other variable. Observe that $f'$ is also a satisfying
assignment: if a variable $u_e$ was changed to 0 and there is a
constraint $u_e \vee v_i$, then $v\in K$ and hence
$f'(v_i)=1$. We have $w(f')<w(f)$:
$dt-1$ variables were changed to $1$ (note that $x\in K$) and
$t(t-1)/2=dk$ variables were changed to $0$. Moreover, the distance of
$f$ and $f'$ is exactly $dt-1+t(t-1)/2=t(t-1)-1=k$.
Assume now that $f'$ satisfies the requirements. Let $K$ be the set
of those vertices $v$ in $G$ for which $f'(v_i)=1$ for every $i$. We
claim that $K$ is a clique of size $t$ in $G$. Observe that there are
at least $d|K|-1$ variables $v_i$ with $f'(v_i)>f(v_i)$ and
$f'(u_e)<f(u_e)$ is possible only if both endpoints of $e$ are in $K$,
i.e., $e$ is in the set $E(K)$ of edges in $K$. Thus
$w(f')<w(f)$ implies $d|K|-1<|E(K)|\le |K|(|K|-1)/2$, which is only
possible if $|K|\ge t$.
If
$|K|>t$, then $f'(v_i)>f(v_i)$ for at least $(t+1)d-1$ variables,
hence there must be at least that many variables $u_e$ with
$f'(u_e)<f(u_e)$. Thus the distance of $f$ and $f'$ is at least
$2(t+1)d-2>t(t-1)-1$.
Therefore, we can assume $|K|=t$. Now $dt-1 < |E(K)| \le
|K|(|K|-1)/2=t(t-1)/2$ is only possible if $|E(K)|=t(t-1)/2$ (i.e., $K$
is a clique) and it follows that there are exactly $dt-1$ variables $v_i$ with
$f'(v_i)>f(v_i)$ (i.e., $x\in K$).
\qed\end{proof}
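The construction in this proof is easy to mechanize; the following Python sketch (with our own, hypothetical data conventions) builds the \prob{\{x\vee y\}} instance from $(G,x,t)$.
\begin{verbatim}
def clique_to_ls_or(vertices, edges, x, t):
    """edges: iterable of frozensets {u, w}; t is assumed odd.
    Returns (constraints, f, k): constraints are pairs of variables
    that may not both be 0, f is the initial satisfying assignment."""
    d = (t - 1) // 2
    reps = lambda v: d - 1 if v == x else d
    f = {(v, i): 0 for v in vertices for i in range(1, reps(v) + 1)}
    constraints = []
    for e in edges:
        f[("u", e)] = 1                       # edge variable u_e
        for v in e:
            for i in range(1, reps(v) + 1):
                constraints.append((("u", e), (v, i)))
    return constraints, f, t * (t - 1) - 1
\end{verbatim}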
Now we are ready to present the main hardness proof of the section:
\begin{lemma}\label{lem:W1-main}
If $\Gamma$ contains a relation $R_1$ that is not Horn and a
relation $R_2$ that is not flip separable, then \prob{\Gamma} is \textup{W[1]}-hard.
\end{lemma}
\begin{proof}
The proof is by reduction from \prob{\{x\vee y\}}. Let
$(\phi_1,f_1,k)$ be an instance of \prob{\{x\vee y\}}, i.e., every
constraint relation in formula $\phi_1=(V,C)$ is $(x \vee y)$.
Since $R_1$ is not min-closed, we can assume (by
permuting the variables) that for some $r_1,r_2\ge 1$, $r_3,r_4\ge
0$, if we define
\[
R'_1(x,y,w_0,w_1)=R_1(\overbrace{x,\dots,x}^{r_1},\overbrace{y,\dots,y}^{r_2},\overbrace{w_0,\dots,w_0}^{r_3},\overbrace{w_1,\dots,w_1}^{r_4}),
\]
then
$(0,1,0,1),(1,0,0,1)\in R'_1$, but $(0,0,0,1)\not\in R'_1$.
Since $R'_1$ is obtained from $R_1$ by identifying variables, we can
use the relation $R'_1$ when specifying instances of \prob{\Gamma}.
We consider two cases:
\textbf{Case 1:} $(1,1,0,1)\in R'_1$. In this case $R'_1(x,y,0,1)=x
\vee y$, hence it is easy to simulate \prob{\{x \vee y\}}. The only
difficulty is how to simulate the constants 0 and 1. We do this as
follows. Let us construct a formula $\phi_2$ that has every variable
of $V$ and new variables $q^j_0$, $q^j_1$ for every $1 \le j \le k+1$
(these new variables will play the role of the constants). We define
assignment $f_2$ of $\phi_2$ by setting $f_2(x)=f_1(x)$ for $x\in V$
and $f_2(q^j_0)=0$ and $f_2(q^j_1)=1$ for $1 \le j \le k+1$. For $1
\le a,b,c \le k+1$, we add constraint
$c^1_{a,b,c}=R'_1(q^{a}_1,q^b_0,q^b_0,q^c_1)$, it is clearly satisfied
by assignment $f_2$. To simulate a constraint $x\vee y$, we add
$c^2_{x,y,j}=R'_1(x,y,q^j_0,q^1_1)$ for every $1 \le j \le k+1$.
It is easy to see that if there is a solution $f'_1$ for the original
instance $(\phi_1,f_1,k)$, then by setting $f'_2(x)=f'_1(x)$ for every
$x\in V$ and $f'_2(q^j_0)=0$, $f'_2(q^j_1)=1$ for every $1 \le j \le
k+1$ gives a solution $f'_2$ for the constructed instance
$(\phi_2,f_2,k)$. We claim the converse is also true: if $f'_2$ is a
solution for the instance $(\phi_2,f_2,k)$, then the restriction of $f'_2$ to
$V$ gives a solution for $(\phi_1,f_1,k)$. Since the distance of $f_2$ and
$f'_2$ is at most $k$, there are $1 \le b,c \le k+1$ with
$f'_2(q^b_0)=0$ and $f'_2(q^c_1)=1$. Because of the constraint
$c^1_{a,b,c}$, we have that $f'_2(q^a_1)=1$ for every $1\le a \le
k+1$. It follows that $f'_2$ restricted $V$ is a satisfying assignment
of $\phi_1$: for every constraint $x\vee y \in C$, the constraint
$c^2_{x,y,b}$ prevents the possibility $f'_2(x)=f'_2(y)=0$. We have
seen that $f'_2(q^j_0)\ge f_2(q^j_0)$ and $f'_2(q^j_1)\ge f_2(q^j_1)$
for every $1 \le j \le k+1$. Now $w(f'_2)<w(f_2)$ implies that the
weight of $f'_2$ on $V$ has to be less than the weight of $f_2$ on
$V$. Thus $w(f'_1)<w(f_1)$.
\textbf{Case 2:} $(1,1,0,1)\not \in R'_1$, which means that
$R'_1(x,y,0,1)$ is $x\neq y$. In this case we have to rely on the fact
that $R_2$ is not flip separable to simulate the constraint $x\vee y$.
We construct formula $\phi_2$ and its satisfying assignment $f_2$ as
follows. Each variable $x$ is replaced by 3 variables $x_1$, $x_2$,
$x_3$. We set $f_2(x_1)=f_2(x_2)=f_1(x)$ and $f_2(x_3)=1-f_1(x)$.
Furthermore, for $1\le j \le 3k+1$, we add the variables $q^j_0$ and
$q^j_1$ and set $f_2(q^j_0)=0$ and $f_2(q^j_1)=1$.
For every $1 \le a,b,c \le 3k+1$, we add the constraint
$c^1_{a,b,c}=R'_1(q^{a}_1,q^b_0,q^b_0,q^c_1) $, as in the
previous case. For every $x\in V$, $1 \le j \le 3k+1$, and $\ell=1,2$,
we add $c^2_{x,\ell,j}=R'_1(x_\ell,x_3,q^j_0,q^1_1)$, as we shall see,
the role of these constraints is to ensure $f'_2(x_1)=f'_2(x_2)\neq
f'_2(x_3)$.
Since $R_2$ is not flip separable, there is a tuple
$(s_1,\dots,s_r)\in R_2$ that has flip sets $S_1\subset S_2$, but
$S_2\setminus S_1$ is not a flip set. For every constraint $x\vee y$
of $\phi_1$, we add $3k+1$ constraints to $\phi_2$ as follows. First,
for $1 \le i \le r$ and $1 \le j \le 3k+1$, we define variable $v^j_i$
as
\[
v^j_i = \begin{cases}
x_1 & \text{if $i\in S_1$ and $s_i=0$},\\
x_3 & \text{if $i\in S_1$ and $s_i=1$},\\
y_1 & \text{if $i\in S_2\setminus S_1$ and $s_i=1$},\\
y_3 & \text{if $i\in S_2\setminus S_1$ and $s_i=0$},\\
q^1_1 & \text{if $i\not\in S_2$ and $s_i=1$},\\
q^j_0 & \text{if $i\not\in S_2$ and $s_i=0$}.
\end{cases}
\]
For every $1 \le j \le 3k+1$, we add the constraint
$c^3_{x,y,j}=R_2(v^j_1,\dots,v^j_{r}).$
For example, assume that $(0,1,0,1)\in R_2$ and this tuple has
flip sets $S_1=\{1,2\}$ and $S_2=\{1,2,3,4\}$, but $S_2\setminus S_1=\{3,4\}$
is not a flip set. This means that $(0,1,0,1),(1,0,1,0),(1,0,0,1)\in
R_2$ and $(0,1,1,0)\not\in R_2$. In this case,
constraint $c^3_{x,y,j}$ is $R_2(x_1,x_3,y_3,y_1)$. Assuming
$f(x_1)\neq f(x_3)$ and $f(y_1)\neq f(y_3)$, any combination of values
on $x_1$ and $y_1$ satisfies the constraint, except if
$f(x_1)=f(y_1)=0$. Thus the constraint effectively acts as a
constraint $x_1\vee y_1$.
Finally, we set the maximum allowed distance to $k':=3k$. This
completes the description of the constructed instance $(\phi_2,f_2,k')$.
Assume first that $f'_1$ is a solution for the instance $(\phi_1,f_1,k)$. Define
$f'_2(x_1)=f'_2(x_2)=f'_1(x)$ and $f'_2(x_3)=1-f'_1(x)$ for every
$x\in V$, and define $f'_2(q^j_0)=0$, $f'_2(q^j_1)=1$ for every $1\le
j\le 3k+1$. The fact $w(f'_1)<w(f_1)$ implies $w(f'_2)<w(f_2)$.
Furthermore, the distance of $f_2$ and $f'_2$ is exactly three times
the distance of $f_1$ and $f'_1$, i.e., at most $3k$. We claim that
$f'_2$ satisfies the constraints of $\phi_2$. This is easy to see for
$c^1_{a,b,c}$ and $c^2_{x,\ell,j}$. For
$c^3_{x,y,j}$, this can be seen as follows:
\begin{itemize}
\item
If $f'_2(x_1)=0$ and $f'_2(y_1)=1$, then the constraint is satisfied because $(s_1,\dots, s_r)\in
R_2$.
\item If $f'_2(x_1)=1$ and $f'_2(y_1)=0$, then it is satisfied because $S_2$ is a flip set.
\item If $f'_2(x_1)=1$ and $f'_2(y_1)=1$, then it is satisfied because $S_1$ is a flip set.
\end{itemize}
For the other direction, assume that $f'_2$ is a solution for
instance $(\phi_2,f_2,k')$. Define $f'_1(x)=f'_2(x_1)$ for every $x\in V$; we claim that
$f'_1$ is a solution for instance $(\phi_1,f_1,k)$. Since the distance of $f_2$ and
$f'_2$ is at most $3k$, there are $1 \le b,c \le 3k+1$ with
$f'_2(q^b_0)=0$ and $f'_2(q^c_1)=1$. Because of the constraint
$c^1_{a,b,c}$, we have that $f'_2(q^a_1)=1$ for every $1\le a \le
3k+1$. The constraints $c^2_{x,1,b}$ and $c^2_{x,2,b}$ ensure that
$f'_2(x_1)=f'_2(x_2)=1-f'_2(x_3)$ (since $(0,0,0,1)\not\in R'_1$ and
$(1,1,0,1)\not\in R'_1$). It follows that the distance of
$f_1$ and $f'_1$ is at most $k$: $f_1(x)\neq f'_1(x)$ implies
$f_2(x_\ell)\neq f'_2(x_\ell)$ for $\ell=1,2,3$, hence this can hold
for at most $k$ different $x\in V$. Moreover, $w(f'_1)<w(f_1)$: this
follows from the facts $w(f'_2)<w(f_2)$ and $f'_2(q^j_0)\ge f_2(q^k_0)$,
$f'_2(q^j_1)\ge f_2(q^k_1)$ ($1\le j \le 3k+1$).
We claim that every constraint $x\vee y$ of $\phi_1$ is
satisfied. Assume that $f'_1(x)=f'_1(y)=f'_2(x_1)=f'_2(y_1)=0$. Now
$c^3_{x,y,b}$ is not satisfied: this follows from the
fact that $S_2\setminus S_1$ is not a flip set for $(s_1,\dots, s_r)$
(with respect to $R_2$).
\qed\end{proof}
\section{Characterizing polynomial-time solvability}\label{sect:poly}
In this section, we completely characterize those finite sets $\Gamma$ of Boolean
relations for which \prob{\Gamma} is polynomial-time solvable.
\begin{theorem}\label{thm:poly}
Let $\Gamma$ be a finite set of Boolean relations. The problem \prob{\Gamma} is in
$\PTIME$ if every relation in $\Gamma$ is IHS-B$-$ or every relation in $\Gamma$ is
width-2 affine. In all other cases, \prob{\Gamma} is $\NP$-hard.
\end{theorem}
\begin{proof}
If every relation in $\Gamma$ is IHS-B$-$, then
Corollary~\ref{lem:cor-ihbs} gives a polynomial-time algorithm.
If every relation in $\Gamma$ is width-2 affine then the following
simple algorithm solves \prob{\Gamma}: for a given instance
$(\phi,f,k)$, compute the graph whose vertices are the variables in $\phi$
and two vertices are connected if there is a constraint in $\phi$
imposed on them. If there is a connected component of this graph
which has at most $k$ vertices and such that $f$ assigns more 1's
in this component than it does 0's, then flipping the values in
this component gives a required lighter solution. If such a
component does not exist, then there is no lighter solution within
distance $k$ from $f$.
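This component-flipping procedure translates directly into code. The following minimal sketch assumes, for illustration, that the constraints are given as unordered variable pairs; flipping an entire connected component preserves both equality and disequality constraints, so only the component sizes and weights matter:
\begin{verbatim}
# Minimal sketch of the polynomial-time algorithm for width-2 affine
# languages. `constraints` is a list of variable pairs (u, v); each
# pair stands for an equality or disequality constraint. `f` maps
# variables to 0/1. Returns a component to flip, or None.
def lighter_solution_within_k(variables, constraints, f, k):
    adj = {v: [] for v in variables}
    for u, v in constraints:
        adj[u].append(v)
        adj[v].append(u)
    seen = set()
    for start in variables:
        if start in seen:
            continue
        comp, stack = [], [start]   # collect component by DFS
        seen.add(start)
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        ones = sum(f[v] for v in comp)
        # flipping changes the weight by len(comp) - 2*ones
        if len(comp) <= k and 2 * ones > len(comp):
            return comp
    return None
\end{verbatim}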
By Lemma~\ref{lem:W1-main}, if $\Gamma$ contains a relation that is
not Horn and a relation that is not flip separable then
\prob{\Gamma} is $\NP$-hard. (Note that the proof is actually a
polynomial-time reduction from an \NP-hard problem.) Therefore, we
can assume that every relation in $\Gamma$ is Horn or every
relation in $\Gamma$ is flip separable. We now give the proof for
the former case, while the proof for the latter case can be found
in the Appendix.
Assume now that $\Gamma$ is Horn, and there is a relation $R\in \Gamma$ that is not
IHS-B$-$. We prove that $\prob{\{R\}}$ is NP-hard. It is shown in the proof of
Lemma~5.27 of~\cite{Creignou01:book} that then $R$ is at least ternary and one can
permute the coordinates in $R$ and then substitute 0 and 1 in $R$ in such a way
that the ternary relation $R'(x,y,z)=R(x,y,z,0,\ldots,0,1,\ldots,1)$ has the
following properties:
\begin{enumerate}
\item $R'$ contains tuples $(1,1,1), (0,1,0), (1,0,0), (0,0,0)$, and
\item $R'$ does not contain the tuple $(1,1,0)$.
\end{enumerate}
Note that if $(0,0,1)\in R'$ then $R'(x,x,y)$ is $x\to y$. If $(0,0,1)\not\in R'$
then, since $R$ (and hence $R'$) is Horn (i.e., min-closed), at least one of the
tuples $(1,0,1)$ and $(0,1,1)$ is not in $R'$. Then it is easy to check that at
least one of the relations $R'(x,y,x)$ and $R'(y,x,x)$ is $x\to y$. Hence, we can
use constraints of the form $x\to y$ when specifying instances of $\prob{\{R'\}}$.
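As a quick sanity check of the identification step, the sketch below verifies the case $(0,0,1)\in R'$ with a stand-in relation containing exactly the tuples required above; any relation meeting those conditions behaves identically on the diagonal:
\begin{verbatim}
# Check that R'(x, x, y) is exactly the implication x -> y when
# (0,0,1) is in R'. Rprime is an arbitrary stand-in satisfying the
# listed conditions: it contains (1,1,1), (0,1,0), (1,0,0), (0,0,0)
# and (0,0,1), and it does not contain (1,1,0).
Rprime = {(1, 1, 1), (0, 1, 0), (1, 0, 0), (0, 0, 0), (0, 0, 1)}

for x in (0, 1):
    for y in (0, 1):
        assert ((x, x, y) in Rprime) == (not x or y)
print("R'(x, x, y) implements x -> y")
\end{verbatim}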
We reduce \textsc{Minimum Dominating Set} to \prob{\{R'\}}. Let $G(V,E)$ be a graph
with $n$ vertices and $m$ edges where a dominating set of size at most $t$ has to be
found. Let $v_1$, $\dots$, $v_n$ be the vertices of $G$. Let $S=3m$. We construct a
formula with $nS+2m+1$ variables as follows:
\begin{itemize}
\item There is a special variable $x$.
\item For every $1 \le i \le n$, there are $S$ variables
$x_{i,1}$, $\dots$, $x_{i,S}$. There is a constraint $x_{i,j}\to
x_{i,j'}$ for every $1 \le j,j' \le S$.
\item For every $1 \le i \le n$, if $v_{s_1}$, $\dots$, $v_{s_d}$ are
the neighbors of $v_i$, then there are $d$ variables $y_{i,1}$,
$\dots$, $y_{i,d}$ and the following constraints:
$x_{s_1,1}\to y_{i,1}$, $R'(x_{s_2,1},y_{i,1},y_{i,2})$,
$R'(x_{s_3,1},y_{i,2},y_{i,3})$, $\dots$, $R'(x_{s_{d},1},y_{i,d-1},y_{i,d})$,
$R'(x_{i,1},y_{i,d},x)$.
\item For every variable $z$, there is a constraint $x\to z$.
\end{itemize}
Observe that the number of variables of type $y_{i,j}$ is exactly $2m$. Setting
every variable to 1 is a satisfying assignment. Set $k:=St+S-1$.
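The construction can be generated mechanically; the following sketch emits the constraint list for a given graph. The encodings \texttt{('imp', u, v)} for $u\to v$ and \texttt{('R', a, b, c)} for $R'(a,b,c)$, as well as the zero-based indexing, are illustrative conventions only:
\begin{verbatim}
# Minimal sketch of the reduction from Minimum Dominating Set.
# Vertices are 0..n-1, `edges` is a list of pairs, t is the target
# dominating-set size. Returns the constraint list and the bound k.
def build_instance(n, edges, t):
    m = len(edges)
    S = 3 * m
    nbrs = {i: [] for i in range(n)}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    cons = []
    for i in range(n):                  # chains x_{i,j} -> x_{i,j'}
        for j in range(S):
            for jp in range(S):
                if j != jp:
                    cons.append(('imp', ('x', i, j), ('x', i, jp)))
    for i in range(n):                  # domination gadget of vertex i
        s, d = nbrs[i], len(nbrs[i])
        if d == 0:
            continue
        cons.append(('imp', ('x', s[0], 0), ('y', i, 0)))
        for a in range(1, d):
            cons.append(('R', ('x', s[a], 0), ('y', i, a - 1), ('y', i, a)))
        cons.append(('R', ('x', i, 0), ('y', i, d - 1), 'x'))
    zs = [('x', i, j) for i in range(n) for j in range(S)]
    zs += [('y', i, a) for i in range(n) for a in range(len(nbrs[i]))]
    for z in zs:                        # x -> z for every variable z
        cons.append(('imp', 'x', z))
    return cons, S * t + S - 1
\end{verbatim}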
Assume that there is a satisfying assignment where the number of 0's is at most $k$
(but positive). Variable $x$ has to be 0, otherwise every other variable is 1. If
$x_{i,1}$ is 0, then $x_{i,j}$ is 0 for every $1 \le j \le S$. Thus $k<S(t+1)$
implies that there are at most $t$ values of $i$ such that $x_{i,1}$ is 0. Let $D$
consist of all vertices $v_i$ such that $x_{i,1}$ is 0. We claim that $D$ is a
dominating set. Suppose that some vertex $v_i$ is not dominated. This means that if
$v_{s_1}$, $\dots$, $v_{s_d}$ are the neighbors of $v_i$, then the variables
$x_{s_1,1}$, $\dots$, $x_{s_d,1}$, $x_{i,1}$ all have the value 1. However, this
means that these variables force variables $y_{i,1}$, $\dots$, $y_{i,d}$ and
variable $x$ to value 1, a contradiction. Thus $D$ is a dominating set of size at
most $t$.
The reverse direction is also easy to see. Assume that $G$ has a dominating set $D$
of size at most $t$. For every $1 \le i \le n$ and $1 \le j \le S$, set variable
$x_{i,j}$ to 1 if and only if $v_i$ is not contained in $D$. Set $x$ to 0. It is easy
to see that this assignment can be extended to the variables $y_{i,j}$ to obtain a
satisfying assignment: indeed, if $v_{s_1}$, $\dots$, $v_{s_d}$ are the neighbors of
$v_i$ and none of them is in $D$ then $v_i\in D$, and we set $y_{i,1}=\ldots
=y_{i,d}=1$. Otherwise, if $j$ is minimal such that $v_{s_j}\in D$, we set
$y_{i,1}=\ldots =y_{i,j-1}=1$ and $y_{i,q}=0$ for $q\ge j$.
This satisfying assignment contains at most $St+2m+1 \le k$ variables with value 0,
as required.
Finally, we reduce $\prob{\{R'\}}$ to $\prob{\{R\}}$ (and so to
$\prob\Gamma$). Take an instance $(\phi,f,k)$ of $\prob{\{R'\}}$, let
$V$ be the variables of $\phi$ and $c_1,\ldots,c_p$ the constraints of
$\phi$. We build an instance $\phi'$ of
$\prob{\{R\}}$ as follows.
\begin{enumerate}
\item For each $1\le i \le \max(p,k+1)$, introduce new variables $x_0^i, x_1^i$.
\item For each constraint $c_i=R'(x,y,z)$ in formula $\phi$, replace
it by the constraint
$R(x,y,z,x_0^i,\ldots,x_0^i,x_1^i,\ldots,x_1^i)$.
\item For each ordered pair $(i,j)$ where $1\le i,j \le \max(p,k+1)$, add the
constraints $R(x_0^i,x_0^i,x_0^j,x_0^j,\ldots,x_0^j,x_1^j,\ldots,x_1^j)$ and
$R(x_1^j,x_1^j,x_1^i,x_0^j,\ldots,x_0^j,x_1^j,\ldots,x_1^j)$.
\end{enumerate}
Finally, extend $f$ so that, for all $i$, we have $x_0^i=0$ and $x_1^i=1$. It is
clear that the obtained mapping $f'$ is a solution to the new instance. Note that,
by the choice of $R'$, the tuple $(1,1,0,0,\ldots,0,1,\ldots,1)$ does not belong to
$R$. Hence, the constraints added in step (3) above ensure that if a variable of the
form $x_0^i$ or $x_1^i$ in $f'$ is flipped then, in order to get a solution to
$\phi'$ different from $f'$, one must flip at least one of $x_0^j$ or $x_1^j$ for
each $1\le j \le \max(p,k+1)$. Consequently, all solutions to $\phi'$ that lie within
distance $k$ from $f'$ must agree with $f'$ on all such variables. In other words,
searching for such a solution, it makes sense to flip only variables from $V$. Thus,
clearly, the instances $(\phi,f,k)$ and $(\phi',f',k)$ are equivalent.
\qed\end{proof}
\section{Introduction}
The principle of self-organization (SO), implying the autonomous increase of order in a system, is omnipresent in technological and biological entities \cite{nicolis_self_org,cross_pattern}.
Typically, SO is driven by gradients in thermodynamic potentials, so that the corresponding processes occur under nonequilibrium conditions. During the organizational process, small fluctuations present in an initially disordered system grow selectively, resulting in long-range coherent behavior. In pattern-forming systems, this coherence is manifested in characteristic features of
the emerged patterns, which are independent of the exact form of the initial fluctuations. Hence, such characteristics can be used to classify the system.
The analysis of self-organizing systems is also motivated by their frequent occurrence in nature. Among others, SO appears in hydrodynamic instabilities \cite{cross_pattern}, concentration patterns during chemical reactions \cite{epstein_nonlin_chem_dyn}, as well as biological evolution and cellular processes \cite{karsenti_self_org_cell_bio}. Self-organizing systems (SOSs) also find various applications such as in the production of supramolecular structures \cite{lehn_supram_chem}, self-organization of organic semiconductors \cite{choi_self_org_sem}, and producing structured surfaces \cite{wu_so_pattern}.
While there is extensive work addressing the characteristics and dynamics of single SOSs, there is comparatively little work on coupled processes. Just as the interaction of the components of a single SOS leads to the macroscopic coherent behavior of the SOS itself, the communication between such systems may result in a nontrivial collective behavior as well. So far, to the best of our knowledge, such an analysis has only been performed for SO chemical reactions. Specifically, both theoretical and experimental work has been done for the coupling of chemical oscillators \cite{epstein_nonlin_chem_dyn} and extensively for Turing instabilities. The latter occur in a two-component reaction-diffusion system used as a model system for pattern-forming reactions \cite{turing_morpho}. In the work of Yang et al. \cite{yang_id_turing_osci}, two separated liquid films were theoretically analyzed, where the same two-component chemical reaction was taking place in each film. The time dependence of the concentrations was described with the equations proposed by Turing. The coupling between the layers was achieved by allowing one of the reactants to diffuse through the membrane separating the liquids. The numerical analysis of this system predicted the emergence of oscillatory patterns, which do not appear in the single-layer configuration. However, the Turing equations are inappropriate for the complete description of a nonequilibrium system in steady state, since the need for constant supply of reactants and removal of products is not considered \cite{cross_pattern}. Moreover, the emergence of the patterns requires the two reactants to have considerably different diffusion coefficients \cite{epstein_nonlin_chem_dyn}. Despite the difficulties, coupled Turing patterns have been experimentally analyzed, and the emergence of superlattice patterns was observed \cite{berenstein_exp_coupled_turing}. Hence, for the Turing instability it is confirmed that the coupled system has properties significantly different from the individual SOS. This motivates the search for the effects of coupling in other physical systems.
To this end, facilitated by the extensive literature on self-organization in thin liquid films, the coupling of a long-wavelength (or deformational) B\'enard-Marangoni instability with a long-wavelength Rayleigh-Taylor instability is analyzed. Research on instabilities in liquid layers is widespread: next to their importance in coating technologies, they are still rich in unresolved scientific questions, while their theoretical description can nevertheless be performed with the Navier-Stokes equations and other well-established transport equations. Furthermore, both instabilities forming the basis of this article have been examined in detail by numerous authors \cite{oron_lubrication, vanhook_lw_exp_theo, burgess_ceiling_drip}. Herein, the coupling between the layers is achieved by the modulation of heat transfer, while mechanical interaction, e.g., by viscous forces, is negligibly small. The evolution of coupled liquid instabilities has already been examined \cite{merkt_liquid-liquid, merkt_liquid_liquid-gas, rednikov_two-layered_benard, kumar_faraday, nepomn_multilayer}. However, the purpose of these works was not to illuminate synergistic effects, as the coupling mechanisms were usually quite complicated and a clear separation into individual subsystems was not readily possible. By contrast, for the system under discussion in this article, the dynamics of the films can be described with qualitatively identical evolution equations, facilitating the examination of the effects induced by the coupling. As the evolution equations remain reasonably transparent, one can clearly distinguish the two self-organizing subsystems and immediately isolate the effects which are caused by the coupling alone.
\section{Governing equations}
\subsection{Double-layer configuration}
This section focuses on the evolution equation of the long-wavelength instability for a double-layer system (Fig.~\ref{im:one-layer}). Two distinct types of instabilities exist in liquid films driven by a variation of surface tension with temperature. In corresponding systems, the film with an initial thickness of $h_0$ is typically heated from below, while the free interface of surface tension $\sigma$ is cooled from the top. For thicker films ($h_0 = 0.1-1\,\text{mm}$), the short-wavelength B\'enard-Marangoni (BM) instability is dominant, for which the characteristic pattern wavelength $\lambda_{char}$ is of the same order as $h_0$, whereas the deformation of the interface is negligible. For the second type, the so-called long-wavelength or deformational BM instability, $\lambda_{char}$ is much larger than $h_0$, and the interface undergoes significant deformation. Experimentally, this mode of instability is more difficult to observe since the commonly observed short-wavelength mode needs to be suppressed by either a small value of $h_0$ \cite{vanhook_exp} or by using very viscous films such as polymer melts \cite{troian_polymer-lw} or metal melts \cite{trice_laser-lw}. Equivalently, depending on $h_0$, there exist two types of buoyancy-driven instabilities in liquid films. Of particular interest herein is the long-wavelength Rayleigh-Taylor (RT) instability \cite{vanhook_lw_exp_theo}. As will be described below and except for the sign of the buoyancy term, the evolution equations for the long-wavelength BM instability and the long-wavelength RT instability are qualitatively the same, if in the latter case the film is exposed to a transverse thermal gradient. This was used in Ref. \cite{burgess_ceiling_drip} to prevent the dripping of liquid films from ceilings.
The derivation of the evolution equation is based on the incompressible Navier-Stokes equations. The detailed analysis is available in multiple papers; therefore, we only summarize the main steps and results here. For further details the reader is referred to Refs. \cite{oron_lubrication, vanhook_lw_exp_theo, mathias_nanofilm}. The momentum equations in the bulk of the layer and at the interface read \cite{wang_moving_contact}
\begin{align}
\frac{d\rho \boldsymbol{\mathrm{v}}}{dt}&=-\nabla \cdot \boldsymbol{\mathrm{P}}_{liq} +\rho \boldsymbol{\mathrm{F}}\label{eq:bulk_mom}\\
\frac{d\rho_s \boldsymbol{\mathrm{v}}}{dt}&= - \left(\nabla_s \cdot \boldsymbol{\hat{\mathrm{n}}}\right)\boldsymbol{\hat{\mathrm{n}}}\sigma+\nabla_s\sigma+\left(\boldsymbol{\mathrm{P}}_{liq}-\boldsymbol{\mathrm{P}}_{gas}\right)\boldsymbol{\hat{\mathrm{n}}},\label{eq:surf_mom}
\end{align}
\noindent where $\rho$ is the density, $\boldsymbol{\mathrm{v}}$ is the liquid velocity, and the effects of gravity and other bulk forces are collected in $\boldsymbol{\mathrm{F}}$. The substantial derivative is expressed by $d(.)/dt=\partial(.)/\partial t+{\boldsymbol v}\cdot{\boldsymbol \nabla}(.)$. The stress tensor is denoted by $\boldsymbol{\mathrm{P}}_{liq}$, which is composed of a (scalar) equilibrium pressure $p_{liq}$ and the viscous stress-tensor, namely
\begin{equation}
\begin{aligned} \boldsymbol{\mathrm{P}}_{liq}&=p_{liq}\boldsymbol{\mathrm{I}}+\boldsymbol{\mathrm{E}}^{\nu} &\\
E^{\nu}_{ij}&=-\mu\left(\frac{\partial v_i}{\partial x_j}+\frac{\partial v_j}{\partial x_i}\right) \text{ }i,j\in[1,2,3],
\end{aligned}
\end{equation}
\noindent where $\mu$ is the dynamic viscosity and Newtonian behavior is assumed. The subscripts $1$, $2$ and $3$ represent the $x$, $y$ and $z$ directions, respectively. In Eq.~\eqref{eq:surf_mom}, $\rho_s$ is the surface density, which is usually negligible and will not be considered in this work. The derivative along the surface is denoted by $\boldsymbol \nabla_s$, while $\boldsymbol{\hat{\mathrm{n}}}$ is the normal vector of the surface pointing towards the air layer. Thus the first term on the right hand side of Eq.~\eqref{eq:surf_mom} is the capillary pressure and the second term is the shear stress induced by an inhomogeneity of the surface tension. Compared to liquids, the viscosity of gaseous materials is usually negligible so that the stress-tensor can be approximated by $\boldsymbol{\mathrm{P}}_{gas}=p_{gas}\boldsymbol{\mathrm{I}}$. This assumption is not necessarily valid for very thin layers of gas, where substantial viscous stresses might appear. Nevertheless, the Knudsen number in such thin gas films is no longer small, and additional physical phenomena emerge, such as a velocity slip and a temperature jump at the gas-liquid interface. These effects are beyond the scope of this paper and will not be further discussed.
At the bottom of the liquid layer the no-slip boundary condition is imposed, i.e. $\boldsymbol{\mathrm{v}}|_{z=0}=0$. At the liquid-gas interface the kinematic condition reads $v_z|_{z=h}={\partial h}/{\partial t}+v_x{\partial h}/{\partial x}+v_y{\partial h}/{\partial y}$.
Using the characteristic quantities from Fig. \ref{im:one-layer} we introduce the following nondimensional parameters: $X=x/\lambda_{char}$, $Y=y/\lambda_{char}$, $Z=z/h_0$, and $H=h/h_0$.
The characteristic flow velocity in the lateral direction $v_c$ is utilized as the scaling factor for the velocities in the form of $V_x=v_x/v_c$, $V_y=v_y/v_c$ and $V_z=(\lambda_{char}/h_0) \cdot v_z/v_c$. The time variable is rescaled according to $\tau=t\cdot v_c/\lambda_{char}$. Finally, $P=p a h_0/\left(\mu v_c\right)$ is the nondimensional form of the pressure, with $a=h_0/\lambda_{char}$. The dimensionless surface tension is denoted by $\Gamma=a\sigma/(\mu v_c)$.
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{fig1_one-layer.eps}}
\caption{{The double-layer configuration comprising a liquid and a gas film of thicknesses $h_0$ and $d_0-h_0$, respectively, confined between two walls maintained at different but constant temperatures ($T_1>T_2$).}\label{im:one-layer}}
\end{figure}
With the nondimensionalized velocities the continuity equation remains qualitatively unchanged
\begin{equation}
\frac{\partial V_x}{\partial X}+\frac{\partial V_y}{\partial Y}+\frac{\partial V_z}{\partial Z}=0.
\end{equation}
The dimensionless bulk momentum equations are given by
\begin{equation}
\begin{aligned}
a \text{Re}\frac{dV_x}{d\tau}=&a^2\left(\frac{\partial^2 V_x}{\partial X^2}+\frac{\partial^2 V_x}{\partial Y^2}\right)+\frac{\partial^2 V_x}{\partial Z^2}-\frac{\partial P}{\partial X}\\
a \text{Re}\frac{dV_y}{d\tau}=&a^2\left(\frac{\partial^2 V_y}{\partial X^2}+\frac{\partial^2 V_y}{\partial Y^2}\right)+\frac{\partial^2 V_y}{\partial Z^2}-\frac{\partial P}{\partial Y}\\
a^3 \text{Re}\frac{dV_z}{d\tau}=&a^4\left(\frac{\partial^2 V_z}{\partial X^2}+\frac{\partial^2 V_z}{\partial Y^2}\right)+a^2\frac{\partial^2 V_z}{\partial Z^2}-\frac{\partial P}{\partial Z}-\frac{\text{Bo}}{\text{Ca}}. \hspace{-20pt}
\end{aligned}
\end{equation}
Furthermore, the dimensionless surface momentum equations read
\begin{equation}
\begin{aligned}
\frac{\partial \Gamma}{\partial X}=&\frac{\partial V_x}{\partial Z}+a^2\frac{\partial H}{\partial X}\frac{\partial V_z}{\partial Z}\\
\frac{\partial \Gamma}{\partial Y}=&\frac{\partial V_y}{\partial Z}+a^2\frac{\partial H}{\partial Y}\frac{\partial V_z}{\partial Z}\\
P=&2a^2\left(\frac{\partial H}{\partial X}\frac{\partial V_x}{\partial Z}+\frac{\partial H}{\partial Y}\frac{\partial V_y}{\partial Z}+\frac{\partial V_z}{\partial Z}\right)-\frac{\nabla_{\|}^2 H}{\text{Ca}}. \hspace{-20pt}
\end{aligned}
\end{equation}
In the long-wavelength limit, $a$ is assumed to be small so that $a^2\ll1$. Furthermore, the characteristic velocity $v_c$ is typically low and the Reynolds number $\text{Re}=\rho h_0 v_c/\mu$ can be assumed to be of order $a$ or smaller. The effect of gravity is captured by the term proportional to $\text{Bo}/\text{Ca}$, where $\text{Bo}=g \rho \lambda_{char}^2/\sigma$ is the Bond number, $\text{Ca}=\mu v_c/(\sigma a^3)$ is the capillary number and $\nabla_{\|}=(\partial/\partial X,\partial/\partial Y)$ is the gradient along the lateral coordinates.
The gradient of the dimensionless surface tension $\nabla_\|\Gamma$ is defined by the variation of the temperature along the interface, i.e. $\nabla_\|\Gamma=$${a}/(\mu v_c)\nabla_\|\sigma=$$-a\sigma_T/(\mu v_c)\nabla_\|T{\big|_{ Z=H}}$. Here, $\sigma$ is assumed to be a linear function of the temperature and $\sigma_T=-{d\sigma}/{dT}$. The nondimensional equivalent of the temperature is defined according to $\Theta=\left({T-T_2}\right)/\left({T_1-T_2}\right)$, where $T_1$ and $T_2$ are the temperatures of the lower and the upper substrate, respectively, and $T_1>T_2$. Thus the surface tension gradient can be expressed by $\nabla_\|\Gamma=-\text{Ma}\nabla_\|\Theta{\big|_{ Z=H}}$. The dimensionless Marangoni number $\text{Ma}={a\sigma_T(T_1-T_2)}/(\mu v_c)$ characterizes the variation of the surface tension with temperature. This definition of the Marangoni number differs from the one used for the short-wavelength B\'enard-Marangoni instability \cite{vanhook_lw_exp_theo}. In the latter case, $\text{Ma}^*={\sigma_T\Delta T^*h_0}/({\mu\alpha})$, where $\alpha$ is the thermal diffusivity and $\Delta T^*=T_1-T(h_0)$ is the temperature drop across the liquid layer.
For the long-wavelength approximation, terms of order $a^2$ and $a^4$ are assumed to be negligible. For $\text{Re} \leq O(a)$, the momentum equations together with the boundary conditions and the continuity equation can be transformed into \cite{mathias_nanofilm}
\vspace{-12pt}
\begin{equation}\label{eq:h_mom_eq}
\begin{aligned}
\frac{\partial H}{\partial \tau}+\nabla_\|\Biggl\{
\frac{H^3}{3\text{Ca}}\Bigl[\nabla_\|\left(\nabla_\|^2H\right)-\text{Bo}\nabla_\|H\Bigr]-&\\
-\text{Ma}\frac{H^2}{2}\nabla_\|\left(\Theta{\big |_{ Z=H}}\right)\Biggr\}&=0
\end{aligned}
\end{equation}
The temperature distribution in the liquid and gas layers can be calculated with the energy equation. In the long-wavelength approximation of the double-layer configuration the P\'eclet numbers $\text{Pe}_{l}$$=$$\text{Re}\cdot \text{Pr}$$=$$h_0 v_c/\alpha$ of the liquid and $\text{Pe}_{g}$$=$$\text{Re}_g\cdot \text{Pr}_g$$=$$h_{0,g} v_{c,g}/\alpha_g$ of the gas layers are assumed to be small, at most of order $a$. The quantities $\alpha$ and $\alpha_g$ stand for the thermal diffusivities of the liquid and gas layers, respectively. In nondimensional form, the energy equation reads
\vspace{-12pt}
\begin{equation}\label{eq:full_energy_eq}
a \text{Pe}_{l/g}\frac{d\Theta}{d\tau}=a^2\left(\frac{\partial^2 \Theta}{\partial X^2}+\frac{\partial^2 \Theta}{\partial Y^2}\right)+\frac{\partial^2 \Theta}{\partial Z^2},
\end{equation}
\noindent where $\text{Pe}_{l/g}$ is either $\text{Pe}_l$ or $\text{Pe}_g$, depending on which phase Eq. \eqref{eq:full_energy_eq} refers to. The first term on the right-hand side and the term on the left-hand side of Eq.~\eqref{eq:full_energy_eq} can be omitted, as they are of order $a^2$. Assuming continuity of the temperature and of the heat flux at the interface, the temperature at $Z=H$ is given by:
\vspace{-12pt}
\begin{equation}
\Theta\big|_{Z=H}=\frac{D_0-H}{D_0+(1/\kappa-1)H},
\end{equation}
\noindent where $\kappa=k_{liq}/k_{gas}$ denotes the ratio of the heat-conductivities of the two layers. The distance between the two substrates is given by $D_0=d_0/h_0$.
Substituting this into Eq.~\eqref{eq:h_mom_eq} leads to a nonlinear partial differential equation for $H$. For the linear analysis one assumes that the deviation from the equilibrium configuration ($H=1$) is small so that $H=1+\Delta H$, where $\Delta H^2\ll1$. With this, the equation for $\Delta H$ takes the following form:
\vspace{-12pt}
\begin{equation}\label{eq:one_layer_linearized}
\begin{aligned}
\frac{\partial \Delta H}{\partial \tau}&+\frac{\frac{1}{\kappa} D_0 \text{Ma}}{2\left(D_0+1/\kappa-1\right)^2}\nabla_\|^2\Delta H+\\
&+\frac{1}{3\text{Ca}}\nabla_\|^2\left(\nabla_\|^2\Delta H-\text{Bo}\Delta H\right)=0.
\end{aligned}
\end{equation}
\subsection{Triple-layer configuration}
In the system shown in Fig.~\ref{im:one-layer}, thermocapillarity destabilizes the film, while gravity stabilizes it. If the film is placed on the upper substrate, gravity destabilizes it while thermocapillarity stabilizes it. This circumstance was utilized in the thermocapillarity-driven prevention of dripping from ceilings \cite{burgess_ceiling_drip}. Note that this system can be described by simply inverting the sign in front of the gravity term in Eq.~\eqref{eq:h_mom_eq}. Inspired by the work of Srivastava et al. \cite{sharm_electric_int}, in the following, two films facing each other on separate substrates with a thin gas layer in between are considered (Fig.~\ref{im:double-layer}). As for the two-layer system, the lower substrate is hotter than the upper one so that the lower film is subjected to the gravity-stabilized long-wavelength BM instability, whereas the upper film is subjected to the thermocapillarity-stabilized long-wavelength RT instability. Experimental verification of the core principle of coupled self-organization, based on a system similar to the one presented herein, is under way. In that context, it has to be noted that, for the coupled system, the in situ measurement of the individual film height distributions without disturbing the coupling is experimentally very challenging. The focus of the subsequent analysis is on the consequences of thermal coupling between the two films, which cannot be observed for the isolated systems.
Due to the negligibly small viscosity of the gas, there is no direct mechanical connection between the layers. Therefore, the momentum equations of the two systems remain independent and the coupling will only appear through the energy equation. As will be shown, the layers are thermally coupled in a nontrivial way, as the deformation of one liquid layer changes the local surface temperature of the other one.
Hereafter it is assumed that the long-wavelength \nolinebreak approximation is valid for both liquids. Consequently $\left(h_{0,1}/\lambda_{char,1}\right)^2$$\ll$$1$ and $(h_{0,2}/\lambda_{char,2})^2$$\ll$$1$, where $h_{0,i}$ and $\lambda_{char,i}$ represent the initial thicknesses and the characteristic deformation wavelengths of the two liquid layers, respectively. Hence, the evolution equations for the dimensionless liquid thicknesses $H_1=h_1/h_{0,1}$ and $H_2=h_2/h_{0,2}$ can be derived in an equivalent fashion as for Eq.~\eqref{eq:h_mom_eq}. Moreover, we suppose that the thicknesses of the two liquid layers are of the same order of magnitude, and that this assumption remains valid for the characteristic wavelengths too. Therefore, the ratio of the initial thicknesses $\chi=h_{0,2}/h_{0,1}=O(1)$ and the same scaling lengths can be used in the lateral direction. For the calculation of the dimensionless groups, $h_{0,1}$ is used for nondimensionalization. Accordingly, the film evolution equations of both liquid layers are given by
\vspace{-12pt}
\begin{align}
&\begin{aligned}\frac{\partial H_1}{\partial \tau}+\nabla_\|\Biggl\{\frac{H_1^3}{3\text{Ca}_1}\Bigl[\nabla_\|\left(\nabla_\|^2H_1\right)-\text{Bo}_1\nabla_\|H_1\Bigr]-&\\
-{\text{Ma}_1}\frac{H_1^2}{2}\nabla_\|\left(\Theta{\big |_{ Z=H_1}}\right)\Biggr\}&=0
\end{aligned}\label{eq:h_mom_eq_double_1}\\
&\text{and}\nonumber \\
&\begin{aligned}
\frac{\partial H_2}{\partial \tau}+\nabla_\|\Biggl\{
\chi^3\frac{H_2^3}{3\text{Ca}_2}\Big[\nabla_\|\left(\nabla_\|^2H_2\right)
+\text{Bo}_2\nabla_\|H_2\Big]-&\\
-\chi{\text{Ma}_2}\frac{H_2^2}{2}\nabla_\|\left(\Theta{\big |_{ Z=D_0-H_2}}\right)\Biggr\}&=0.
\end{aligned}\label{eq:h_mom_eq_double_2}
\end{align}
\noindent The effect of the different liquid thicknesses is captured by $\chi$ alone. Using the one-dimensional heat diffusion equation derived in the previous section, the continuity of the temperature and of the heat flux density at the interfaces leads to the following expressions for the interfacial temperatures
\begin{equation}\label{eq:dimles_temp_2}
\begin{aligned}
\Theta\big|_{Z=H_1}& =\frac{-H_1}{\kappa_1(D_0-H_1-\chi H_2)+H_1+\frac{\kappa_1}{\kappa_2}\chi H_2}+1,\\
\text{and}\\
\Theta\big|_{Z=D_0-H_2}& =\frac{\chi H_2}{\kappa_2(D_0-H_1-\chi H_2)+\chi H_2+\frac{\kappa_2}{\kappa_1} H_1},
\end{aligned}
\end{equation}
\noindent where $\kappa_i=k_{liq,i}/k_{gas}$. The surface gradients of the interfacial temperatures are
\vspace{-12pt}
\begin{equation}\label{eq:epsilon_def}
\begin{aligned}&\nabla_\|\Theta|_{H_1}=-\epsilon_1 \nabla_\|H_1-\epsilon_2\nabla_\|H_2 \text{, where} \\
&\displaystyle \epsilon_1=\frac{\kappa_1\Big[D_0-\chi H_2 \left(1-\frac{1}{\kappa_2}\right)\Big]}{\Big[\kappa_1\left(D_0-H_1-\chi H_2\right)+H_1+\frac{\kappa_1}{\kappa_2}\chi H_2\Big]^2},\\
&\displaystyle \epsilon_2=\frac{\kappa_1\chi H_1\left(1-\frac{1}{\kappa_2}\right)}{\Big[\kappa_1\left(D_0-H_1-\chi H_2\right)+H_1+\frac{\kappa_1}{\kappa_2}\chi H_2\Big]^2}
\end{aligned}
\end{equation}
\vspace{-12pt}
\begin{equation}\label{eq:phi_def}
\begin{aligned}
&\text{and } \nabla_\|\Theta|_{D_0-H_2}=\phi_1 \nabla_\|H_1+\phi_2\nabla_\|H_2\text{, where} \\
&\displaystyle \phi_1=\frac{\kappa_2 \chi H_2\left(1-\frac{1}{\kappa_1}\right)}{\Big[\kappa_2\left(D_0-H_1-\chi H_2\right)+\chi H_2+\frac{\kappa_2}{\kappa_1}H_1\Big]^2},\\
&\displaystyle \phi_2=\frac{\kappa_2 \chi \Big[D_0-H_1\left(1-\frac{1}{\kappa_1}\right)\Big]}{\Big[\kappa_2\left(D_0-H_1-\chi H_2\right)+\chi H_2+\frac{\kappa_2}{\kappa_1}H_1\Big]^2}.
\end{aligned}
\end{equation}
\noindent Note that $\epsilon_1,\epsilon_2,\phi_1,\phi_2>0$ and $\epsilon_1\phi_2>\epsilon_2\phi_1$.
For the linear analysis we introduce the notation $H_1=1+\Delta H_1$ and $H_2=1+\Delta H_2$ and assume again that $\Delta H_1^2, \Delta H_2^2 \ll 1$. Neglecting the second- and higher-order terms in $\Delta H_1$ and $\Delta H_2$ leads to the linearized evolution equations, reading
\begin{equation}\label{eq:non_dim_grav}
\begin{aligned}
\frac{\partial \Delta H_1}{\partial \tau}+\left( \frac{\epsilon_1 \text{Ma}_1}{2}-\frac{\text{Bo}_1}{3\text{Ca}_1}\right)\nabla_\|^2 \Delta H_1+&\\
+\frac{\epsilon_2 \text{Ma}_1}{2}\nabla_\|^2 \Delta H_2+\frac{1}{3\text{Ca}_1}\nabla_\|^4 \Delta H_1&=0, \\
\frac{\partial \Delta H_2}{\partial \tau}+\left( -\frac{\phi_2\chi \text{Ma}_2}{2}+\frac{\chi^3 \text{Bo}_2}{3\text{Ca}_2}\right)\nabla_\|^2 {\Delta H_2}-&\\
- \frac{\phi_1 \chi \text{Ma}_2}{2}\nabla_\|^2 {\Delta H_1}+\frac{\chi^3}{3\text{Ca}_2}\nabla_\|^4 {\Delta H_2}&=0.
\end{aligned}
\end{equation}
\noindent Here, $\epsilon_1$, $\epsilon_2$, $\phi_1$ and $\phi_2$ are evaluated at $H_1 = 1$ and $H_2 = 1$, respectively.
\begin{figure}
\includegraphics[width=\linewidth]{fig2_double-layer.eps}
\caption{Schematic of the triple-layer configuration with initial liquid thicknesses of $h_{0,1}$, $h_{0,2}$ and plate separation $d_0$.}\label{im:double-layer}
\end{figure}
\section{Linear and numerical analysis of the evolution equations}
In this section, the properties of Eq.~\eqref{eq:non_dim_grav} are analyzed. This will provide useful information for the qualitative and quantitative description of the emerging patterns as well as the stability behavior of the SOS \cite{cross_pattern}.
Equation \eqref{eq:non_dim_grav} indicates that the linear evolution of the two liquid layers is qualitatively similar. The main difference lies in the interchanged roles of the thermocapillary and gravitational forces. While the $\text{Ma}_1$-term in the first equation destabilizes the lower layer by giving positive feedback to a deformation, the $\text{Ma}_2$-term in the second equation stabilizes the upper layer by damping deformations. The opposite is true for the gravitational body force, i.e., it stabilizes the lower layer but destabilizes the upper layer.
For a better understanding of some of the results presented later in this work, it is helpful to consider a simplified version of the triple-layer configuration, where the upper liquid layer is assumed to be rigid (${\Delta H_2}=0$). In this case the second equation in Eqs.~\eqref{eq:non_dim_grav} turns into an identity, while the first one simplifies to
\vspace{-12pt}
\begin{equation}\label{eq:upper_rigid}
\displaystyle \frac{\partial H_1}{\partial \tau}+\frac{\frac{1}{\kappa_1} D_0 \text{Ma}_{eff}}{2\left(D_0+\frac{1}{\kappa_1}-1\right)^2}\nabla_{\|}^2H_1+\frac{1}{3\text{Ca}_1}\nabla_{\|}^4H_1=0,
\end{equation}
with the effective Marangoni number
\vspace{-12pt}
\begingroup
\fontsize{10pt}{8pt}\selectfont
\begin{equation}\label{eq:maeff}
\text{Ma}_{eff}=\left({\frac{\epsilon_1\text{Ma}_1}{2}-\frac{\text{Bo}_1}{3\text{Ca}_1}}\right)\left[{\frac{\frac{1}{\kappa_1} D_0}{2\left(D_0+(\frac{1}{\kappa_1}-1)\right)^2}}\right]^{-1}.
\end{equation}
\endgroup
Equation \eqref{eq:upper_rigid} has the form of a linearized evolution equation of a system with only one liquid layer in the absence of gravity, where $Ma_{eff}$ serves as the Marangoni number. Thus, it is an equivalent of Eq.~\eqref{eq:one_layer_linearized} and the results already available for this type of instability \cite{oron_lubrication} are directly applicable. In particular, the system will always be linearly unstable if $\text{Ma}_{eff}>0$, which is equivalent to $\epsilon_1\text{Ma}_1/2-{\text{Bo}_1}/(3\text{Ca}_1)>0$. In the limit of $\chi =0$ this criterion is identical to the results obtained by VanHook et al. \cite{vanhook_lw_exp_theo}. The same train of thought can be applied to a system where the lower layer is rigid. In this case the upper one is unstable if $-\phi_2 \text{Ma}_2/2+\chi^2\text{Bo}_2/(3 \text{Ca}_2)>0$.
In general, when both layers are mobile, Eq.~\eqref{eq:non_dim_grav} can be solved using the Fourier transforms of the deformation variables
\vspace{-12pt}
\begin{equation}\label{eq:fourier_form}
\begin{aligned} {\Delta H_n}&(\tau,X,Y)=\\
=&\frac{1}{2\pi}\int_{-\infty}^{\infty}{\widetilde{\Delta H}_n(\tau,q_{x},q_{y})e^{i(q_{x}X+q_{y}Y)}dq_{x}dq_{y}},
\end{aligned}
\end{equation}
\noindent where $n\in\{1,2\}$ indicates the two liquid layers. The dimensionless wavenumbers (scaled by $a/h_0$) in the $x$ and $y$ direction are denoted by $q_x$ and $q_y$. Substituting back into the linearized evolution equations gives an equation for every Fourier component. As Eq.~\eqref{eq:non_dim_grav} is linear, the different Fourier modes will be independent of each other. Introducing $q=\left(q_x^2+q_y^2\right)^{1/2}$, the transformed equations read
\vspace{-12pt}
\begin{equation}\label{eq:non_dim_matrix_grav}
\frac{\partial}{\partial\tau}\begin{pmatrix} {\widetilde{\Delta H}_1}\\ {\widetilde{\Delta H}_2}\end{pmatrix}=\boldsymbol{\mathrm{M}}
\begin{pmatrix} {\widetilde{\Delta H}_1}\\ {\widetilde{\Delta H}_2}\end{pmatrix},
\end{equation}
\vspace{-12pt}
\begin{align*}
&M_{1,1}=\left( \frac{\epsilon_1 \text{Ma}_1}{2}-\frac{\text{Bo}_1}{3\text{Ca}_1}\right)q^2-\frac{1}{3\text{Ca}_1}q^4,\\
&M_{1,2}=\frac{\epsilon_2\text{Ma}_1}{2}q^2, \quad \quad M_{2,1}=
-\frac{\phi_1\chi \text{Ma}_2}{2}q^2, \\
&M_{2,2}=\left( -\frac{\phi_2\chi \text{Ma}_2}{2}+\frac{\chi^3\text{Bo}_2}{3\text{Ca}_2}\right)q^2-\frac{\chi^3}{3\text{Ca}_2}q^4.
\end{align*}
\noindent The general solution of this system of linear differential equations is
\begin{equation}\label{eq:general_fourier_sol}
\begin{aligned}
\displaystyle &\begin{pmatrix}{\widetilde{\Delta H}_1}(\tau,q_x,q_y)\\ {\widetilde{\Delta H}_2}(\tau,q_x,q_y)\end{pmatrix}=\\
&=\begin{pmatrix}{\widetilde{\Delta H}_{1+}}(q)\\ {\widetilde{\Delta H}_{2+}}(q)\end{pmatrix}e^{\omega_+\tau}+\begin{pmatrix}{\widetilde{\Delta H}_{1-}}(q)\\ {\widetilde{\Delta H}_{2-}}(q)\end{pmatrix}e^{\omega_-\tau}.
\end{aligned}
\end{equation}
In Eq.~\eqref{eq:general_fourier_sol}, $\omega_+(q)$ and $\omega_-(q)$ are the eigenvalues of the matrix $\boldsymbol{\mathrm{M}}$ in Eq.~\eqref{eq:non_dim_matrix_grav}, while $\left(\widetilde{\Delta H}_{1+}(q), \widetilde{\Delta H}_{2+}(q)\right)^T$ and $\left(\widetilde{\Delta H}_{1-}(q), {\widetilde{\Delta H}_{2-}}(q)\right)^T$ are the respective eigenvectors.
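For concreteness, the spectrum of $\boldsymbol{\mathrm{M}}$ can be evaluated numerically. The sketch below computes $\epsilon_i$ and $\phi_i$ at $H_1=H_2=1$ and the eigenvalues of $\boldsymbol{\mathrm{M}}(q)$; the parameter values are illustrative placeholders rather than those of the simulations reported below:
\begin{verbatim}
# Minimal sketch: coupling coefficients at H1 = H2 = 1 and the
# eigenvalues of the dispersion matrix M(q). Parameter values are
# illustrative placeholders only.
import numpy as np

D0, chi = 3.0, 1.0            # plate separation, thickness ratio
k1 = k2 = 5.0                 # kappa_1, kappa_2
Ma1 = Ma2 = 5.0               # Marangoni numbers
Ca1 = Ca2 = 100.0             # capillary numbers
Bo1 = Bo2 = 50.0              # Bond numbers

den1 = (k1 * (D0 - 1 - chi) + 1 + (k1 / k2) * chi) ** 2
den2 = (k2 * (D0 - 1 - chi) + chi + (k2 / k1)) ** 2
eps1 = k1 * (D0 - chi * (1 - 1 / k2)) / den1
eps2 = k1 * chi * (1 - 1 / k2) / den1
phi1 = k2 * chi * (1 - 1 / k1) / den2
phi2 = k2 * chi * (D0 - (1 - 1 / k1)) / den2

def M(q):
    return np.array([
        [(eps1 * Ma1 / 2 - Bo1 / (3 * Ca1)) * q**2 - q**4 / (3 * Ca1),
         (eps2 * Ma1 / 2) * q**2],
        [-(phi1 * chi * Ma2 / 2) * q**2,
         (-phi2 * chi * Ma2 / 2 + chi**3 * Bo2 / (3 * Ca2)) * q**2
         - chi**3 * q**4 / (3 * Ca2)]])

for q in (0.1, 0.3, 1.0):
    print(q, np.linalg.eigvals(M(q)))  # positive real part: unstable
\end{verbatim}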
\subsection{Linearized equations for identical layers}
The exact formulas for the eigenvectors and eigenvalues are involved and are functions of many independent parameters. One arrives at considerably simpler formulas if the layers are identical ($\epsilon_1=\phi_2$, $\epsilon_2=\phi_1$, $\text{Ca}_1=\text{Ca}_2\equiv \text{Ca}$, $\text{Ma}_1=\text{Ma}_2\equiv \text{Ma}$, $\text{Bo}_1=\text{Bo}_2\equiv \text{Bo}$ and $\chi=1$). For simplification, the dimensionless time is rescaled according to $\hat{\tau}=\tau/(3\text{Ca})$. The resulting eigenvalues are
\vspace{-12pt}
\begin{equation}\label{eq:ident_eigval}
\displaystyle\omega_{\pm}=-q^4 \pm q^2 \cdot 3\text{Ca}\sqrt{\left( \frac{\epsilon_1 \text{Ma}}{2}-\frac{\text{Bo}}{3\text{Ca}}\right)^2-\left(\frac{\epsilon_2 \text{Ma}}{2}\right)^2}
\end{equation}
\noindent For this type of system the real part of the second eigenvalue $\text{Re}\left(\omega_-\right)$ is always negative. Therefore the corresponding modes are damped and will not have any effect on the long-term evolution of the system. To analyze the other eigenvalue from Eq.~\eqref{eq:ident_eigval} we define
\begin{equation}
s=\left(\frac{3}{2}\epsilon_1\text{Ca} \text{Ma}-\text{Bo}\right)^2-\left(\frac{3}{2}\epsilon_2\text{Ca} \text{Ma}\right)^2.
\end{equation}
\noindent Further examination of Eq.~\eqref{eq:ident_eigval} indicates that for $s>0$ the system is unstable to small perturbations, as there exists a range of wavenumbers $[q_{min},q_{max}]$ where $\text{Re}(\omega_+)\geq0$. Moreover, the deformations of the film are not oscillatory since $\omega_+$ is a real number in this region. By setting $\omega_+$ in Eq.~\eqref{eq:ident_eigval} to zero one obtains the marginally stable wavenumbers, $q_{max}=\sqrt[4]{s}$ and $q_{min}=0$. The marginal stability of the $q=0$ mode can also be understood intuitively: a growth rate different from zero would imply a uniform thickening or thinning of the film, changing the volume of the layer, which violates mass conservation \cite{cross_pattern}. In Fig.~\ref{im:growthrate} the growth rate $\text{Re}\left(\omega_+(q)\right)$ as a function of the wavenumber is shown. The aforementioned properties of the pattern formation identify it as a type-II-s instability \cite{cross_pattern}.
In the framework of the linear analysis, the characteristic wavenumber of the emerging pattern is predicted by finding the quantity $q_{char}$ that maximizes the real part of the growth rate. For the identical-layer setup one finds, similarly to the double-layer configuration, $\displaystyle q_{char}=q_{max}/\sqrt{2}$. This defines a larger characteristic wavelength than for the uncoupled system. The corresponding growth rate is $\omega_{char}=s/4$.
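These relations are easily verified numerically; the short sketch below maximizes the growth rate $\omega_+(q)=-q^4+q^2\sqrt{s}$ for an arbitrary illustrative value of $s$:
\begin{verbatim}
# Minimal sketch: check q_char = s**0.25/sqrt(2), omega_char = s/4 and
# q_max = s**0.25 by sampling the growth rate. s is illustrative.
import numpy as np

s = 2.0
q = np.linspace(1e-4, 1.5 * s**0.25, 100001)
omega = -q**4 + q**2 * np.sqrt(s)

i = np.argmax(omega)
print(q[i], s**0.25 / np.sqrt(2))     # numerical vs analytic q_char
print(omega[i], s / 4)                # numerical vs analytic omega_char
print(q[omega > 0][-1], s**0.25)      # numerical vs analytic q_max
\end{verbatim}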
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{fig3_growthrate.eps}}
\caption{{The growth rate function of the linear stability analysis as a function of the dimensionless wavenumber.}\label{im:growthrate}}
\end{figure}
For validation of the results, a series of direct numerical simulations of Eq.~\eqref{eq:h_mom_eq_double_1} and Eq.~\eqref{eq:h_mom_eq_double_2} was performed and compared with the findings of the linear analysis. Different values of $s$ were examined. For the liquid films, the properties of a 10\,cSt silicone oil were used, with the films separated by a layer of air. The material properties are summarized in Table \ref{tab:mat_prop}. The thermal conductivity and thermal diffusivity of the air layer at $50\,^{\circ}\mathrm{C}$ and at atmospheric pressure were approximated by $k_g=27.88\cdot 10^{-3}\text{W}\text{m}^{-1}\text{K}^{-1}$ and $\alpha_{g}=2.57\cdot10^{-5}\text{m}^2/\text{s}$ \cite{vdi}. For this analysis the thicknesses of both the air and the liquid layers were set to $h_{0,1}=100\,\mu \text{m}$. Systems with slightly different film heights were also analyzed in a second series of simulations; this will be addressed later. The value of $s$ was controlled by the temperature difference between the two substrates. By varying the latter between $7.5\,\text{K}$ and $20.5\,\text{K}$, $s$ varies between $0.1$ and $3.5$. Within this region, the theoretically expected characteristic wavelengths lie in the range $[64\cdot h_{0,1}, 158\cdot h_{0,1}]$. It follows that $a^2\ll1$. Furthermore, in the third column of Table {\ref{tab:dim_quant}}, the typical values of the dimensionless groups characterizing the liquid layers are summarized. In order to predict their values, $q_{char}$ was calculated based on the linear stability analysis and was transformed back to its dimensional form according to the lateral scaling length used in the simulations. The calculation of $\text{Re}$, $\text{Ma}$, $\text{Ca}$ and $\text{Pe}$ requires the characteristic velocity $v_c$. Thus they could only be computed in the unstable ($s>0$) region. The characteristic velocity was approximated with the results of the linear analysis, and the method used to estimate it is discussed in the appendix. As apparent from the table, the assumptions underlying the lubrication approximation are valid for the simulated systems. Furthermore, the conventional Marangoni number describing the onset of the short-wavelength B\'enard-Marangoni instability is below the critical value of 80 \cite{pearson_sw}. Hence, this instability is not present in our system and all unstable modes have a long-wave character.
For the numerical simulations the finite-element method was applied with quadratic Lagrangian shape functions. They were conducted with Comsol 4.3 \cite{comsol} using the Matlab Livelink environment. The lateral scaling length was set to $10h_{0,1}$. As the physical behavior of the system is independent of the scaling parameters, this does not have any effect on the evolution of the system. The simulated domain was a square with a side length of $15\left(2 \pi\right)/q_{char}$. As $q_{char}$ is dependent on $s$, the simulation area was not the same for the different temperatures. Each square was divided into $120\times120$ cells. Hence, according to the Nyquist sampling theorem, the maximal numerically resolvable wavenumber would be $4q_{char}$ if the domain were discretized with a finite-difference scheme; finite-element discretization with high-order interpolation functions exceeds this resolution. In any case, this resolution is more than sufficient because, according to the linear analysis, the largest unstable wavenumber is expected to be $q_{max}=\sqrt{2}q_{char}$ and higher wavenumbers will probably be damped. Nevertheless, $q_{max}$ is only an approximation, and the fine meshing should allow for possible deviations to appear in the simulations. Periodic boundary conditions were applied at the edges of the simulation domain so that, as a consequence of the finite size of the rectangle, the numerically available wavenumbers in the $x$- and $y$-direction are multiples of $q_{char}/15$. All simulations started from a near-equilibrium state with $H_1=1+\xi_1(x,y)$ and $H_2=1+\xi_2(x,y)$, where $\xi_1$, $\xi_2$ are white-noise perturbations with amplitudes of $5\cdot10^{-2}$. The simulations covered a time span of $\Delta\hat{\tau}=2.5/\omega_{char}$. The maximal time step was set to $0.005/\omega_{char}$. At the final time step the relative deformation of the layers was found to be between $15\%$ and $30\%$.
All calculations were performed on a Dell Precision T7500 workstation running CentOS 5.8. Grid independence studies were conducted by varying the mesh size between $30\times 30$ and $180\times 180$. It was found that beyond a grid resolution of $105 \times 105$ the numerical results became practically indistinguishable from each other.
\begin{table}
\centering
\begin{ruledtabular}
\begin{tabular}{c M c}
Parameter & Notation & Value\\
\thickhline
\rule{0pt}{2.5ex} Density & $\rho$ &$940 \frac{\text{kg}}{\text{m}^3}$\\[1.25ex]
Surface tension & $\sigma$ &$19 \cdot10^{-3} \frac{\text{N}}{\text{m}}$\\[1.25ex]
Surface tension coefficient & $\sigma_T$ &$6.9\cdot 10^{-5}\frac{\text{N}}{\text{mK}}$\\[1.25ex]
Kinematic viscosity & $\nu$ &$1.02 \cdot 10^{-5} \frac{\text{m}^2}{\text{s}}$\\[1.25ex]
Thermal conductivity & $k$ &$13.3 \cdot 10^{-2} \frac{\text{W}}{\text{mK}}$\\[1.25ex]
Thermal diffusivity & $\alpha$ & $10^{-7}\frac{\text{m}^2}{\text{s}}$
\end{tabular}
\end{ruledtabular}
\caption{ Material properties of the considered silicone oil at $50 \,^{\circ}\mathrm{C}$ \cite{vanhook_lw_exp_theo}.}
\label{tab:mat_prop}
\end{table}
\begin{table}
\centering
\begin{ruledtabular}
\begin{tabular}{L M M M}
& Formula & First series & Second series\\
\thickhline
\rule{0pt}{4ex}Reynolds & $\displaystyle\frac{\rho h_0 v_c}{\mu}$ & $6.21\cdot10^{-4}$ - $5.33\cdot10^{-3}$ & $1.53\cdot10^{-4}$ - $5.17\cdot10^{-3}$ \\
\rule{0pt}{3.5ex}Marangoni & $\displaystyle\frac{a\sigma_T(T_1-T_2)}{\mu v_c}$ & $4.17$ - $5.45$ & $4.45$ - $10.1$\\
\rule{0pt}{3ex}$Ma^*$ & $\displaystyle\frac{\sigma_T \Delta T^* h_{0,1}}{\mu \alpha}$ & $19.2$ - $52.0$ & $9.9$ - $59.6$ \\
\rule{0pt}{3ex}Capillary & $\displaystyle\frac{\mu v_c}{\sigma a^3}$ & $75.2$ - $126.0$ & $36.4$ - $110.8$\\
\rule{0pt}{3.7ex}Galileo & $\displaystyle\frac{\rho g h_{0,1}^3}{\mu \alpha}$ & $9.62$ & $16.62$\\
\rule{0pt}{3.7ex}Bond & $\displaystyle\frac{g \rho \lambda_{char}^2}{\sigma}$ & $20.49$ - $121.2$ & $36.3$ - $303.4$\\
\rule{0pt}{3.3ex}Prandtl & $\displaystyle\frac{\mu}{\rho \alpha}$ & 102 & 102\\
\rule{0pt}{3.3ex}P\'eclet (oil) & $\displaystyle\frac{h_0 v_c}{\alpha}$ & $6.33\cdot10^{-2}$ - $5.43\cdot10^{-1}$ & $1.56\cdot10^{-2}$ - $5.27\cdot10^{-1}$\\
\rule{0pt}{3.3ex}P\'eclet (air) & $\displaystyle\frac{h_0 v_c}{\alpha_{g}}$ & $2.47\cdot10^{-4}$ - $2.12\cdot10^{-3}$ & $6.08\cdot10^{-5}$ - $2.05\cdot10^{-3}$
\end{tabular}
\end{ruledtabular}
\caption{ The dimensionless groups characterizing the governing equations and their values for the two series of numerical simulations. In the first series a parametric sweep in $s$ was performed, while in the second one the parametric sweep was conducted for $r$.}
\label{tab:dim_quant}
\end{table}
A comparison between the theoretical predictions and the numerical results for the neutral stability curve $q_{max}(s)$ is given in Fig.~\ref{im:neutral_stab}. The numerical curves were calculated at $\hat{\tau}_1=0.5/\omega_{char}$ and $\hat{\tau}_2=2.5/\omega_{char}$. The data were obtained by analyzing the time evolution of the two-dimensional Fourier transform of the film thickness. A Fourier component was considered to be unstable if its amplitude increased for ten successive time steps. The marginally stable wavenumber was approximated by calculating the mean value of two averages. The first one is the directional average of the smallest wavenumbers for which the Fourier component is stable, whereas the second one is the directional average of the largest wavenumbers for which the Fourier component is unstable. At $\hat{\tau}_1=0.5/{\omega_{char}}$ the theoretical expectations and numerical results agree well. The linear method predicts slightly smaller wavenumbers than the numerical results; the largest relative difference between the two is $3.5\%$. By contrast, at $\hat{\tau}_2=2.5/{\omega_{char}}$, the difference between the two solution approaches becomes significant. According to the numerical results, $q_{max}$ increases significantly with increasing time. This tendency is present in the whole $[0.5/{\omega_{char}}, 2.5/{\omega_{char}}]$ time interval. This is not expected to be a consequence of numerical inaccuracy, as the spatial resolution is considerably higher than the increased values of $q_{max}$. This behavior suggests that the system becomes more unstable during its time evolution. Further simulations (not
shown for brevity) indicate that the monotonic expansion of the unstable wavenumber region is also present
in double-layer configurations and is thus not a consequence of the coupling. The work of Boos and Thess \cite{boos_cascade} supports this argument, as their numerical analysis of drained regions in a double-layer configuration indicated the destabilization of higher wavenumbers.
\begin{figure}
\includegraphics[width=\linewidth]{fig4_neutral_stab.eps}
\caption{The theoretical neutral stability curve and the corresponding numerical results at $\hat{\tau} =0.5/\omega_{char}$ and $\hat{\tau} =2.5/\omega_{char}$.}\label{im:neutral_stab}
\centerline{\includegraphics[width=\linewidth]{fig5_max_wavenum.eps}}
\caption{{The theoretical and numerical results ($\hat{\tau} =2.5/\omega_{char}$) for the characteristic wavenumber of the film deformations.}\label{im:char_wave}}
\end{figure}
At $\hat{\tau}=2.5/\omega_{char}$ the Fourier transforms of $H_1(X,Y)$ and $H_2(X,Y)$ were also used to estimate the characteristic wavenumber of the pattern. This can be approximated by the wavenumber corresponding to the Fourier component with the largest amplitude, which is the one with the largest growth rate within the time range considered. This value was calculated for the patterns of the lower and upper liquid film. The results of the two layers coincided for every simulation except at $s=0.8$, where there was a $1\%$ relative difference between them. As shown in Fig.~\ref{im:char_wave}, the numerical results fit the theoretical expectations well.
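The mode-extraction step can be sketched as follows; the synthetic test pattern stands in for the simulated film thickness and is illustrative only:
\begin{verbatim}
# Minimal sketch: estimate the characteristic wavenumber of a film
# pattern H(X, Y) from its dominant Fourier component. The test
# pattern with q = 0.5 is synthetic and illustrative only.
import numpy as np

N, L = 120, 150.0
x = np.linspace(0.0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x)
H = 1.0 + 0.1 * np.cos(0.5 * X) + 0.02 * np.random.rand(N, N)

A = np.fft.fft2(H - H.mean())          # drop the q = 0 mode first
qs = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
QX, QY = np.meshgrid(qs, qs)
Q = np.hypot(QX, QY)

i = np.unravel_index(np.argmax(np.abs(A)), A.shape)
print("estimated q_char =", Q[i])      # expect a value close to 0.5
\end{verbatim}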
From Eqs.~\eqref{eq:epsilon_def}~and~\eqref{eq:phi_def} one can deduce that for identical layers $\epsilon_1 > \epsilon_2$. However, the presence of gravity ($\text{Bo}>0$) allows the parameter $s$ to assume negative values. In this case the system is stable, as $\text{Re}(\omega_+)=\text{Re}(\omega_-)=-q^4<0$ for every nonzero wavenumber. In the stable regime it is assumed that momentarily appearing film deformations are long-wave in character so that the evolution equations presented before remain valid. For these deformations, $\text{Im}\left(\omega_{\pm}\right)=\pm q^2 \sqrt{-s}$ will give rise to waves appearing on the surface with a dimensionless phase velocity of $V_p=q\sqrt{-s}$ and a group velocity of $V_g=2q\sqrt{-s}$.
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{fig6_modes.eps}}
\caption{{The triple-layer configurations with a weak effect of gravity (antiphase deformation, $A$) and a strong gravity effect (in-phase deformation, $B$).}\label{im:double_strong_grav}}
\centerline{\includegraphics[width=\linewidth]{fig7_summary.eps}}
\caption{{The ratio of the amplitudes of the two liquid films (left side, solid line) and the phase shift between the patterns (right side, dotted line).}\label{im:summary}}
\end{figure}
These waves only appear because of the coupling of the two layers. From Eq.~\eqref{eq:non_dim_matrix_grav} one can deduce that if the coupling between the layers is set to zero ($\epsilon_2=\phi_1=0$), then $M_{1,2}=M_{2,1}=0$. In consequence, the eigenvalues of $\boldsymbol{\mathrm{M}}$ are real numbers in the whole parameter range, making it impossible for oscillations to occur in a single liquid film.
For identical layers the eigenvectors of the linearized system described by Eq.~\eqref{eq:non_dim_matrix_grav} are
\vspace{-12pt}
\begin{equation}\label{eq:ident_eigvec}
\begin{aligned}
&\begin{pmatrix}\widetilde{\Delta H}_{1\pm}(q)\\ \widetilde{\Delta H}_{2\pm}(q)\end{pmatrix}= \\
& =\begin{pmatrix}{\displaystyle -\frac{\left(\frac{\epsilon_1 \text{Ma}}{2}-\frac{\text{Bo}}{3\text{Ca}}\right)\pm\sqrt{\left(\frac{\epsilon_1 \text{Ma}}{2}-\frac{\text{Bo}}{3\text{Ca}}\right)^2-\left(\frac{\epsilon_2 \text{Ma}}{2}\right)^2}}{\frac{\epsilon_2 \text{Ma}}{2}}}\\
\\ 1\end{pmatrix},
\end{aligned}
\end{equation}
\noindent that is, they are in fact independent of $q$. As mentioned earlier, $\omega_{-}<0$ for $s>0$. Thus, the effect of the corresponding eigenvector will disappear after a sufficiently long time. Furthermore, since the eigenvectors are the same for every wavenumber, after a sufficiently long time, the ratio of the deformation of the two layers should be $\widetilde{\Delta H}_{1+}/\widetilde{\Delta H}_{2+}$. In other words, the two layers will have exactly the same pattern, with only the magnitude of the deformation differing by a scaling factor. Nevertheless, for the case of $s>0$, there are two qualitatively different configurations: if ${\epsilon_1 \text{Ma}}/{2}-{\text{Bo}}/({3\text{Ca}})>0$, then $\widetilde{\Delta H}_{1+}/\widetilde{\Delta H}_{2+}<0$, i.e., the deformations will be in antiphase. By contrast if ${\epsilon_1 \text{Ma}}/{2}-{\text{Bo}}/({3\text{Ca}})<0$, then $\widetilde{\Delta H}_{1+}/\widetilde{\Delta H}_{2+}>0$; hence the deformations of the two layers are in phase. This is schematically shown in Fig.~\ref{im:double_strong_grav}.
An intuitive explanation of this behavior can be attained by identifying the effects driving the film evolution. As mentioned earlier, if the upper layer is rigid, then the lower one is unstable if ${\epsilon_1 \text{Ma}}/{2}-\text{Bo}/(3\text{Ca})>0$. Similarly, if instead of the lower layer the upper layer is the only deformable layer, then it will be unstable if $\epsilon_1 \text{Ma}/{2}-\text{Bo}/(3\text{Ca})<0$. These are two mutually exclusive conditions. Returning to the coupled system, this suggests that, if the former condition is fulfilled, the evolution of the system should be mainly driven by the lower layer. On top of the regions where the lower layer thickens, the surface of the upper layer will heat up. Subsequently the Marangoni flow arising in the upper layer will point away from this location, leading to a locally decreasing thickness of the upper film. This behavior leads to the antiphase configuration depicted at the left-hand side of Fig.~\ref{im:double_strong_grav}. Based on similar arguments, the in-phase evolution can be explained by considering that, in this case, the instability will be mainly driven by the upper layer, i.e., by gravity.
For $s<0$ according to Eq.~\eqref{eq:ident_eigvec}, the magnitudes of the deformations are equal, but with a phase shift $\Phi$ between the two layers. Formally $\widetilde{\Delta H}_{1+}/\widetilde{\Delta H}_{2+} \equiv e^{i \Phi}$, where $\Phi$ is independent of the wavenumber.
The phase shift behavior and amplitude ratios can be described in terms of the parameter
\vspace{-12pt}
\begin{equation}
r \equiv -\frac{\frac{\epsilon_1 \text{Ma}}{2}-\frac{\text{Bo}}{3\text{Ca}}}{\frac{\epsilon_2 \text{Ma}}{2}}.
\end{equation}
\noindent For $r<0$ the lower layer is the initially unstable one, while for $r>0$ it is the upper layer that triggers the evolution of both films. If $\left|r\right|>1$ the system is unstable and $\Phi$ is either $0$ (in-phase) or $\pi$ (antiphase). If $r \in [-1,1]$, the system is stable and the phase between the two layers is given by $\Phi=\text{arctan}\left(-\sqrt{1-r^2}/r\right)$, where the inverse tangent function gives a value within $[-\pi,0]$. These results are summarized in Fig.~\ref{im:summary}. The characterization of the triple-layer system requires both $r$ and $s$, as the $\omega_{\pm}$ eigenvalues are solely functions of $s$ whereas the eigenvectors can be expressed only in terms of $r$.
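For identical layers the eigenvector ratio can be rewritten as $\widetilde{\Delta H}_{1+}/\widetilde{\Delta H}_{2+}=r-\sqrt{r^2-1}$, which the following sketch evaluates across the stable and unstable regimes:
\begin{verbatim}
# Minimal sketch: amplitude ratio and phase shift of the two films as
# a function of r, from the ratio H1+/H2+ = r - sqrt(r**2 - 1).
import numpy as np

for r in (-2.0, -1.5, -0.5, 0.0, 0.5, 1.5, 2.0):
    ratio = r - np.sqrt(complex(r**2 - 1, 0.0))
    print(r, abs(ratio), np.angle(ratio))
# |r| > 1: phase 0 (in phase) or pi (antiphase); |r| < 1: equal
# amplitudes, phase varying continuously within [-pi, 0].
\end{verbatim}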
To support the findings from Eq.~\eqref{eq:ident_eigvec}, a second series of numerical simulations was conducted. The value of $r$ was varied between $[-2,2]$. Once again, silicone oil as the liquid medium (Table \ref{tab:mat_prop}) and air as the gaseous medium were used. The thicknesses of the air and liquid layers were $120 \mu \text{m}$. The variation of $r$ was achieved by varying the temperature difference between $3.25\text{K}$ and $19.60\text{K}$. The fourth column of Table \ref{tab:dim_quant} summarizes the dimensionless groups of this simulation.
For $|r|>1$ the numerical parameters of the simulations and the corresponding system behavior were similar to the previous simulations. On the other hand, for $|r|<1$ no patterns emerge. Consequently, it is not possible to calculate a characteristic wavenumber $q_{char}$ in order to scale the simulated region. However, the long-wavelength approximation remains valid, and the actual values of the scaling parameters do not have any qualitative effect on the system behavior, as they do not change the essential physics. Therefore, lacking natural scaling quantities, for the purpose of numerical simulation, a scaling length and velocity can be chosen arbitrarily. To this end, $L=10h_{0,1}$ was used again as the lateral scaling length and $v_c=10^{-5}\text{m/s}$ as the scaling velocity. In this case a square with a side length of $150$ was used as the simulation domain, while the number of cells remained at $120\times120$. Instead of $2.5/\omega_{char}$, the simulated time period was $-250/s$ ($s<0$), since $\omega_{char}$ is also not defined here. As before, the initial layer thicknesses were modulated by a white-noise perturbation with an amplitude of $5\cdot10^{-2}$.
The evolution of the system was calculated with two independent methods, starting from the same initial conditions. In the full numerical approach, Eqs.~\eqref{eq:h_mom_eq_double_1}~and~\eqref{eq:h_mom_eq_double_2} were directly solved. For comparison, in the semi-analytical method the Fourier transforms of the initial liquid patterns were computed and all Fourier components were evolved independently from each other by using Eq.~\eqref{eq:general_fourier_sol}. In this case, the growth or decay of modes is governed by Eq.~\eqref{eq:ident_eigval}.
To this end, $(\widetilde{\Delta H}_{1+}(k_x,k_y),$$ \widetilde{\Delta H}_{2+}(k_x,k_y))$ and $(\widetilde{\Delta H}_{1-}(k_x,k_y),$$ \widetilde{\Delta H}_{2-}(k_x,k_y))$ at $\hat{\tau}=0$ were obtained through the white-noise initial condition. Subsequently, the position space representation of the patterns was calculated with the inverse Fourier formula of Eq.~\eqref{eq:fourier_form} for every time step. The relative difference between the numerical (${\Delta H_1}^n,{\Delta H_2}^n$) and semi-analytical (${\Delta H_1}^a,{\Delta H_2}^a$) results is defined by
\begin{equation}
\begin{aligned}
D(\tau)=\frac{1}{2}&{\Bigg \{}\frac{\int{[{\Delta H_1}^{n}(\tau,x,y)-{\Delta H_1}^{a}(\tau,x,y)]^2dxdy}}{\int{[{\Delta H_1}^{n}(\tau,x,y)]^2dxdy}}+\\
&+\frac{\int{[{\Delta H_2}^{n}(\tau,x,y)-{\Delta H_2}^{a}(\tau,x,y)]^2dxdy}}{\int{[{\Delta H_2}^{n}(\tau,x,y)]^2dxdy}}{\Bigg \}}.
\end{aligned}
\end{equation}
\noindent Table \ref{tab:max_rel_diff} summarizes the maximal values of $D(\tau)$ over the whole simulated time interval. The full nonlinear numerical solution agrees well with the semi-analytical result obtained from linear theory. This table serves also as a verification that the integration time steps chosen in the numerical simulation are sufficiently small.
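On the uniform computational grid, the integrals in the definition of $D(\tau)$ reduce to sums over cells. A minimal numpy sketch of this diagnostic (with hypothetical array arguments; not the original simulation code) reads:

```python
import numpy as np

def relative_difference(dH1_n, dH1_a, dH2_n, dH2_a):
    """Discrete version of D(tau): _n = numerical, _a = semi-analytical
    height deviations on the same uniform grid. The constant cell area
    cancels between numerator and denominator and is therefore omitted."""
    term1 = np.sum((dH1_n - dH1_a)**2) / np.sum(dH1_n**2)
    term2 = np.sum((dH2_n - dH2_a)**2) / np.sum(dH2_n**2)
    return 0.5 * (term1 + term2)
```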
\begin{table}
\centering
\begin{ruledtabular}
\begin{tabular}{r r M}
\multicolumn{1}{c}{$r$} & \multicolumn{1}{c}{$s$} & \multicolumn{1}{c}{$\max(D)$} \\
\thickhline
\rule{0pt}{2.5ex}
$-2.000$ & $2.315$ \hspace{8mm} & $1.841 \cdot10^{-3}$\\
$-1.500$ & $3.637\cdot 10^{-1}$ & $1.077\cdot10^{-3}$\\
$-0.900$ & $-2.582 \cdot10^{-2}$ & $2.538\cdot10^{-5}$\\
$-0.400$ & $-7.149 \cdot10^{-2}$ & $1.359\cdot10^{-7}$\\
$0.000$ & $-6.249 \cdot10^{-2}$ & $1.007\cdot10^{-5}$\\
$0.400$ & $-4.018 \cdot10^{-2}$ & $4.419\cdot10^{-7}$\\
$0.900$ & $-6.794 \cdot 10^{-3}$ & $1.552\cdot10^{-5}$\\
$1.500$ & $3.309 \cdot10^{-2}$ & $2.608\cdot10^{-4}$\\
$2.000$ & $6.371 \cdot10^{-2}$ & $5.274\cdot10^{-4}$
\end{tabular}
\end{ruledtabular}
\caption{ Maximal relative difference between numerical and semi-analytical results.}
\label{tab:max_rel_diff}
\end{table}
Three further series of numerical simulations were conducted. In the first one, the initial liquid layer thickness was $h_{0,1}=60\mu \text{m}$, while the temperature difference was varied between $[0.81\text{K},4.90\text{K}]$. In the second and third set of simulations the temperature difference between the substrates was fixed, and the liquid layer thicknesses were varied. These parameters were either $T_1-T_2=3\text{K}$ while $h_{0,1} \in [47\mu\text{m},115\mu\text{m}]$, or $T_1-T_2=5\text{K}$ while $h_{0,1}\in[60\mu\text{m},149\mu\text{m}]$. In all three cases the parameters were chosen in such a fashion that $r$ took the same values as listed in Table \ref{tab:max_rel_diff}. In every simulation the maximal value of the relative difference from the semi-analytical predictions was smaller than $10^{-3}$. This parametric study indicates that the results detailed in this paper are not specific to the particular choice of the initial film heights, but are in fact generic as long as the long-wavelength approximation holds.
\subsection{Non-identical layers}
From an experimental viewpoint, exactly identical layers are impossible to achieve. To obtain a more general solution of practical relevance, in the following it is assumed that the initial thicknesses of both layers differ slightly, i.e., $\chi \neq 1$. As before, liquids with the same material properties (and hence, within the present formulation, with equal dimensionless groups) are considered. In this case,
\vspace{-12pt}
\begin{equation}
\begin{aligned}
\phi_1&=\epsilon_2\\
\phi_2 &= \epsilon_1+\delta \cdot \frac{\kappa D_0}{[\kappa\left(D_0-2-\delta\right)+2+\delta]^2} =\\
&= \epsilon_1+\delta \cdot (\epsilon_1+\epsilon_2),
\end{aligned}
\end{equation}
\noindent where $\delta=\chi-1=\left(h_{0,2}-h_{0,1}\right)/h_{0,1}$ is the relative difference between the initial liquid layer thicknesses. Reevaluation of the matrix $\boldsymbol{\mathrm{M}}$ of the linearized equations indicates the presence of additional terms of different orders in $\delta$, which make the analysis considerably more complicated than in the case of identical film thicknesses. Thus, as a simplification, it is assumed that $\delta^2\ll1$, so that the governing equations can be linearized in $\delta$. However, $\epsilon_1$ and $\epsilon_2$ were left in their original, unlinearized forms: expanding them in $\delta$ would render the formulas less compact, while keeping them unexpanded is not expected to significantly lower the accuracy. The dimensionless time variable was again $\hat{\tau}=\tau/(3\text{Ca})$. Apart from a linear correction in $\delta$, the eigenvectors of this system remain the same as in Eq.~\eqref{eq:ident_eigvec}. This correction does not result in any significant change in the system behavior. The eigenvalues are obtained as
\vspace{-12pt}
\begin{equation}\label{eq:non_ident_eigenvalue}
\begin{aligned}
\omega_{+/-}=&-q^4\left[1+\frac{3}{2}\delta\left(1\pm \frac{r}{\sqrt{r^2-1}}\right)\right]+\\
+&q^2\left[\pm\sqrt{s}-\frac{\delta}{2}\left(c\left(1\pm \frac{r}{\sqrt{r^2-1}}\right)\pm\frac{\sqrt{s}}{r^2-1}\right)\right],
\end{aligned}
\end{equation}
\noindent where for easier notation
\begin{equation}
c=\frac{3}{2}\text{Ca}\text{Ma}\left(-3\epsilon_2r +\epsilon_2-\epsilon_1\right)
\end{equation}
\noindent was introduced. Analyzing Eq.~\eqref{eq:non_ident_eigenvalue} one finds that, at $s>0$ and sufficiently small wavenumbers, the growth rate $\text{Re}(\omega_+)$ is still positive and the patterns remain locked in with respect to the horizontal coordinates since $\text{Im}(\omega_{\pm})=0$. In this regime the additional terms introduced by $\delta \neq 0$ only shift the characteristic wavenumber to different values. The oscillations, for which $\text{Im}(\omega_{\pm})\neq 0$, occur in the same region as for identical layers, i.e. for $s<0$, or equivalently, if $|r|<1$. In this regime the real part of $\omega_{\pm}$ reads
\vspace{-12pt}
\begin{equation}\label{real_nonident_osc}
\text{Re}(\omega_{\pm})=-q^4\left(1+\frac{3}{2}\delta\right)-q^2\frac{c \delta}{2}.
\end{equation}
\noindent The difference from the case of two identical layers is the following: if $c\delta<0$, then for sufficiently small wavenumbers the oscillatory deformation has a positive growth rate and an instability occurs. Given the definition of $c$, the condition $c \delta<0$ can be reformulated as
\vspace{-12pt}
\begin{equation}\label{eq:instab_osci}
\begin{aligned}
r\gtrless r_0 &\text{ if } \delta\gtrless0,
\end{aligned}
\end{equation}
\noindent where $r_0=\left(\epsilon_2-\epsilon_1\right)/({3 \epsilon_2})$. Considering Eq.~\eqref{eq:epsilon_def} with $\kappa_1=\kappa_2$ (same liquid medium) and the relation $d_0>h_{0,1}+h_{0,2}$ one can show that at $H_1=H_2=1$ and for $\delta^2\ll1$ in fact $\epsilon_2-\epsilon_1<0$, thus $r_0<0$. The stability behavior of the triple-layer configuration is summarized in Table \ref{tab:stab_prop}. On the one hand, the sign of $r$ determines which liquid layer dominates the overall system dynamics. On the other hand, the value of $r$ relative to $r_0$ defines the stability of the oscillatory regime.
\begin{table}
\centering
\begin{ruledtabular}
\begin{tabular}{c N N N N }
$r\in$ & $[-\infty,-1]$ & $[-1,r_0]$ & $[r_0,1]$ & $[1,\infty]$\\
\thickhline
$\delta<0$ & \rule{0pt}{2ex} stationary, unstable & \rule{0pt}{2ex} oscillatory, unstable & oscillatory, stable & stationary, unstable\\
$\delta=0$ & stationary, unstable & oscillatory, stable & oscillatory, stable & stationary, unstable\\
$\delta>0$ & stationary, unstable & oscillatory, stable & oscillatory, unstable & stationary, unstable
\end{tabular}
\end{ruledtabular}
\caption{ Stability regions of the system with two liquid layers. In stationary states the deformations are locked in with respect to the horizontal coordinates, whereas in oscillatory states they are mobile and move in-plane.}
\label{tab:stab_prop}
\end{table}
The characteristic wavenumber of the oscillatory instability is $q_{char}=\sqrt{{-c \delta}/{(4+6\delta)}}$. Since $q_{char} \rightarrow 0$ as $r \rightarrow r_0$, this is a type-II-o instability \cite{cross_pattern}. Inserting $q_{char}$ into Eq.~\eqref{real_nonident_osc}, one finds the corresponding growth rate to be $\text{Re}(\omega_{char})={(c\delta)^2}/{(16+24\delta)}$. The angular velocity is $\text{Im}(\omega_{char})={c\delta}/{(4+6\delta)}\sqrt{-s}+O(\delta^2)$. The neutrally stable wavenumbers are $q_{min}=0$ and $q_{max}=\sqrt{2}\cdot q_{char}$.
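These closed-form expressions are straightforward to evaluate. A short Python sketch follows, with $c$ and $s$ taken from the simulation of Fig.~\ref{im:osci_pic} and $\delta=0.01$ assumed for illustration:

```python
import numpy as np

def oscillatory_characteristics(c, delta, s):
    """Characteristic quantities of the type-II-o oscillatory instability;
    valid for c*delta < 0, where q_char is real (a sketch of the formulas
    in the text, not production code)."""
    q_char = np.sqrt(-c * delta / (4.0 + 6.0 * delta))
    growth = (c * delta)**2 / (16.0 + 24.0 * delta)          # Re(omega_char)
    angular = c * delta / (4.0 + 6.0 * delta) * np.sqrt(-s)  # Im(omega_char) + O(delta^2)
    q_max = np.sqrt(2.0) * q_char                            # neutral wavenumber; q_min = 0
    return q_char, growth, angular, q_max

print(oscillatory_characteristics(c=-0.532, delta=0.01, s=-0.056))
```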
For illustrative purposes the typical behavior of the oscillatory instability is shown in Fig.~\ref{im:osci_pic}. A $4\cdot \lambda_{char} \times 4\cdot\lambda_{char}$ sized rectangular cutout from the $15\cdot \lambda_{char} \times 15\cdot\lambda_{char}$ computational domain as obtained from numerical simulation is displayed. In this simulation the phase shift between the two layers was $\Phi\approx -1$. In the first row of Fig.~\ref{im:osci_pic}, the film height distributions of films 1 (left) and 2 (right) are shown at $\tau = 0.998 \cdot 2.5/\omega_{char}$. In the second row, the corresponding film height distributions at $\tau = 2.5/\omega_{char}$ are displayed. From the latter it is apparent that, in contrast to the spatially fixed patterns observed in particular in two-layer systems, the locations of the elevations change during the evolution of the oscillatory instability. Furthermore, Fig.~\ref{im:osci_pic} also illustrates that, contrary to the stationary instabilities, the phase shift between the two layers causes the patterns of the two films to differ from each other.
\begin{figure}
\centerline{\includegraphics[width=7.2cm]{fig8_image_plot.eps}}
\caption{{The surface patterns $H_1$ and $H_2$ of $4\cdot \lambda_{char} \times 4\cdot \lambda_{char}$ sized regions extracted from the numerical simulations at (a) $\hat{\tau}=0.998\cdot2.5/{\omega_{char}}$ and (b) $\hat{\tau}=2.5/{\omega_{char}}$. The parameters are $s=-0.056$, $r=0.152$, and $c=-0.532$. The grayscale extends from $0.993$ to $1.007$.}\label{im:osci_pic}}
\end{figure}
For verification, the semi-analytical results were compared in detail with numerical simulations. As before, silicone oil was considered as the liquid medium, and the thickness of the air layer was equal to the thickness of the lower liquid layer. This value was fixed at $h_{0,1}=120\mathrm{{\mu}m}$. For the first series of simulations, the initial thickness of the upper layer was set to $h_{0,2}=h_{0,1}\cdot 1.01$ ($\delta=0.01$); for the second series it was $h_{0,2}=h_{0,1}\cdot 0.99$ ($\delta=-0.01$). The $c$, $r$ and $s$ values were varied by changing the temperature difference between the substrates from $4.2\text{K}$ to $8.4\text{K}$. For both configurations, the simulations were conducted in the range of $r$ where the oscillatory instability occurs. The computational domains in space and time were defined in the same way as in the previous simulations, i.e., by using $q_{char}$ and $\text{Re}(\omega_{char})$. For both values of $\delta$ and near the $r\rightarrow r_0$ limit, the oscillation frequency becomes very high compared to the growth rate, i.e., $\text{Im}(\omega_{char})\gg\text{Re}(\omega_{char})$. Thus the computational costs to sufficiently resolve the emerging patterns in time also increase considerably. As a consequence, for $\delta = -0.01$ it was not possible to examine the whole range of $r \in [-1,r_0]=[-1,-0.61]$. Instead, the simulations were limited to $[-0.95,-0.72]$. In the simulated regime, $c\in[0.12,0.38]$ and $s\in[-5.3\cdot10^{-2},-1.3\cdot10^{-2}]$. Similarly, for the numerical simulations employing $\delta=0.01$, $r$ was varied between $[-0.36,0.9]$ instead of the full interval $[-0.59,1]$, with $c\in[-0.85,-0.18]$ and $s\in[-7.9\cdot10^{-2},-6.9\cdot10^{-3}]$. The results for the characteristic wavelengths of the patterns are depicted in Fig.~\ref{im:osc_char}. For $\delta=-0.01$ the maximal relative difference between the linear theory and the full numerical simulation was $2.6\%$, while for $\delta=0.01$ it was $12.9\%$. The latter is a significant deviation, which appeared at $r=-0.36$, the point closest to $r_0$. At the other points, however, the numerical and theoretical results agree reasonably well.
\begin{figure}
\includegraphics[width=\linewidth]{fig9_oscillatory_char_wavenum.eps}
\caption{The characteristic wavenumber of the oscillatory pattern emerging for (a) $\delta=-0.01$ and for (b) $\delta=0.01$.}\label{im:osc_char}
\end{figure}
\vspace{-12pt}
\section{Conclusion}
This article focuses on the effects of thermal coupling on the pattern evolution of two liquid layers placed opposite to each other and separated by a thin air layer. Specifically, the scenario was studied where the lower layer undergoes a gravity-stabilized long-wavelength B\'enard-Marangoni (BM) instability, while the upper one evolves under a thermocapillarity-stabilized long-wavelength Rayleigh-Taylor (RT) instability. For simplicity, the films are assumed to have similar initial thicknesses and the same material properties. The system was analyzed using linear stability theory as well as numerical solutions of the full nonlinear set of equations. A good overall agreement between the analytical and the numerical results was found. It was shown that the coupling can result in a qualitative difference in the evolution of the system compared to the conventional case where both instabilities evolve independently. Not only does the characteristic wavelength of the patterns increase, but the two layers may also stabilize or destabilize each other. Whereas without the coupling one of the layers would always be unstable and the other stable, it was found that in the coupled system the stability behavior of the layers is synchronized. Moreover, for certain parameter ranges oscillatory instabilities were observed, which do not appear in the double-layer configuration. This is similar to the behavior found in coupled Turing pattern formation. It can be concluded that the coupling of two self-organizing systems can result in new modes of pattern formation not present in the individual systems.
\begin{acknowledgments}
Funding by the German Research Foundation (DFG), Grant No. DI 1689/1-1, is gratefully acknowledged.
\end{acknowledgments}
\vspace{-12pt}
\begin{document}
\begin{frontmatter}
\title{A New Polar Decomposition \\ in a Scalar Product Space}
\author{Xuefang Sui\fnref{e1}}
\author{Paolo Gondolo\fnref{e2}}
\fntext[e1]{{\it Email address:} [email protected]}
\fntext[e2]{{\it Email address:} [email protected]}
\address{Department of Physics and Astronomy, University of Utah, 115 South 1400 East \#201, Salt Lake City, UT 84112-0830}
\begin{abstract}
There are various definitions of right and left polar decompositions of an $m\times n$ matrix $F \in \mathbb{K}^{m\times n}$ (where $\mathbb{K}=\mathbb{C}$ or $\mathbb{R}$) with respect to bilinear or sesquilinear products defined by nonsingular matrices $M\in \mathbb{K}^{m\times m}$ and $N\in \mathbb{K}^{n\times n}$. The existence and uniqueness of such decompositions under various assumptions on $F$, $M$, and $N$ have been studied. Here we introduce a new form of right and left polar decompositions, $F=WS$ and $F=S'W'$, respectively, where the matrix $W$ has orthonormal columns ($W'$ has orthonormal rows) with respect to suitably defined scalar products which are functions of $M$, $N$, and $F$, and the matrix $S$ is selfadjoint with respect to the same suitably defined scalar products and has eigenvalues only in the open right half-plane. We show that our right and left decompositions exist and are unique for any nonsingular matrices $M$ and $N$ when the matrix $F$ satisfies $(F^{[M,N]})^{[N,M]}=F$ and $F^{[M,N]}F$ ($FF^{[M,N]}$, respectively) is nonsingular, where $F^{[M,N]}=N^{-1} F^\# M$ with $F^\#=F^T$ for real or complex bilinear forms and $F^\#=\bar{F}^T$ for sesquilinear forms. When $M=N$, our results apply to nonsingular square matrices $F$. Our assumptions on $F$, $M$, and $N$ are in some respects weaker and in some respects stronger than those of previous work on polar decompositions.
\end{abstract}
\begin{keyword}
Scalar product \sep Polar decomposition \sep Sign function
\MSC[2010] 15A23 \sep 15A63 \sep 47B50
\end{keyword}
\end{frontmatter}
\section{Introduction}
Let $\mathbb{K}$ denote either the field of real numbers $\mathbb{K} = \mathbb{R}$ or the field of complex numbers $\mathbb{K} = \mathbb{C}$.
Consider a scalar product $[x,y]_N$ over the field $\mathbb{K}$, i.e., a bilinear or sesquilinear form defined by a nonsingular matrix $N\in \mathbb{K}^{n \times n}$:
\begin{align}
[x,y]_N=\begin{cases}
x^T Ny, \quad \text{for real or complex bilinear forms}, \\
\bar{x}^TNy , \quad \text{for sesquilinear forms},
\end{cases}
\end{align}
for $x, y \in \mathbb{K}^n$, where $x^T$ indicates the transpose of $x$ and $\bar{x}$ indicates the complex conjugate of $x$.
The $N$-adjoint $A^{[N]}$ of a matrix $A\in \mathbb{K}^{n \times n}$ is defined as $[Ax,y]_N=[x,A^{[N]}y]_N$ for all $x,y \in \mathbb{K}^n$, that is
\begin{align}
A^{[N]}=N^{-1}A^\# N
\end{align}
with (using the notation in \cite{Higham2010})
\begin{align}
A^\#=\begin{cases}
A^T, \quad \text{for real or complex bilinear forms}, \\
\bar{A}^T, \quad \text{for sesquilinear forms}.
\end{cases}
\end{align}
A matrix $A \in \mathbb{K}^{n\times n}$ is called selfadjoint if $A^\#=A$ and unitary if $A^\# A=I_n$, where $I_n\in\mathbb{K}^{n\times n}$ is the identity matrix.
A matrix $A \in \mathbb{K}^{n\times n}$ is called $N$-selfadjoint if $A^{[N]}=A$, or $A^\# N=NA$, and $N$-unitary if $A^{[N]} A=I_n$, or $A^\# N A = N$.
The $(M,N)$-adjoint $A^{[M, N]}\in \mathbb{K}^{n\times m}$ of a matrix $A\in \mathbb{K}^{m\times n}$, where $M \in \mathbb{K}^{m \times m}$ and $N\in \mathbb{K}^{n\times n}$ are nonsingular matrices, is defined by the condition (see e.g.~\cite{Higham2010})
\begin{align}
[Ax, y]_M=[x,A^{[M, N]} y]_N \qquad \text{for all} \quad x\in \mathbb{K}^n,y\in \mathbb{K}^m.
\end{align}
The $(M,N)$-adjoint $A^{[M, N]}$ can be written as
\begin{align}
A^{[M, N]}=N^{-1} A^\# M.
\end{align}
If $M=N$, then $A^{[N,N]}=A^{[N]}$. Two useful properties of the $(M,N)$-adjoint are\\ (a) for $A\in \mathbb{K}^{m\times n}$ and $B\in \mathbb{K}^{n\times n}$,
\begin{align}
(AB)^{[M,N]}=N^{-1}B^\# A^\# M=N^{-1}B^\# N N^{-1}A^\# M=B^{[N]}A^{[M,N]};
\end{align}
(b) for $A\in \mathbb{K}^{m\times m}$ and $B\in \mathbb{K}^{m\times n}$,
\begin{align}
(AB)^{[M,N]}=N^{-1}B^\# A^\# M=N^{-1}B^\# MM^{-1}A^\# M=B^{[M,N]}A^{[M]}.
\end{align}
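In numerical experiments, the $(M,N)$-adjoint is conveniently computed directly from its definition; a minimal numpy sketch (the function name is ours) is:

```python
import numpy as np

def mn_adjoint(A, M, N, sesquilinear=True):
    """(M,N)-adjoint A^{[M,N]} = N^{-1} A^# M of A in K^{m x n}, where
    A^# is the conjugate transpose for sesquilinear forms and the plain
    transpose for bilinear forms. For M = N this is the N-adjoint."""
    A_sharp = A.conj().T if sesquilinear else A.T
    return np.linalg.solve(N, A_sharp @ M)
```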
The classical concept of polar decomposition in a Euclidean space is expressed by the following statements.
Any matrix $F \in \mathbb{K}^{m\times n} $ with $m\ge n$ has a right polar decomposition
\begin{align}
F = U S,
\end{align}
where the matrix $ U \in \mathbb{K}^{m\times n} $ has orthonormal columns (i.e., $U^\ast U = I_n$) and the matrix $ S \in \mathbb{K}^{n\times n} $, uniquely given by $S=\sqrt{F^\ast F}$, is selfadjoint and positive-semidefinite (i.e., all of its eigenvalues are real and nonnegative). Here $A^\ast=A^T$ for $\mathbb{K}=\mathbb{R}$ and $A^\ast=\bar{A}^T$ for $\mathbb{K}=\mathbb{C}$, and $\sqrt{A}$ denotes the principal square root of a positive-definite matrix $A$, or, when the positive-semidefinite selfadjoint matrix $A$ has zero eigenvalues, its unique positive-semidefinite square root.
Similarly, any matrix $F \in \mathbb{K}^{m\times n} $ with $m\le n$ has a left polar decomposition
\begin{align}
F = S' \, U',
\end{align}
where the matrix $ U' \in \mathbb{K}^{m\times n} $ has orthonormal rows (i.e., $U' U^{\prime\ast} = I_m$) and the matrix $ S' \in \mathbb{K}^{m\times m} $, uniquely given by $S'=\sqrt{F F^\ast}$, is selfadjoint and positive-semi\-definite. In case $F\in\mathbb{K}^{n\times n}$ is a square nonsingular matrix, the right and left polar decompositions are unique with $U=FS^{-1}$ and $U'=S^{\prime-1}F$, respectively, and $U$ and $U'$ are unitary matrices.
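When $F^\ast F$ is nonsingular, the classical right polar decomposition is easily reproduced numerically; a minimal sketch using scipy's principal matrix square root is:

```python
import numpy as np
from scipy.linalg import sqrtm

def classical_right_polar(F):
    """Classical right polar decomposition F = U S for m >= n with F*F
    nonsingular (a sketch; scipy.linalg.polar handles the general case)."""
    S = sqrtm(F.conj().T @ F)      # unique positive-definite square root
    U = F @ np.linalg.inv(S)       # then U* U = I_n
    return U, S

F = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, 0.0]])
U, S = classical_right_polar(F)
print(np.allclose(U.conj().T @ U, np.eye(2)), np.allclose(U @ S, F))
```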
There is a long history on generalizing the classical polar decomposition to scalar products on $\mathbb{K}^n$ given by a bilinear or sesquilinear form. Different generalizations exist based on the dimension of the matrices involved and the required properties of the factors in the decomposition.
To the extent of our knowledge, polar decompositions of square matrices $F\in\mathbb{K}^{n\times n}$ with respect to scalar products defined by a nonsingular matrix $N\in\mathbb{K}^{n\times n}$ are of the form
\begin{align}
\label{eq:Hpolar}
F = L S ,
\end{align}
where $L\in\mathbb{K}^{n\times n}$ is $N$-unitary and $S\in\mathbb{K}^{n\times n}$ is $N$-selfadjoint. For example, in~\cite{Bolshakov1995, Bolshakov1996, Bolshakov19961997, Bolshakov1997, Bolshakov19971997, Kintzel2005, Mehl2005}, necessary and sufficient conditions are given for the existence of decompositions of the form (\ref{eq:Hpolar}) when the matrix $N$ is a selfadjoint matrix $H$ (such decompositions are called $H$-polar decompositions and are not necessarily unique even if $F$ is nonsingular). Further restricting the $H$-selfadjoint factor to satisfy the condition that $HS$ is positive-semidefinite, still with $H$ selfadjoint, leads to the semidefinite $H$-polar decomposition of a square matrix $F$ in~\cite{Bolshakov19971997, Kintzel2005}, which exists (but is in general not unique) if and only if $F^{[H]}F$ is diagonalizable and has only nonnegative real eigenvalues and $\ker(F)$ satisfies the restrictions in Theorem 5.3 of~\cite{Bolshakov19971997}. The generalized polar decomposition of a square matrix $F$ studied in~\cite{Cardoso2002, Higham2004, Higham2005, Higham2010, Mackey2006} is also of the form (\ref{eq:Hpolar}), but with the additional restriction that the nonzero eigenvalues of $S$ are contained in the open right half-plane, i.e., considering both zero and nonzero eigenvalues, the spectrum of $S$: $\Lambda(S) \subseteq \{ z\in \mathbb{C} : \Re(z) > 0 \} \cup \{0\}$. In particular, \cite{Higham2005} shows that a generalized polar decomposition of a nonsingular square matrix $F$ exists if and only if $F^{[N]} F$ has no negative real eigenvalues and $(F^{[N]})^{[N]} = F$, and that when such a factorization exists, it is unique. And~\cite{Higham2010} shows that for an orthosymmetric scalar product $N$ (i.e., such that $(F^{[N]})^{[N]} = F$ for all $F\in\mathbb{K}^{n\times n}$) a generalized polar decomposition of singular and nonsingular square matrices $F$ exists with a unique $N$-selfadjoint factor $S$ if and only if $\ker(F^{[N]} F) = \ker(F)$, $F^{[N]}F$ has no negative real eigenvalues and the zero eigenvalues of $F^{[N]}F$, if any, are semisimple.
Polar decompositions of rectangular matrices $F\in\mathbb{K}^{m\times n}$ with respect to underlying scalar products defined by nonsingular matrices $M$ and $N$ on $\mathbb{K}^m$ and $\mathbb{K}^n$ have been studied in~\cite{Higham2010} (see also~\cite{Bolshakov1995, Kintzel2005} for the special case of two selfadjoint scalar products with $m=n$). In~\cite{Higham2010}, under the assumption that $M$ and $N$ form an orthosymmetric pair (see their Definition 3.2), a canonical generalized polar decomposition of $F\in\mathbb{K}^{m\times n}$ is defined as a decomposition
\begin{align}
F={\it\Lambda} S,
\end{align}
in which the matrix ${\it\Lambda}\in\mathbb{K}^{m\times n}$ is a partial $(M,N)$-isometry, i.e., ${\it\Lambda} {\it\Lambda}^{[M,N]} {\it\Lambda} = {\it\Lambda}$, the matrix $S\in\mathbb{K}^{n\times n}$ is $N$-selfadjoint with nonzero eigenvalues contained in the open right half-plane, and $\mathop{\mathrm{range}}({\it\Lambda}^{[M,N]})=\mathop{\mathrm{range}}(S)$. Theorem 3.9 in~\cite{Higham2010} states that, for an orthosymmetric pair of nonsingular matrices $M\in\mathbb{K}^{m\times m}$ and $N\in\mathbb{K}^{n\times n}$, a matrix $F\in\mathbb{K}^{m\times n}$ has a unique canonical polar decomposition if and only if $\ker(F^{[M,N]} F) = \ker(F)$, $F^{[M,N]}F$ has no negative real eigenvalues and the zero eigenvalues of $F^{[M,N]}F$, if any, are semisimple.
In this paper, we replace the condition of orthosymmetric pair of $M$ and $N$ by $(F^{[M,N]})^{[N,M]}=F$, and we drop the condition of no negative real eigenvalues of $F^{[M,N]}F$ (which is $F^{[N]}F$ for square matrices $F$ with $M=N$), by extending the form of the polar decomposition of a matrix $F\in\mathbb{K}^{m\times n}$ with $m\ge n$ to
\begin{align}
F = W S ,
\end{align}
where $W\in\mathbb{K}^{m\times n}$ has orthonormal columns with respect to suitably defined scalar products which are functions of $M$, $N$, and $F$ but is not necessarily $N$-unitary or a partial $(M,N)$-isometry, while $S\in\mathbb{K}^{n\times n}$ is selfadjoint with respect to the same suitably defined scalar products, has eigenvalues only in the open right half-plane, and turns out to be unique. When $m\le n$ we use a left decomposition
\begin{align}
F = S' W'
\end{align}
with analogous properties for $S'\in\mathbb{K}^{m\times m}$ and $W'\in\mathbb{K}^{m\times n}$ (in this case, $W'$ has orthonormal rows instead of orthonormal columns, with respect to the suitably defined scalar products). However, we do all this here only for matrices $F\in\mathbb{K}^{m\times n}$ for which $F^{[M,N]}F$ if $m\ge n$, or $FF^{[M,N]}$ if $m\le n$, is nonsingular. Thus our assumptions on $F$, $M$, and $N$ in this paper are in some respects weaker and in some respects stronger than those of previous work on polar decompositions. Our right and left decomposition Theorems~\ref{thm:F=WS_MN} and~\ref{thm:F=SW_MN} for matrices $F\in\mathbb{K}^{m\times n}$ closely resemble the classical right and left polar-decomposition theorems mentioned above.
In Section~\ref{sec:definitions}, we define $r$-positive-definite matrices and extend the concept of matrices with orthonormal columns and rows to generic pairs of scalar products. In Section~\ref{sec:function} we introduce generalized matrix sign functions. In Section~\ref{sec:F=WS}, we present our right and left decompositions for nonsingular square matrices with respect to one scalar product. In Section~\ref{sec:F=WS_MN}, we present our right and left decompositions for rectangular matrices with respect to two scalar products. Finally, in Section~\ref{sec:exmp}, we make comments on our theorems and give some examples.
\section{Some useful definitions}
\label{sec:definitions}
In this section, we introduce some definitions to simplify the language.
First we extend the definition of positive-definite matrices to matrices with nonreal eigenvalues as follows.
\begin{defn}($r$-positive-definite)
\label{defn:r-positive-definite}
A matrix $A\in\mathbb{K}^{n\times n}$ is $r$-positive-definite if all of its eigenvalues have positive real part, or equivalently if all of its eigenvalues lie in the open right half-plane.
\end{defn}
Secondly, we extend the definitions of matrices with orthonormal rows or columns to the case of non-Euclidean scalar products.
\begin{defn}($(M,N)$-orthonormal columns, $(M,N)$-orthonormal rows, and $(M,N)$-unitarity)
\label{defn:m-n-orthonormal-columns}
Let $M \in \mathbb{K}^{m \times m}$ and $N\in \mathbb{K}^{n\times n}$ be nonsingular matrices. A matrix $W\in\mathbb{K}^{m\times n}$ has $(M,N)$-orthonormal columns if $W^{[M,N]} W =I_n$.
A matrix $W\in\mathbb{K}^{m\times n}$ has $(M,N)$-orthonormal rows if $WW^{[M,N]} =I_m$. If a matrix $W\in\mathbb{K}^{m\times n}$ has both $(M,N)$-orthonormal rows and $(M,N)$-orthonormal columns (in which case $m=n$ and $W$ is a nonsingular square matrix), we say that $W$ is $(M,N)$-unitary.
\end{defn}
The condition that $W\in\mathbb{K}^{m\times n}$ has $(M,N)$-orthonormal columns can also be written as
\begin{align}
\label{eq:WNWN}
W^\# M W=N,
\end{align}
or
\begin{align}
\label{eq:WNWNbis}
[Wx,Wy]_{M}=[x,y]_{N} \quad \text{for all} \quad x,y \in \mathbb{K}^n,
\end{align}
while the condition that $W\in\mathbb{K}^{m\times n}$ has $(M,N)$-orthonormal rows can also be written as
\begin{align}
M^{-1} = W N^{-1} W^\#,
\end{align}
or
\begin{align}
[x,y]_{M^{-1}} = [W^\# x,W^\# y]_{N^{-1}} \quad \text{for all} \quad x,y \in \mathbb{K}^m.
\end{align}
When $m=n$, a square matrix $W\in\mathbb{K}^{n\times n}$ that has $(M,N)$-orthonormal columns or $(M,N)$-orthonormal rows is necessarily nonsingular. A nonsingular matrix $W\in\mathbb{K}^{n\times n}$ has $(M,N)$-orthonormal columns if and only if it has $(M,N)$-orthonormal rows. By Definition~\ref{defn:m-n-orthonormal-columns}, $W$ is $(M,N)$-unitary.
A square matrix $W\in\mathbb{K}^{n\times n}$ that satisfies conditions (\ref{eq:WNWN}) and (\ref{eq:WNWNbis}) for nonsingular selfadjoint matrices $M$ and $N$ has been called $(N,M)$-unitary or $N$-$M$-unitary in~\cite{Bolshakov1995, Gohberg2005}. Our Definition~\ref{defn:m-n-orthonormal-columns} extends these notions to more general matrices.
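Condition (\ref{eq:WNWN}) also provides a direct numerical test for $(M,N)$-orthonormal columns; a minimal sketch (the function name is ours) is:

```python
import numpy as np

def has_mn_orthonormal_columns(W, M, N, sesquilinear=True, tol=1e-10):
    """Test the equivalent condition W^# M W = N from the text."""
    W_sharp = W.conj().T if sesquilinear else W.T
    return np.allclose(W_sharp @ M @ W, N, atol=tol)
```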
\section{Generalized matrix sign function}
\label{sec:function}
In this section, we introduce a primary matrix function called a generalized matrix sign function.
First, we recall some facts about primary matrix functions (see, e.g.,~\cite{Gantmacher1977, Higham2008, HornJohnson1991}). A primary matrix function $f$ of a matrix $A\in\mathbb{K}^{n\times n}$ can be defined by means of a function $f: \mathbb{C} \to \mathbb{C}$ defined on the spectrum of $A$. The function $f: \mathbb{C} \to \mathbb{C}$ is called the stem function of the matrix function $f$ and here they are denoted by the same letter.
\begin{defn}(chapter~V in~\cite{Gantmacher1977} or Definition 1.1 in~\cite{Higham2008})
A function $f: \mathbb{C} \to \mathbb{C}$ is said to be defined on the spectrum of a matrix $A\in\mathbb{K}^{n\times n}$ if its value $f(\lambda_k)$ and the values of its $s_k-1$ derivatives
\begin{align}
\label{eq:derivative}
f^{(j)}(\lambda_k),\quad \quad \quad j=0, \hdots, s_k-1, \quad \quad k =1, \hdots, t,
\end{align}
exist at all eigenvalues $\lambda_k$ of $A$. Here $s_k$ is the size of the Jordan blocks $J_{s_k}(\lambda_k)$ in the Jordan decomposition of $A$.
\end{defn}
As remarked in~\cite{Higham2008} right after Definition 1.1, arbitrary numbers can be assigned as the values of $f(\lambda_k)$ and its derivatives $f^{(j)}(\lambda_k), j=1,\hdots, s_k-1$, at each eigenvalue $\lambda_k$ of $A$.
With a stem function $f$, the primary matrix function $f(A)$ of $A\in\mathbb{K}^{n\times n}$ is given by $f(A)=f(QJQ^{-1})=Qf(J)Q^{-1}$, where, with a nonsingular matrix $Q\in\mathbb{K}^{n\times n}$, $A=QJQ^{-1}$ is a Jordan decomposition of $A$ and $f(J)$ is calculated by applying the stem function to each Jordan block.
A primary matrix function $f(A)$ of a matrix $A\in\mathbb{K}^{n\times n}$ is well defined in the sense that it is unique.
By Thm.~1.12 in~\cite{HornJohnson1991}, a primary matrix function of $A\in\mathbb{K}^{n\times n}$ is a polynomial in $A$. It follows that all the primary matrix functions $f(A)$ commute with the matrix $A$ and also commute with each other.
The relations $f(A^T) = f(A)^T$ and $f(Q^{-1} A Q) = Q^{-1} f(A) Q$ for a nonsingular matrix $Q\in\mathbb{K}^{n\times n}$ hold for any primary matrix function $f$ of a matrix $A\in\mathbb{K}^{n\times n}$. If the stem function in Equation~(\ref{eq:derivative}) satisfies $f^{(j)}(\bar{\lambda})=\overline{f^{(j)}(\lambda)}$, then $f(\bar{A})=\overline{f(A)}$, and $f(A)$ is real when $A$ is real. It follows that, with a product matrix $N\in\mathbb{K}^{n\times n}$, for bilinear forms $f(A^{[N]})=f(A)^{[N]}$ always holds, while for sesquilinear forms, $f(A^{[N]})=f(A)^{[N]}$ is equivalent to $f(\bar{A})=\overline{f(A)}$ (see Theorem 3.1~\cite{Higham2005}). If $f(A^{[N]})=f(A)^{[N]}$ and $A$ is $N$-selfadjoint, then both $f(A)$ and $f(A)A$ are $N$-selfadjoint.
We now introduce the concept of a generalized matrix sign function.
\begin{defn}(generalized sign function)
\label{defn:sign_function}
A primary matrix function $\sigma: \mathbb{K}^{n\times n} \to \mathbb{K}^{n\times n}$ is called a generalized matrix sign function if for all nonsingular matrices $A\in\mathbb{K}^{n\times n}$, $\sigma(A)A$ has no eigenvalues on the negative real axis, $\sigma(\bar{A})=\overline{\sigma(A)}$, and all the eigenvalues $\lambda$ of $\sigma(A)$ are on the unit circle $|\lambda|=1$.
\end{defn}
Notice that the generalized sign $\sigma(A)$ of an $N$-selfadjoint matrix $A$ is $N$-selfadjoint.
We give some examples of generalized matrix sign functions.
In~\cite{Roberts1971}, a matrix sign function is defined as the primary matrix function associated to the stem function
\begin{align}
\text{sign}(\lambda)=\begin{cases}
+1, \quad\quad & \text{for } \Re \lambda>0, \\
-1, \quad\quad &\text{for } \Re \lambda<0 , \\
\text{undefined}, \quad\quad &\text{for } \Re \lambda=0.
\end{cases}
\end{align}
Also see, e.g.,~\cite{Higham1994, KenneyLaub1995}. It is shown in~\cite{Higham1994} that for a matrix $A$ such that $A^2$ is nonsingular and has no negative real eigenvalues, the relation $\text{sign}(A)=A(A^2)^{-1/2}$ holds.
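A minimal numerical sketch of this relation (not a robust implementation; in practice Newton-type iterations for the matrix sign function are preferred) is:

```python
import numpy as np
from scipy.linalg import sqrtm

def matrix_sign(A):
    """sign(A) = A (A^2)^{-1/2}, valid when A^2 is nonsingular and has
    no negative real eigenvalues (sketch of the relation quoted above)."""
    return A @ np.linalg.inv(sqrtm(A @ A))

A = np.array([[2.0, 1.0], [0.0, -3.0]])
print(np.linalg.eigvals(matrix_sign(A)))   # eigenvalues: 2 -> +1, -3 -> -1
```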
The first example of a generalized sign function is obtained by extending the stem function $\mathop{\mathrm{sign}}(\lambda)$ to the imaginary axis. For example, the function
\begin{align}
\label{eq:Sign_1}
f(\lambda)=\begin{cases}
+1, \quad\quad & \text{for } \Re \lambda>0, \\
-1, \quad\quad &\text{for } \Re \lambda<0 , \\
+1, \quad\quad &\text{for } \Re \lambda=0, \Im \lambda \ne 0, \\
\text{undefined},\quad\quad &\text {for} \quad \lambda=0,
\end{cases}
\end{align}
with
\begin{align}
f^{(j)}(\lambda) = 0, \quad \text{for } j\ge 1,
\end{align}
is a stem function of a generalized matrix sign function.
A second example of a generalized sign function arises from the stem function
\begin{align}
\label{eq:Sign}
f(\lambda)=\begin{cases}
\text{undefined},\quad\quad &\text {for} \quad \lambda=0,\\
-1, \quad \quad &\text{for}\quad \Re \lambda <0, \Im \lambda=0, \\
+1, \quad\quad &\text {otherwise},
\end{cases}
\end{align}
with
\begin{align}
f^{(j)}(\lambda)=0, \quad \text{for } j\ge 1.
\end{align}
A third example of a generalized sign function defined on the spectrum of a matrix is
\begin{align}
\label{eq:Sign_3}
f(\lambda)=\begin{cases}
{\bar{\lambda}}/{|\lambda|}, \quad \quad &\text{for} \quad \lambda \ne0, \\
\text{undefined},\quad\quad &\text {for} \quad \lambda=0,
\end{cases}
\end{align}
and all the derivatives are defined to be zero, i.e.,
\begin{align}
f^{(j)}(\lambda)=0, \quad \text{for } j\ge 1.
\end{align}
Since in this paper we define the generalized sign function for nonsingular matrices, we leave $f(\lambda)$ at $\lambda=0$ undefined.
\section{Right and left decompositions of square matrices}
\label{sec:F=WS}
\begin{thm}
\label{thm:F=WS}
Given a nonsingular scalar product defined by $N\in \mathbb{K}^{n\times n}$ and a generalized sign function $\sigma: \mathbb{K}^{n\times n} \to \mathbb{K}^{n\times n}$, a matrix $F\in \mathbb{K}^{n\times n}$ has a decomposition
\begin{align}
\label{eq:F=WS}
F = W S,
\end{align}
where, with $\Sigma=\sigma(F^{[N]}F)$, the matrix $W\in \mathbb{K}^{n\times n}$ is $(N, N\Sigma^{-1})$-unitary with $(W^{[N]})^{[N]}=W$ and the matrix $S \in \mathbb{K}^{n\times n}$ is $r$-positive-definite, $N$-selfadjoint and $N\Sigma$-selfadjoint, if and only if $F$ is nonsingular and $(F^{[N]})^{[N]}=F$. When such a decomposition exists it is unique, with $S$ given by $S=(\Sigma F^{[N]}F)^{1/2}$ and $W=FS^{-1}$.
\end{thm}
\begin{proof} IF: Assume that $F$ is nonsingular and $(F^{[N]})^{[N]}=F$. Then $F^{[N]} F$ is $N$-selfadjoint besides being nonsingular. Since $\sigma$ is a generalized sign function, $\Sigma=\sigma(F^{[N]}F)$ is $N$-selfadjoint and commutes with $F^{[N]} F$, and $\Sigma F^{[N]} F$ is $N$-selfadjoint with no zero or negative real eigenvalues. Thus the principal square root
\begin{align}
S = \Big( \Sigma F^{[N]} F \Big)^{1/2}
\end{align}
is well-defined and is $r$-positive-definite and $N$-selfadjoint (the latter follows from Lemma 3.1 (b) in~\cite{Mackey2006}, for example).
Since both $\Sigma$ and $S$ are primary matrix functions and thus polynomials in $F^{[N]} F$, $\Sigma$ and $S$ commute. We compute
\begin{align}
S^{[N\Sigma]}=(N\Sigma)^{-1}S^\# (N\Sigma)=\Sigma^{-1}S^{[N]} \Sigma=\Sigma^{-1}S \Sigma=S.
\label{eq:18}
\end{align}
That is, $S$ is $N\Sigma$-selfadjoint.
Let $W = F S^{-1}$. Then $F = W S$. We need to show that $W$ is $(N,N\Sigma^{-1})$-unitary, i.e.\ $W^\# N W=N\Sigma^{-1}$.
Substituting $W=FS^{-1}$ in $W^{[N]}W$, and using $S^{[N]}=S$, $S^{-[N]} =S^{-1}$, $F^{[N]}F = \Sigma^{-1} S^2$ and $\Sigma S = S \Sigma$, we find
\begin{align}
\label{eq:detA}
W^{[N]}W=S^{-[N]} F^{[N]}FS^{-1}=S^{-1}\Sigma ^{-1}S^2 S^{-1} =\Sigma^{-1},
\end{align}
from which $W^\# NW=N\Sigma^{-1}$. Moreover,
\begin{align}
(W^{[N]})^{[N]}=((FS^{-1})^{[N]})^{[N]}=(F^{[N]})^{[N]}(S^{-[N]})^{[N]}=FS^{-1}=W.
\end{align}
Now we show that the decomposition $F=WS$ is unique. Assume there is another decomposition $F=\tilde{W}\tilde{S}$ where, with $\Sigma=\sigma(F^{[N]}F)$, $\tilde{W}$ is $(N, N\Sigma^{-1})$-unitary, $(\tilde{W}^{[N]})^{[N]}=\tilde{W}$, and $\tilde{S}$ is $r$-positive-definite, $N$-selfadjoint and $N\Sigma$-selfadjoint. Since $\tilde{S}$ is both $N$-selfadjoint and $N\Sigma$-selfadjoint, $\tilde{S}$ commutes with $\Sigma$. Thus $F^{[N]}F=\tilde{S} \tilde{W}^{[N]} \tilde{W}\tilde{S}=\tilde{S} \Sigma^{-1} \tilde{S}=\Sigma^{-1} \tilde{S}^{2}$, and $\Sigma F^{[N]}F=\tilde{S}^{2}$. Since $\tilde{S}$ is $r$-positive-definite and a square root of $\Sigma F^{[N]}F$, it is the principal square root of $\Sigma F^{[N]}F$, which is unique. Hence $\tilde{S}=S$. The uniqueness of $W$ follows from $\tilde{W}=W=FS^{-1}$.
ONLY IF: Assume $F=WS$ where, with $\Sigma=\sigma(F^{[N]}F)$, $W$ is $(N, N\Sigma^{-1})$-unitary, $(W^{[N]})^{[N]}=W$, $S$ is $r$-positive-definite, $N$-selfadjoint and $N\Sigma$-selfadjoint. Then
\begin{align}
(F^{[N]})^{[N]}=((WS)^{[N]})^{[N]}=(W^{[N]})^{[N]}(S^{[N]})^{[N]}=WS=F.
\end{align}
Since $S$ is $N$-selfadjoint and $N\Sigma$-selfadjoint, $S$ commutes with $\Sigma$, and it follows that $\Sigma F^{[N]}F=S^2$. Since $S$ is $r$-positive-definite, and hence nonsingular, $\Sigma F^{[N]}F$ is nonsingular; it follows that $F^{[N]}F$ and $F$ are nonsingular.
\end{proof}
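The construction in the proof translates directly into a short numerical procedure. The following Python sketch (all helper names are ours) implements the generalized sign function whose stem maps the negative real axis to $-1$ and everything else to $+1$, assuming for simplicity that $F^{[N]}F$ is diagonalizable:

```python
import numpy as np
from scipy.linalg import sqrtm

def n_adjoint(A, N, sesquilinear=True):
    # A^{[N]} = N^{-1} A^# N
    A_sharp = A.conj().T if sesquilinear else A.T
    return np.linalg.solve(N, A_sharp @ N)

def sigma_negreal(A):
    """Generalized sign: -1 on the negative real axis, +1 elsewhere
    (applied eigenvalue-wise; assumes A diagonalizable)."""
    lam, Q = np.linalg.eig(A)
    f = np.where((lam.real < 0) & np.isclose(lam.imag, 0.0), -1.0, 1.0)
    return Q @ np.diag(f) @ np.linalg.inv(Q)

def decompose_ws(F, N, sigma=sigma_negreal, sesquilinear=True):
    """F = W S per the theorem above; the caller must ensure that F is
    nonsingular and that (F^{[N]})^{[N]} = F."""
    A = n_adjoint(F, N, sesquilinear) @ F   # F^{[N]} F
    S = sqrtm(sigma(A) @ A)                 # principal square root
    return F @ np.linalg.inv(S), S          # W, S
```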
\begin{thm}
\label{thm:F=SW}
If a nonsingular matrix $F\in \mathbb{K}^{n\times n}$ has a decomposition $F=WS$ in Theorem~\ref{thm:F=WS}, then $F$ also has a decomposition
\begin{align}
\label{eq:F=SW}
F=S^\prime \, W,
\end{align}
where, with $\Sigma^\prime= \sigma(FF^{[N]})$, the matrix $S^\prime \in \mathbb{K}^{n\times n}$ is $r$-positive-definite, $N$-selfadjoint and $N\Sigma^\prime$-selfadjoint, and the matrix $W\in \mathbb{K}^{n\times n}$ is $(N\Sigma^\prime,N)$-unitary. When such a decomposition exists it is unique, with $S^\prime$ given by $S^\prime=(\Sigma^\prime FF^{[N]})^{1/2}$ and $W=S^{\prime-1}F$. The matrix $W$ in~(\ref{eq:F=WS}) and~(\ref{eq:F=SW}) is the same.
\end{thm}
\begin{proof} Let $\Sigma= \sigma(F^{[N]}F)$ and $\Sigma^\prime= \sigma(FF^{[N]})$. Since
\begin{align}
FF^{[N]}=&F(F^{[N]}F)F^{-1}=WS \Sigma ^{-1} S^2 (WS)^{-1} \nonumber\\
=&W \Sigma ^{-1} S^2 W^{-1}=W(F^{[N]}F)W^{-1},
\end{align}
and $\sigma$ is a primary matrix function, then $\Sigma^\prime=W\Sigma W^{-1}$ and $\Sigma^\prime FF^{[N]}=W(\Sigma F^{[N]}F)W^{-1}$. Let
\begin{align}
S^\prime=(\Sigma^\prime FF^{[N]})^{1/2}=W (\Sigma F^{[N]}F)^{1/2}W^{-1}=WSW^{-1}.
\end{align}
Since $FF^{[N]}$ and $\Sigma^\prime FF^{[N]}$ are $N$-selfadjoint, then $S^\prime$ is $N$-selfadjoint and $N\Sigma^\prime$-selfadjoint, following a computation similar to (\ref{eq:18}). Then $F=WS=WSW^{-1} W=S^\prime W$. And
\begin{align}
\label{eq:detB}
WW^{[N]}=S^{\prime-1}F(S^{\prime-1}F)^{[N]}=S^{\prime-1}FF^{[N]}S^{\prime-1}=\Sigma^{\prime -1}.
\end{align}
Since $W$ is nonsingular, it follows that $W^\# N\Sigma^\prime W=N$, that is, $W$ is $(N\Sigma^\prime, N)$-unitary.
The proof of uniqueness is similar to the one of Theorem~\ref{thm:F=WS}.
\end{proof}
If the scalar product matrix $N$ is orthosymmetric (Definition 2.4 in~\cite{Mackey2006}), then the condition $(F^{[N]})^{[N]}=F$ is satisfied for all matrices $F$. If $N$ is not orthosymmetric, then the validity of the condition $(F^{[N]})^{[N]}=F$ depends on the specific matrices $N$ and $F$ and on the nature of the scalar product defined by $N$ (real bilinear, complex bilinear or sesquilinear). For a given matrix $N$, some matrices $F$ may satisfy the condition $(F^{[N]})^{[N]}=F$ and some may not. For example, for the sesquilinear form defined by $N=\begin{pmatrix} &1\\2i &\end{pmatrix}$, the matrix $F=\begin{pmatrix}1&\\&i\end{pmatrix}$ satisfies $(F^{[N]})^{[N]}=F$ and the matrix $F=\begin{pmatrix}&1\\i&\end{pmatrix}$ does not. Moreover, it is not enough to specify the matrices $N$ and $F$, but one must specify the kind of scalar product. For example, take $N=\begin{pmatrix}&1\\i&\end{pmatrix}$ and $F=\begin{pmatrix}&1\\4i& \end{pmatrix}$: if $N$ defines a sesquilinear product, then $(F^{[N]})^{[N]}=F$, and if $N$ defines a complex bilinear product, then $(F^{[N]})^{[N]} \ne F$.
We end this section by discussing the decomposition in terms of the matrix determinants.
\begin{pro}
For the decompositions $F=WS=S'W$ in Theorems~\ref{thm:F=WS} and~\ref{thm:F=SW}, the following holds.
\begin{align}
& \det W = \pm 1 && \text{for real bilinear forms},
\label{eq:det1}
\\
& | \det W | = 1 && \text{for complex bilinear and sesquilinear forms},
\label{eq:det2}
\\
& \det \Sigma = \det \Sigma' = +1 && \text{for real bilinear and sesquilinear forms},
\label{eq:det3}
\\
& | \det \Sigma \, | = | \det \Sigma' \, | = 1 && \text{for complex bilinear forms} .
\label{eq:det4}
\end{align}
\end{pro}
\begin{proof}
Let $F$ be a nonsingular matrix. From $\Sigma=\sigma(F^{[N]}F)$, $\Sigma'=\sigma(FF^{[N]})$ and the fact that $\sigma$ is a generalized sign function, it follows that $|\det\Sigma\,|=|\det\Sigma'\,|=1$, which is (\ref{eq:det4}). Taking the determinant of (\ref{eq:detA},~\ref{eq:detB}) it then follows that $|\det W| = 1$, which is (\ref{eq:det2}). And in the case $\mathbb{K}=\mathbb{R}$, (\ref{eq:det2}) becomes (\ref{eq:det1}). Finally, to prove (\ref{eq:det3}) for $\Sigma$, take the determinant of $S^2=\Sigma F^{[N]}F$ and obtain
\begin{align}
\label{eq:36}
(\det S)^2=(\det \Sigma)(\det F^{\#})(\det F).
\end{align}
For bilinear forms, (\ref{eq:36}) becomes
\begin{align}
(\det S)^2=(\det \Sigma) (\det F)^2.
\label{eq:det5}
\end{align}
If $\mathbb{K}=\mathbb{R}$, one must then have $\det \Sigma=+1$. For sesquilinear forms, since $S^{[N]}=S$, then $\overline{\det S}=\det S$, so $\det S$ is real. In addition,
\begin{align}
(\det S)^2=(\det \Sigma) (\overline{\det F})(\det F).
\end{align}
Since $(\det S)^2>0$ and $(\overline{\det F})(\det F)>0$, it follows that $\det \Sigma=+1$. A similar reasoning leads to (\ref{eq:det3}) for $\Sigma'$.
\end{proof}
As a side remark, we notice that if in Definition~\ref{defn:sign_function} the condition $|\lambda|=1$ on the eigenvalues of $\sigma(A)$ is not imposed, then for the decompositions $F=WS=S'W$ in Theorems~\ref{thm:F=WS} and~\ref{thm:F=SW} one finds $\det\Sigma>0$ and $\det\Sigma'>0$ for real bilinear and sesquilinear forms, $\det\Sigma\ne0$ and $\det\Sigma'\ne0$ for complex bilinear forms, and $| \det W| = 1/|\sqrt{\det\Sigma}|$. We have chosen the condition $|\lambda|=1$ in Definition~\ref{defn:sign_function} so as to make the matrix $W$ analogous to a unitary matrix in terms of its eigenvalues.
\section{Right and left decompositions of rectangular matrices}
\label{sec:F=WS_MN}
\begin{thm}
\label{thm:F=WS_MN}
Let $M\in \mathbb{K}^{m\times m}$ and $N\in \mathbb{K}^{n\times n}$ be nonsingular matrices with $m\ge n$, and let $\sigma:\mathbb{K}^{n\times n}\to\mathbb{K}^{n\times n}$ be a generalized matrix sign function. A rectangular matrix $F\in \mathbb{K}^{m\times n}$ has a decomposition
\begin{align}
\label{eq:F=WS_MN}
F = W S,
\end{align}
where, with $\Sigma= \sigma(F^{[M,N]}F)$, the matrix $S\in \mathbb{K}^{n \times n}$ is $r$-positive-definite, $N$-selfadjoint and $N\Sigma$-selfadjoint, and the matrix $W\in \mathbb{K}^{m \times n}$ has $(M,N\Sigma^{-1})$-orthonormal columns and satisfies $(W^{[M,N]})^{[N,M]}=W$, if and only if $F^{[M,N]}F$ is nonsingular and $(F^{[M,N]})^{[N,M]}=F$. When such a decomposition exists it is unique, with $S$ given by $S=(\Sigma F^{[M,N]} F)^{1/2}$ and $W=FS^{-1}$.
\end{thm}
\begin{proof} IF: Assume that $F^{[M,N]}F$ is nonsingular and $(F^{[M,N]})^{[N,M]}=F$. In the latter equation, write $(F^{[M,N]})^{[N,M]}=M^{-1}(N^{-1}F^\# M)^\# N=M^{-1}M^\# FN^{-\#} N$, so $F=M^{-1}M^\# FN^{-\#} N$. Hence
\begin{align}
\label{eq:FMNF}
(F^{[M,N]}F)^{[N]}=& N^{-1}(N^{-1}F^\# MF)^\# N=N^{-1}F^\# M^\# FN^{-\#}N \nonumber \\
=&N^{-1}F^\# M(M^{-1}M^\# FN^{-\#}N)=F^{[M,N]}F,
\end{align}
i.e., $F^{[M,N]}F$ is $N$-selfadjoint besides being nonsingular. Since $\sigma$ is a generalized sign function, it follows that $\Sigma F^{[M,N]}F$ with $\Sigma= \sigma(F^{[M,N]}F)\in\mathbb{K}^{n\times n}$ is $N$-selfadjoint, nonsingular, and has no negative real eigenvalue. Hence the principal square root
\begin{align}
S = \Big(\Sigma F^{[M,N]}F \Big)^{1/2}
\end{align}
is well-defined and is $r$-positive-definite and $N$-selfadjoint.
Since both $\Sigma$ and $S$ are polynomials in $F^{[M,N]} F$, $\Sigma$ and $S$ commute. So
\begin{align}
S^{[N\Sigma]}=\Sigma^{-1} N^{-1} S^\# N\Sigma=\Sigma^{-1} S^{[N]} \Sigma=S.
\end{align}
That is, $S$ is $N\Sigma$-selfadjoint.
Let $W=F S^{-1}$. Then $F = W S$. To show that $W$ has $(M,N\Sigma^{-1})$-orthonormal columns, we show that it satisfies $W^{[M,N\Sigma^{-1}]}W=I_n$.
Substituting $W=FS^{-1}$ into $W^{[M,N\Sigma^{-1}]}W$, and using $S^{[N]}=S$, $F^{[M,N]}F = \Sigma^{-1} S^2$ and $\Sigma S = S \Sigma$, we find
\begin{align}
\label{eq:WMNW}
W^{[M,N\Sigma^{-1}]}W=&(FS^{-1})^{[M,N\Sigma^{-1}]} FS^{-1} \nonumber\\
=&S^{-[N\Sigma^{-1}]}F^{[M,N\Sigma^{-1}]} FS^{-1} \nonumber\\
=&S^{-1}\Sigma F^{[M,N]} FS^{-1} \nonumber\\
=&I_n.
\end{align}
Furthermore,
\begin{align}
\label{eq:WMNNM}
(W^{[M,N]})^{[N,M]}=&\left( (FS^{-1})^{[M,N]} \right )^{[N,M]}\nonumber\\
=&\left( S^{-[N]} F^{[M,N]} \right )^{[N,M]}\nonumber\\
=&(F^{[M,N]})^{[N,M]} (S^{-[N]})^{[N]}\nonumber\\
=&FS^{-1}\nonumber\\
=&W.
\end{align}
Now we show that the decomposition is unique. Assume there is another decomposition $F=\tilde{W}\tilde{S}$ where $\tilde{W}$ and $\tilde{S}$ satisfy the same conditions as $W$ and $S$ listed after~(\ref{eq:F=WS_MN}). Since $\tilde{S}$ is both $N$-selfadjoint and $N\Sigma$-selfadjoint, $\tilde{S}$ commutes with $\Sigma$. Thus $F^{[M,N]}F=\tilde{S}^{[N]}\tilde{W}^{[M,N]} \tilde{W} \tilde{S} =\tilde{S} \Sigma^{-1} \tilde{S}=\Sigma^{-1} \tilde{S}^{2}$, and $\Sigma F^{[M,N]}F=\tilde{S}^{2}$. Since $\tilde{S}$ is $r$-positive-definite and a square root of $\Sigma F^{[M,N]}F$, it is the principal square root of $\Sigma F^{[M,N]}F$, which is unique. Hence $\tilde{S}=S$. The uniqueness of $W$ follows from $\tilde{W}=W=FS^{-1}$.
ONLY IF: If $F$ has a decomposition $F=WS$ as in the text of the theorem, then
\begin{align}
(F^{[M,N]})^{[N,M]}=&\left((WS)^{[M,N]}\right)^{[N,M]} \nonumber\\
=&\left(S^{[N]}W^{[M,N]}\right)^{[N,M]} \nonumber \\
=&(W^{[M,N]})^{[N,M]}(S^{[N]})^{[N]} \nonumber \\
=&WS \nonumber\\
=&F.
\end{align}
Since $S$ is assumed to be both $N$-selfadjoint and $N\Sigma$-selfadjoint, $S$ commutes with $\Sigma$, and it follows that $\Sigma F^{[M,N]}F=S^2$. Since $S$ is $r$-positive-definite, and hence nonsingular, $\Sigma F^{[M,N]}F$ is nonsingular, and it follows that $F^{[M,N]}F$ is nonsingular.
\end{proof}
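The same construction carries over to rectangular matrices; a self-contained numerical sketch (helper names ours; the generalized sign is implemented eigenvalue-wise, assuming $F^{[M,N]}F$ diagonalizable) reads:

```python
import numpy as np
from scipy.linalg import sqrtm

def decompose_ws_rect(F, M, N, sesquilinear=True):
    """Right decomposition F = W S for F in K^{m x n}, m >= n (a sketch;
    assumes F^{[M,N]} F nonsingular and (F^{[M,N]})^{[N,M]} = F, with the
    stem function mapping the negative real axis to -1, all else to +1)."""
    F_sharp = F.conj().T if sesquilinear else F.T
    A = np.linalg.solve(N, F_sharp @ M) @ F   # F^{[M,N]} F  (n x n)
    lam, Q = np.linalg.eig(A)
    f = np.where((lam.real < 0) & np.isclose(lam.imag, 0.0), -1.0, 1.0)
    Sigma = Q @ np.diag(f) @ np.linalg.inv(Q)
    S = sqrtm(Sigma @ A)                      # principal square root
    return F @ np.linalg.inv(S), S, Sigma     # W, S, Sigma
```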
\begin{thm}
\label{thm:F=SW_MN}
Let $M\in \mathbb{K}^{m\times m}$ and $N\in \mathbb{K}^{n\times n}$ be nonsingular matrices with $m \le n$, and let $\sigma:\mathbb{K}^{m\times m}\to\mathbb{K}^{m\times m}$ be a generalized matrix sign function. A rectangular matrix $F\in \mathbb{K}^{m\times n}$ has a decomposition
\begin{align}
\label{eq:F=SW_MN}
F = S \, W,
\end{align}
where, with $\Sigma= \sigma(FF^{[M,N]})$, the matrix $S\in \mathbb{K}^{m \times m}$ is $r$-positive-definite, $M$-selfadjoint and $M\Sigma$-selfadjoint, and the matrix $W\in \mathbb{K}^{m \times n}$ has $(M\Sigma,N)$-orthonormal rows and satisfies $(W^{[M,N]})^{[N,M]}=W$, if and only if $FF^{[M,N]}$ is nonsingular and $(F^{[M,N]})^{[N,M]}=F$. When such a decomposition exists it is unique, with $S$ given by $S=(\Sigma F F^{[M,N]})^{1/2}$ and $W=S^{-1}F$.
\end{thm}
\begin{proof}
The proof is similar to the proof of Theorem~\ref{thm:F=WS_MN} and we only give an outline. In a way similar to (\ref{eq:FMNF}), one can show that $FF^{[M,N]}$ is $M$-selfadjoint. Let $\Sigma= \sigma(FF^{[M,N]})$ and $S=(\Sigma F F^{[M,N]})^{1/2}$. Then $S$ is both $M$ and $M\Sigma$-selfadjoint. $W$ is given by $W=S^{-1}F$. In a way similar to~(\ref{eq:WMNW},~\ref{eq:WMNNM}), one can prove that $W W^{[M\Sigma,N]}=I_m$ and $(W^{[M,N]})^{[N,M]}=W$. The proof of uniqueness is similar to the one of Theorem~\ref{thm:F=WS_MN}.
\end{proof}
In Theorems~\ref{thm:F=WS_MN} and~\ref{thm:F=SW_MN}, if $M$ and $N$ form an orthosymmetric pair, then the conditions $(F^{[M, N]})^{[N,M]}=F$ and $(W^{[M,N]})^{[N,M]}=W$ are always satisfied (Definition 3.2, Lemma 3.3 in~\cite{Higham2010}).
Notice that when $m=n$, the matrix $F$ is a square matrix. In this case, if $F^{[M,N]}F$ is nonsingular, then also $F$ and $FF^{[M,N]}$ are nonsingular. By Theorems~\ref{thm:F=WS_MN} and~\ref{thm:F=SW_MN}, $F=WS=S^\prime W^\prime$, where $W$ and $S$ satisfy the conditions in Theorem~\ref{thm:F=WS_MN} and $W'$ and $S'$ those in Theorem~\ref{thm:F=SW_MN}. The following theorem shows that $W^\prime=W$ and $S^\prime=WSW^{-1}$.
\begin{thm}
\label{thm:F=WS=SW_MN}
Let $M\in \mathbb{K}^{n\times n}$ and $N\in \mathbb{K}^{n\times n}$ be nonsingular matrices, and let $\sigma:\mathbb{K}^{n\times n}\to\mathbb{K}^{n\times n}$ be a generalized matrix sign function. Then any nonsingular matrix $F \in \mathbb{K}^{n \times n}$ such that $(F^{[M,N]})^{[N,M]}=F$ can be factorized uniquely as
\begin{align}
F = W S=S^\prime W,
\end{align}
where, with $\Sigma= \sigma(F^{[M,N]}F)$ and $\Sigma^\prime= \sigma(FF^{[M,N]})$, the matrix $S\in \mathbb{K}^{n \times n}$ is $r$-positive-definite, $N$-selfadjoint and $N\Sigma$-selfadjoint, the matrix $S^\prime \in \mathbb{K}^{n \times n}$ is $r$-positive-definite, $M$-selfadjoint and $M\Sigma^\prime$-selfadjoint, and the matrix $W\in \mathbb{K}^{n \times n}$ is $(M,N\Sigma^{-1})$-unitary, $(M\Sigma^\prime, N)$-unitary and satisfies $(W^{[M,N]})^{[N,M]}=W$. The matrices $S$, $S'$, and $W$ are given by $S=(\Sigma F^{[M,N]} F)^{1/2}$, $S'=(\Sigma'F F^{[M,N]})^{1/2}$, and $W=FS^{-1} = S^{\prime-1} F$.
\end{thm}
\begin{proof}
Let $\Sigma=\sigma(F^{[M,N]}F)$ and $\Sigma^\prime=\sigma(FF^{[M,N]})$. By Theorem~\ref{thm:F=WS_MN}, $F=WS$ with $S=(\Sigma F^{[M,N]} F)^{1/2}$ and $W=FS^{-1}$.
Since for nonsingular $F$,
\begin{align}
FF^{[M,N]}=&F (F^{[M,N]}F)F^{-1}=WS \Sigma ^{-1} S^2 (WS)^{-1} \nonumber\\
=&W \Sigma ^{-1} S^2 W^{-1}=W (F^{[M,N]}F)W^{-1},
\end{align}
then $\Sigma^\prime=W\Sigma W^{-1}$ and $S^\prime=(\Sigma^\prime F F^{[M,N]})^{1/2}=W(\Sigma F^{[M,N]}F)^{1/2}W^{-1}=WSW^{-1}$. So $F=WS=WSW^{-1}W=S^\prime W$. By Theorems~\ref{thm:F=WS_MN} and~\ref{thm:F=SW_MN}, $W$ is both $(M, N\Sigma^{-1})$-unitary and $(M\Sigma^\prime, N)$-unitary.
\end{proof}
When $M=N$, Theorem~\ref{thm:F=WS=SW_MN} reduces to Theorems~\ref{thm:F=WS} and~\ref{thm:F=SW}.
\section{Comments and examples}
\label{sec:exmp}
In this section, we make some comments on the previous theorems and give some examples.
\subsection{Comments and examples on Theorems~\ref{thm:F=WS} and~\ref{thm:F=SW}}
In Theorem~\ref{thm:F=WS}, when $\mathbb{K}=\mathbb{C}$, the same matrix $N$ can define a complex bilinear product or a sesquilinear product, and in general the corresponding $F=WS$ decompositions are different.
\begin{exmp}
\label{exmp:a}
Let the generalized sign function be defined by~(\ref{eq:Sign_1}), and let $F=\begin{pmatrix}-1+2 i\end{pmatrix} \in \mathbb{C}^{1\times1}$.
Let the matrix $N=\begin{pmatrix} 1\end{pmatrix}$ define a complex bilinear product. Then $(F^{[N]})^{[N]}=F$ and one computes $F^{[N]}F=N^{-1}F^TNF=\begin{pmatrix}-3-4i\end{pmatrix}$, $\Sigma=\begin{pmatrix}-1\end{pmatrix}$, $S=\begin{pmatrix}2+i\end{pmatrix}$, and $W=(i)$. Therefore $F=WS=\begin{pmatrix}i\end{pmatrix}\begin{pmatrix}2+i\end{pmatrix}$.
Let the matrix $N=\begin{pmatrix} 1\end{pmatrix}$ define a sesquilinear product. Then $(F^{[N]})^{[N]}=F$ and one computes $F^{[N]}F=N^{-1}\bar{F}^TNF
=\begin{pmatrix}5\end{pmatrix}$, $\Sigma=\begin{pmatrix}1\end{pmatrix}$, $S=\begin{pmatrix}\sqrt{5}\end{pmatrix}$ and $W=\begin{pmatrix}\frac{-1+2i}{\sqrt{5}}\end{pmatrix}$.
Therefore $F=WS=\begin{pmatrix}\frac{-1+2i}{\sqrt{5}}\end{pmatrix}\begin{pmatrix}\sqrt{5}\end{pmatrix}$.
\end{exmp}
Different generalized sign functions give rise to different $F=WS$ decompositions in Theorem~\ref{thm:F=WS}.
\begin{exmp}
In Example~\ref{exmp:a} with $N=(1)$ defining a complex bilinear product, replace the generalized sign function~(\ref{eq:Sign_1}) by~(\ref{eq:Sign}) and~(\ref{eq:Sign_3}), respectively.
With the generalized sign function in~(\ref{eq:Sign}), from $F^{[N]}F
=\begin{pmatrix}-3-4i\end{pmatrix}$ one finds $\Sigma=\begin{pmatrix}1\end{pmatrix}$ and $S=(\Sigma F^{[N]}F)^{1/2}=\begin{pmatrix}1-2i\end{pmatrix}$, therefore $F=WS=\begin{pmatrix}-1\end{pmatrix}\begin{pmatrix}1-2i\end{pmatrix}$.
With the generalized sign function in~(\ref{eq:Sign_3}), from $F^{[N]}F
=\begin{pmatrix}-3-4i\end{pmatrix}$ one finds $\Sigma=\begin{pmatrix}\frac{-3+4i}{5}\end{pmatrix}$ and $S=\begin{pmatrix}\sqrt{5}\end{pmatrix}$, therefore $F=WS=\begin{pmatrix}\frac{-1+2i}{\sqrt{5}}\end{pmatrix}\begin{pmatrix}\sqrt{5}\end{pmatrix}$.
\end{exmp}
In general, the decomposition $F=WS$ in Theorem~\ref{thm:F=WS} of a given matrix $F$ differs from other kinds of polar decompositions in the literature, such as the generalized polar decomposition, the $H$-polar decomposition and polar decompositions defined for particular product matrices.
The decomposition $F=WS$ is related to the generalized polar decomposition in the sense that the $N$-selfadjoint matrix $S$ is $r$-positive-definite. In particular, for the generalized sign function defined by~(\ref{eq:Sign}), if $F^{[N]}F$ has no negative real eigenvalues, then $\Sigma=I_n$ and $W$ is $N$-unitary, therefore, $F=WS$ is a generalized polar decomposition.
An $H$-polar decomposition is defined for an indefinite inner product specified by a selfadjoint matrix $H\in \mathbb{K}^{n\times n}$. In general, the decomposition $F=WS$ of a matrix $F$ for a selfadjoint matrix $N=H$ is different from an $H$-polar decomposition $F=L_HS_H$ of the same matrix $F$.
\begin{exmp} Let the generalized sign function be defined by~(\ref{eq:Sign}).
Let the selfadjoint matrix $H=\begin{pmatrix}
1&\\
&-4
\end{pmatrix}$ define a real bilinear product.
Let $F=\begin{pmatrix}
&4\\
1&
\end{pmatrix}$, which satisfies $(F^{[H]})^{[H]}=F$.
The decomposition $F=WS$ in Theorem~\ref{thm:F=WS} is computed as follows.
$
F^{[H]}F=\begin{pmatrix}
-4&\\
&-4
\end{pmatrix}$,
$ \Sigma=\begin{pmatrix}
-1&\\
&-1
\end{pmatrix}$,
$S=\begin{pmatrix}
2& \\
&2
\end{pmatrix}
$
and
$W = \begin{pmatrix}
&2\\
\frac{1}{2}&
\end{pmatrix}
$.
Therefore
$
F=WS=\begin{pmatrix}
&2\\
\frac{1}{2}&
\end{pmatrix}
\begin{pmatrix}
2&\\
&2
\end{pmatrix}.
$
Since $W$ is not $H$-unitary, this $F=WS$ decomposition is not an $H$-polar decomposition.
One possible $H$-polar decomposition is
$
F=L_HS_H=\begin{pmatrix}
-1&\\&1
\end{pmatrix}
\begin{pmatrix}
&-4\\
1&
\end{pmatrix}.
$
\end{exmp}
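This example is easily checked numerically; a minimal sketch (direct evaluation, with $\Sigma=-I_2$ read off from the stem function~(\ref{eq:Sign}) since $F^{[H]}F$ has only the negative real eigenvalue $-4$) is:

```python
import numpy as np
from scipy.linalg import sqrtm

H = np.diag([1.0, -4.0])
F = np.array([[0.0, 4.0], [1.0, 0.0]])
FH = np.linalg.solve(H, F.T @ H)   # F^{[H]} = H^{-1} F^T H (real bilinear form)
Sigma = -np.eye(2)                 # sigma(diag(-4, -4)) for this stem function
S = sqrtm(Sigma @ FH @ F)          # = 2 I
W = F @ np.linalg.inv(S)           # = [[0, 2], [1/2, 0]]
print(np.round(S.real, 6), np.round(W.real, 6), np.allclose(W @ S, F))
```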
The decomposition $F=WS$ in Theorem~\ref{thm:F=WS} is also in general different from other kinds of polar decompositions defined for scalar products given by particular matrices such as $N=I_n$, $N=Z$, $N=J$, or $N=D$ (see, e.g.,~\cite{Cardoso2002, Mackey2006}). Here
\begin{align}
Z=\begin{pmatrix}
&&1\\
&\adots&\\
1&&
\end{pmatrix}, \quad
J=\begin{pmatrix}
&I_m\\
-I_m
\end{pmatrix}, \quad
D=\begin{pmatrix}
I_p&\\
&-I_q
\end{pmatrix}.
\end{align}
\begin{exmp} Let the generalized sign function be defined by~(\ref{eq:Sign}).
Let the matrix $J=\begin{pmatrix}
&1\\
-1&
\end{pmatrix}$ define a sesquilinear product.
Let $F=\begin{pmatrix}
i&\\
&-4i
\end{pmatrix}$, which satisfies $(F^{[J]})^{[J]}=F$.
Table 2.1 in~\cite{Mackey2006} defines a polar decomposition $F=L_JS_J$, where $L_J$ is $J$-conjugate symplectic and $S_J$ is $J$-skew-Hermitian. One possible such decomposition is $F=L_JS_J=\begin{pmatrix}\frac{1}{2}&\\&2\end{pmatrix}\begin{pmatrix}2i&\\&-2i\end{pmatrix}$.
The $F=WS$ decomposition in Theorem~\ref{thm:F=WS} gives instead $F^{[J]}F=\begin{pmatrix}-4&\\&-4\end{pmatrix}$, $\Sigma=\begin{pmatrix}-1&\\&-1\end{pmatrix}$, $S=\begin{pmatrix}2&\\&2\end{pmatrix}$, and $W=\begin{pmatrix}\frac{i}{2}&\\&-2i\end{pmatrix}$, therefore $F=WS=\begin{pmatrix}\frac{i}{2}&\\&-2i\end{pmatrix}\begin{pmatrix}2&\\&2\end{pmatrix}$.
\end{exmp}
Finally, we give an example of the decompositions $F=WS$ in Theorem~\ref{thm:F=WS} and $F=S'W$ in Theorem~\ref{thm:F=SW} when $N$ is not orthosymmetric.
\begin{exmp}Let the generalized sign function be defined by~(\ref{eq:Sign_3}).
Let the nonorthosymmetric matrix $N=\begin{pmatrix} &1\\2i &\end{pmatrix}$ define a sesquilinear product.
Let $F=\begin{pmatrix}-1&\\&4i\end{pmatrix}$.
One computes
$F^{[N]}F=\begin{pmatrix}4i&\\&-4i\end{pmatrix}$, $\Sigma=\begin{pmatrix}-i&\\&i\end{pmatrix}$ and $S=\begin{pmatrix}2&\\&2\end{pmatrix}$. Therefore $F=WS=\begin{pmatrix}-\frac{1}{2}&\\&2i\end{pmatrix}\begin{pmatrix}2&\\&2\end{pmatrix}$ and $F=S^\prime W=\begin{pmatrix}2&\\&2\end{pmatrix}\begin{pmatrix}-\frac{1}{2}&\\&2i\end{pmatrix}$. In this particular example, $S=S^\prime$.
\end{exmp}
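The recipe of Theorem~\ref{thm:F=WS} is straightforward to check numerically. The following sketch (our own illustration, not part of the theory above) verifies the last example with NumPy; it assumes, as the examples suggest, that the generalized sign function of~(\ref{eq:Sign_3}) maps each eigenvalue $\lambda$ of $F^{[N]}F$ to $\overline{\lambda}/|\lambda|$.
\begin{verbatim}
# Numerical check of F = W S for the sesquilinear product given by N.
import numpy as np
from scipy.linalg import sqrtm

N = np.array([[0, 1], [2j, 0]])
F = np.array([[-1, 0], [0, 4j]])

def adj(A):                          # N-adjoint A^{[N]} = N^{-1} A^* N
    return np.linalg.inv(N) @ A.conj().T @ N

A = adj(F) @ F                       # F^{[N]} F = diag(4i, -4i)
lam, V = np.linalg.eig(A)
Sigma = V @ np.diag(lam.conj() / np.abs(lam)) @ np.linalg.inv(V)
S = sqrtm(Sigma @ A)                 # r-positive-definite square root
W = F @ np.linalg.inv(S)

assert np.allclose(W @ S, F)         # S = diag(2, 2), W = diag(-1/2, 2i)
\end{verbatim}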
\subsection{Comments and examples on Theorems~\ref{thm:F=WS_MN}-\ref{thm:F=WS=SW_MN}}
In Theorem~\ref{thm:F=WS_MN}, the $F=WS$ decomposition depends on the nature of the scalar product defined by the matrices $M$ and $N$ (bilinear or sesquilinear product).
\begin{exmp}
Let the generalized sign function be defined by~(\ref{eq:Sign_3}). Let $M=\begin{pmatrix}&1 \\i & \end{pmatrix}$, $N=\begin{pmatrix}1+i\end{pmatrix}$, and $F=\begin{pmatrix}1\\-4i\end{pmatrix}$.
If $M$ and $N$ define complex bilinear products, $(F^{[M,N]})^{[N,M]}=\begin{pmatrix}-i\\4\end{pmatrix}\ne F$, and $F$ does not have an $F=WS$ decomposition.
If $M$ and $N$ define sesquilinear products, then $(F^{[M,N]})^{[N,M]}=F$ and one computes $F^{[M,N]}F=\begin{pmatrix}-4\end{pmatrix}$, $\Sigma=\begin{pmatrix}-1\end{pmatrix}$, $S=\begin{pmatrix}2\end{pmatrix}$, and therefore $F=WS=\begin{pmatrix}\frac{1}{2}\\ -2i\end{pmatrix}\begin{pmatrix}2\end{pmatrix}$.
\end{exmp}
Theorem~\ref{thm:F=WS_MN} is related to the canonical generalized polar decomposition in~\cite{Higham2010} in the sense that the $N$-selfadjoint matrix $S$ is $r$-positive-definite. In particular, let the generalized sign function be defined by~(\ref{eq:Sign}), and let $M$ and $N$ form an orthosymmetric pair. If $F^{[M,N]}F$ has no negative real eigenvalues, then $\Sigma=I_n$ and $W$ is $(M,N)$-unitary. Therefore, in this case, $F=WS$ is a canonical generalized polar decomposition.
In general, $M$ and $N$ do not need to form an orthosymmetric pair. A more general condition is given by $(F^{[M,N]})^{[N,M]}=F$.
\begin{exmp}
Let the generalized sign function be defined by~(\ref{eq:Sign_3}). Let the matrices $M=\begin{pmatrix}&1 \\i & \end{pmatrix}$ and $N=\begin{pmatrix}& i\\-1&\end{pmatrix}$ define two complex bilinear products. Let $F=\begin{pmatrix}-1&\\&4\end{pmatrix}$, which satisfies $(F^{[M,N]})^{[N,M]}=F$. One computes $F^{[M,N]}F=\begin{pmatrix}4i&\\&4i\end{pmatrix}$, $\Sigma=\begin{pmatrix}-i&\\&-i\end{pmatrix}$, $S=\begin{pmatrix}2&\\&2\end{pmatrix}$, and $W=\begin{pmatrix}-\frac{1}{2}&\\ &2\end{pmatrix}$. Therefore $F=WS=\begin{pmatrix}-\frac{1}{2}&\\ &2\end{pmatrix}\begin{pmatrix}2&\\&2\end{pmatrix}$.
\end{exmp}
\begin{exmp}
Let the generalized sign function be defined by~(\ref{eq:Sign}). Let the matrices $M=\begin{pmatrix}4i & \\ &1 \end{pmatrix}$ and $N=\begin{pmatrix}1&\\ &-2i\end{pmatrix}$ define two sesquilinear products. Let $F=\begin{pmatrix}&1\\3i&\end{pmatrix}$, which satisfies $(F^{[M,N]})^{[N,M]}=F$. One computes $F^{[M,N]}F=\begin{pmatrix}9&\\&-2\end{pmatrix}$, $\Sigma=\begin{pmatrix}1&\\&-1\end{pmatrix}$, $S=\begin{pmatrix}3&\\&\sqrt{2}\end{pmatrix}$, and $W=\begin{pmatrix}& \frac{1}{\sqrt{2}}\\i&\end{pmatrix}$. Therefore $F=WS=\begin{pmatrix}& \frac{1}{\sqrt{2}}\\i&\end{pmatrix}\begin{pmatrix}3&\\&\sqrt{2}\end{pmatrix}$. Similarly one finds $F=S^\prime W=\begin{pmatrix}\sqrt{2}&\\ &3\end{pmatrix}\begin{pmatrix}&\frac{1}{\sqrt{2}}\\i&\end{pmatrix}$.
\end{exmp}
\section*{Acknowledgements}
This research was supported by the University of Utah.
\section*{References}
\section{Introduction}
The muon anomalous magnetic moment $a_{\mu}$ is an effective operator of dimension five
\begin{eqnarray}{\label{amu}}
\mathcal{L}\supset \frac{e}{4m_{\mu}}a_{\mu} \bar{u}\sigma_{\rho\lambda} u F^{\rho\lambda},
\end{eqnarray}
where $e$ is the electric charge, $m_{\mu}$ is the muon mass, $\sigma^{\rho\lambda}=\frac{i}{2}[\gamma^{\rho},\gamma^{\lambda}]$, and $F_{\rho\lambda}$ is the electromagnetic field strength.
Measurement of this observable provides a direct check on the completeness of a renormalizable theory such as the Standard Model (SM).
This makes $a_{\mu}$ one of the most precisely calculated observables in the SM, see \cite{Aoyama:2020ynm, 0902.3360} for reviews.
A comparison between the theoretical prediction of the SM and precise measurements on $a_{\mu}$ enables us to uncover imprints of new physics.
A so-called muon anomaly has been reported by the BNL \cite{Muong-2:2006rrc} experiment,
which shows a $3.7\sigma$ deviation of $a_{\mu}$ from the SM reference value.\footnote{The SM theoretical value of $a_{\mu}$ is subject to an important uncertainty arising from QCD hadron effects \cite{Borsanyi:2020mff}.}
Recently, this deviation has been further strengthened by the latest FNAL \cite{Muong-2:2021ojo} experiment, with a larger significance of $4.2\sigma$. Combining the BNL and FNAL data gives
\begin{eqnarray}{\label{deltaamu}}
\delta a^{\rm{exp}}_{\mu}= (2.51\pm 0.59) \times 10^{-9}.
\end{eqnarray}
To explain this anomaly, various models of new physics have been proposed in the literature.
Since $\delta a^{\rm{exp}}_{\mu}$ in Eq.(\ref{deltaamu}) is of the order of the one-loop electroweak contribution to $a_{\mu}$,
it is natural to consider new electroweak degrees of freedom such as vectorlike leptons \cite{Kannike:2011ng, Dermisek:2013gta, Freitas:2014pua,Poh:2017tfo,Dermisek:2021ajd,DeJesus:2020yqx, Frank:2020smf} among others, see \cite{Athron:2021iuf} for a comprehensive review on model buildings.
In this work, we consider an alternative scenario of composite Higgs with partial compositeness,
where composite muon partners play the role of the vectorlike leptons.
Previous work,
which advocated interpreting the muon anomaly with partial compositeness,
can be found in \cite{Redi:2013pga,Antipin:2014mda,Doff:2015nru, Megias:2017dzd, Cacciapaglia:2021gff}.
In this paper, we will study this scenario systematically, based on a complete effective field theory description of it.
As in the aforementioned vectorlike lepton models,
the desired contribution to $a_{\mu}$ due to the composite lepton partners
is expected to be constrained by their effects on the other precise electroweak observables,
through gauge and/or Yukawa interactions.
The rest of this paper is organized as follows.
In Sec.\ref{model}, we introduce the effective field theory description of our scenario,
with key features different from those of vectorlike leptons emphasized.
Sec.\ref{anomaly} addresses the new contributions to $a_{\mu}$ due to the composite quark and composite lepton partners,
which are two- and one-loop effect respectively.
We will show that the significance of the observed anomaly can be naturally reduced to the $2\sigma$ level by the composite muon partners with a mass scale of order $\sim$ TeV and a strong coupling constant $g_{*}$ of order unity.
In Sec.\ref{constraint}, we firstly consider indirect constraints from precision measurements on Higgs, $Z$ and oblique electroweak parameters, then discuss the prospect of searching for signals in the surviving parameter regions in terms of both Drell-Yan and Higgs-associated productions of the composite muon partners at the high-luminosity LHC.
We present the main numerical results in Sec.\ref{results}.
Finally, we conclude in Sec.\ref{con}.
\section{Partial Compositeness}
\label{model}
In the framework of composite Higgs with partial compositeness,
as firstly proposed by refs.\cite{Dugan:1984hq, Kaplan:1991dc},
the general effective Lagrangian for partially composite fermion degrees of freedom based on coset $\mathcal{G}/\mathcal{H}$ reads as
\begin{eqnarray}{\label{Lag}}
\mathcal{L}&=& i\bar{\Psi}^{i}\gamma^{\mu}D_{\mu}\Psi^{i}-m_{\Psi^{i}_{L}}\bar{\Psi}_{L}^{i}\Psi^{i}_{L}-m_{\Psi^{i}_{R}}\bar{\Psi}_{R}^{i}\Psi^{i}_{R}\nonumber\\
&-& g_{*}\left[Q^{i}\bar{U}^{i}_{R}H+Q^{i}\bar{D}^{i}_{R}\tilde{H}+L^{i}\bar{E}^{i}_{R}\tilde{H} \right]+h.c \nonumber\\
&-&\left[\sin\theta_{\psi^{i}_{L}}m_{\Psi^{i}_{L}}\bar{\psi}^{i}_{L}\Psi^{i}_{L}+\sin\theta_{\psi^{i}_{R}}m_{\Psi^{i}_{R}}\bar{\Psi}^{i}_{R}\psi^{i}_{R}\right]+h.c.
\end{eqnarray}
where $i$ is generation index,
$\Psi_{L}$ refer to the composite doublets $Q=(U_{L}, D_{L})^{T}$, $L=(N_{L}, E_{L})^{T}$ and $\Psi_R$
composite singlets $U_{R}$, $D_{R}$, $E_{R}$ respectively,
$\psi_{L}$ denote the elementary doublets $q=(u_{L}, d_{L})^{T}$, $l=(\nu_{\ell}, \ell_{L})^{T}$ and $\psi_R$ elementary singlets $u_{R}$, $d_{R}$, $\ell_{R}$ respectively.
Here, $H=(H^{+}, \upsilon+ h)^{T}$ is the composite Higgs doublet,
with $\upsilon$ the weak scale and $\tilde{H}=i\sigma_{2}H^{*}$.
Besides the global group $\mathcal{G}/\mathcal{H}$,
two external groups, i.e., the QCD gauge group and the $U(1)_X$ global group, are also necessary.
For a review, see \cite{Panico:2015jxa}.
Apart from the above fields, the masses and couplings in Eq.(\ref{Lag}) have the following physical contents.
\begin{itemize}
\item $m_{\Psi^{i}_{L}}$ and $m_{\Psi^{i}_{R}}$ in the first line denote the vectorlike masses for the composite doublets and singlets, respectively.
\item The gauge covariant derivative for a composite lepton partner relies on its explicit representation.
Here, we simply adopt $D_{\mu}=\partial_{\mu}-iYg'B_{\mu}-ig W_{\mu}$,\footnote{We will remind the reader when arguments related to this point are made.}
where $B_{\mu}$ is the $U(1)_Y$ gauge field with gauge coupling $g'$ and hypercharge $Y$,
and $W_{\mu}$ is the $SU(2)_L$ gauge field with gauge coupling $g$.
\item The second line of Eq.(\ref{Lag}) corresponds to Yukawa couplings \cite{Contino:2006nn, Redi:2011zi} of composite fermions to the composite Higgs,
with coupling constant $g_{*}$ as large as $\sim \sqrt{4\pi}$ which are different from those of vectorlike lepton models \cite{Kannike:2011ng, Dermisek:2013gta, Freitas:2014pua, Poh:2017tfo, Dermisek:2021ajd,DeJesus:2020yqx, Frank:2020smf}.
These Yukawa interactions give rise to mixing effects between the composite doublet and singlet states due to electroweak symmetry breaking.
\item As opposed to the mixing effects between the elementary and composite fermions in some of those vectorlike models,
which are induced by the electroweak symmetry breaking,
the mixings \cite{Redi:2013pga, Redi:2011zi} in the last line of Eq.(\ref{Lag}), which are characterized by the small mixing angles $\sin\theta_{\psi^{i}_{L}}$ and $\sin\theta_{\psi^{i}_{R}}$,
are instead external to the electroweak symmetry breaking.
They superficially respect $\mathcal{G}$ by insertion(s) of Goldstone matrix $U$,
but explicitly break it.
\end{itemize}
We are aware that, instead of the effective Lagrangian description in Eq.(\ref{Lag}), there is an alternative description of the interactions between the composite Higgs and composite fermions based on the Goldstone matrix $U$ in the Callan-Coleman-Wess-Zumino formalism \cite{Panico:2015jxa}.
Both descriptions should be viable as inspired by the couplings of $\pi$ to nucleons in QCD.
In the description \cite{Contino:2006nn, Panico:2015jxa} we follow here,
it is more transparent to reproduce the SM Yukawa couplings after the diagonalization of the mixing effects between the composite and elementary fermions, see e.g, below Eq.(\ref{Yukawa}).
Along this line, both hierarchical and diagonal $\sin\theta_{\psi^{i}_{L}}$ and $\sin\theta_{\psi^{i}_{R}}$ in flavor space are
technically appropriate to address the elementary fermion mass hierarchy and small flavor-violation effects simultaneously.
For a quantitative analysis on the composite fermion contributions to $a_{\mu}$,
it is sufficient to make use of a set of parameters as composed of composite partner masses $m_{\Psi_{i}}$,
the strong coupling constant $g_{*}$, and the small mixing angles $\sin\theta_{\psi^{i}_{L}}$ and $\sin\theta_{\psi^{i}_{R}}$.
\section{Muon g-2}
\label{anomaly}
In this section we discuss the contributions of new composite fermion partners to muon $a_{\mu}$.
We will show that while the effect of composite quark partners is negligible,
the composite muon partners are able to address the anomaly.
\subsection{Composite Quark Partners}
Composite quark states contribute to $a_{\mu}$ either via
hadronic effects in the low-energy region or via two-loop electroweak corrections.
The hadronic effect enters through the photon vacuum polarization \cite{0902.3360}
\begin{eqnarray}{\label{had}}
\delta a^{\rm{had}}_{\mu}\sim \int \frac{R(s)\hat{K}(s)}{s^{2}} ds,
\end{eqnarray}
where $R(s)$ is the total cross section of electron pair annihilation with $s$ the center-of-mass energy,
and $\hat{K}$ is the weight function.
As shown in Eq.(\ref{had}), the hadronic effect is proportional to $s^{-2}$, which dramatically suppresses the contribution of heavy colored states.
In the composite quark mass range $m_{\Psi_{C}}>m_{t}$,
$\delta a_{\mu}(\rm{had})< 1\%\times a^{SM}_{\mu}(\rm{had})$,
where $a^{SM}_{\mu}(\rm{had})\approx (69.0 \pm0.526) \times 10^{-9}$ \cite{0902.3360} is the SM hadron contribution.
Given the large coupling $g_*$, the two-loop electroweak corrections to $a_{\mu}$ are dominated by a type of Barr-Zee \cite{Barr:1990vd} diagram.
In this diagram, the composite quark partners run in the inner fermion loop which gives an effective vertex of type $\Gamma_{\gamma\gamma h}$,
and a mixing-mass insertion between the elementary and composite muon is needed on the fermion line attached to the external muon in order to flip the muon chirality.
Compared to this diagram,
the other two-loop Feynman diagrams as those of SM \cite{Czarnecki:1995wq,Peris:1995bb} are relatively subdominant.
Taking as illustrative inputs the model-independent LHC bound on the composite quark mass, larger than $\sim 1.5$ TeV \cite{ATLAS:2018ziw, CMS:2018zkf}, the model-dependent LHC bound on the composite muon mass, above $\sim 200$ GeV \cite{CMS:2019hsm, ATLAS:2015qoy},
and the theoretical limit on the strong coupling $g_{*}\leq \sqrt{4\pi}$,
one finds that this diagram contributes to $\delta a_{\mu}$ a value at least two orders of magnitude below that reported in Eq.(\ref{deltaamu}).
Adjusting the value of $g_*$ and/or $m_{\Psi_{\mu}}$ in allowed ranges does not change the situation.
\subsection{Composite Lepton Partners}
The small mixings between the composite muon partners and the elementary muon in Eq.(\ref{Lag}) lead to the following interactions
\begin{eqnarray}{\label{leptona}}
\mathcal{L}_{\rm{int}}&=&\left[g^{W}_{NL}\bar{N}\gamma^{\mu}\mu_{L}+g^{W}_{NR}\bar{N}\gamma^{\mu}\mu_{R}\right]W^{+}_{\mu}+h.c\nonumber\\
&+&\left[g^{H}_{FL}\bar{\Psi}_{F}\mu_{L}+g^{H}_{FR}\bar{\Psi}_{F}\mu_{R}\right]h+h.c,
\end{eqnarray}
where the couplings are explicitly shown in Table.\ref{couplings}.
Here, the index $F= A, B$ refers to the two mass eigenstates of the charged left- and right-hand composite muon partners with mixing angle $\beta$, see Eq.(\ref{masses}).
In Eq.(\ref{leptona}) the absence of the vertex $\Psi_{F}-\mu-\gamma$ is due to the equal electric charges $Q_{\Psi_{\mu}}=Q_{\mu}=-1$,
whereas that of the vertex $\Psi_{F}-\mu-Z$ follows from the equal weak isospins we have adopted.\footnote{
This vertex can be present in multi-site realizations of the scenario,
where the gauge covariant derivative takes a form $D_{\mu}=\partial_{\mu}-ig'Y B_{\mu}-ig_{*}\rho_{\mu}$,
with $\rho_{\mu}$ heavy composite gauge bosons to mix with elementary electroweak gauge bosons.
Even so, the effect of this interaction on $a_{\mu}$ is still small compared to that of the Yukawa interaction with large $g_{*}$, see below Eq.(\ref{lepton}).}
\begin{table}
\begin{center}
\begin{tabular}{cccc}
\hline\hline
~~~~& $A$~~~~& $B$~~~~~~& $N$\\ \hline
$g^{W}_{NL}$~~ & $-$~~~~ & $-$~~~~ & $\frac{g}{\sqrt{2}}\sin\theta_{\mu_{L}}$ \\
$g^{W}_{NR}$~~ & $-$~~~~ & $-$~~~~ & $0$ \\
$g^{H}_{FL}$~~ & $-g_{*}\sin\theta_{\mu_{L}}\sin\beta$ ~~~~& $g_{*}\sin\theta_{\mu_{L}}\cos\beta$ & $-$ \\
$g^{H}_{FR}$~~ & $g_{*}\sin\theta_{\mu_{R}}\cos\beta$~~~~ & $g_{*}\sin\theta_{\mu_{R}}\sin\beta$ & $-$ \\
\hline \hline
\end{tabular}
\caption{Couplings of the composite lepton partners to the elementary left- and right-hand muon with index $F=A,B$ in the leading order of $\sin\theta_{\mu_{L}}$ and/or $\sin\theta_{\mu_{R}}$. See texts for details about the absences of both $\gamma$- and $Z$-relevant vertex.}
\label{couplings}
\end{center}
\end{table}
As mentioned above, the small mixing angles $\sin\theta_{\mu_{L,R}}$ in Table.\ref{couplings} are related to the SM muon Yukawa coupling as follows
\begin{eqnarray}{\label{Yukawa}}
y_{\mu}=g_{*}\sin\theta_{\mu_{L}} \sin\theta_{\mu_{R}} m_{\Psi_{\mu_{L}}}m_{\Psi_{\mu_{R}}}\left[\frac{1}{4}\left(m^{-2}_{A}+m^{-2}_{B}\right)\sin2\beta+\left(m^{-1}_{A}m^{-1}_{B}\right)\cos2\beta \right]
\end{eqnarray}
where the left- and right-hand composite muon masses have been eliminated via the mass eigenvalues $m_A$ and $m_B$ with
\begin{eqnarray}{\label{masses}}
m_{A}&=& m_{\Psi_{\mu_{L}}}\cos^{2}\beta+g_{*}\upsilon\sin2\beta+ m_{\Psi_{\mu_{R}}}\sin^{2}\beta, \nonumber\\
m_{B}&=& m_{\Psi_{\mu_{R}}}\cos^{2}\beta-g_{*}\upsilon\sin2\beta+ m_{\Psi_{\mu_{L}}}\sin^{2}\beta,
\end{eqnarray}
and the angle $\beta$ satisfies
\begin{eqnarray}{\label{beta}}
\left(m_{\Psi_{\mu_{L}}}-m_{\Psi_{\mu_{R}}}\right)\sin2\beta=2g_{*}\upsilon\cos2\beta.
\end{eqnarray}
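For illustration, Eqs.(\ref{masses})-(\ref{beta}) can be solved explicitly. The sketch below is only indicative: it assumes the normalization $\upsilon\simeq 174$ GeV suggested by $H=(H^{+},\upsilon+h)^{T}$, and picks the branch $0<\beta<\pi/2$ of Eq.(\ref{beta}).
\begin{verbatim}
# Mixing angle and mass eigenvalues of the charged composite muon
# partners, from Eqs.(masses)-(beta).  upsilon = 0.174 TeV is an
# assumed normalization, not fixed by the text.
import math

upsilon = 0.174  # TeV

def mixing(m_L, m_R, g_star):
    beta = 0.5 * math.atan2(2 * g_star * upsilon, m_L - m_R)
    c2, s2, s2b = math.cos(beta)**2, math.sin(beta)**2, math.sin(2*beta)
    m_A = m_L * c2 + g_star * upsilon * s2b + m_R * s2
    m_B = m_R * c2 - g_star * upsilon * s2b + m_L * s2
    return beta, m_A, m_B

# degenerate limit m_L = m_R: beta = pi/4, i.e. tan(beta) = 1
print(mixing(1.0, 1.0, 3.0))  # (pi/4, 1.522, 0.478) in TeV
\end{verbatim}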
Using Table.\ref{couplings} one obtains the contribution to $a_{\mu}$ from each individual Feynman diagram in Fig.\ref{Feyn},
\begin{eqnarray}{\label{lepton}}
\delta a^{Z}_{\mu}&\approx&\sum_{F=A,B}-\frac{m^{2}_{\mu}}{16\pi^{2}m^{2}_{Z}}\left[\frac{4}{3}g^{Z}_{FL}g^{Z}_{FR}\frac{m_{F}}{m_{\mu}}\mathcal{C}\left(\frac{m^{2}_{F}}{m^{2}_{Z}}\right)\right]\nonumber\\
\delta a^{W}_{\mu}&\approx&-\frac{m^{2}_{\mu}}{16\pi^{2}m^{2}_{W}}\left[-\frac{1}{3}(g^{W}_{NL})^{2}\mathcal{M}\left(\frac{m^{2}_{N}}{m^{2}_{W}}\right)\right]\nonumber\\
\delta a^{H}_{\mu}&\approx&\sum_{F=A,B}-\frac{m^{2}_{\mu}}{16\pi^{2}m^{2}_{h}}\left[-\frac{2}{3}g^{H}_{FL}g^{H}_{FR}\frac{m_{F}}{m_{\mu}}\mathcal{F}\left(\frac{m^{2}_{F}}{m^{2}_{h}}\right)\right],
\end{eqnarray}
in the large composite mass region $m_{F}>m_{h}, m_{Z}, m_{W}$,
where the loop functions $\mathcal{C}$, $\mathcal{M}$ and $\mathcal{F}$ can be found in ref.\cite{Athron:2021iuf}.
\begin{figure}
\centering
\includegraphics[width=13cm,height=3.5cm]{Feyn.pdf}
\centering
\caption{Feynman diagrams for the new contributions to $a_{\mu}$ due to the composite states with $\Psi_F$ the composite muon states and $N$ the neutral field.}
\label{Feyn}
\end{figure}
In our model, the correction to $a_{\mu}$ due to the composite lepton partners in Eq.(\ref{lepton}) is dominated by the $h$-exchange diagram.
Given a fixed $g_*$, the value of $\delta a^{H}_{\mu}$ therein, which is proportional to $\sin2\beta$ as seen in Table.\ref{couplings},
is maximized at $\tan\beta=1$, which corresponds to $m_{\Psi_{\mu_{L}}}\approx m_{\Psi_{\mu_{R}}}\approx m_{\Psi_{\mu}}$.\footnote{In contrast to the degenerate masses, a large mass splitting between $m_{\Psi_{\mu_{L}}}$ and $m_{\Psi_{\mu_{R}}}$ leads to $\tan\beta\sim 0$, which dramatically suppresses the magnitude of $\delta a^{H}_{\mu}$.}
Under this choice $\delta a^{H}_{\mu}$ explicitly reads
\begin{eqnarray}{\label{app}}
\delta a^{H}_{\mu}\approx \frac{g_{*}y_{\mu}}{2\pi^{2}} \frac{m_{\mu}}{m_{\Psi_{\mu}}}f\left(\frac{g_{*}\upsilon}{m_{\Psi_{\mu}}}\right),
\end{eqnarray}
with the function
\begin{eqnarray}{\label{f}}
f(x)=\frac{x}{(1-x^{2})[(1-x)^{-2}+(1+x)^{-2}]},
\end{eqnarray}
where the small mixing angles $\sin\theta_{\mu_{L,R}}$ have been replaced by $y_{\mu}$ in Eq.(\ref{Yukawa}), and the physically relevant region of $x$ is $0<x<1$.
Eq.(\ref{app}) shows that $\delta a^{H}_{\mu}$ is sensitive to $g_{*}$, $m_{\Psi_{\mu}}$ and $x$.
To resolve the muon anomaly, a large coupling $g_{*}$ seems necessary.
For instance, with $g_{*}\sim 3$ and $m_{\Psi_{\mu}}\sim 1$ TeV the reported muon g-2 anomaly is easily resolved.
We will present more details about the viable parameter space based on Eq.(\ref{app}) in Sec.\ref{results} after we investigate the constraints in the next section.
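As a quick numerical cross-check of Eq.(\ref{app}), consider the sketch below. It relies on assumed conventions: $\upsilon\simeq 174$ GeV from the normalization of Eq.(\ref{Lag}), and $y_{\mu}=m_{\mu}/\upsilon$.
\begin{verbatim}
import math

m_mu, upsilon = 0.1057, 174.0      # GeV; assumed conventions
y_mu = m_mu / upsilon

def f(x):                          # Eq.(f), valid for 0 < x < 1
    return x / ((1 - x**2) * ((1 - x)**-2 + (1 + x)**-2))

def delta_a_H(g_star, m_psi):      # Eq.(app), masses in GeV
    x = g_star * upsilon / m_psi
    return g_star * y_mu / (2 * math.pi**2) * (m_mu / m_psi) * f(x)

print(delta_a_H(3.0, 1000.0))      # ~1.5e-9
\end{verbatim}
The printed value, about $1.5\times 10^{-9}$, indeed lies within $2\sigma$ of the combined result in Eq.(\ref{deltaamu}).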
\section{Constraints}
\label{constraint}
\subsection{Indirect Constraints}
\subsubsection{Higgs Portal}
The strong coupling of $\Psi_F$ to the composite Higgs is the key ingredient to address the muon anomaly,
which can be constrained by precise measurements on the Higgs couplings.
For the couplings of Higgs to SM gauge bosons we consider the one-loop modification to $h\rightarrow\gamma\gamma$ due to the composite muon partners as
\begin{eqnarray}{\label{Yukawaa}}
\Delta \Gamma(h\rightarrow\gamma\gamma)\approx\frac{\alpha^{2}g^{2}g^{2}_{*}\sin^{2}(2\beta)}{576\pi^{3}} \frac{m^{3}_{h}}{m^{2}_{W}},
\end{eqnarray}
where the large mass limit $m_{\Psi_{\mu}}>m_{h}$ has been used.
Comparing Eq.(\ref{Yukawaa}) to current LHC bounds \cite{ATLAS:2019nkf, CMS:2018uag} imposes an upper
bound on the magnitude of $g_{*}$.
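For orientation, Eq.(\ref{Yukawaa}) can be compared with the SM di-photon width. The sketch below is only a rough estimate; the inputs ($\alpha$, $g$, the masses, and the SM width $\Gamma(h\rightarrow\gamma\gamma)\approx 9.3$ keV) are standard values supplied by us, with $\sin2\beta=1$ for the degenerate case.
\begin{verbatim}
import math

alpha, g = 1/137.0, 0.65
m_h, m_W = 125.0, 80.4            # GeV
Gamma_SM = 9.3e-6                 # GeV, SM h -> gamma gamma width

def delta_Gamma(g_star, sin2b=1.0):
    return (alpha**2 * g**2 * g_star**2 * sin2b**2
            / (576 * math.pi**3) * m_h**3 / m_W**2)

print(delta_Gamma(1.8) / Gamma_SM)  # ~0.13: an O(10%) shift,
                                    # near current LHC sensitivity
\end{verbatim}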
Just like $h\rightarrow\gamma\gamma$, the decay mode $h\rightarrow Z\gamma$ \cite{Azatov:2013ura} imposes a similar constraint on $g_{*}$, even though the latter is not always independently analyzed by LHC experiments.
For the coupling of Higgs to SM fermions, the precision tests on the SM Yukawa couplings provide information on the composite mass scale as
\begin{eqnarray}{\label{Yukawad}}
\frac{\delta y_{\psi}}{y_{\psi}}\sim \frac{\upsilon^{2}}{f^{2}}
\end{eqnarray}
where $f$ is the breaking scale of $\mathcal{G}$.
Taking a conservative lower bound $f\geq 1$ TeV \cite{Xu:2020omq} inferred from current LHC data
implies that the composite mass scale $m_{*}\approx g_{*}f\geq g_{*}$ TeV.
If there is a large mass gap between $m_{*}$ and $m_{\Psi_{\mu}}$, the hypothesis of universal mass in the composite sector has to be abandoned.
\subsubsection{$Z$ Portal}
The mixing effects between the elementary and the composite muon field modify
the $Z-\mu-\mu$ couplings as
\begin{eqnarray}{\label{Zuu}}
g^{Z}_{\mu_{L}\mu_{L}}&\approx&\left(-\frac{1}{2}+s^{2}_{W}\right)(1+\sin^{2}\theta_{\mu_{L}}) \nonumber\\
g^{Z}_{\mu_{R}\mu_{R}}&\approx&s^{2}_{W}(1+\sin^{2}\theta_{\mu_{R}}),
\end{eqnarray}
where the unity in the brackets corresponds to the SM value, with $s_W$ the weak mixing angle.
Imposing the LEP constraint $\delta g^{Z}_{\mu_{L}\mu_{L}}\leq 10^{-3}$ \cite{ParticleDataGroup:2016lqr} implies
$\sin\theta_{\mu_{L}}\leq 3.3\times 10^{-2}$, which is far above the lower bound $\sin\theta_{\mu_{L}}\geq 10^{-4}$ as seen from the SM muon Yukawa coupling in Eq.(\ref{Yukawa}).
Note that the corrections in Eq.(\ref{Zuu}) are model-dependent as they depend on
the isospins of the composite muon partners.
\subsubsection{Oblique Corrections}
The last precision tests involve the oblique parameters \cite{Baak:2012kk}.
The oblique parameter $\hat{S}$, tied to the $W^{3}_{\mu}$ and $B_{\mu}$ gauge bosons, receives corrections from the composite vector bosons $\rho_{\mu}$ at tree level and from the composite Higgs and fermions at loop level.
The tree-level correction $\hat{S}(\rho_{\mu})\sim m^{2}_{W}/m^{2}_{*}$ implies that $m_{*}\geq 3$ TeV.
Moreover, the oblique parameter $\hat{T}$ does not receive tree-level correction due to the protection of custodial symmetry.
$\hat{T}$ only obtains loop corrections due to the composite Higgs and fermions.
Similar to vectorlike doublet $+$ singlet fermion contributions \cite{Lavoura:1992np,Grojean:2013qca},
the one-loop corrections to $\hat{S}$ and $\hat{T}$ due to the composite muon partner loops are
\begin{eqnarray}{\label{ST}}
\Delta\hat{S}&\sim&\frac{g^{2}}{16\pi^{2}} \xi \ln\left(\frac{m_{*}}{m_{\Psi_{\mu}}}\right),\nonumber\\
\Delta\hat{T}&\sim& \frac{1}{16\pi^{2}}\xi\sin^{4}\theta_{\mu_{L}}\frac{m_{\Psi_{\mu}}^{2}}{f^{2}},
\end{eqnarray}
with $\xi=\upsilon^{2}/f^{2}$.
The correction to the $\hat{T}$ parameter in Eq.(\ref{ST}) is greatly suppressed by the small factor $\sin^{4}\theta_{\mu_{L}}$, which is bounded from above by the $Z$-coupling constraint in Eq.(\ref{Zuu}).
Likewise, given $\xi\leq 0.1$ inferred from Eq.(\ref{Yukawad}) the correction to the $\hat{S}$ parameter in Eq.(\ref{ST}) can be safely neglected as well.
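A quick order-of-magnitude check supports this statement (our own sketch; the inputs saturate the bounds quoted above):
\begin{verbatim}
import math

xi = 0.1                    # upsilon^2/f^2, from f >= 1 TeV
g2 = 0.65**2
sin_th = 3.3e-2             # LEP bound below Eq.(Zuu)
m_psi, m_star, f = 1.0, 3.0, 1.0   # TeV

dS = g2 / (16 * math.pi**2) * xi * math.log(m_star / m_psi)
dT = xi * sin_th**4 / (16 * math.pi**2) * m_psi**2 / f**2
print(dS, dT)   # ~3e-4 and ~8e-10, below the typical 1e-3
                # sensitivity of electroweak fits
\end{verbatim}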
\subsection{Direct Constraints}
Let us now consider direct searches constraining the parameter space relevant to the muon anomaly.
The couplings and the small mixings of the composite muon partners $\Psi_F$ are the key factors in developing strategies to search for them at colliders,
which obviously differ from searches for composite quark partners \cite{Barcelo:2011wu, DeSimone:2012fs, Cacciapaglia:2018qep}, composite gauge bosons \cite{Thamm:2015zwa, BuarqueFranzosi:2016ooy, Liu:2018hum},
and composite Higgs \cite{Banerjee:2021efl}.
Once produced at colliders,
the composite muon partners decay through the modes $\Psi^{\pm}_{F}\rightarrow W^{\pm}\nu_{\mu}$, $\Psi^{\pm}_{F}\rightarrow h\mu^{\pm}$ and $\Psi^{\pm}_{F}\rightarrow W^{\pm}N$ for the heavier state $F=A$.
For a practical purpose, it is sufficient to focus on the signals of the lighter state $\Psi_B$ at the 14-TeV LHC.
For simulation of signals and relevant SM backgrounds in the following, we use FeynRules \cite{1310.1921} to generate model files prepared for MadGraph5 \cite{1405.0301} that includes Pythia 8 \cite{Sjostrand:2014zea} for parton showering and hadronization and the package Delphes 3 \cite{1307.6346} for fast detector simulation.
\begin{figure}
\centering
\includegraphics[width=8cm,height=4cm]{DY.pdf}
\centering
\caption{Feynman diagram for the Drell-Yan productions of $\Psi_B$ pair at the LHC with the decay mode $\Psi^{\pm}_{B}\rightarrow h\mu^{\pm}$.}
\label{DY}
\end{figure}
\subsubsection{Gauge Boson Portal}
The $\Psi_B$ pair can be produced either through $e^{+}e^{-}\rightarrow Z^{*}/\gamma^{*}\rightarrow \Psi^{+}_{B}\Psi^{-}_{B}$ at LEP or Drell-Yan processes $pp\rightarrow Z^{*}/\gamma^{*}\rightarrow \Psi^{+}_{B}\Psi^{-}_{B}$ at LHC.
Here, do not confuse the coupling $\bar{\Psi}_{F}-\Psi_{F}-Z$ with the coupling $\bar{\Psi}_{F}-\mu-Z$.
So far, the LEP-2 has excluded the mass range $m_{B}\leq 101.2$ GeV \cite{L3:2001xsz},
while the LHC has reported a limit $m_{B}\geq 200$ GeV based on the decay mode $\Psi^{\pm}_{B}\rightarrow Z\mu^{\pm}$ \cite{ATLAS:2015qoy} in certain vectorlike lepton models.
We can revisit this LHC bound by repeating the Drell-Yan processes with Feynman diagram shown in Fig.\ref{DY},
where the decay mode $\Psi^{\pm}_{B}\rightarrow Z\mu^{\pm}$ is replaced by our decay mode $\Psi^{\pm}_{B}\rightarrow h\mu^{\pm}$.
Followed by the decay of $h\rightarrow \bar{b}b$,
the primary SM background for this signal is given by $pp \rightarrow Z/\gamma(\ell^{+}\ell^{-})+{\rm jets}$.
Fig.\ref{DYc} shows the cross sections of the Drell-Yan processes for the lighter composite muon state $\Psi_{B}$ at the 14-TeV LHC with $g_{*}=\sqrt{4\pi}$ that represents a typical value of $g_{*}$ in the parameter space obtained from the muon g-2 anomaly.
In this figure, the virtual $Z$ contribution is subdominant in comparison with the virtual photon contribution.
This follows from the fact that the coupling of the $\Psi_{B}$ pair to $Z$ is smaller than its coupling to $\gamma$, as in the SM.
The SM background for this signal comes from both $pp \rightarrow Z(\ell^{+}\ell^{-})+{\rm jets}$ and $pp \rightarrow \gamma(\ell^{+}\ell^{-})+{\rm jets}$, with leading-order cross sections of $4.8\times 10^6$ fb and $6.5\times 10^6$ fb, respectively.
Combining them gives a total cross section of about $1.13\times10^7$ fb,
implying that the signal cross section shown in Fig.\ref{DYc} is at least six orders of magnitude smaller than its SM background cross section.
\begin{figure}
\centering
\includegraphics[width=13cm,height=8cm]{DYC.pdf}
\centering
\caption{Cross sections of the Drell-Yan processes in Fig.\ref{DY} as a function of the lighter mass $m_B$ at the 14-TeV LHC with $g_{*}=\sqrt{4\pi}$,
compared to the total cross section of the SM background of about $1.13\times10^7$ fb.}
\label{DYc}
\end{figure}
In order to distinguish the above signal from its SM background,
we apply the latest CMS cuts \cite{CMS:2022cpr, CMS:2022qww} on four b-jets and the CMS cuts \cite{CMS:2015rjz} on
two muons with same flavor but opposite sign
\begin{eqnarray}{\label{cuts}}
{p_{T}}_{j_{1(2,3,4)}} &>& 45 ~{\rm GeV}, \nonumber\\
{p_{T}}_{\mu_{1(2)}} &>& 20~{\rm GeV} , \nonumber \\
|\eta_{j_{1(2,3,4)}}|&<& 2.5, \nonumber\\
|\eta_{\mu_{1(2)}}| &<& 2.4,
\end{eqnarray}
to our Drell-Yan processes,
where ${p_{T}}_{j_{1(2,3,4)}}$ and $\eta_{j_{1(2,3,4)}}$ are the transverse momentum
and pseudo-rapidity of jet respectively,
while ${p_{T}}_{\mu_{1(2)}}$ and $\eta_{\mu_{1(2)}}$ are the transverse momentum
and pseudo-rapidity of muon respectively.
Besides the cuts in Eq.(\ref{cuts}),
any event with an additional jet with $p_T > 30$ GeV or muon with $p_T > 10 $ GeV is rejected.
The cuts in Eq.(\ref{cuts}) can dramatically reduce the number of SM events arising from $pp \rightarrow \gamma(\ell^{+}\ell^{-})+{\rm jets}$,
which makes the SM background of the Drell-Yan processes similar to that of the following $h$-associated processes.
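At the analysis level, the selection of Eq.(\ref{cuts}) together with the extra-activity veto amounts to something like the following sketch (our own illustration; b-tagging and the opposite-sign requirement on the muon pair are assumed to be applied upstream, e.g. by Delphes):
\begin{verbatim}
def select(jets, muons):
    # jets:  all jets with pT > 30 GeV, as (pT [GeV], eta), pT-ordered
    # muons: all muons with pT > 10 GeV, same format
    if len(jets) != 4 or len(muons) != 2:      # extra-activity veto
        return False
    if any(pt <= 45 or abs(eta) >= 2.5 for pt, eta in jets):
        return False
    if any(pt <= 20 or abs(eta) >= 2.4 for pt, eta in muons):
        return False
    return True
\end{verbatim}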
\begin{figure}
\centering
\includegraphics[width=13cm,height=4cm]{pro.pdf}
\centering
\caption{Feynman diagrams for the $h$-associated single (left) and pair (right) production of $\Psi_B$ at the LHC.}
\label{pro}
\end{figure}
\subsubsection{Higgs Portal}
Through the Higgs, $\Psi_B$ can be singly produced via $pp\rightarrow Zh^{*}\rightarrow Z\Psi^{\pm}_{B}\mu^{\mp}$, which is suppressed by the small mixing angles $\theta_{\mu_{L}}$ and $\theta_{\mu_{R}}$, or produced in pairs via $pp\rightarrow \Psi^{+}_{B}\Psi^{-}_{B}jj'$, which is instead not suppressed by the small mixing angles.
Note that the jets $j$ and $j'$ can be either the same or different.
Fig.\ref{pro} shows the Feynman diagrams for these two processes.
In Fig.\ref{pro}(a),
the single production $pp\rightarrow Zh^{*}\rightarrow l^{+}l^{-}\Psi^{\pm}_{B}\mu^{\mp}\rightarrow l^{+}l^{-}h\mu^{+}\mu^{-}$ is followed by the SM decay $Z\rightarrow l^{+}l^{-}$ with $l=e,\mu$ and the decay mode $\Psi^{\pm}_{B}\rightarrow h\mu^{\pm}$,
for which the SM background is $pp \rightarrow$ mono-Z+jets with cross section of $1.9\times10^{2}$ fb.
In Fig.\ref{pro}(b), the pair production $pp\rightarrow \Psi^{+}_{B}\Psi^{-}_{B}jj'\rightarrow hh\mu^{+}\mu^{-}jj'$ is followed by the decay mode $\Psi^{\pm}_{B}\rightarrow h\mu^{\pm}$,
for which the SM background is $pp \rightarrow Z+$ jets with cross section of $4.8\times10^{6}$ fb.
Fig.\ref{cs} shows the cross sections of the two processes in Fig.\ref{pro} at the 14-TeV LHC with $g_{*}=\sqrt{4\pi}$.
In the case of single production we have additionally chosen equal small mixing angles $\theta_{\mu_{L}}\approx\theta_{\mu_{R}}$.
Adjusting the allowed values of these small mixing angles may yield a larger cross section for the single production.
However, this is still not enough to make this process relevant.
In contrast, the pair production seems more promising,
as the gap between the cross sections of the signal and the SM background is smaller for masses $m_B$ below 300 GeV.
As the final states of this signal contain four b-jets,
the CMS cuts in the previous Drell-Yan processes can be similarly applied to it.
Then, one recognizes that the SM background of the $h$-associated pair production is similar to that of the Drell-Yan processes.
Given the same $m_B$ and $g_{*}$,
the signal cross section of the $h$-associated pair production in Fig.\ref{cs} is smaller than that of the Drell-Yan productions in Fig.\ref{DYc} by roughly an order of magnitude,
making the latter, i.e. the Drell-Yan productions, the more competitive of the two.
We will neglect the $h$-associated pair production in the following numerical analysis.
\begin{figure}
\centering
\includegraphics[width=13cm,height=8cm]{cs.pdf}
\centering
\caption{Cross sections of the single (in blue) and pair (in red) production of $\Psi_B$ in Fig.\ref{pro} as functions of $m_B$ at the 14-TeV LHC with $g_{*}=\sqrt{4\pi}$.
Compared to the pair production,
the cross section of single production is suppressed by the small mixing angles.}
\label{cs}
\end{figure}
\section{Numerical Results}
\label{results}
Collecting the parameter space in Sec.\ref{anomaly} and the indirect and direct constraints in Sec.\ref{constraint},
we present the main numerical results in Fig.\ref{tot}.
In this figure, the parameter space (green band) denotes the solutions to the muon g-2 anomaly within a discrepancy of $2\sigma$,
which require the mass $m_B$ to be larger than $\sim 220$ GeV.
The most stringent indirect constraint, coming from the up-to-date LHC measurements of the coupling of $h$ to di-photon (orange line), excludes $g_{*}$ larger than $\sim 1.8$.
These two numbers determine the surviving parameter regions,
where the hypothesis of universal mass scale in the composite sector seems to be violated.
The direct limits with 2$\sigma$ exclusion (in red curve) and 5$\sigma$ discovery (in blue curve)\footnote{We follow the criteria $S/\sqrt{B}=1.96$ for exclusion and $S/\sqrt{S+B}=5$ for discovery, where $S$ and $B$ refer to the event number of signal and SM background, respectively.} arise from the Drell-Yan productions of the lighter composite state $\Psi_B$ at the 14-TeV LHC with the integrated luminosity of 3 ab$^{-1}$,
indicating that the surviving parameter regions with $m_{B}$ up to at most $\sim 277$ GeV can be discovered
and up to $\sim 328$ GeV excluded, respectively.
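For reference, the statistical criteria of the footnote translate into the following sketch (the selection efficiencies are placeholders, to be taken from the simulation chain described above):
\begin{verbatim}
import math

L = 3000.0  # fb^-1, i.e. 3 ab^-1

def excluded_2sigma(sig_s, sig_b, eff_s=1.0, eff_b=1.0):
    S, B = sig_s * L * eff_s, sig_b * L * eff_b
    return S / math.sqrt(B) >= 1.96

def discovered_5sigma(sig_s, sig_b, eff_s=1.0, eff_b=1.0):
    S, B = sig_s * L * eff_s, sig_b * L * eff_b
    return S / math.sqrt(S + B) >= 5.0
\end{verbatim}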
\begin{figure}
\centering
\includegraphics[width=13cm,height=8cm]{tot.pdf}
\centering
\caption{Parameter space (in green) with respect to the muon g-2 anomaly, the most stringent indirect exclusion limit (in orange) from the coupling of Higgs to di-photon,
and the prospect of searching for the surviving parameter regions based on the Drell-Yan processes of $\Psi_B$ at the 14-TeV LHC with an integrated luminosity 3 ab$^{-1}$,
with 2$\sigma$ exclusion (in red) and 5$\sigma$ discovery (in blue) shown simultaneously. See text for details.}
\label{tot}
\end{figure}
A few comments are in order regarding the numerical results in Fig.\ref{tot}.
\begin{itemize}
\item The direct limits arising from the Drell-Yan productions are more sensitive to $m_B$ than to $g_*$.
The reason is that the signal cross section is affected by $g_*$ only through the dependence of the branching ratio of the decay mode $\Psi^{\pm}_{B}\rightarrow h\mu^{\pm}$ on this parameter,
which is nearly fixed in the parameter regions with $m_{B}\geq 200$ GeV and $g_{*}\geq 1$.
We have verified this point by observing that the cross section in Fig.\ref{DYc} hardly changes as $g_*$ varies.
\item As previously noticed in Eq.(\ref{lepton}),
the parameter space as shown by the green band is rather sensitive to the mixing effect between the composite muon states.
If one lets the input masses be non-degenerate,
the parameter space is visibly reduced or even disappears.
\item Apart from the parameter space, the indirect exclusion limit on $g_{*}$ is affected by the mixing effects in other composite lepton and quark states as well.
The reason is that each of these composite states contributes to $\Delta \Gamma(h\rightarrow \gamma\gamma)\sim g^{2}_{*}N_{ci}Q^{2}_{i}\sin^{2}(2\beta_{i})$ in Eq.(\ref{Yukawaa}),
with $Q_i$, $\beta_i$ and $N_{ci}$ the electric charge, mixing angle and color factor of composite state $i$ respectively.
This differs from the case of the SM,
where this decay width is mainly determined by the $W$-boson and top-quark loops.
The exclusion limit on $g_{*}$ in Fig.\ref{tot} corresponds to the situation where
only the composite muon contributions to $\Delta\Gamma(h\rightarrow \gamma\gamma)$ matter and
each of the other $\sin(2\beta_{i})$ is at least a few times smaller than unity.
Otherwise, this limit will be significantly strengthened.
For example, the upper limit on $g_{*}$ is reduced by a generation factor if the mixing angles related to the composite electron and tau states are the same as that of the composite muon states.
\end{itemize}
\section{Conclusion}
\label{con}
In this work we have considered the scenario of composite Higgs with partial compositeness as a solution to the anomalous magnetic moment of the muon recently confirmed by the FNAL experiment.
The anomaly is resolved by the one-loop correction to $a_{\mu}$ due to the composite muon partners, with a mass scale of order TeV and a strong coupling constant $g_{*}$ of order unity.
We have presented the concrete parameter space by taking into account the constraints of precise tests on $h$, $Z$ and oblique electroweak parameters.
The parameter space is highly sensitive to $g_*$, which is tightly constrained by the precision measurement of the coupling of the Higgs to di-photon;
as a result, all of the mixing effects in the other composite states have to be small,
especially in the composite quark states, due to the extra color factor $N_c$.
Furthermore, we have explored the prospect of searching for the surviving parameter regions
in terms of both the Drell-Yan and $h$-associated productions of the lighter composite muon partner at the 14-TeV LHC
with the integrated luminosity 3 ab$^{-1}$.
To draw the direct detection limits,
we have applied the latest CMS cuts about four b-jets to both the Drell-Yan and the $h$-associated pair productions.
It turns out that
the surviving parameter regions with the lighter composite muon mass below $\sim 328$ GeV can be excluded at the $2\sigma$ level via the Drell-Yan productions,
whereas the exclusion power of the $h$-associated productions is weak.
Finally, although not discussed here, this scenario may be linked to other leptonic anomalies.
\section*{Acknowledgments}
This work is supported in part by the National Natural Science Foundation of China with Grant No. 11775039.
\subsection*{Acknowledgements}
The work in France was supported by Agence Nationale pour la Recherche
(TRANSHYPERO project) and was granted access to the HPC resources of
[TGCC/CINES/IDRIS] under the allocation 2017-A0010907682 made by GENCI.
The work at Los Alamos National Laboratory (LANL) was supported by LANL
LDRD program (J-C.B., W.N., S.T., A.D.M.) and was partially performed at the
Center for Nonlinear Studies. The work was conducted, in part, at the Center
for Integrated Nanotechnologies (CINT), a U.S. Department of Energy, Office
of Science user facility. Work at Northwestern University was supported by
grant SC0012541 from the U.S. Department of Energy, Office of Science.
C.C.S. and M.G.K. acknowledge the support under ONR Grant N00014-17-1-2231.
\subsection*{Author contributions}
M. K., S.T. and C.K. conceived the idea, designed the work, and wrote the
manuscript. J.E developed the semi-empirical BSE approach and the elastic
model. M.K. performed the DFT calculations with support from B.T. and L.P. C.K.
and M.K. analysed the data and provided insight into the mechanisms. M.G.K.
and C.S.S. lent their expertise in chemistry. A.D.M., J.C., H. T. W. N. supplied
knowledge from an application perspective. All authors contributed to this work,
read the manuscript and agree to its contents, and all data are reported in the
main text and supplemental materials.
\clearpage
\newpage
\begin{landscape}
\begin{figure*}[p]
\begin{center}
\includegraphics[width=0.90\linewidth]{flexoelastic}
\end{center}
\vspace{-0.2cm}
\caption{\small
\textbf{LHPs generalized improper flexoelastic model.}
\textbf{a}, Schematics of hybrid layered compounds regarded as
heterostructures L1/L2 with L1 the 3D (n=$\infty$) bulk materials,
\textit{e.g.} MAPbI$_{\text{3}}$, and L2, a n=1 compound, \textit{e.g.}
(BA)$_{\text{2}}$PbI$_{\text{4}}$.
\textbf{b}, In-plane expansion and out-of-plane contractions of
experimental lattice constants for (BA)$_{\text{2}}$(MA)$_{\text{n-1}}$Pb$_n$I$_{\text{3n+1}}$
and the L1 and L2 layers.
The room-temperature structures of MAPbI$_{\text{3}}$ and
(BA)$_{\text{2}}$PbI$_{\text{4}}$ serve as references for L1 and
L2 structures, respectively.
\textbf{c}, Same from the improper flexoelastic model (see \textcolor{orange}{Method}
for details).
\textbf{d}, Computed elastic energy density for the
(BA)$_{\text{2}}$(MA)$_{\text{n-1}}$Pb$_{\text{n}}$I$_{\text{3n+1}}$ heterostructure.
}
\label{fig:elastic}
\end{figure*}
\end{landscape}
\begin{landscape}
\begin{figure*}[p]
\begin{center}
\includegraphics[width=0.85\linewidth]{relaxation}
\end{center}
\vspace{-0.5cm}
\caption{\small
\textbf{Surface relaxation in LHP multi-quantum wells.}
\textbf{a}, Schematics of LHP-based devices in in-plane and
out-of-plane orientation.
\textbf{b}, Schematics of the (101) surface of the layered perovskite
(BA)$_{\text{2}}$(MA)$_{\text{n-1}}$Pb$_{\text{n}}$I$_{\text{3n+1}}$ with n=3.
\textbf{c}, Variation of the interlayer height difference ($\Delta$h) from
bulk-like to surface (see inset).
\textbf{d}, Variation of in-plane ($\beta$) and out-of-plane ($\delta$)
tiltings of surface octahedra due to the (101) surface relaxation.
}
\label{fig:surface}
\end{figure*}
\end{landscape}
\begin{figure*}[p]
\begin{center}
\includegraphics[width=1.00\linewidth]{optoelectronics}
\end{center}
\vspace{-0.2cm}
\caption{\small
\textbf{Impact of surface structural relaxation on electronic and optical properties
in (BA)$_{\text{2}}$(MA)$_{\text{n-1}}$Pb$_{\text{n}}$I$_{\text{3n+1}}$.}
\textbf{a}, Slab band structures in the bulk-like (left) and relaxed (101)
surface (right) for n=2 and 3.
\textbf{b}, DFT variation of E\textunderscript{G} going from bulk-like to relaxed
(101) surface.
\textbf{c}, Local densities of states (LDOS) computed at the valence band maximum
and conduction band minimum for the n=3 RPP in bulk and relaxed surface.
\textbf{d}, Difference between the barycenter of electron and hole wavefunctions.
\textbf{e}, Schematics of the surface-induced exciton dissociation
in RPPs with n$\geqslant$3.
}
\label{fig:exciton}
\end{figure*}
\begin{figure*}[p]
\begin{center}
\includegraphics[width=1.00\linewidth]{design}
\end{center}
\vspace{-0.2cm}
\caption{\small
\textbf{Design of LHPs for photovoltaics and optoelectronics.}
\textbf{a}, Lattice mismatch between various monolayered
A'$_{\text{2}}$PbI$_{\text{4}}$ perovskites (n=1) and
MAPbI$_{\text{3}}$ (\textit{I4cm}; n=$\infty$). All data are taken from X-ray
structures resolved at room-temperature.
Names for organic compounds and corresponding references are
given \textcolor{orange}{Table \small\bf S4}.
\textbf{b}, Computed elastic energy density for heterostructures built with
MAPbI$_{\text{3}}$ and (BA)$_{\text{2}}$PbI$_{\text{4}}$ (grey line),
(C$_{\text{9}}$H$_{\text{19}}$NH$_{\text{3}}$)$_{\text{2}}$PbI$_{\text{4}}$ (NoA, blue line),
and (4Cl-C$_{\text{6}}$H$_{\text{4}}$NH$_{\text{3}}$)$_{\text{2}}$PbI$_{\text{4}}$ (4Cl-PhA, red line).
}
\label{fig:design}
\end{figure*}
\clearpage
\newpage
\includepdf[pages = 1-5]{2018-01-02-mismatch-method-v11}
\includepdf[pages = 1-8]{2018-01-02-mismatch-si-v11}
\end{document}
\section{Introduction}
A distributed system is abstracted as a set of entities (nodes, processes, agents, etc.) that communicate with each other using a communication medium.
The two most used communication media are communication channels (message-passing system) and shared memory (read/write operations).
Programming with shared objects is generally more convenient as it offers a higher level of abstraction to the programmer,
and therefore facilitates the design of distributed applications.
A natural question is the level of consistency ensured by shared objects.
An intuitive property is that shared objects should behave as if all processes accessed the same physical copy of the object.
\emph{Sequential consistency}~\cite{lamport1979make} ensures that all the operations that happen in a distributed history
appear as if they were executed sequentially, in an order that respects the sequential order of each process (called the \emph{process order}).
Unfortunately, sequential consistency is not composable: if a program uses two or more objects,
despite each object being sequentially consistent individually, the set of all objects may not be sequentially consistent.
An example is shown in Fig.~\ref{fig:composability}, where two processes share two registers named $X$ and $Y$; although the operations of each register
may be totally ordered (the read precedes the write), it is impossible to order all the operations at once.
\emph{Linearizability}~\cite{herlihy1990linearizability} overcomes this limitation by adding constraints on real time: each operation appears at a single
point in time, between its start event and its end event. As a consequence, linearizability enjoys the locality property \cite{herlihy1990linearizability} that ensures its composability.
Because of this composability, much more effort has been focused on linearizability than on sequential consistency so far.
However, one of our contributions implies that in asynchronous systems, where no global clock can be implemented to measure real time, a process cannot distinguish between linearizability and sequential consistency;
the connection to real time
therefore seems to be a worthless, though costly, guarantee.
\begin{figure}[t]
\centering
\begin{tikzpicture}
\draw[->] (-0.3,0.5) node[left]{$p_1$} -- (4.4,0.5) ;
\draw[->] (-0.3,0.0) node[left]{$p_0$} -- (4.4,0.0) ;
\draw[->,very thick, colorA] (4.4,-0.5) -- (-0.3,1) node[left]{$l_X$};
\draw[->,very thick, colorB] (4.4,1) -- (-0.3,-0.5) node[left]{$l_Y$};
\draw[fill=colorA!10] (1.0,0.5) +(-0.8,-0.2) rectangle +(0.8,0.2) +(0,0) node{\footnotesize $X.\textsf{write}(1)$};
\draw[fill=colorB!10] (3.0,0.5) +(-0.9,-0.2) rectangle +(0.9,0.2) +(0,0) node{\footnotesize $Y.\textsf{read}\rightarrow 0$};
\draw[fill=colorB!10] (1.0,0) +(-0.8,-0.2) rectangle +(0.8,0.2) +(0,0) node{\footnotesize $Y.\textsf{write}(1)$};
\draw[fill=colorA!10] (3.0,0) +(-0.9,-0.2) rectangle +(0.9,0.2) +(0,0) node{\footnotesize $X.\textsf{read}\rightarrow 0$};
\end{tikzpicture}
\caption{Sequential consistency is not composable: registers $X$ and $Y$ are both sequentially consistent but their composition is not.}
\label{fig:composability}
\end{figure}
In this paper we focus on message-passing distributed systems. In such systems a shared memory is not a physical object; it has to be built using the underlying message-passing communication network.
Several bounds have been found on the cost of sequential consistency and linearizability in synchronous distributed systems, where the transit time for any message is in a range $[d-u,d]$,
where $d$ and $u$ are called respectively the \emph{latency} and the \emph{uncertainty} of the network. Let us consider an implementation of a shared memory, and let $r$ (resp. $w$) be the
worst case latency of any read (resp. write) operation. Lipton and Sandberg proved in \cite{lipton1988pram} that, if the algorithm implements a sequentially consistent memory, the inequality $r+w\geq d$ must hold.
Attiya and Welch refined this result in~\cite{attiya1994sequential}, proving that each kind of operations could have a 0-latency implementation for sequential consistency (though not both in the same implementation)
but that the time duration of both kinds of operations has to be at least linear in $u$ in order to ensure linearizability.
Therefore the following questions arise. Are there applications for which the lack of composability of sequential consistency is not a problem? For these applications, can we expect the same benefits in weaker message-passing models, such as asynchronous failure-prone systems, from using sequentially consistent objects rather than linearizable objects?
To illustrate the contributions of the paper, we also address a higher level operation: a snapshot operation \cite{Afek93} that allows a whole set of registers to be read in a single operation. A sequentially consistent snapshot is such that the set of values it returns may be returned by a sequential execution. This operation is very useful as it has been proved \cite{Afek93} that linearizable snapshots can be implemented wait-free from single-writer/multi-reader registers. Hence, assuming a snapshot operation does not bring any additional computational power with respect to shared registers. Of course this induces an additional cost: the best known simulation needs $O(n\log n)$ basic read/write operations to implement each snapshot operation and the associated update operation \cite{AttiyaR98}. Such an operation brings programming comfort as it reduces the ``noise'' introduced by asynchrony and failures \cite{G98}, and it is particularly used in the round-based computations \cite{Gafni98} that we consider for the study of the composability of sequential consistency.
\paragraph{Contributions.} This paper has three major contributions. (1) It identifies two contexts that can benefit from the use of sequential consistency: round-based algorithms that use a different shared object for each round, and asynchronous shared-memory
systems, where programs cannot differentiate a sequentially consistent memory from a linearizable memory.
(2) It proposes an implementation of a sequentially consistent memory where waiting is only required when a write is immediately followed by a read.
This extends the result presented in~\cite{attiya1994sequential}, which only applies to synchronous failure-free systems, to failure-prone asynchronous systems.
(3) The proposed algorithm also implements a sequentially consistent snapshot operation the cost of which compares very favorably with the best existing linearizable implementation to our knowledge (the stacking of the snapshot algorithm of Attiya and Rachman \cite{AttiyaR98} over the ABD simulation of linearizable registers).
\paragraph{Outline.}
The remainder of this article is organized as follows. In Section~\ref{sec:composition}, we define more formally sequential consistency,
and we present special contexts in which it becomes composable. Then, in Section~\ref{sec:implementation}, we present our implementation
of shared memory and study its complexity. Finally, Section~\ref{sec:conclusion} concludes the paper.
\section{Sequential Consistency and Composability}\label{sec:composition}
\subsection{Definitions}
In this section we recall the definitions of the most important notions we discuss in this paper:
two consistency criteria, sequential consistency ($SC$, Def.~\ref{def:SC}, \cite{lamport1979make}) and linearizability ($L$, Def.~\ref{def:Lin}, \cite{herlihy1990linearizability}),
as well as composability (Def.~\ref{def:comp}). A consistency criterion associates a set of admitted \emph{histories}
to the \emph{sequential specification} of each given object. A history is a representation of an execution. It contains a set of operations,
that are partially ordered according to the sequential order of each process, called \emph{process order}.
A sequential specification
is a language, i.e. a set of sequential (finite and infinite) words. For a consistency criterion $C$ and a sequential specification $T$,
we say that an algorithm implements a $C(T)$-consistent object if all its executions can be modelled by a history that belongs to $C(T)$,
that contains all returned operations and only invoked operations.
Note that this implies that if a process crashes during an operation, then the operation will appear in the history as if it was complete or as if it never took place at all.
\begin{definition}[Linear extension]\label{def:lin_order}
Let $H$ be a history and $T$ be a sequential specification. A \emph{linear extension} $\le$ is a total order on all the operations of $H$, that contains the process order, and such that each event $e$ has a finite past $\{e' : e'\le e\}$ according to the total order.
\end{definition}
\begin{definition}[Sequential Consistency]\label{def:SC}
Let $H$ be a history and $T$ be a sequential specification. The history $H$ is \emph{sequentially consistent} regarding $T$, denoted $H\in SC(T)$, if there exists a linear extension $\le$ such that the word composed of all the operations of $H$ ordered by $\le$ belongs to $T$.
\end{definition}
\begin{definition}[Linearizability]\label{def:Lin}
Let $H$ be a history and $T$ be a sequential specification. The history $H$ is \emph{linearizable} regarding $T$,
denoted $H\in L(T)$, if there exists a linear extension $\le$ such that
(1) for two operations $a$ and $b$, if the end of $a$ precedes the beginning of $b$ in real time, then $a\le b$ and
(2) the word formed of all the operations of $H$ ordered by $\le$ belongs to $T$.
\end{definition}
Let $T_1$ and $T_2$ be two sequential specifications. We define the \emph{composition} of $T_1$ and $T_2$, denoted by $T_1\times T_2$,
as the set of all the interleaved sequences of a word from $T_1$ and a word from $T_2$. An interleaved sequence of two words $l_1$ and $l_2$ is
a word composed of the disjoint union of all the letters of $l_1$ and $l_2$, that appear in the same order as they appear in $l_1$ and $l_2$.
For example, the words $ab$ and $cd$ have six interleaved sequences: $abcd$, $acbd$, $acdb$, $cabd$, $cadb$ and $cdab$.
A consistency criterion $C$ is composable (Def.~\ref{def:comp}) if the composition of a $C(T_1)$-consistent object and a $C(T_2)$-consistent object
is a $C(T_1\times T_2)$-consistent object. Linearizability is composable, and sequential consistency is not.
\begin{definition}[Composability]\label{def:comp}
For a history $H$ and a sequential specification $T$, let us denote by $H_{T}$ the sub-history of $H$ that only contains the operations belonging to $T$.
A consistency criterion $C$ is \emph{composable} if, for all sequential specifications $T_1$ and $T_2$ and all histories $H$ containing only events
on $T_1$ and $T_2$, $(H_{T_1} \in C(T_1) \text{ and } H_{T_2} \in C(T_2))$ imply $H \in C(T_1\times T_2)$.
\end{definition}
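The scenario of Fig.~\ref{fig:composability} can be checked mechanically against these definitions. The sketch below (our own illustration, with a register modelled by its sequential specification) enumerates all interleavings that respect the process orders: each register taken alone is sequentially consistent, but their composition is not.
\begin{verbatim}
from itertools import permutations

# Fig. 1: each process writes one register, then reads the other.
p0 = [("Y", "w", 1), ("X", "r", 0)]
p1 = [("X", "w", 1), ("Y", "r", 0)]

def legal(word):  # sequential spec of registers initialized to 0
    state = {}
    for reg, kind, val in word:
        if kind == "w":
            state[reg] = val
        elif state.get(reg, 0) != val:
            return False
    return True

def seq_consistent(processes):
    tagged = [[(i, op) for op in ops] for i, ops in enumerate(processes)]
    allops = [x for ops in tagged for x in ops]
    for perm in permutations(allops):
        if all(tuple(x for x in perm if x[0] == i) == tuple(ops)
               for i, ops in enumerate(tagged)):  # process order kept
            if legal([op for _, op in perm]):
                return True
    return False

for reg in ("X", "Y"):  # each register alone: True
    print(reg, seq_consistent([[o for o in p if o[0] == reg]
                               for p in (p0, p1)]))
print("X*Y", seq_consistent([p0, p1]))  # composition: False
\end{verbatim}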
\subsection{From Linearizability to Sequential Consistency}
\begin{figure}[t]
\begin{subfigure}{0.15\textwidth}
\centering
\begin{tikzpicture}
\draw[fill=colorB!10] (0.3,2.8) rectangle (2.7,3.6) ;
\draw[fill=colorB!10] (0.3,1) rectangle (2.7,2.6) ;
\draw[fill=colorA!10] (0.5,1.2) rectangle (1.4,1.8) ;
\draw[fill=colorA!10] (1.6,1.2) rectangle (2.5,1.8) ;
\draw[fill=colorA!10] (0.3,0) rectangle (2.7,0.8) ;
\draw (1.5,3.2) node{Application};
\draw (1.5,2.2) node{$Y \times Z$};
\draw (1,1.5) node{$Y$};
\draw (2,1.5) node{$Z$};
\draw (1.5,0.4) node{$X$ (memory)};
\end{tikzpicture}
\caption{Layer based architecture.}
\label{fig:lin:archi}
\end{subfigure}
\hspace{\fill}
\begin{minipage}{0.83\textwidth}
\begin{subfigure}{\textwidth}
\centering
\begin{tikzpicture}
\draw[->] (-0.5,1) node[left]{$p_1$} -- (11.5,1);
\draw[->] (-0.5,0) node[left]{$p_0$} -- (11.5,0);
\draw[fill=colorB!10] (0,1) +(2.1,-0.3) rectangle +(10,0.3) +(7,0) node{\footnotesize $Y_{SC}.op_1^0$};
\draw[fill=colorB!10] (0,0) +(0.2,-0.3) rectangle +(3.5,0.3) +(2.7,0) node{\footnotesize $Y_{SC}.op_0^0$};
\draw[fill=colorB!10] (0,0) +(4.8,-0.3) rectangle +(11.2,0.3) +(5.6,0) node{\footnotesize $Y_{SC}.op_0^1$};
\draw[->,colorA,rounded corners, thick]
(2.5,1.5) -- (3,1.5) -- (3,0.5) --
(1.1,0.5) -- (1.1,-0.5) --
(7.9,-0.5) -- (7.9,0.5) --
(4.9,0.5) -- (4.9,1.5) --
(9.1,1.5) -- (9.1,0.5) --
(10.3,0.5) -- (10.3,-0.5) -- (10.8,-0.5);
\draw[fill=colorA!10] (3.0,1) +(-0.7,-0.2) rectangle +(0.7,0.2) +(0,0) node{\footnotesize $X_{SC}.op_1^0$};
\draw[fill=colorA!10] (4.9,1) +(-0.7,-0.2) rectangle +(0.7,0.2) +(0,0) node{\footnotesize $X_{SC}.op_1^1$};
\draw[fill=colorA!10] (9.1,1) +(-0.7,-0.2) rectangle +(0.7,0.2) +(0,0) node{\footnotesize $X_{SC}.op_1^2$};
\draw[fill=colorA!10] (1.1,0) +(-0.7,-0.2) rectangle +(0.7,0.2) +(0,0) node{\footnotesize $X_{SC}.op_0^0$};
\draw[fill=colorA!10] (7.9,0) +(-0.7,-0.2) rectangle +(0.7,0.2) +(0,0) node{\footnotesize $X_{SC}.op_0^1$};
\draw[fill=colorA!10] (10.3,0) +(-0.7,-0.2) rectangle +(0.7,0.2) +(0,0) node{\footnotesize $X_{SC}.op_0^2$};
\end{tikzpicture}
\caption{The implementation of upper layer objects call operations on objects from lower layers.}
\label{fig:lin:histSC}
\end{subfigure}
\vspace{3mm}
\begin{subfigure}{\textwidth}
\centering
\begin{tikzpicture}
\draw[->] (-0.5,1) node[left]{$p_1$} -- (11.5,1);
\draw[->] (-0.5,0) node[left]{$p_0$} -- (11.5,0);
\draw[fill=colorB!10] (0,1) +(0.3,-0.3) rectangle +(9.3,0.3) +(3.9,0) node{\footnotesize $Y_{SC}.op_1^0$};
\draw[fill=colorB!10] (0,0) +(0.7,-0.3) rectangle +(3.85,0.3) +(1.5,0) node{\footnotesize $Y_{SC}.op_0^0$};
\draw[fill=colorB!10] (0,0) +(3.95,-0.3) rectangle +(11.1,0.3) +(7.5,0) node{\footnotesize $Y_{SC}.op_0^1$};
\draw[->,colorA,rounded corners, thick]
(0.7,1.5) -- (1.2,1.5) -- (1.2,0.5) --
(3.0,0.5) -- (3.0,-0.5) --
(4.8,-0.5) -- (4.8,0.5) --
(6.6,0.5) -- (6.6,1.5) --
(8.4,1.5) -- (8.4,0.5) --
(10.2,0.5) -- (10.2,-0.5) -- (10.7,-0.5);
\draw[fill=colorA!10] (1.2,1) +(-0.7,-0.2) rectangle +(0.7,0.2) +(0,0) node{\footnotesize $X_{SC}.op_1^0$};
\draw[fill=colorA!10] (6.6,1) +(-0.7,-0.2) rectangle +(0.7,0.2) +(0,0) node{\footnotesize $X_{SC}.op_1^1$};
\draw[fill=colorA!10] (8.4,1) +(-0.7,-0.2) rectangle +(0.7,0.2) +(0,0) node{\footnotesize $X_{SC}.op_1^2$};
\draw[fill=colorA!10] (3.0,0) +(-0.7,-0.2) rectangle +(0.7,0.2) +(0,0) node{\footnotesize $X_{SC}.op_0^0$};
\draw[fill=colorA!10] (4.8,0) +(-0.7,-0.2) rectangle +(0.7,0.2) +(0,0) node{\footnotesize $X_{SC}.op_0^1$};
\draw[fill=colorA!10] (10.2,0) +(-0.7,-0.2) rectangle +(0.7,0.2) +(0,0) node{\footnotesize $X_{SC}.op_0^2$};
\end{tikzpicture}
\caption{An asynchronous process cannot differentiate this history from the one on Figure~\ref{fig:lin:histSC}.}
\label{fig:lin:histL}
\end{subfigure}
\end{minipage}
\caption{In a layer-based program architecture running on an asynchronous system, the local clocks of different processes can be distorted in such a way that it
is impossible to differentiate a sequentially consistent execution from a linearizable one.}
\label{fig:lin}
\end{figure}
Software developers usually abstract the complexity of their system gradually, which results in a layered software architecture:
at the top level, an application is built on top of several objects specific to the application,
themselves built on top of lower levels. Such an architecture is represented in Fig.~\ref{fig:lin:archi}.
The lowest layer usually consists of one or several objects provided by the system itself, typically a shared memory.
The system can ensure sequential consistency globally on all the provided objects, so composability is not required at this level.
Proposition~\ref{prop:lin_SC} expresses the fact that, in asynchronous systems, replacing a linearizable object by a sequentially consistent one
does not affect the correctness of the programs running on it, thereby circumventing the non-composability of sequential consistency.
This result may have an impact on parallel architectures, such as modern multi-core processors and, to an even greater extent,
high performance supercomputers, for which communication with a linearizable central shared memory is very costly,
and for which weak memory models such as cache consistency~\cite{goodman1991cache} make programs difficult to write.
\begin{proposition}\label{prop:lin_SC}
Let $A$ be an algorithm that implements an $SC(Y)$-consistent object when it is executed on an asynchronous system providing an $L(X)$-consistent object. Then $A$ also implements an $SC(Y)$-consistent object when it is executed in an asynchronous system providing an $SC(X)$-consistent object.
\end{proposition}
\begin{proof}
Let $A$ be an algorithm that implements an $SC(Y)$-consistent object when it is executed on
an asynchronous system providing an $L(X)$-consistent object.
Let us consider a history $H_{SC}$ obtained by the execution of $A$ in an asynchronous system providing a $SC(X)$-consistent object.
Such a history is depicted on Fig.~\ref{fig:lin:histSC}. The history $H_{SC}$ contains operations on $X$ (in red in Fig.~\ref{fig:lin:histSC}),
as well as on $Y$ (in blue in Fig.~\ref{fig:lin:histSC}).
We will now build another history $H_{L}$, in which the operations on $X$ are linearizable, and the operations on $Y$ consist of the same calls to operations on $X$.
Such a history is depicted on Fig.~\ref{fig:lin:histL}. The only difference between the histories on Fig.~\ref{fig:lin:histSC} and~\ref{fig:lin:histL} is the
way the two processes experience time. As the system is asynchronous, it is impossible for the processes to distinguish the two histories.
Let us enumerate all the operations made on $X$ in their linear extension $\le$ required for sequential consistency.
Now, we build the execution $H_L$ in which the $i^{\text{th}}$ operation on $X$ of $H_{SC}$ is called on an $L(X)$-consistent object
at time $2i$ seconds and lasts for one second. As no two operations overlap, and the operations happen in the
same order in $H_L$ and in the linearization of $H_{SC}$, $\le$ is the only linear extension accepted by linearizability.
Therefore, all operations can return the same values in $H_L$ and in $H_{SC}$ (and they will if $X$ is deterministic).
Now let us assume all operations on $X$ in $H_L$ were called by algorithm $A$, in the same pattern as in $H_{SC}$.
When considering the operations on $Y$, $H_L$ is $SC(Y)$-consistent. Moreover, as $A$ works on asynchronous systems
and the same values were returned by $X$ in $H_{SC}$ and in $H_{L}$, $A$ returns the same values in both histories.
Therefore, $H_{SC}$ is also $SC(Y)$-consistent.
\end{proof}
An interesting point about Proposition~\ref{prop:lin_SC} is that it allows sequentially consistent --- but not linearizable ---
objects to be composable. Let $A_Y$ and $A_Z$ be two algorithms that implement $L(Y)$-consistent and $L(Z)$-consistent objects
when they are executed on an asynchronous system providing an $L(X)$-consistent object, like on Fig.~\ref{fig:lin:archi}.
As linearizability is stronger than sequential consistency, according to Proposition~\ref{prop:lin_SC}, executing $A_Y$ and $A_Z$
on an asynchronous system providing an $SC(X)$-consistent object would implement sequentially consistent --- yet not linearizable ---
objects. However, in a system providing the linearizable object $X$, by composability of linearizability,
the composition of $A_Y$ and $A_Z$ implements an $L(Y\times Z)$-consistent object. Therefore, by Proposition~\ref{prop:lin_SC} again,
in a system providing the sequentially consistent object $X$, the composition also implements an $SC(Y\times Z)$-consistent object.
In this example, the sequentially consistent versions of $Y$ and $Z$ derive their composability from an anchor to a \emph{common time},
given by the sequentially consistent memory, which can differ from the \emph{real time} required by linearizability.
\subsection{Round-Based Computations}\label{sec:round}
Even at a single layer, a program can use several objects that are not composable, but that are used in such a fashion that the non-composability is invisible to the program. Let us illustrate this with round-based algorithms. The synchronous distributed computing model has been extensively studied and is well understood, which led researchers to try to offer the same comfort when dealing with asynchronous systems, hence the introduction of synchronizers \cite{Awerbuch85}. A synchronizer slices a computation into phases during which each process executes three steps: send/write, receive/read, and then local computation. This model has been extended to failure-prone systems in the round-by-round computing model \cite{Gafni98} and in the Heard-Of model \cite{CS09}, among others. Such a model is particularly interesting when the termination of a given program is only eventual. Indeed, some problems are undecidable in failure-prone, purely asynchronous systems. In order to circumvent this impossibility, eventually or partially synchronous systems have been introduced \cite{DLS88}. In such systems, termination may hold only after some finite but unbounded time, and the algorithms are implemented by means of a series of asynchronous rounds, each using its own shared objects.
In the round-based computing model, the execution is sliced into a sequence of asynchronous rounds. During each round, a new data structure (usually one single-writer/multi-reader register per process) is created, and it is the only shared object used to communicate during the round.
At the end of the round, each process destroys its local accessor to the object, so that it can no longer access it. Note that the rounds are asynchronous:
the different processes do not necessarily start and finish their rounds at the same time. Moreover, a process may never terminate a round and keep accessing the same shared object forever, or it may crash during the round and stop executing. A round-based execution is illustrated in Fig.~\ref{fig:rounds:hist}.
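As an illustration of this execution pattern, the following minimal Python sketch (all names are ours and purely illustrative) shows the shape of a round-based computation: a fresh shared object is created for each round, used for a write and a snapshot, and then dropped when the round ends.
\begin{verbatim}
# Minimal sketch of a round-based computation (illustrative names only).
def round_based(pid, n_rounds, new_round_object, local_compute, state):
    for r in range(n_rounds):
        obj = new_round_object(r)            # fresh shared object X_r for round r
        obj.write(pid, state)                # send/write step
        view = obj.snapshot()                # receive/read step
        state = local_compute(state, view)   # local computation step
        del obj                              # the accessor to X_r is destroyed
    return state
\end{verbatim}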
In Proposition~\ref{prop:round}, we prove that sequentially consistent objects of different rounds behave well together:
as the ordering added between the operations of two different objects always follows the round numbering, which is
consistent with the process order already contained in the linear extension of each object, the composition of
all these objects cannot create loops (Figure~\ref{fig:rounds:hist}).
Putting together this result and Proposition~\ref{prop:lin_SC}, all the algorithms
that use a round-based computation model can benefit from any improvement on the implementation of an array of
single-writer/multi-reader registers that sacrifices linearizability for sequential consistency. Note that this remains true whatever data structure is used during each round. The only constraint is that each sequentially consistent shared data structure must be accessed during a unique round. If each object is sequentially consistent, then the whole execution is consistent.
\begin{figure}[t]
\hspace{\fill}
\begin{subfigure}{0.2\textwidth}
\centering
\begin{tikzpicture}
\draw[fill=colorB!10] (0,2) rectangle (3,2.8) ;
\draw[fill=colorB!10] (0,1) rectangle (1.4,1.8) ;
\draw[fill=colorB!10] (1.6,1) rectangle (3,1.8) ;
\draw[fill=colorB!10] (0,0) rectangle (0.5,0.8) ;
\draw[fill=colorB!10] (0.8333,0) rectangle (1.3333,0.8) ;
\draw[fill=colorB!10] (1.666,0) rectangle (2.166,0.8) ;
\draw[fill=colorB!10] (2.5,0) rectangle (3,0.8) ;
\draw (1.5,2.4) node{Application};
\draw (0.7,1.4) node{$Y$};
\draw (2.3,1.4) node{$Z$};
\draw (0.25,0.4) node{$X_1$};
\draw (0.6666,0.4) node{$\rightarrow$};
\draw (1.08333,0.4) node{$X_2$};
\draw (1.5,0.4) node{$\rightarrow$};
\draw (1.917,0.4) node{$X_3$};
\draw (2.33333,0.4) node{$\rightarrow$};
\draw (2.75,0.4) node{$\cdots$};
\end{tikzpicture}
\caption{Round-based program architecture.}
\label{fig:rounds:archi}
\end{subfigure}
\hspace{\fill}
\begin{subfigure}{0.75\textwidth}
\centering
\begin{tikzpicture}
\draw (0.15,2) node{round 1};
\draw (3.9 ,2) node{round 2};
\draw (6.9 ,2) node{round 3};
\draw[thick,rounded corners=10] (1.8,1.8) -- (2.5,-0.4);
\draw[thick,rounded corners=10] (6,1.8) -- (4,0.15) -- (9,0.15) ;
\draw[thick,rounded corners=10] (7.8,1.8) -- (8.2,0.15);
\draw[->] (-1.5,1.0) node[left]{$p_2$} -- (9,1.0) ;
\draw[->] (-1.5,0.5) node[left]{$p_1$} -- (9,0.5) ;
\draw[->] (-1.5,0.0) node[left]{$p_0$} -- (9,0.0) ;
\draw[->,colorA,very thick]
(-1.2,1.5) to[out=-45,in=110,distance=5]
(-0.9,1.0) to[out=-70,in=120,distance=5]
(-0.7,0.5) to[out=-60,in=180,distance=10]
(0.3,0.0) to[out=0 ,in=-120,distance=10]
(1.3,0.5) to[out=60,in=-110,distance=5]
(1.5,1.0) to[out=70,in=-135,distance=5]
(1.8,1.5);
\draw[colorB,very thick,dashed]
(1.8,1.5) to[out=45,in=150,distance=5]
(2.7,1.5);
\draw[->,colorA,very thick]
(2.7,1.5) to[out=-30,in=60,distance=8]
(3.0,1.0) to[out=-120,in=180,distance=15]
(4.7,1.0) to[out=0 ,in=10,distance=17]
(3.4,0.5) to[out=-170,in=180,distance=15]
(3.2,0.0) to[out=0,in=180,distance=20]
(4.2,0.0) -- (8.2,0.0) -- (9,0.0);
\draw[colorB,very thick,dashed]
(9,0) to[out=0,in=-90,distance=10]
(9.5,.5) to[out=90,in=20,distance=25]
(6.2,1.5);
\draw[->,colorA,very thick]
(6.2,1.5) to[out=-160,in=90,distance=5]
(5.7,1.0) to[out=-90,in=150,distance=10]
(4.6,0.5) to[out=-30,in=-120,distance=12]
(7.0,1.0) to[out=60,in=-150,distance=7]
(7.3,1.3);
\draw[colorB,very thick,dashed]
(7.3,1.3) to[out=30,in=135,distance=5]
(8,1.25);
\draw[->, colorB,very thick,dotted] (1.7,1.0) -- (2.8,1.0);
\draw[->, colorB,very thick,dotted] (1.5,0.5) -- (3.1,0.5);
\draw[->, colorB,very thick,dotted] (0.7,0.0) -- (2.8,0.0);
\draw[->, colorB,very thick,dotted] (5,1.0) -- (5.5,1.0);
\draw[->, colorB,very thick,dotted] (3.7,0.5) -- (4.3,0.5);
\draw[colorA] (1.5,1.5) node{$l_1$};
\draw[colorA] (8.7,-0.2) node{$l_2$};
\draw[colorA] (6.95,1.35) node{$l_3$};
\draw[colorB] (2.2,1.75) node{$\lessdot$};
\draw[colorB] (2,-0.15) node{$\rightarrow^\star$};
\draw (-0.9,1.0)node{$\bullet$} +(0.45,0) node[above]{\footnotesize $X_1.op_1$};
\draw (-0.7,0.5)node{$\bullet$} +(0.45,0) node[above]{\footnotesize $X_1.op_2$};
\draw (0.3,0.0) node{$\bullet$} node[below]{\footnotesize $X_1.op_3$};
\draw (1.3,0.5) node{$\bullet$} +(-0.45,0) node[above]{\footnotesize $X_1.op_4$};
\draw (1.5,1.0) node{$\bullet$} +(-0.45,0) node[above]{\footnotesize $X_1.op_5$};
\draw (3.0,1.0) node{$\bullet$} +(-0.5,0) node[above]{\footnotesize $X_2.op_1$};
\draw (4.7,1.0) node{$\bullet$} +(-0.1,0) node[above]{\footnotesize $X_2.op_2$};
\draw (3.4,0.5) node{$\bullet$} +(0.2,0) node[below]{\footnotesize $X_2.op_3$};
\draw (3.2,0.0) node{$\bullet$} node[below]{\footnotesize $X_2.op_4$};
\draw (4.7,0.0) node{$\bullet$} node[below]{\footnotesize $X_2.op_5$};
\draw (6.2,0.0) node{$\bullet$} node[below]{\footnotesize $X_2.op_6$};
\draw (7.7,0.0) node{$\bullet$} node[below]{\footnotesize $X_2.op_7$};
\draw (5.7,1.0) node{$\bullet$} +(0.6,-0.1) node[above]{\footnotesize $X_3.op_1$};
\draw (4.6,0.5) node{$\bullet$} +(0.3,0.05) node[below]{\footnotesize $X_3.op_2$};
\draw (7.0,1.0) node{$\bullet$} +(0.4,0) node[below]{\footnotesize $X_3.op_3$};
\end{tikzpicture}
\caption{As the ordering between different objects follows the process order, that is contained into the
serialization order of each object, no loop can appear.}
\label{fig:rounds:hist}
\end{subfigure}
\hspace{\fill}
\caption{The composition of sequentially consistent objects used in different rounds is sequentially consistent.}
\label{fig:rounds}
\end{figure}
\begin{proposition}\label{prop:round}
Let $(T_r)_{r\in\mathbb{N}}$ be a family of sequential specifications and $(X_r)_{r\in\mathbb{N}}$ be a
family of shared objects such that, for all $r$, $X_r$ is $SC(T_r)$-consistent. Let $H$ be a history
that does not contain two operations $X_r.a$ and $X_{r'}.b$ with $r>r'$ such that $X_r.a$ precedes $X_{r'}.b$ in the process order.
Then $H$ is sequentially consistent with respect to the composition of all the $T_r$.
\end{proposition}
\begin{proof}
Let $(T_r)_{r\in\mathbb{N}}$ be a family of sequential specifications and $(X_r)_{r\in\mathbb{N}}$ be a
family of shared objects such that, for all $r$, $X_r$ is $SC(T_r)$-consistent. Let $H$ be a history
that does not contain two operations $X_r.a$ and $X_{r'}.b$ with $r>r'$ such that $X_r.a$ precedes $X_{r'}.b$ in the process order.
For each $X_r$, there exists a linearization $l_r$ that contains the operations on $X_r$ and respects $T_r$.
For each operation $op$, let us denote by $op.r$ the index of the object $X_r$ on which it is made
and by $op.i$ the number of operations that precede $op$ in the linearization $l_r$.
Let us define two binary relations $\lessdot$ and $\rightarrow$ on the operations of $H$. For two operations $op$ and $op'$,
$op \lessdot op'$ if $op.r < op'.r$, or $op.r = op'.r$ and $op.i \le op'.i$. Note that $\lessdot$ is the concatenation of all
the linear extensions, so it is a total order on all the operations of $H$, but it may not be a linear extension as an
operation can have an infinite past if a process does not finish its round. For two operations $op$ and $op'$, $op \rightarrow op'$ if
$op$ and $op'$ were done in that order by the same process, or $op.r = op'.r$ and $op.i \le op'.i$. Let $\rightarrow^\star$
be the transitive closure of $\rightarrow$.
Notice that, according to the round-based model, $\rightarrow$ is contained in $\lessdot$, and so is $\rightarrow^\star$ because $\lessdot$ is transitive.
The relation $\rightarrow^\star$ is transitive and reflexive by construction. Moreover, if $op\rightarrow^\star op' \rightarrow^\star op$, we have $op.r\le op'.r \le op.r$
and therefore $op.i\le op'.i \le op.i$, so $op=op'$ (antisymmetry), which proves that $\rightarrow^\star$ is a partial order. Moreover, let us suppose that an
operation has an infinite past according to $\rightarrow^\star$. There is a smallest such operation, $op_{\min}$, according to $\lessdot$. The direct predecessors of
$op_{\min}$ according to $\rightarrow$ are smaller than $op_{\min}$ according to $\lessdot$, so they have a finite past. Moreover, they precede $op_{\min}$
either in the process order or in the linearization $l_{op_{\min}.r}$, so there is a finite number of them. This is a contradiction, so all operations have a finite
past according to $\rightarrow^\star$.
It is possible to extend $\rightarrow^\star$ to a total order $\leq$ such that all operations have a finite past according to $\leq$. As $\leq$ contains
the total orders defined by all the $l_r$, the execution of all the operations in the order $\leq$ respects the sequential specification of the composition
of all the $X_r$.
\end{proof}
\section{Implementation of a Sequentially Consistent Memory}\label{sec:implementation}
In this section we will describe the computation model that we consider for the implementation of a sequentially consistent shared memory (Section~\ref{sec:model}).
In Section~\ref{sec:memory} we will discuss the characteristics of such a memory and, finally, in Section~\ref{sec:algo} we will present the proposed implementation of the discussed data structure.
Finally, in Section~\ref{sec:complexity} we discuss the complexity of the proposed implementation.
\subsection{Computation Model}
\label{sec:model}
The computation system consists of a set $\Pi$ of $n$ sequential processes which are denoted $p_0, p_1, \ldots, p_{n-1}$.
The processes are asynchronous, in the sense that each of them proceeds at its own speed,
which is not upper bounded and is unknown to the other processes.
Among these $n$ processes, up to $t$ may crash (halt prematurely) but otherwise execute correctly the algorithm until the moment of their crash.
We call a process \emph{faulty} if it crashes, otherwise it is called \emph{correct} or \emph{non-faulty}.
In the rest of the paper we will consider the above model restricted to the case $t<\frac{n}{2}$.
The processes communicate with each other by sending and receiving messages through a complete network of bidirectional communication channels.
This means that a process can directly communicate with any other process, including itself ($p_i$ receives its own messages instantaneously),
and can identify the sender of the message it received. Each process is equipped with two operations: \textbf{send} and \textbf{receive}.
The channels are reliable (no losses, no creation, no duplication, no alteration of messages) and asynchronous
(finite time needed for a message to be transmitted but there is no upper bound). We also assume
the channels are FIFO: if $p_i$ sends two messages to $p_j$, $p_j$ will receive them in the order they were sent.
As stated in \cite{birman1987reliable}, FIFO channels can always be implemented on top of non-FIFO channels.
Therefore, this assumption does not bring additional computational power to the model, but it allows us to simplify the writing of the algorithm. Process $p_i$ can also use the macro-operation \textbf{FIFO broadcast}, which can be seen as a multi-send that sends a message to all processes, including itself. Hence, if a faulty process crashes during the broadcast operation, some processes may receive the message while others may not; otherwise, all correct processes eventually receive the message.
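As a minimal sketch (assuming only the \textbf{send} primitive of the model, with illustrative names), FIFO broadcast can be realized as a loop of point-to-point sends over the FIFO channels:
\begin{verbatim}
# Sketch: FIFO broadcast as a multi-send over FIFO point-to-point channels.
# channels[j] denotes the FIFO channel from this process to p_j
# (illustrative names, not an existing API).
def fifo_broadcast(channels, message):
    for j in range(len(channels)):   # includes the sender itself
        channels[j].send(message)    # per-channel FIFO order is preserved
\end{verbatim}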
\subsection{Single-Writer/Multi-Reader Registers and Snapshot Memory}
\label{sec:memory}
The shared memory considered in this paper, called a \emph{snapshot memory}, consists of an array of shared registers denoted $\DataFont{REG}[1..n]$. Each entry $\DataFont{REG}[i]$ represents a single-writer/multi-reader (SWMR) register. When process $p_i$ invokes $\DataFont{REG}.\AlgoWrite(v)$, the value $\Algov$ is written into the SWMR register $\DataFont{REG}[i]$ associated with process $p_i$. Differently, any process $p_i$ can read the whole array $\DataFont{REG}$ by invoking a single operation namely $\DataFont{REG}.\AlgoSnap()$. According to the sequential specification of the snapshot memory, $\DataFont{REG}.\AlgoSnap()$ returns an array containing the most recent value written by each process or the initial default value if no value is written on some register.
Concurrency is possible between snapshot and write operations, as long as the considered consistency criterion, namely linearizability or sequential consistency, is respected. Informally, in a sequentially consistent snapshot memory, each snapshot operation must return the last value written by the process that initiated it,
and for any pair of snapshot operations, one must return values at least as recent as the other for all registers.
Compared to read and write operations, the snapshot operation is a higher-level abstraction, introduced in \cite{Afek93}, that eases program design without bringing additional power with respect to shared registers. Of course, this induces an additional cost: the best known simulation above SWMR registers, proposed in \cite{AttiyaR98}, needs $O(n\log n)$ basic read/write operations to implement each snapshot and the associated update operations.
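The sequential specification just described can be sketched as follows (a minimal illustration with names of our choosing, not the algorithm itself):
\begin{verbatim}
# Sequential specification of the snapshot memory (illustrative sketch).
class SnapshotMemory:
    def __init__(self, n, default=0):
        self.reg = [default] * n   # REG[0..n-1], one SWMR register per process

    def write(self, i, v):         # invoked by p_i only: REG[i] <- v
        self.reg[i] = v

    def snapshot(self):            # most recent value written by each process
        return list(self.reg)
\end{verbatim}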
Since the seminal paper \cite{attiya1995sharing} that proposed the so-called ABD simulation, which emulates a linearizable shared memory over a message-passing distributed system, most of the effort has been put on the shared memory model, given that a simple stacking allows any shared-memory-based result to be translated to the message-passing system model.
Several implementations of linearizable snapshot have been proposed in the literature. Some works consider variants of snapshot (e.g. immediate snapshot \cite{BorowskyG92}, weak snapshot \cite{dwork1992time}, one scanner \cite{KirousisST94}), while others assume that special constructions such as test-and-set (T\&S) \cite{AHR95} or load-link/store-conditional (LL/SC) \cite{RST01} are available, the goal being to enhance time and space efficiency.
In this paper, we propose the first sequentially consistent (not linearizable) snapshot memory implementation built directly over a message-passing system (and consequently the first sequentially consistent array of SWMR registers), as traditional read and write operations can be immediately deduced from snapshot and update with no additional cost.
\subsection{The Proposed Algorithm}
\label{sec:algo}
\begin{algorithm}[p]
\tcc{Local variable initialization}
$\KwX_i \leftarrow [0,\dots, 0]$\label{al:SCS:varX}\tcp*{$\KwX_i \in \mathbb{N}^{n}$: $\KwX_i[j]$ is the last validated value written by $p_j$}
$\KwVC_i \leftarrow [0,\dots, 0]$\label{al:SCS:varVC}\tcp*{$\KwVC_i \in \mathbb{N}^{n}$: $\KwVC_i[j]$ is the stamp given by $p_j$ to value $\KwX_i[j]$}
$\KwSC_i \leftarrow 0$\label{al:SCS:varSC}\tcp*{$\KwSC_i \in \mathbb{N}$: used to stamp all the updates}
$\KwG_i \leftarrow \emptyset$\label{al:SCS:varG}\tcp*{$\KwG_i \subset \mathbb{N}^{3+n}$: contains a $g = (g.\KwGV, g.\KwGK, g.\KwGT, g.\KwGCL)$ per non-val. update of $g.\KwGV$ by $p_{g.\KwGK}$}
$\KwV_i \leftarrow \bot$\label{al:SCS:varV}\tcp*{$\KwV_i \in \mathbb{N}\cup \{\bot\}$: stores updates that have not yet been proposed to validation}
\renewcommand{\nl}{\let\nl\oldnl}\hrulefill\\
\renewcommand{\nl}{\let\nl\oldnl}\SubAlgo{\Op $\KwWrite(\Kwv)$ \tcc*[h]{$\Kwv\in \mathbb{N}$: written value; no return value}}{
\uIf(\label{al:SCS:w1}\tcp*[f]{no non-validated update by $p_i$}){$\forall g\in \KwG_i: g.\KwGK\neq i$} {
$\KwSC_i\text{++}$\; \Broadcast $\KwM(\Kwv,i,\KwSC_i,\KwSC_i)$\;\label{al:SCS:w3}
}
\lElse(\label{al:SCS:w4}\tcp*[f]{postpone the update}){$\KwV_i \leftarrow \Kwv$}
}
\renewcommand{\nl}{\let\nl\oldnl}\hrulefill\\
\renewcommand{\nl}{\let\nl\oldnl}\SubAlgo{\Op $\KwSnap()$ \tcc*[h]{return type: $\mathbb{N}^n$}}{
\Wait{$\KwV_i=\bot \land \forall g\in \KwG_i: g.\KwGK\neq i$} \label{al:SCS:r1}\tcp*[r]{make sure $p_i$'s updates are validated}
\Return $\KwX_i$\label{al:SCS:r2}\;
}
\renewcommand{\nl}{\let\nl\oldnl}\hrulefill\\
\renewcommand{\nl}{\let\nl\oldnl}\SubAlgo{\RecA $\KwM(\Kwv,\Kwk,\Kwt,\Kwcl)$ \RecB $p_j$} {
\tcc{$\Kwv\in \mathbb{N}$: written value, $\Kwk\in \mathbb{N}$: writer id, $\Kwt\in \mathbb{N}$: stamp by $p_{\Kwk}$, $\Kwcl\in \mathbb{N}$: stamp by $p_j$}
\If(\label{al:SCS:mA1}\tcp*[f]{update not validated yet}){$\Kwt>\KwVC_i[\Kwk]$} {
\eIf(\label{al:SCS:mA2}\tcp*[f]{update already known}){$\exists g\in \KwG_i: g.\KwGK = \Kwk \land g.\KwGT = \Kwt$}{
$g.\KwGCL[j] \leftarrow \Kwcl$\;\label{al:SCS:mA3}
}(\label{al:SCS:mA4}\tcp*[f]{first message for this update}){
\If{$\Kwk\neq i$}{
$\KwSC_i\text{++}$\; \Broadcast $\KwM(\Kwv, \Kwk, \Kwt, \KwSC_i)$\label{al:SCS:mA6}\tcp*[r]{forward with own stamp}
}
\textbf{var} $g \leftarrow \left(g.\KwGV = \Kwv, g.\KwGK=\Kwk, g.\KwGT=\Kwt, g.\KwGCL=[\infty,\dots,\infty]\right)$\label{al:SCS:mA7}\;
$g.\KwGCL[j] \leftarrow \Kwcl$\label{al:SCS:mA8}\;
$\KwG_i \leftarrow \KwG_i\cup \{g\}$\label{al:SCS:mA9}\tcp*[r]{create an entry in $\KwG_i$ for the update}
}
}
\textbf{var} $G' = \{g \in \KwG_i: |\{l : g'.\KwGCL[l] < \infty\}| > \frac{n}{2} \} $\label{al:SCS:mB1}\tcp*[r]{$G'$ contains updates that can be validated}
\lWhile(\label{al:SCS:mB2}){$\exists g\in \KwG_i\setminus G', g'\in G' : |\{l : g'.\KwGCL[l] < g.\KwGCL[l] \}| \le \frac{n}{2}$}{
$G'\leftarrow G'\setminus \{g'\}$\label{al:SCS:mB3}}
$\KwG_i \leftarrow \KwG_i \setminus G'$\label{al:SCS:mB4}\tcp*[r]{validate updates of $G'$}
\For(\label{al:SCS:mB5}){$g\in G'$}{
\lIf{$\KwVC_i[g.\KwGK]<g.\KwGT$}{$\KwVC_i[g.\KwGK]=g.\KwGT;$ $\KwX_i[g.\KwGK]=g.\KwGV$}\label{al:SCS:mB6}
}
\If(\label{al:SCS:mC1}\tcp*[f]{start validation process for postponed update if any}){$\KwV_i\neq\bot \land \forall g\in \KwG_i: g.\KwGK\neq i$}{
$\KwSC_i\text{++}$\;
\Broadcast $\KwM(\KwV_i,i,\KwSC_i,\KwSC_i)$\label{al:SCS:mC3}\;
$\KwV_i \leftarrow \bot$\label{al:SCS:mC4}\;
}
}
\caption{Implementation of a sequentially consistent memory (code for $p_i$)}
\label{algo:SCS}
\end{algorithm}
Algorithm~\ref{algo:SCS} proposes an implementation of the sequentially consistent snapshot memory data structure presented in Section~\ref{sec:memory}. Process $p_i$ can write a value $\Algov$ in its own register $\DataFont{REG}[i]$
by calling the operation $\DataFont{REG}.\AlgoWrite(v)$, implemented by lines~\ref{al:SCS:w1}-\ref{al:SCS:w4}. It can also call the operation $\DataFont{REG}.\AlgoSnap()$, implemented by lines~\ref{al:SCS:r1}-\ref{al:SCS:r2}.
Roughly speaking, the principle of this algorithm is to maintain, on each process, a local view of the object that reflects a set of \emph{validated} update operations.
To do so, when a value is written, all processes label it with their own timestamp. The order in which processes timestamp two different update operations defines a \emph{dependency relation} between these operations. For two operations $a$ and $b$, if $b$ depends on $a$, then $p_i$ cannot validate $b$ before $a$.
More precisely, each process $p_i$ maintains five local variables:
\begin{itemize}
\item $\AlgoX_i \in \mathbb{N}^n$ represents the array of most recent validated values written on each register.
\item $\AlgoVC_i \in \mathbb{N}^n$ represents the timestamps associated with the values stored in $\AlgoX_i$, labelled by the process that initiated them.
\item $\AlgoSC_i\in \mathbb{N}$ is an integer clock used by $p_i$ to timestamp all the update operations. $\AlgoSC_i$ is incremented each time a message is sent,
which ensures all timestamps from the same process are different.
\item $\AlgoG_i \subset \mathbb{N}^{3+n}$ encodes the dependencies between the update
operations that have not been validated yet, as they are known by $p_i$. An element $g\in \AlgoG_i$, of the form $(g.\AlgoGV, g.\AlgoGK, g.\AlgoGT, g.\AlgoGCL)$, represents
the update operation of value $g.\AlgoGV$ by process $p_{g.\AlgoGK}$ labelled by process $p_{g.\AlgoGK}$ with timestamp $g.\AlgoGT$. For all $0\leq j<n$,
$g.\AlgoGCL[j]$ contains the timestamp associated by $p_j$ if it is known by $p_i$, and $\infty$ otherwise.
All updates of a history can be uniquely represented by a pair of integers $(k, t)$, where $p_k$ is the process that invoked it, and $t$ is the timestamp associated
to this update by $p_k$. Considering a history and a process $p_i$, we define the dependency relation $\rightarrow_i$ on pairs of integers $(k, t)$, by
$(k, t)\rightarrow_i (k', t')$ if for all $g, g'$ ever inserted in $\AlgoG_i$ with
$(g.\AlgoGK,g.\AlgoGT)=(k,t)$, $(g'.\AlgoGK,g'.\AlgoGT)=(k',t')$,
we have $|\{j : g'.\AlgoGCL[j] < g.\AlgoGCL[j] \}| \le \frac{n}{2}$ (i.e. the dependency does not exist if $p_i$ knows that a majority of processes have seen the second update before the first).
Let $\rightarrow_i^\star$ denote the transitive closure of $\rightarrow_i$.
\item $\AlgoV_i\in \mathbb{N}\cup\{\bot\}$ is a buffer register used to store a value written while the previous one is not yet validated. This is necessary for validation (see below).
\end{itemize}
The key of the algorithm is to ensure the inclusion between sets of validated updates on any two processes at any time. Remark that it is not always necessary
to order all pairs of update operations to implement a sequentially consistent snapshot memory: for example, two update operations on different registers commute.
Therefore, instead of validating both operations on all processes in the exact same order (which requires consensus), we can validate them at the same
time, to prevent a snapshot from occurring between them. It is thus sufficient to ensure that, for all pairs of update operations, there is a dependency
agreed on by all processes (possibly in both directions). This property is expressed by Lemma~\ref{lemma:safety} from Section~\ref{sec:proof}.
This is done by means of messages of the form $\AlgoM(\Algov,\Algok,\Algot,\Algocl)$ containing four integers: $\Algov$, the value written; $\Algok$, the identifier of the process that initiated the update; $\Algot$, the timestamp given by $p_{\Algok}$; and $\Algocl$, the timestamp given by the process that sent this message. Timestamps
of successive messages sent by $p_i$ are unique and totally ordered, thanks to variable $\AlgoSC_i$, which is incremented each time a message is sent by $p_i$.
When process $p_i$ wants to submit a value $\Algov$ for validation, it FIFO-broadcasts a message $\AlgoM(\Algov,i,\AlgoSC_i,\AlgoSC_i)$ (lines~\ref{al:SCS:w3} and~\ref{al:SCS:mC3}).
When $p_i$ receives a message $\AlgoM(\Algov,\Algok,\Algot,\Algocl)$, three cases are possible. If $p_i$ has already validated the corresponding update ($\Algot \le \AlgoVC_i[\Algok]$),
the message is simply ignored. Otherwise, if it is the first time $p_i$ receives a message concerning this update ($\AlgoG_i$ does not contain any piece of information concerning it),
it FIFO-broadcasts a message with its own timestamp and adds a new entry $g\in \AlgoG_i$. Whether it is its first message or not, $p_i$ records the timestamp $\Algocl$, given by $p_j$, in $g.\AlgoGCL[j]$ (lines~\ref{al:SCS:mA3} or~\ref{al:SCS:mA8}). Note that we cannot update $g.\AlgoGCL[\Algok]$ at this point, as the broadcast is not causal: if $p_i$ did so, it could miss dependencies imposed by the order in which $p_{\Algok}$ saw concurrent updates. Then, $p_i$ tries to validate update operations: $p_i$ can validate an operation $a$ if it has received messages concerning $a$ from a majority of processes, and there is no operation $b$ with $b\rightarrow_i^\star a$ that cannot be validated. For that, it creates the set $G'$ that initially contains all the operations that have received enough messages, removes from it all operations with unvalidatable dependencies (lines~\ref{al:SCS:mB1}-\ref{al:SCS:mB3}), and then updates $\AlgoX_i$ and $\AlgoVC_i$ with the most recent validated values (lines~\ref{al:SCS:mB4}-\ref{al:SCS:mB6}).
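To make the validation step more concrete, here is a minimal Python sketch of the computation of $G'$ (lines~\ref{al:SCS:mB1}-\ref{al:SCS:mB3}); the field names mirror the pseudocode, and this is an illustration under our own conventions, not the reference implementation:
\begin{verbatim}
import math

# Sketch of the validation step: G' starts with every update stamped by a
# strict majority of processes, then loses any update that depends (in the
# sense of ->_i) on an update that cannot be validated yet.
# Each pending update g is a dict whose field "cl" is the array GCL,
# with math.inf for processes whose stamp is still unknown.
def validatable(pending, n):
    Gp = [g for g in pending
          if sum(1 for cl in g["cl"] if cl < math.inf) > n / 2]
    changed = True
    while changed:
        changed = False
        for gp in list(Gp):
            for g in pending:
                if g in Gp:
                    continue
                # g ->_i gp holds unless a majority stamped gp before g.
                before = sum(1 for l in range(n) if gp["cl"][l] < g["cl"][l])
                if before <= n / 2:   # gp depends on a non-validatable update
                    Gp.remove(gp)
                    changed = True
                    break
    return Gp
\end{verbatim}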
\begin{figure}[t]
\centering
\scalebox{1.2}{
\begin{tikzpicture}
\fill[colorB!20] (5,2.6) {[rounded corners=5] -- (5,1.75) -- (7,1.75)} -- (7,-0.6) -- (8,-0.6) -- (8,2.6) -- cycle;
\fill[pattern=stripes, pattern color=colorA!20] (4,2.6) {[rounded corners=5] -- (4,1.25)} -- (7,1.25) -- (7,-0.6) -- (8,-0.6) -- (8,2.6) -- cycle;
\draw[black!40,rounded corners] (5,2.6) -- (5,1.75) -- (7,1.75) -- (7,-0.6);
\draw[black!40,rounded corners] (4,2.6) -- (4,1.25) -- (7,1.25);
\draw[draw=black!40,fill=black!40] (-0.5,-0.1) rectangle (7,0.1);
\draw[draw=black!40,fill=black!40] (-0.5,1.9) rectangle (4,2.1);
\draw[->] (-1,2.0) node[left]{$p_4$} -- (8,2.0);
\draw[->] (-1,1.5) node[left]{$p_3$} -- (8,1.5);
\draw[->] (-1,1.0) node[left]{$p_2$} -- (8,1.0);
\draw[->] (-1,0.5) node[left]{$p_1$} -- (8,0.5);
\draw[->] (-1,0.0) node[left]{$p_0$} -- (8,0.0);
\draw (3.75,2.35) node{$\emptyset$};
\draw (4.5,2.35) node{$\{a\}$};
\draw (5.5,2.35) node{$\{a, b\}$};
\draw[colorA, very thick] (-0.5,2.0) +(0,-0.2) -- +(0,0.2) +(0.8,0.4) node{$a: \DataFont{REG}[4].\AlgoWrite(1)$};
\draw[colorB, very thick] (-0.5,0.0) +(0,-0.2) -- +(0,0.2) +(0.8,-0.4) node{$b: \DataFont{REG}[0].\AlgoWrite(1)$};
\draw[-latex, colorA] (-0.5,2.0) to[out=90,in=30,distance=10] (-0.5,2.0);
\draw[-latex, colorA] (-0.5,2.0) -- (1,1.5);
\draw[-latex,colorA] (1,1.5) to[out=-150,in=-90,distance=10] (1,1.5);
\draw[-latex,colorA] (1,1.5) -- (2.5,1.0);
\draw[-latex,colorA] (1,1.5) -- (2.5,2.0);
\draw[-latex,colorA] (2.5,1.0) to[out=-150,in=-90,distance=10] (2.5,1.0);
\draw[-latex,colorA] (2.5,1.0) -- (4.0,1.5);
\draw[-latex,colorA] (2.5,1.0) -- (4.0,2.0);
\draw[-latex,colorA] (2.5,1.0) -- (4.5,0.5);
\draw[-latex,colorA] (2.5,1.0) -- (4.5,0.0);
\draw[-latex,colorA] (4.5,0.5) to[out=-150,in=-90,distance=10] (4.5,0.5);
\draw[-latex,colorA] (4.5,0.5) -- (7.0,0.0);
\draw[-latex,colorA] (4.5,0.5) -- (5.50,1.0);
\draw[-latex,colorA] (4.5,0.0) to[out=-150,in=-90,distance=10] (4.5,0.0);
\draw[-latex,colorA] (4.5,0.0) -- (7.0,0.5);
\draw[-latex,colorB] (-0.5,0.0) to[out=-90,in=-30,distance=10] (-0.5,0.0);
\draw[-latex,colorB] (-0.5,0.0) -- (1,0.5);
\draw[-latex,colorB] (1.0,0.5) to[out=150,in=90,distance=10] (1.0,0.5);
\draw[-latex,colorB] (1.0,0.5) -- (4.0,1.0);
\draw[-latex,colorB] (1.0,0.5) -- (3.0,0.0);
\draw[-latex,colorB] (1.0,0.5) -- (3.0,2.0);
\draw[-latex,colorB] (4.0,1.0) to[out=150,in=90,distance=10] (4.0,1.0);
\draw[-latex,colorB] (4.0,1.0) -- (5.5,0.0);
\draw[-latex,colorB] (4.0,1.0) -- (5,1.5);
\draw[-latex,colorB] (4.0,1.0) -- (5,2.0);
\draw[-latex,colorB] (4.0,1.0) -- (5.5,0.5);
\draw[-latex,colorB] (5.0,1.5) to[out=150,in=90,distance=10] (5.0,1.5);
\draw[-latex,colorB] (5.0,1.5) -- (7.0,1.0);
\draw[-latex,colorB] (3,2.0) to[out=150,in=90,distance=10] (3,2.0);
\draw[-latex,colorB] (5.0,2.0) -- (7.0,1.5);
\end{tikzpicture}
}
\caption{An execution of Algorithm~\ref{algo:SCS}. An update is validated by a process when it has received enough messages for this update, and all the other updates it depends on have also been validated.}
\label{fig:expl_algo:handshake}
\end{figure}
This mechanism is illustrated in Fig.~\ref{fig:expl_algo:handshake}, featuring five processes. Processes $p_0$ and $p_4$ initially call operation $\DataFont{REG}.\AlgoWrite(1)$. Messages that have an impact on the algorithm are represented by arrows; messages that do not appear on the figure are received later.
Several situations may occur. The simplest case is process $p_3$, that received three messages concerning $a$ (from $p_4$, $p_3$ and $p_2$, with $3>\frac{n}{2}$) before its first message concerning $b$, allowing it to validate $a$. The case of process $p_4$ is similar: even if it knows that process $p_1$ saw $b$ before $a$, it received messages concerning $a$ from three \emph{other} processes, which allows it to ignore the message from $p_1$.
At first sight, the situation of processes $p_0$ and $p_1$ may look similar to the situation of $p_4$. However, the message they received concerning $a$
and one of the messages they received concerning $b$ come from the same process $p_2$, which forces them to respect the dependency $a\rightarrow_0 b$.
Note that the same situation occurs on process $p_2$ so, even if $a$ has been validated before $b$ by other processes, $p_2$ must respect the dependency
$b\rightarrow_2 a$.
Sequential consistency requires the total order to contain the process order. Therefore, a snapshot of process $p_i$ must return values at least as recent as $p_i$'s last update.
In other words, a snapshot is not allowed to return between an update and the time when that update is validated
(grey zones in Fig.~\ref{fig:expl_algo:handshake}). There are two ways to implement this: we can either wait at the end of each update until it is validated, in
which case all snapshot operations are done for free, or wait at the beginning of all snapshot operations that immediately follow an update operation. This
extends the remark of~\cite{attiya1994sequential} to crash-prone asynchronous systems: to implement a sequentially consistent memory, it is necessary and
sufficient to wait either during read or during write operations. In Algorithm~\ref{algo:SCS}, we chose to wait during read/snapshot operations (line~\ref{al:SCS:r1}).
This is more efficient for two reasons: first, it is not necessary to wait between two consecutive updates, which cannot be avoided if we
wait at the end of the update operation; second, the time between the end of an update and the beginning of a snapshot counts towards the validation
process, but it can be used for local computations. Note that when two snapshot operations are invoked successively, the second one also returns immediately,
which improves on the result of~\cite{attiya1994sequential}, according to which waiting is necessary for all the operations of one kind.
\begin{figure}[t]
\centering
\scalebox{1.1}{
\begin{tikzpicture}
\colorlet{colorA}{black!20!red}
\colorlet{colorB}{black!20!blue}
\draw[colorA] (1.5,-0.85) node{$a$};
\draw (2 ,-0.85) node{$\rightleftharpoons$};
\draw[colorB] (2.5,-0.85) node{$b$};
\draw (3 ,-0.85) node{$\rightleftharpoons$};
\draw[colorA] (3.5,-0.85) node{$c$};
\draw (4 ,-0.85) node{$\rightleftharpoons$};
\draw[colorB] (4.5,-0.85) node{$d$};
\draw (5 ,-0.85) node{$\rightleftharpoons$};
\draw[colorA] (5.5,-0.85) node{$e$};
\draw (6 ,-0.85) node{$\rightleftharpoons$};
\draw[colorB] (6.5,-0.85) node{$f$};
\draw (7 ,-0.85) node{$\rightleftharpoons$};
\draw[colorA] (7.5,-0.85) node{$g$};
\draw (8 ,-0.85) node{$\rightleftharpoons$};
\draw[colorB] (8.5,-0.85) node{$h$};
\draw (9 ,-0.85) node{$\rightleftharpoons$};
\draw (9.5,-0.85) node{$\dots$};
\draw[->] (0.5,1.5) node[left]{$p_3$} -- (9.5,1.5);
\draw[->] (0.5,1.0) node[left]{$p_2$} -- (9.5,1.0);
\draw[->] (0.5,0.5) node[left]{$p_1$} -- (9.5,0.5);
\draw[->] (0.5,0.0) node[left]{$p_0$} -- (9.5,0.0);
\draw[colorA, very thick] (1,1.5) +(0,-0.2) -- +(0,0.2) +(0,0.35) node{$a$};
\draw[colorA, very thick] (3,1.5) +(0,-0.2) -- +(0,0.2) +(0,0.35) node{$c$};
\draw[colorA, very thick] (5,1.5) +(0,-0.2) -- +(0,0.2) +(0,0.35) node{$e$};
\draw[colorA, very thick] (7,1.5) +(0,-0.2) -- +(0,0.2) +(0,0.35) node{$g$};
\draw[colorA] (9,1.85) node{$\dots$};
\draw[colorB, very thick] (1,0.0) +(0,-0.2) -- +(0,0.2) +(0,-0.35) node{$b$};
\draw[colorB, very thick] (3,0.0) +(0,-0.2) -- +(0,0.2) +(0,-0.35) node{$d$};
\draw[colorB, very thick] (5,0.0) +(0,-0.2) -- +(0,0.2) +(0,-0.35) node{$f$};
\draw[colorB, very thick] (7,0.0) +(0,-0.2) -- +(0,0.2) +(0,-0.35) node{$h$};
\draw[colorB] (9,-0.35) node{$\dots$};
\draw[-latex,colorA] (1,1.5) -- (1.67,1.0);
\draw[-latex,colorA] (3,1.5) -- (3.67,1.0);
\draw[-latex,colorB] (1,0.0) -- (4.33,1.0);
\draw[-latex,colorA] (5,1.5) -- (5.67,1.0);
\draw[-latex,colorB] (3,0.0) -- (6.33,1.0);
\draw[-latex,colorA] (7,1.5) -- (7.67,1.0);
\draw[-latex,colorB] (5,0.0) -- (8.33,1.0);
\draw[-latex,colorB] (7,0.0) -- (9.0,1.0);
\draw[-latex,colorB] (1,0.0) -- (1.67,0.5);
\draw[-latex,colorB] (3,0.0) -- (3.67,0.5);
\draw[-latex,colorA] (1,1.5) -- (4.33,0.5);
\draw[-latex,colorB] (5,0.0) -- (5.67,0.5);
\draw[-latex,colorA] (3,1.5) -- (6.33,0.5);
\draw[-latex,colorB] (7,0.0) -- (7.67,0.5);
\draw[-latex,colorA] (5,1.5) -- (8.33,0.5);
\draw[-latex,colorA] (7,1.5) -- (9.0,0.5);
\end{tikzpicture}
}
\caption{If we are not careful, infinite chains of dependencies may occur; such chains must be avoided to ensure termination.}
\label{fig:expl_algo:dependences}
\end{figure}
In order to obtain termination of the snapshot operations (and progress in general), it is necessary to ensure that all update operations are eventually
validated by all processes. This property is expressed by Lemma~\ref{lemma:liveness} from Section~\ref{sec:proof}.
Figure \ref{fig:expl_algo:dependences} illustrates what could happen. On the one hand,
process $p_2$ receives a message concerning $a$ and a message concerning $c$ before a message concerning $b$. On the other hand,
process $p_1$ receives a message concerning $b$ before messages concerning $a$ and $c$. Therefore, it may create dependencies
$a\rightarrow_i b \rightarrow_i c \rightarrow_i b \rightarrow_i a$ on some process $p_i$, which means $p_i$ will be forced to validate $a$ and $c$
at the same time, even if they are ordered by the process order. The pattern in Fig.~\ref{fig:expl_algo:dependences} shows that it can
result in an infinite chain of dependencies, blocking the validation of any update operation. To break this chain, we force process $p_3$ to wait until $a$ is validated
locally before it proposes $c$ for validation, by storing the value written by $c$ in a local variable $\AlgoV_i$ until $a$ is validated (lines~\ref{al:SCS:w1} and~\ref{al:SCS:w4}). When $a$ is validated,
we start the same validation process for $c$ (lines~\ref{al:SCS:mC1}-\ref{al:SCS:mC4}).
Remark that, if several updates (say $c$ and $e$) happen before $a$ is validated, the update of $c$ can be dropped as it will eventually be overwritten by $e$.
In this case, $c$ will happen just before $e$ in the final linearization required for sequential consistency.
This algorithm could be adjusted to implement multi-writer/multi-reader registers. Only three points must be changed. First, the identifier of the written register
should be added to all messages and to all $g\in \AlgoG_i$. Second, concurrent updates on the same register must be ordered; this can be done,
for example, by replacing $\AlgoSC_i$ with a Lamport clock, which respects the order in which updates are validated, and using a lexicographic
order on pairs $(\Algocl, \Algok)$. Third, the variable $\AlgoV_i$ must be replaced by a set of update operations, and so must the value contained in the messages.
All in all, this greatly complicates the algorithm, without changing the way concurrency is handled.
This is why we only focus on collections of SWMR registers here.
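For instance, the second point could rely on a lexicographic comparison of (Lamport clock, writer identifier) pairs, as in the following illustrative sketch:
\begin{verbatim}
# Sketch: totally ordering two updates on the same register in the MWMR
# variant, by lexicographic comparison of (Lamport clock, writer id).
def earlier(cl_a, k_a, cl_b, k_b):
    return (cl_a, k_a) < (cl_b, k_b)
\end{verbatim}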
\subsection{Correctness}\label{sec:proof}
In order to prove that Algorithm~\ref{algo:SCS} implements a sequentially consistent snapshot memory, we must show that two important properties are verified
by all histories it admits. These two properties correspond to lemmas~\ref{lemma:safety} and~\ref{lemma:liveness}. In Lemma~\ref{lemma:safety},
we show that it is possible to totally order the sets of updates validated by two processes at different moments. This allows us to build a total order
on all the operations. In Lemma~\ref{lemma:liveness}, we prove that all update operations are eventually validated by all processes. This is important
to ensure termination of snapshot operations, and to ensure that update operations can not be ignored forever.
Before that, Lemma~\ref{lemma:broadcast} expresses a central property of how the algorithm works: the fact that each correct process
broadcasts a message corresponding to each written value proposed for validation.
Finally, Proposition~\ref{prop:correct} proves that all histories admitted by Algorithm~\ref{algo:SCS} are sequentially consistent.
In the following and for each process $p_i$ and local variable $x_i$ used in the algorithm, let us denote by $x_i^t$ the value of $x_i$ at time $t$.
For example, $\AlgoVC_i^0$ is the initial value of $\AlgoVC_i$. For arrays of $n$ integers $cl$ and $cl'$, we also denote by $cl \le cl'$
the fact that, for all $i$, $cl[i]\le cl'[i]$ and $cl < cl'$ if $cl \le cl'$ and $cl \neq cl'$.
\begin{lemma} \label{lemma:broadcast}
If a message $\AlgoM(\Algov, \Algok, \Algot, \Algocl)$ is broadcast by a correct process $p_i$,
then each correct process $p_j$ broadcasts a unique message $\AlgoM(\Algov, \Algok, \Algot, \Algocl')$.
In the following, for all processes $p_j$ and pairs $(k, t)$, let us denote by $M_j(k,t)$ the message
$\AlgoM(\Algov, \Algok, \Algot, \Algocl')$ and by $CL_j(k,t) = \Algocl'$ the stamp that $p_j$ put in this message.
\end{lemma}
\begin{proof}
Let $p_i$ and $p_j$ be two correct processes, and suppose $p_i$ broadcasts a message $M_i(\Algok, \Algot)$.
First, we prove that $p_j$ broadcasts a message $M_j(\Algok, \Algot)$.
As $p_i$ is correct, $p_j$ will eventually receive the message sent by $p_i$. At that time,
if $\Algot > \AlgoVC_j[\Algok]$, after the condition on line~\ref{al:SCS:mA2} and whatever its result,
$\AlgoG_j$ contains a value $g$ with $g.\AlgoGK = \Algok$ and $g.\AlgoGT = \Algot$. That $g$ was inserted on
line~\ref{al:SCS:mA9} (possibly after the reception of a different message), just after $p_j$ sent a message $M_j(\Algok, \Algot)$
at line~\ref{al:SCS:mA6}.
Otherwise, $\AlgoVC_j[\Algok]$ was incremented on line~\ref{al:SCS:mB6}, when validating some $g'$ that was added to $\AlgoG_j$
after $p_j$ received a (first) message $M_l(g'.\AlgoGK, g'.\AlgoGT)$, with $g'.\AlgoGK = \Algok$ and $g'.\AlgoGT = \AlgoVC_j[\Algok]$.
Remark that $p_{\Algok}$ sent message $M_{\Algok}(\Algok, \Algot)$ before $M_{\Algok}(\Algok, \AlgoVC_j[\Algok])$ and that all other processes only forward messages; as FIFO reception is used, $p_j$ received message $M_l(\Algok, \Algot)$ before $M_l(\Algok, \AlgoVC_j[\Algok])$, and at that time $\Algot > \AlgoVC_j[\Algok]$, so the first case applies.
Now, we prove that $p_i$ broadcasts no other message with the same $\Algok$ and $\Algot$ later.
If $i=\Algok$, the message would be sent on line~\ref{al:SCS:w3} or~\ref{al:SCS:mC3}, just after $\AlgoSC_i$ is incremented,
which would lead to a different $\Algot$. Otherwise, the message would be sent on line~\ref{al:SCS:mA6}, which would mean
the condition of line~\ref{al:SCS:mA2} is false. As $p_i$ broadcast a first message, a corresponding $g$ had been present in $\AlgoG_i$
and was deleted on line~\ref{al:SCS:mB4}, which would make the condition of line~\ref{al:SCS:mA1} false.
\end{proof}
\begin{lemma} \label{lemma:safety}
Let $p_i$, $p_j$ be two processes and $t_i$, $t_j$ be two time instants, and let us denote by $\AlgoVC_i^{t_i}$ (resp. $\AlgoVC_j^{t_j}$) the value of $\AlgoVC_i$ (resp. $\AlgoVC_j$) at time $t_i$ (resp. $t_j$). We have either, for all $k$, $\AlgoVC_i^{t_i}[k] \le \AlgoVC_j^{t_j}[k]$ or for all $k$, $\AlgoVC_j^{t_j}[k] \le \AlgoVC_i^{t_i}[k]$.
\end{lemma}
\begin{proof}
Let $p_i$, $p_j$ be two processes and $t_i$, $t_j$ be two instants. Let us suppose (by contradiction) that
there exist $k$ and $k'$ such that $\AlgoVC_j^{t_j}[k] < \AlgoVC_{i}^{t_i}[k]$ and $\AlgoVC_i^{t_i}[k'] < \AlgoVC_{j}^{t_j}[k']$.
As $\AlgoVC_i$ is only updated on line \ref{al:SCS:mB6}, at some time $t_i^k \le t_i$, there was $g_i^k\in G'$ with
$g_i^k.\AlgoGK = k$ and $g_i^k.t = \AlgoVC_i^{t_i}[k]$. According to line \ref{al:SCS:mB1}, we have
$|\{l : g_i^k.\AlgoGCL[l] < \infty\}| > \frac{n}{2}$ and according to lines \ref{al:SCS:mA3} and \ref{al:SCS:mA8},
each finite field $g_i^k.\AlgoGCL[l]$ corresponds to the reception of a message $M_l(k, \AlgoVC_i^{t_i}[k])$.
Similarly, process $p_j$ received messages $M_l(k', \AlgoVC_j^{t_j}[k'])$ from more than $\frac{n}{2}$ processes.
Since the number of processes is $n$, the intersection of these two sets of processes is not empty.
Let $p_c$ be a process that belongs to both sets, i.e. $p_c$ broadcast messages $M_c(k, \AlgoVC_i^{t_i}[k])$ and
$M_c(k', \AlgoVC_j^{t_j}[k'])$. Process $p_c$ sent these two messages in a given order, let us say
$M_c(k', \AlgoVC_j^{t_j}[k'])$ before $M_c(k, \AlgoVC_i^{t_i}[k])$ (the other case is symmetric).
As $\AlgoSC_c$ is never decremented and it is incremented before all sendings, $CL_c(k', \AlgoVC_j^{t_j}[k']) < CL_c(k, \AlgoVC_i^{t_i}[k])$.
Moreover, as the protocol uses FIFO ordering, $p_i$ received the two messages in the same order; let $t'_i \le t_i$ be the time at which $p_i$ received $M_c(k', \AlgoVC_j^{t_j}[k'])$.
According to line~\ref{al:SCS:mB6}, $\AlgoVC_i$ can only increase, so $\AlgoVC_i^{t'_i}[k'] \le \AlgoVC_i^{t_i}[k'] < \AlgoVC_j^{t_j}[k']$. It means that the condition on line~\ref{al:SCS:mA1} was true when
$p_i$ received $M_c(k', \AlgoVC_j^{t_j}[k'])$. Then, after the execution of the condition starting on line~\ref{al:SCS:mA2} and whatever the result of this condition,
there was a $g_i^{k'}\in \AlgoG_i$ with $g_i^{k'}.\AlgoGK = k'$, $g_i^{k'}.\AlgoGT = \AlgoVC_j^{t_j}[k']$ and $g_i^{k'}.\AlgoGCL[c] = CL_c(k', \AlgoVC_j^{t_j}[k'])$.
At time $t_i$, if $g_i^{k'}\not\in \AlgoG_i$, it was removed on line~\ref{al:SCS:mB4}, which means $\AlgoVC_i^{t_i}[k']\ge g_i^{k'}.\AlgoGT = \AlgoVC_j^{t_j}[k']$
by lines~\ref{al:SCS:mB5} and~\ref{al:SCS:mB6}, which is absurd by our hypothesis. Otherwise, after line~\ref{al:SCS:mB2} was executed at time $t_i^k$,
we have $g_i^{k}\in G'$ and $g_i^{k'}\not\in G'$, which is impossible as $g_i^{k'}.\AlgoGCL[c]\le g_i^{k}.\AlgoGCL[c]$.
This is a contradiction. Therefore $\AlgoVC_i^{t_i} \le \AlgoVC_j^{t_j}$ or $\AlgoVC_j^{t_j} \le \AlgoVC_i^{t_i}$.
\end{proof}
\begin{lemma} \label{lemma:liveness}
If a message $\AlgoM(\Algov,i,\Algot,\Algot)$ is sent by a correct process $p_i$, then beyond some time $t'$,
for each correct process $p_j$, $\AlgoVC_j^{t'}[i] \ge \Algot$.
\end{lemma}
\begin{proof}
Let us suppose a message $M_i(i,t)$ is sent by a correct process $p_i$.
Let us suppose (by contradiction) that there exists a process $p_j$ such that the pair $(i, t)$ has an infinity of
predecessors according to $\rightarrow_j^\star$.
As the number of processes is finite, an infinity of these predecessors correspond to the same process, let us say $(k, t_l)_{l\in \mathbb{N}}$.
As $p_j$ is correct, $p_k$ eventually receives message $M_j(i,t)$, which means an infinity of messages $M_k(k,t_l)$
were sent after $p_k$ received message $M_j(i,t)$, and for all of them, $(k,t_l)\rightarrow_j^\star (i,t)\rightarrow_j (k,t_l)$.
Therefore, there exists a sequence $(k_1, t'_1) \rightarrow_j (k_2, t'_2) \rightarrow_j \dots \rightarrow_j (k_m, t'_m)$
with $k_1 = k_m = k$ and $t'_m > t'_1$. Two cases are possible for $(k_2, t'_2)$:
\begin{itemize}
\item If $p_k$ received a message $M_x(k_2, t'_2)$ (from any $p_x$) before it sent $M_k(k, t'_1)$,
then $p_k$ also sent $M_k(k_2, t'_2)$ before it sent $M_k(k, t'_1)$, and all processes
received these messages in the same order (and possibly a message $M_x(k_2, t'_2)$ even earlier from another process),
which is in contradiction with the fact that $(k, t'_1) \rightarrow_j (k_2, t'_2)$.
\item Otherwise, there is an index $l$ such that process $p_{k}$ received a message $M_x(k_{l'}, t'_{l'})$ (from any $p_x$)
for all $l'>l$ but not for $l'=l$, before it sent message $M_k(k, t'_1)$. Whether it finally sends it on line \ref{al:SCS:w3} or line \ref{al:SCS:mC3},
there was no $g\in \AlgoG_k$ corresponding to $(k_m, t'_m)$ so, by lines \ref{al:SCS:mB1}-\ref{al:SCS:mB2}, $p_{k}$ received messages
$M_x(k_{l'}, t'_{l'})$ for all $l'>l$ from a majority of processes $p_x$, and all of them sent $M_x(k_{l'}, t'_{l'})$ before $M_x(k_{l}, t'_{l})$.
As $(k_{l}, t'_{l}) \rightarrow_j (k_{l+1}, t'_{l+1})$ and FIFO reception is used, a majority of processes sent
$M_x(k_{l}, t'_{l})$ before $M_x(k_{l+1}, t'_{l+1})$. This is impossible, as two majorities always have a non-empty intersection.
Therefore, this case is also impossible.
\end{itemize}
Finally, for all correct processes $p_j$, there exists a finite number of pairs $(k, t')$ such that $(k, t') \rightarrow_j^\star (i, t)$.
As $p_j$ is correct, according to Lemma~\ref{lemma:broadcast}, $p_j$ will eventually receive a message $M_x(k,t')$ for each of them
from all correct processes, which form a majority.
Upon reception of the last of these messages, on line \ref{al:SCS:mB5}, $G'$ will contain a $g$ with $g.\AlgoGK=i$ and $g.\AlgoGT=t$,
and after $p_j$ has executed line \ref{al:SCS:mB6}, it will have $\AlgoVC_j[i] \ge t$. As $\AlgoVC_j[i]$ can only grow and what precedes holds for all $j$,
eventually $\AlgoVC_j[i] \ge t$ holds at all correct processes.
\end{proof}
Finally, given Lemmas~\ref{lemma:safety} and~\ref{lemma:liveness}, it is possible to prove that Algorithm \ref{algo:SCS} implements a sequentially consistent snapshot memory (Proposition~\ref{prop:correct}). The idea is to order snapshot operations according to the order given by Lemma~\ref{lemma:safety} on
the value of $\AlgoVC_i$ when they were made and to insert the update operations at the position where
$\AlgoVC_i$ changes because they are validated. It is possible to complete this order into a linearization
order, thanks to Lemma~\ref{lemma:liveness}, and to show that the execution of all the operations in that order
respects the sequential specification of the snapshot memory data structure.
\pagebreak
\begin{proposition}\label{prop:correct}
All histories admitted by Algorithm \ref{algo:SCS} are sequentially consistent.
\end{proposition}
\begin{proof}
Let $H$ be a history admitted by Algorithm \ref{algo:SCS}. For each operation $op$, let us define $op.clock$ as follows:
\begin{itemize}
\item If $op$ is a snapshot operation done on process $p_i$, $op.clock$ is the value of $\AlgoVC_i$ when $p_i$ executes line~\ref{al:SCS:r2}.
\item If $op$ is an update operation done on process $p_i$, let us remark that the call to $op$ is followed by the sending of a message
$\AlgoM(v, i, cl_i, cl_i)$, either directly on line~\ref{al:SCS:w3} or later on line~\ref{al:SCS:mC3}, as Lemma~\ref{lemma:liveness} prevents
the condition of line~\ref{al:SCS:mC1} from remaining false forever (in this case, the value $v$ may be more recent than the one written by $op$).
Let us consider the clock $cl_i$ of the first such message sent by $p_i$. We define $op.clock$ as the smallest value taken by variable
$\AlgoVC_j$ for any $j$ (according to the total order given by Lemma~\ref{lemma:safety}) such that $cl_i\le op.clock[i]$
(such a clock exists according to Lemma~\ref{lemma:liveness}).
\end{itemize}
Let $\lessdot$ be any total order on all the operations that contains the process order
and such that every operation has a finite past according to $\lessdot$ ($\lessdot$ is only used to break ties).
We define the relation $\leq$ on all operations of $H$ by $op\leq op'$ if
\begin{enumerate}
\item $op.clock < op'.clock$, or
\item $op.clock = op'.clock$, $op$ is an update operation and $op'$ is a snapshot operation, or
\item $op.clock = op'.clock$, $op$ and $op'$ are either two snapshot or two update operations and $op\lessdot op'$.
\end{enumerate}
Let us prove that $\leq$ is a total order.
\begin{description}
\item[reflexivity:] for all $op$, the third point in the definition is respected, as $\lessdot$ is a total order.
\item[antisymmetry:] let $op, op'$ be two operations such that $op\leq op'\leq op$. We have $op.clock = op'.clock$, $op$ and $op'$ are either two snapshot or
two update operations and, as $\lessdot$ is antisymmetric, $op=op'$.
\item[transitivity:] let $op, op', op''$ be three operations such that $op\leq op'\leq op''$.
If $op.clock < op'.clock$ or $op'.clock < op''.clock$, then $op.clock < op''.clock$, so $op\leq op''$.
Otherwise, $op.clock = op'.clock = op''.clock$. If the three operations are all update or all snapshot operations, $op \lessdot op''$ so $op \leq op''$.
Otherwise, $op$ is an update and $op''$ is a snapshot, so $op\leq op''$.
\item[total:] let $op, op'$ be two operations. If $op.clock\neq op'.clock$, they are ordered according to lemma~\ref{lemma:safety}. Otherwise, they are ordered
by one of the last two points.
\end{description}
Let us prove that $\leq$ contains the process order. Let $op$ and $op'$ be two operations that occurred on the same process $p_i$, on which $op$ preceded $op'$.
According to lemma~\ref{lemma:safety}, $op.clock$ and $op'.clock$ are ordered.
\begin{itemize}
\item If $op.clock < op'.clock$ then $op\leq op'$.
\item Let us suppose $op.clock = op'.clock$. It is impossible that $op$ is a snapshot operation and $op'$ is an update operation:
as $\AlgoSC_i$ is always increased before $p_i$ sends a message, we would have $op.clock[i] < op'.clock[i]$.
If $op$ is an update operation and $op'$ is a snapshot operation, then $op\leq op'$. In the other cases, $op\lessdot op'$ so $op\leq op'$.
\item We now prove that the case $op.clock > op'.clock$ cannot happen. As above, it is impossible that $op$ is a snapshot operation and $op'$ is an update operation.
It is also impossible that $op$ and $op'$ are two snapshot or two update operations, because $\AlgoVC_i$ can only grow.
Finally, if $op$ is an update operation and $op'$ is a snapshot operation, $op.clock[i] \le op'.clock[i]$ thanks to line~\ref{al:SCS:r1},
and by definition of $op.clock$ for update operations, $op.clock \le op'.clock$.
\end{itemize}
Let us prove that all operations have a finite past according to $\leq$. Let $op$ be an operation of the history. Let us first remark that,
for each process $p_i$, $op.clock[i]$ corresponds to a message $M_i(i, op.clock[i])$.
According to lemma~\ref{lemma:liveness}, eventually, for all processes $p_i$, $\AlgoVC_i \ge op.clock$.
Only a finite number of operations have been done before that, therefore $\{op': op'.clock < op.clock\}$ is finite.
Moreover, all the updates $op'$ with $op'.clock \le op.clock$ are done before that time, so there is a finite number of them.
If $op$ is an update operation, then its antecedents $op'$ verify either $op'.clock < op.clock$, or $op'.clock = op.clock$ and $op'$ is an update operation with $op'\lessdot op$.
In both cases, there is a finite number of them. If $op$ is a snapshot operation, its antecedents $op'$ verify either
(1) $op'.clock < op.clock$, (2) $op'.clock = op.clock$ and $op'$ is an update operation or (3) $op'.clock = op.clock$ and $op'$ is a snapshot operation.
Cases (1) and (2) are similar as above, and antecedents that verify case (3) also are its antecedents by $\lessdot$ so there is a finite number of them.
Finally, in all cases, $op$ has a finite number of antecedents.
Let us prove that the execution of all the operations in the order $\leq$ respects the sequential specification of memory.
Let $op$ be a snapshot operation invoked by process $p_i$ and let $p_j$ be a process. According to line~\ref{al:SCS:mB6},
the value of $\AlgoX_i[j]$ corresponds to the value contained in a message $M_j(j, op.clock[j])$. Let $op'$ be the last
update operation invoked by process $p_j$ before it sent this message. Whether the message was sent on line \ref{al:SCS:w3} or \ref{al:SCS:mC3},
$\AlgoX_i[j]$ is the value written by $op'$. Moreover, $op'.clock \le op.clock$ so $op'\leq op$, and for all update operations $op''$ done by
process $p_j$ after $op'$, $op.clock < op''.clock$ so $op\leq op''$. All in all, $op$ returns the last values written on each register,
according to the order $\leq$.
Finally, $\leq$ defines a linearization of all the events of the history that respects the sequential specification of the shared object.
Therefore, $H$ is sequentially consistent.
\end{proof}
\subsection{Complexity}\label{sec:complexity}
\begin{figure}[t]
\centering
\scalebox{0.9}{
\begin{tikzpicture}
\draw (-0.5,0) -- (17.3,0) ;
\draw (-0.5,0.5) -- (17.3,0.5) ;
\draw (-0.5,1) -- (17.3,1) ;
\draw (-0.5,1.5) -- (17.3,1.5) ;
\draw (2.6,2.4) -- (17.3,2.4) ;
\draw (-0.5,0) -- (-0.5,1.5) ;
\draw (1,1.25) node{ABD \cite{attiya1995sharing}} ;
\draw (1,0.75) node{ABD + AR \cite{attiya1995sharing,AttiyaR98}} ;
\draw (1,0.25) node{Algorithm \ref{algo:SCS}} ;
\draw (2.5,0) -- (2.5,1.5) ;
\draw (2.6,0) -- (2.6,2.4) ;
\draw (4.15,2.15) node{Read} ;
\draw (3.5,1.75) node{\# messages} ;
\draw (3.5,1.25) node{$\mathcal{O}(n)$} ;
\draw[black!50] (3.5,0.75) node{$\sim$} ;
\draw (3.5,0.25) node{$0$} ;
\draw (4.4,0) -- (4.4,1.5) ;
\draw (5.05,1.75) node{latency} ;
\draw (5.05,1.25) node{$4$} ;
\draw[black!50] (5.05,0.75) node{$\sim$} ;
\draw (5.05,0.25) node{$0$ --- $4$} ;
\draw (5.7,0) -- (5.7,2.4) ;
\draw (7.25,2.15) node{Write} ;
\draw (6.6,1.75) node{\# messages} ;
\draw (6.6,1.25) node{$\mathcal{O}(n)$} ;
\draw[black!50] (6.6,0.75) node{$\sim$} ;
\draw (6.6,0.25) node{$\mathcal{O}(n^2)$} ;
\draw (7.5,0) -- (7.5,1.5) ;
\draw (8.15,1.75) node{latency} ;
\draw (8.15,1.25) node{$2$} ;
\draw[black!50] (8.15,0.75) node{$\sim$} ;
\draw (8.15,0.25) node{$0$} ;
\draw (8.8,0) -- (8.8,2.4) ;
\draw (8.9,0) -- (8.9,2.4) ;
\draw (11,2.15) node{Snapshot} ;
\draw (9.95,1.75) node{\# messages} ;
\draw[black!50] (9.95,1.25) node{$\sim$} ;
\draw (9.95,0.75) node{$\mathcal{O}\left(n^2\log n\right)$} ;
\draw (9.95,0.25) node{$0$} ;
\draw (11,0) -- (11,1.5) ;
\draw (12.05,1.75) node{latency} ;
\draw[black!50] (12.05,1.25) node{$\sim$} ;
\draw (12.05,0.75) node{$\mathcal{O}\left(n\log(n) \right)$} ;
\draw (12.05,0.25) node{$0$ --- $4$} ;
\draw (13.1,0) -- (13.1,2.4) ;
\draw (15.2,2.15) node{Update} ;
\draw (14.15,1.75) node{\# messages} ;
\draw[black!50] (14.15,1.25) node{$\sim$} ;
\draw (14.15,0.75) node{$\mathcal{O}\left(n^2\log n\right)$} ;
\draw (14.15,0.25) node{$\mathcal{O}(n^2)$} ;
\draw (15.2,0) -- (15.2,1.5) ;
\draw (16.25,1.75) node{latency} ;
\draw[black!50] (16.25,1.25) node{$\sim$} ;
\draw (16.25,0.75) node{$\mathcal{O}\left(n\log(n)\right)$} ;
\draw (16.25,0.25) node{$0$} ;
\draw (17.3,0) -- (17.3,2.4) ;
\end{tikzpicture}
}
\caption{Complexity of several algorithms to implement a shared memory.}
\label{fig:complexity}
\end{figure}
In this section, we analyze the algorithmic complexity of Algorithm~\ref{algo:SCS} in terms of the number of messages
and latency for snapshot and update operations. Fig.~\ref{fig:complexity} sums up this complexity and compares it with
the standard implementation of linearizable registers \cite{attiya1995sharing}, as well as with the construction of a snapshot object \cite{AttiyaR98}
implemented on top of registers.
In an asynchronous system such as the one we consider, the latency $d$ and the uncertainty $u$ of the network cannot be expressed by constants.
We therefore measure the complexity as the length of the longest chain of causally related messages that must be awaited before an operation can complete.
For example, if a process sends a message to another process and then waits for its answer, the complexity is $2$.
According to Lemma~\ref{lemma:broadcast}, it is clear that each update operation generates at most $n^2$ messages.
The time complexity of an update operation is $0$, as update operations return immediately.
No message is sent for snapshot operations. Concerning latency, in the worst case, a snapshot operation
is called immediately after two update operations $a$ and $b$. In this case, the process must wait until its own message for $a$
is received by the other processes, then receive their acknowledgements, and then the same two message exchanges must take place for $b$,
which leads to a complexity of $4$. However, in the case of two consecutive snapshots, or if enough time has elapsed between a snapshot and the last update,
the snapshot can also return immediately.
In comparison, the ABD simulation uses solely a linear number of messages per operation (reads as well as writes), but waiting is necessary for both
kinds of operations. Even in the case of the read operation, our worst case corresponds to the latency of the ABD simulation.
Moreover, our solution directly implements the snapshot operation. Implementing a snapshot operation on top of a linearizable shared memory is
actually more costly than just reading each register once. The AR implementation \cite{AttiyaR98}, which is (to our knowledge)
the snapshot implementation that uses the fewest operations on the registers, uses $\mathcal{O}(n\log n)$ operations on registers
to complete both a snapshot and an update operation. As each operation on memory
requires $\mathcal{O}(n)$ messages and has a latency of $\mathcal{O}(1)$, our approach leads to a better performance in all cases.
Algorithm~\ref{algo:SCS}, like~\cite{attiya1995sharing}, uses unbounded integer values to timestamp messages.
Therefore, the complexity of an operation depends on the number $m$ of operations executed before it, in the linear extension.
All messages sent by Algorithm~\ref{algo:SCS} have a size of
$\mathcal{O}\left(\log(n m)\right)$. In comparison, ABD uses messages of size $\mathcal{O}\left(\log(m)\right)$ but implements only one register,
so it would also require messages of size $\mathcal{O}\left(\log(n m)\right)$ to implement an array of $n$ registers.
Considering the use of local memory, due to asynchrony, it is possible in some cases that $\AlgoG_i$ contains an entry $g$
for each value previously written. In that case, the space occupied by $\AlgoG_i$ may grow up to $\mathcal{O}(m n\log m)$.
Remark however that, according to Lemma~\ref{lemma:safety}, an entry $g$ is eventually removed from $\AlgoG_i$
(in a synchronous system, after $2$ time units if $g.\AlgoGK = i$ or $1$ time unit if $g.\AlgoGK \neq i$).
Therefore, this maximal bound is unlikely to be reached. Moreover, if all processes stop writing (which is the case in the round-based model
we discussed in Section~\ref{sec:round}), then eventually $\AlgoG_i$ becomes empty and the space occupied by the algorithm drops down to
$\mathcal{O}(n\log m)$, which is comparable to ABD. In comparison, the AR implementation keeps a tree containing past values from all registers,
in each register, which leads to a much higher size of messages and local memory.
\vspace{-1mm}
\section{Conclusion}\label{sec:conclusion}
In this paper, we investigated the advantages of focusing on sequential consistency. Because of its non-composability,
sequential consistency has received little attention so far. However, we show that in many applications,
this limitation is not a problem. The first case concerns applications built on a layered architecture.
If one layer contains only one object, then it is impossible for objects built on top of it to determine if this
object is sequentially consistent or linearizable. The other example concerns round-based algorithms: if processes
access one different sequentially consistent object in each round, then the overall history is also sequentially consistent.
Using sequentially consistent objects instead of their linearizable counterpart can be very profitable in terms of execution time of operations. Whereas waiting
is necessary for both read and write operations when implementing linearizable memory,
we presented an algorithm in which waiting is only required for read operations when they directly follow a write operation.
This extends the result of Attiya and Welch (that only concerns synchronous failure-free systems) to asynchronous systems with crashes. Moreover, the proposed algorithm implements a sequentially consistent snapshot memory
for the same cost, which results in better message and time complexity, for both kinds of operations,
than the best known implementation of a snapshot memory.
Exhibiting such an algorithm is not an easy task for two reasons. First, as write operations are wait-free, a process may
write before its previous write has been acknowledged by other processes, which leads to ``concurrent'' write operations by the same process.
Second, proving that an implementation is sequentially consistent is more difficult than proving it is linearizable since
the condition on real time that must be respected by linearizability highly reduces the number of linear extensions that need to be considered.
\section{Acknowledgments}\label{sec:acknowledgments}
This work has been partially supported by the Franco-German ANR project DISCMAT under grant agreement ANR-14-CE35-0010-01.
The project is devoted to connections between mathematics and distributed computing.
\bibliographystyle{plain}
\section{Introduction} \label{sec1:intro}
Red early-type and blue late-type galaxies are two distinct populations in the color-magnitude diagrams from the local universe to $z \sim 3$, which are connected with stellar mass and kinematics \citep{York+2000,Strateva+2001,Stoughton+2002,Wyder+2007,Whitaker+2011,Tomczak+2014,Mortlock+2015,Straatman+2016,Davidzon+2017}.
In the local universe, early-type galaxies are usually found to be red ellipticals or bulge-dominated lenticulars with little star formation, while typical late-type galaxies exhibit disk-dominated appearances with spiral arms and intensive star formation \citep{Baldry+2004}. The bimodal distribution is still observed at $z > 1$. The abundance of blue late-type galaxies tends to decrease over cosmic time, while red early-type galaxies become abundant since $z \sim 1$ \citep{Strateva+2001,Bell+2004,van_Dokkum+2006}.
Stellar mass and environment are two important factors influencing the evolution of galaxies. Massive galaxies possess higher specific star formation rates (sSFRs) at earlier times, and have a higher probability of being quenched than less massive galaxies, which is known as the ``downsizing'' scenario \citep{Bell+2003,Kauffmann+2003,Noeske+2007,Gu+2018}. Environmental properties trace the level of galaxy interaction and the incidence of AGNs \citep{Larson+1980, Moore+1999, Moran+2007, Li+2019}. Red early-type galaxies are also more inclined to reside in denser environments toward lower redshifts, which suggests that environmental density becomes effective in quenching star formation as time proceeds \citep{Bundy+2006,Cooper+2008,Tasca+2009,Peng+2010,Gu+2018}.
The general picture has been established that the transformation from disk-dominated star-forming galaxies (SFGs) to the quiescent early types preferentially happens in galaxies with higher stellar mass and in denser environments, coupled with star formation quenching \citep{Bundy+2010,Gu+2018}. Morphological transformation and star formation quenching are usually thought to be degenerate, in the sense that the shutdown of star formation might be linked with the buildup of the bulge structure. Mergers and disk instabilities are able to boost morphological change via rapid bulge growth, and to trigger feedback from active galactic nuclei (AGNs) which heats or drives out the cold gas \citep{Croton+2006,Faber+2007, Hopkins+2008, Schawinski+2014, Brennan+2017}. On one hand, some violent activities in denser environments, such as galaxy-galaxy mergers, tidal stripping, and galaxy harassment, can efficiently transform the morphologies of galaxies and quench their star formation \citep{TT+1972,WR+1978,Moore+1996,Lake+1998,Hopkins+2008,KH+2013}. On the other hand, some mild processes such as strangulation and AGN heating can also cease the star formation in galaxies by cutting off the accretion of cold gas or suppressing gas from further cooling, without destroying their disks \citep{Larson+1980,Fabian+1994,Balogh+2000,Weinmann+2006,Bekki+2009}.
The emergence of red spiral (i.e., passive late-type) and blue elliptical (i.e., star-forming early-type) galaxies breaks the degeneracy between star formation quenching and morphological transformation, which makes it possible to figure out the two processes individually. The blue ellipticals are spheroid-dominated galaxies with active star formation. Their formation might be attributed to major mergers on the basis of their high asymmetry indices \citep{Liu+2019}. In addition, galaxy mergers may also rejuvenate red elliptical galaxies at low mass and trigger star formation again \citep{Kannappan+2009,Kim+2018}.
Red spiral galaxies possess little star formation, but their disks still remain. Some recent efforts have been devoted to evaluating their physical properties and origins \citep{Bundy+2010,Masters+2010}. The abundance of red spiral galaxies suggests that they are not confined to a single process among environmental effects, disk regrowth, and internal instability at $z \sim 1-2$ \citep{Bundy+2010}. \cite{Fraser-McKelvie+2018} proposed that massive and less massive red spiral galaxies may have experienced different processes: less massive galaxies are inclined to locate in clusters, where ram-pressure stripping and strangulation are responsible for the star formation quenching; although massive red spiral galaxies possess higher bar fractions than their star-forming counterparts, their quenching mechanism is less clear. \cite{Hao+2019} analyzed the two-dimensional spectroscopy of blue and red spirals at $z \sim 1$. Similar to red ellipticals, red spiral galaxies are found to have older stellar populations and higher stellar metallicities than blue spirals. These red spirals are verified to be remnants of gas-rich mergers.
To understand the connection between morphology and star formation, a number of studies have been carried out in this regard. \cite{Liu+2019} considered the quenching of star formation and the transformation of morphology independently by classifying the galaxies at $0.01 < z < 0.12$ into four types: quenched early-type galaxies (qEs), quenched late-type galaxies (qLs), star-forming early-type galaxies (sEs) and star-forming late-type galaxies (sLs).
They found that morphological transformation is mainly regulated by stellar mass among various properties including stellar mass, halo mass, halo radius, and environment. The quenching of star formation is found to be mainly driven by stellar mass for more massive galaxies, but by halo mass for the lower-mass galaxies.
Using the IllustrisTNG simulation, \cite{Tacchella+2019} found that galaxy morphologies are basically organized during the phase of active star formation, and the bulge formation of intermediate-mass galaxies is mainly driven by mergers.
In order to address the morphological transformation and star formation quenching over a long period of cosmic time, we focus on the massive ($M_* \geq 10^{10} M_{\odot}$) galaxies at $0.5 \leq z \leq 2.5$ in five 3D-HST/CANDELS fields. Following \cite{Liu+2019}, we also classify the galaxies into four populations (i.e., qEs, qLs, sEs, and sLs), and then analyze how their abundances vary with stellar mass and environment. Since AGN feedback might have played a role in morphological transformation and star formation quenching, the AGN fraction is also taken into consideration.
The implications are also discussed regarding the formation of sEs and qLs.
This paper is structured as follows. We describe the data set of the 3D-HST/CANDELS program and our sample construction in Section \ref{sec2:sample}. The distributions of stellar mass and environmental overdensity for the four galaxy populations are shown in Section \ref{sec3:mass and environment distributions}. The dependence of star formation quenching on mass and environment is presented in Section \ref{sec4:star formation quenching}, while the effects of mass and environment on morphological transformation are examined in Section \ref{sec5:morphology transformation}. Possible roles of AGN feedback in the formation of qLs and sEs are discussed in Section \ref{sec6:AGN fractions}. In Section \ref{sec7:discussion}, we try to explain our findings from the viewpoint of the timescales of morphological transformation and star formation quenching. Finally, a summary is given in Section \ref{sec8:summary}. We assume the following cosmological parameters throughout the paper: $H_{0} = 70{\rm\ km}{\rm\ s^{-1}\ Mpc^{-1}}$, $\Omega_{\rm m} = 0.30$, $\Omega_{\Lambda} = 0.70$. All magnitudes given in this paper are in the AB system.
\section{Data and Sample Selection} \label{sec2:sample}
Covering a $900~ {\rm arcmin}^{2}$ area in five fields (namely, AEGIS, COSMOS, GOODS-N, GOODS-S and UDS), the 3D-HST and CANDELS programs have provided a wealth of multi-wavelength data acquired by WFC3 and ACS spectroscopy and photometry, which makes it possible to build the spectral energy distributions (SEDs) from ultraviolet to infrared bands \citep{Grogin+2011,Koekemoer+2011, Skelton+2014}.
Based on previous ground-based spectroscopic surveys, the available spectroscopic redshifts ($z_{\rm spec}$) in the five fields are collected, while photometric redshifts ($z_{\rm phot}$) are derived by \cite{Skelton+2014} using the $0.3 - 8.0~ \mu m$ SEDs with the EAZY code \citep{Brammer+2008}. Besides, \cite{Momcheva+2016} explored the grism observations and derived the ``best'' redshifts ($z_{\rm best}$) by merging grism spectrum redshifts ($z_{\rm grism}$) with the photometric and spectroscopic redshifts given by \cite{Skelton+2014}. The normalized median absolute deviations of $z_{\rm phot}$, defined by $\sigma_{\rm NMAD} = 1.48 \times {\rm median}[|(z_{\rm phot}-z_{\rm spec})-{\rm median}(z_{\rm phot}-z_{\rm spec})|/(1+z_{\rm spec})]$, are 0.022, 0.007, 0.026, 0.010, and 0.023 for the AEGIS, COSMOS, GOODS-N, GOODS-S, and UDS fields, respectively \citep{Skelton+2014}. The typical redshift error for $z_{\rm grism}$ is $\sigma_{z} \approx 0.003 \times (1 + z)$, indicating that $z_{\rm grism}$ is of apparently higher accuracy than $z_{\rm phot}$.
In this paper, the ``best'' redshifts ($z_{\rm best}$) are adopted, which means that we prefer $z_{\rm spec}$ and $z_{\rm grism}$ when available; otherwise $z_{\rm phot}$ is used instead.
Applying the well-constrained redshifts (i.e., $z_{\rm best}$), \cite{Momcheva+2016} have derived the stellar population parameters by using FAST code \citep{Kriek+2009}, assuming the exponentially declining star formation history with the e-folding time scale $\tau = 0.1 - 10$ Gyr and dust attenuation $A_{V} = 0 - 4$ in the \cite{Calzetti+2000} reddening law. They adopted the \cite{BC+2003} stellar population synthesis model library with a \cite{Chabrier+2003} initial mass function and solar metallicity. The derived stellar population parameters include the stellar mass ($M_{\ast}$), the dust attenuation ($A_{V}$) and the stellar ages.
Based on the released database, we only choose the galaxies with a flag of ``{\tt use{\_}phot} = 1'', which ensures that the target source is a reliable detection with a good signal-to-noise ratio and is not contaminated by nearby bright stars (see \citealt{Skelton+2014} for more information).
\subsection{\rm Star-forming vs. Quiescent}\label{sec2.1:star forming vs. quiescent}
In previous studies, the state of star formation activity has usually been diagnosed by the $UVJ$ diagram (i.e., rest-frame $U - V$ vs. rest-frame $V - J$). The $UVJ$ diagram is now widely employed to distinguish SFGs and quiescent galaxies (QGs) at high redshifts \citep{Wuyts+2007,Williams+2009,Straatman+2016,Fang+2018}. The rest-frame $U - V$ color can separate blue and red populations according to their different levels of star formation. However, blue populations can also appear red due to the presence of a large amount of dust and gas at high redshifts. Therefore, the rest-frame $V - J$ color, which involves the dust-sensitive infrared $J$-band, is introduced to break this degeneracy between dust-reddened SFGs and truly quiescent galaxies. In this work, we adopt the following criteria for separating the QGs from SFGs \citep{Williams+2009}:
\begin{eqnarray}
&&(U - V) > 1.3,(V - J) < 1.6, \\
&&(U - V) > 0.88 \times (V - J) +0.49\ (0.5 < z < 1.0), \\
&&(U - V) > 0.88 \times (V - J) +0.59\ (1.0 < z < 2.5),
\end{eqnarray}
where QGs reside in the wedged region while SFGs are scattered in the remaining area of the $UVJ$ diagram.
In a magnitude-limited sample, the stellar mass completeness depends on both the redshift and the mass-to-light ratio ($M/L$). With higher $M/L$, QGs are more difficult to detect \citep{Grogin+2011, Wuyts+2011, Newman+2012, Barro+2013, Pandya+2017}. A high completeness for QGs thus guarantees the completeness of SFGs. Following the methodology of \cite{Pozzetti+2010}, \cite{Chartab+2020} showed that the mass threshold ($M_* \geq 10^{10}M_\odot$) ensures a 95\% completeness for QGs with a magnitude limit of $H_{\rm lim} < 26$. However, the error of $z_{\rm phot}$ increases from 0.002 to 0.0046 at $H_{\rm lim} = 25-26$ \citep{Bezanson+2016}. To ensure a higher completeness and accuracy of $z_{\rm phot}$, we construct a magnitude-limited sample ($H_{\rm lim} < 25$) of 9550 massive galaxies with $ M_* \geq 10^{10}M_\odot$ at $0.5 \leq z \leq 2.5$. As illustrated in Figure~1 of Gu et al. (2021, resubmitted), for a magnitude-limited sample of $H_{\rm lim}<25$, this mass threshold can guarantee a 90\% completeness of QGs at $0.5<z<2.0$. However, at $2.0<z<2.5$, we remind the reader that this mass threshold might only represent $\sim$80\% completeness.
Due to the small areas covered by the five independent fields of the 3D-HST/CANDELS survey, cosmic variance might be a significant source of uncertainty \citep{Meneux+2009}. In the halo occupation models at different redshifts, the relationship between stellar mass and dark matter halo (subhalo) mass is empirically established, and the galaxy bias can be calculated by using a dissipationless N-body simulation \citep{Moster+2010,Moster+2011}. Applying the recipe described by \cite{Moster+2011}, we estimate the cosmic variance of massive ($\log M_*/M_{\odot} \geq 10$) galaxies in each of the four redshift intervals (i.e., $0.5<z<1.0$, $1.0<z<1.5$, $1.5<z<2.0$, $2.0<z<2.5$), and the typical uncertainties are 16\%, 16\%, 18\%, and 21\%, respectively. In \cite{Huertas-Company+2016}, it is also evidenced that the cosmic variance tends to become larger at higher redshifts and in higher stellar mass bins. In the following discussion, we thus need to keep in mind that our results at high redshifts might be subject to higher uncertainty.
\subsection{\rm Early-type vs. Late-type} \label{sec2.2:Early type vs. Late type}
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig1_UVJ_diagram.pdf}\\
\caption{The distributions of qEs, qLs, sEs and sLs in the $UVJ$ diagram at different $z$-bins. Purple and blue colors denote the qEs and sLs respectively, while the qLs and sEs are represented by red and green dots. The fractions of the four subsamples out of the total galaxies are shown in the bottom-right corner of each panel.}\label{fig1_UVJ}
\end{figure*}
\begin{table*}
\caption{The numbers of qEs, qLs, sEs and sLs at different $z$-bins}\label{table1:numbers of four types}
\begin{tabular}{c|cccc|cc|cc}
\hline
\hline
Redshifts &qEs &qLs &sEs &sLs &early-type &late-type &quiescent &star-forming \\
\hline
$0.5 \leq z <1.0$ &896 &43 &547 &935 &1443 &978 &939 &1482 \\
$1.0 \leq z <1.5$ &781 &74 &418 &1485 &1199 &1559 &855 &1903 \\
$1.5 \leq z <2.0$ &625 &109 &348 &1336 &973 &1445 &734 &1684 \\
$2.0 \leq z \leq 2.5$ &265 &96 &319 &1113 &584 &1209 &361 &1432 \\
\hline
\end{tabular}
\end{table*}
For magnitude-limited samples in H-band ($H_{\rm F160W} < 24.5$) in the five CANDELS fields, released in the Rainbow database \footnote{\url{http://rainbowx.fis.ucm.es/Rainbow\_navigator\_public/}}, \cite{Huertas-Company+2015} have utilized the Convolutional Neural Network (ConvNet) machine-learning algorithm to investigate the morphologies of $\sim 50,000$ galaxies with a median redshift of $\langle z \rangle \sim 1.25$. The algorithm is trained on the visual classifications publicly available in the GOODS-S field, and then applied to the other four fields. For each galaxy, the probabilities of hosting a spheroid or a disk, being irregular, a point source or unclassifiable ($f_{\rm sph}$, $f_{\rm disk}$, $f_{\rm irr}$, $f_{\rm ps}$ or $f_{\rm unc}$) are estimated. The mis-classification rate is less than 1\%, which is a major step forward compared with other Concentration-Asymmetry-Smoothness (CAS)-based methods that yield 20\%-30\% contamination at high redshifts \citep{Huertas-Company+2014}. They also provided possible thresholds on these frequencies to classify galaxy morphology, calibrated with visual inspection (a sketch of this mapping follows the list below), which are:
\begin{enumerate}
\item pure bulges [SPH]:\\ $f_{\rm sph} > 2/3$,\qquad$f_{\rm disk} < 2/3$,\qquad and $f_{\rm irr} < 1/10$;
\item disk+sph [DSPH]:\\ $f_{\rm sph} > 2/3$,\qquad$f_{\rm disk} > 2/3$,\qquad and $f_{\rm irr} < 1/10$;
\item pure disk [DISK]:\\ $f_{\rm sph} < 2/3$,\qquad$f_{\rm disk} > 2/3$,\qquad and $f_{\rm irr} < 1/10$;
\item irregular disks [DIRR]:\\ $f_{\rm sph} < 2/3$,\qquad$f_{\rm disk} > 2/3$,\qquad and $f_{\rm irr} > 1/10$;
\item irregulars/mergers [IRR]:\\ $f_{\rm sph} < 2/3$,\qquad$f_{\rm disk} < 2/3$, and\qquad $f_{\rm irr} > 1/10$.
\end{enumerate}
In our sample, the vast majority of massive galaxies ($\sim 90 \%$) can be successfully classified into the above morphologies. The remaining galaxies either have fainter H-band magnitudes or do not fall into the above criteria (e.g., $f_{\rm sph}>2/3$ AND $f_{\rm irr}>0.1$). For these unclassified galaxies with $H_{\rm F160W} < 24.5$, we further classify them into the above five typical morphologies by eyeballing inspection, which leads to a classification rate of about 98\% in our sample. Then we simplify the morphological types into early-type (including the SPH and DSPH) and late-type (including the DISK, DIRR and IRR) galaxies, depending on whether they have a dominant spheroidal component (i.e., $f_{\rm sph} > 2/3$) or not.
According to the above criteria, the sample of massive galaxies is divided into four subsamples: (1) 2567 quiescent early-type galaxies (qEs), (2) 322 quiescent late-type galaxies (qLs), (3) 1632 star-forming early-type galaxies (sEs), and (4) 4869 star-forming late-type galaxies (sLs). As mentioned above, only 2\% of the galaxies are discarded, which are hard to classify via visual inspection due to their faintness ($H_{\rm F160W} > 24.5$).
The detailed numbers of four galaxy types in different redshift ranges are listed in Table \ref{table1:numbers of four types}.
The locations in the $UVJ$ diagram of the four galaxy types are presented in Figure \ref{fig1_UVJ}, and the fractions of the four subsamples out of the total galaxies are shown in the lower right corner of each panel. It is evident that both the fractions of early-type (qEs $+$ sEs) and quiescent (qEs $+$ qLs) galaxies tend to increase over cosmic time, while the fractions of late-type (qLs $+$ sLs) and star-forming (sEs $+$ sLs) galaxies decrease from high to low redshifts. On one hand, the popularity of the two traditional bimodal populations (i.e., qEs and sLs) suggests that the star formation quenching in the majority of galaxies is accompanied by the growth of their spheroidal components. On the other hand, the rarity of qLs and sEs indicates that the degeneracy between morphological transformation and star formation quenching is rarely broken.
\section{Mass and environment distributions}\label{sec3:mass and environment distributions}
To link the mass and environment factors to the star formation and morphology of galaxies, we first investigate the mass and environment distributions of the four galaxy types.
\subsection{\rm Stellar Mass Distribution}\label{sec3.1:stellar mass distribution}
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig2_M.pdf}\\
\caption{The stellar mass distributions of qEs, qLs, sEs and sLs, which are colored in purple, red, green and blue respectively. The median value and their percentile ranges from 25\% to 75\% are shown in the upper right corner of each panel.}\label{fig2_M}
\end{figure*}
\begin{deluxetable*}{ccccc}
\tablewidth{0pt}
\tablecaption{The probability of K-S test for stellar mass distributions}\label{table2:K-S test of M}
\tablehead{
\colhead{Subsamples} & \colhead{$0.5 \leq z < 1.0$} & \colhead{$1.0 \leq z < 1.5$} & \colhead{$1.5 \leq z < 2.0$} & \colhead{$2.0 \leq z \leq 2.5$}}
\startdata
qEs vs. sEs & 0.017 &$1.491\times10^{-8}$ &$3.427\times10^{-16}$ &$1.195\times10^{-19}$ \\
qLs vs. sLs & 0.138 &$5.377\times10^{-6}$ &$1.969\times10^{-10}$ &$5.184\times10^{-13}$ \\
qEs vs. qLs & 0.058 &0.485 & 0.147 & 0.798 \\
sEs vs. sLs &$5.030\times10^{-14}$ &$0.003$ & $0.472$ & 0.593 \\
\hline
\enddata
\end{deluxetable*}
Mass distributions of the four galaxy types in each redshift bin are presented in Figure \ref{fig2_M}. Median values and their corresponding 25th-75th percentile ranges are shown in the top right corner of each panel. Furthermore, to identify their distributional differences, we perform the Kolmogorov-Smirnov (K-S) test on the stellar mass distributions of pairs of subsamples (see Table \ref{table2:K-S test of M}). The probability that two subsamples in a given redshift range are drawn from the same underlying distribution is referred to as the quantity $P$. We adopt $P = 0.05$ as the upper limit probability to verify that the two subsamples have different distributions at the $2\sigma$ level.
To investigate the connection between stellar mass and star formation quenching, we first compare the stellar mass distribution of SFGs with that of QGs at a given morphological type (i.e., qEs vs. sEs and qLs vs. sLs). According to their median values in Figure \ref{fig2_M}, QGs populate higher stellar masses than their star-forming counterparts. Since the probability of star formation quenching increases with the growth of stellar mass \citep{Peng+2010,Brammer+2011,Muzzin+2013}, it is conceivable that the quiescent population is more inclined to be found at the high-mass end. Moreover, the significances of the differences between their distributions are shown in the first two rows of Table \ref{table2:K-S test of M}. It can be found that the quantity $P$ increases with cosmic time but remains below 0.05, except for the K-S test between qLs and sLs at $0.5 \leq z < 1.0$ ($P$ = 0.138). At a given morphological type, SFGs and QGs present vastly different mass distributions at higher redshifts ($z>1$), which hints that the quenching of star formation is likely to have been accompanied by the assembly of stellar mass.
To correlate stellar mass with morphological transition, the K-S tests of the stellar mass distributions between early-type galaxies (ETGs) and late-type galaxies (LTGs) in the star-forming and quiescent populations are performed, as shown in the last two rows of Table \ref{table2:K-S test of M}.
At $z>1.5$, no significant difference is found between the mass distributions of ETGs and LTGs in either population.
Although similar mass distributions are found for qEs and qLs at $z<1.5$, an obvious distributional difference is shown between sEs and sLs.
Hence, it indicates that the growth of the spheroidal component has a closer connection with stellar mass at lower redshifts, which reinforces the notion that morphological transition is an in-situ process in the local universe \citep{Bamford+2009,Liu+2019}.
\subsection{\rm Environment Distribution} \label{sec3.2:environment distribution}
Environment is a crucial external factor for galaxy evolution \citep{Ilbert+2013,Darvish+2015,Joshi+2020}, so it is necessary to take the environmental condition around a galaxy into consideration. A traditional environment indicator is defined as $\Sigma_{N} = N/(\pi d_{N}^{2})$ by \cite{Dressler+1980}, which describes the local number density around a target within an area defined by the projected distance of the $N$-th nearest neighbor ($d_N$) within a given redshift slice. Alternatively, a modified environment indicator has been introduced by \cite{Ivezic+2005} and \cite{CI+2008}, which is based on a Bayesian metric incorporating the distances of all neighboring galaxies. They also evidenced that the probability distribution functions of environment derived from this Bayesian metric are closer to the true distributions (see Figure 9 in \citealt{Ivezic+2005}).
Inspired by \cite{Ivezic+2005} and \cite{CI+2008}, we utilize the Bayesian metric to improve our measurement of environment (Gu et al., 2021, submitted). For a magnitude-limited sample of galaxies at $0.5 \leq z \leq 2.5$ with $H_{\rm F160W} < 25$, the local density is estimated by $\Sigma_{N}^{'} \propto 1/(\Sigma_{i=1}^{N}d_{i}^2)$, where $d_i$ is the projected distance to the $i$-th nearest neighbor within a redshift slice ($|\Delta z|<\sigma_z (1+z),\, \sigma_z=0.02$). A dimensionless overdensity, $1 + \delta_{N}^{'}$, is employed to measure the relative density of environment, which is described as
\begin{equation}\label{func4_dencity}
1 + \delta_{N}^{'} = \frac{\Sigma_{N}^{'}}{\langle\Sigma_{N}^{'}\rangle_{\rm uniform}}=\frac{\Sigma_{N}^{'}}{k_{N}^{'}\Sigma_{\rm surface}},
\end{equation}
where $\langle\Sigma_{N}^{'}\rangle_{\rm uniform}$ is the Bayesian density in the uniform condition. The $k_{N}^{'}$ is a correction factor for the proportionality between $\langle\Sigma_{N}^{'}\rangle_{\rm uniform}$ and the surface number density $\Sigma_{\rm surface}$ within a given redshift slice (e.g., $N = 3$, $k_{3}^{'} = 0.80$). Obviously,
$1+\delta_{N}^{'}>1$ (i.e., $\log(1 + \delta_{N}^{'})>0$) implies an excess over the standard level of environmental density, while $1 + \delta_{N}^{'}<1$ indicates the opposite. In this paper, we apply the local density of the three closest galaxies ($1 + \delta_{3}^{'}$) to indicate the small-scale (local) environmental density of each targeted galaxy. We have confirmed that adopting a neighbor number between 3 and 10 does not affect our main results. Moreover, following \cite{Kawinwanichakij+2017}, we test the ``edge effect'' by excluding galaxies near the survey edge ($\sim 2.95\%$). Our main results are not affected by this effect.
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig3_overdensity.pdf}\\
\caption{The overdensity distributions of qEs, qLs, sEs and sLs in each $z$-bin. Also given are their median values and their percentile ranges from 25\% to 75\%, as in Figure \ref{fig2_M}.}\label{fig3_overdensity}
\end{figure*}
\begin{deluxetable*}{ccccc}
\tablecaption{The probability of K-S test for environment distributions}\label{table3:K-S test of density}
\tablehead{
\colhead{Subsamples} & \colhead{$0.5 \leq z < 1.0$} & \colhead{$1.0 \leq z < 1.5$} & \colhead{$1.5 \leq z < 2.0$} & \colhead{$2.0 \leq z \leq 2.5$}}
\startdata
qEs vs. sEs & $2.758\times10^{-5}$ &$8.985\times10^{-4}$ &$4.437\times10^{-4}$ & 0.001 \\
qLs vs. sLs & 0.019 & 0.022 & 0.425 & 0.817 \\
qEs vs. qLs & 0.429 & 0.843 & 0.387 & 0.323 \\
sEs vs. sLs & 0.039 & 0.730 & 0.462 & 0.565 \\
\enddata
\end{deluxetable*}
Figure \ref{fig3_overdensity} shows the distributions of the local overdensity for four galaxy types in different redshift bins.
If we consider the quenching of star formation for a given morphological type (i.e., early- or late-type), comparing the median values (say qEs vs. sEs, or qLs vs. sLs) shows that QGs reside in denser environments than their star-forming counterparts. This result confirms the color-density relation in the literature
(\citealt{Lewis+2002,Kauffmann+2004,Rojas+2005,Weinmann+2006,Bamford+2008,Liu+2015,Moorman+2016}), suggesting that the dense environment may have played an important role in the quenching process via various modes, such as ram-pressure stripping \citep{GG+1972}, starvation \citep{Larson+1980}, and tidal interaction \citep{Merritt+1983}.
The corresponding probabilities of the K-S test for the environmental distributions are tabulated in the first two rows of Table \ref{table3:K-S test of density}. It can be seen that qEs and sEs present entirely different distributions of environment at $0.5 \leq z \leq 2.5$. The qLs and sLs at high redshifts ($1.5<z<2.5$) seem to reside in similar environments. However, a significant environmental difference is displayed between the qLs and sLs at $0.5<z<1.5$. Therefore, it may imply that star formation quenching is sensitive to local overdensity for both early- and late-type populations at $z<1.5$, which is in agreement with the results in \cite{Gu+2018} that the environment distribution of red galaxies differs from those of green and blue populations since $z \sim 1.5$.
Next, we analyze the environmental role in galaxy morphological transition for SFGs (including sEs and sLs) and QGs (including qEs and qLs). According to the probabilities of the K-S test shown in the last two rows of Table \ref{table3:K-S test of density}, there is no significant difference between the ETGs and LTGs at $0.5\leq z \leq 2.5$, except for the star-forming populations (i.e., sEs vs. sLs) at $0.5<z<1.0$ ($P$ = 0.039). This implies that environment may not be an important factor for morphological transformation at $z>1$. For the SFGs at low redshifts ($0.5<z<1$), sEs are likely to reside in denser environments compared with sLs.
\section{Star formation quenching}\label{sec4:star formation quenching}
The quiescent fraction in each morphological type is employed as the indicator of star formation quenching, as proposed by \cite{Liu+2019}. In this section, to study the impact of stellar mass and environment on the quenching process, the quiescent fractions out of ETGs ($f_{\rm q}(E) = \frac{N_{\rm qE}}{N_{\rm E}}$), LTGs ($f_{\rm q}(L) = \frac{N_{\rm qL}}{N_{\rm L}}$) and the total galaxies ($f_{\rm q}(E+L) = \frac{N_{\rm qE}+N_{\rm qL}}{N_{\rm E}+N_{\rm L}}$), denoted in dark red, red, and black, are shown as functions of stellar mass and local overdensity in different $z$-bins in Figure \ref{fig4_f_q_M} and Figure \ref{fig5_f_q_env}, respectively. Considering binomial statistics, the error of the quiescent fraction is computed as $\sigma_f = [f_{\rm q}(1 - f_{\rm q})/N_{\rm gal}]^{1/2}$, where $f_{\rm q}$ and $N_{\rm gal}$ are the quiescent fraction and the number of galaxies in the corresponding subsample, respectively.
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig4_f_q_M.pdf}\\
\caption{Fractions of the quiescent population out of ETGs ($ f_{\rm q}\rm (E) = \frac{N_{\rm qE}}{N_{\rm E}}$), LTGs ($f_{\rm q}\rm (L) = \frac{N_{\rm qL}}{N_{\rm L}}$) and both types of galaxies ($f_{\rm q} \rm (E+L) = \frac{N_{\rm qE}+N_{\rm qL}}{N_{\rm E}+N_{\rm L}}$) as a function of stellar mass in four redshift bins with $\Delta z = 0.5$. The $f_{\rm q}(\rm E)$, $f_{\rm q}(\rm L)$ and $f_{\rm q}(\rm E+L)$ are denoted in dark red, red and black. In each panel, stellar mass is divided into four bins with roughly equal galaxy numbers. Within each stellar mass bin, the x-axis error represents the 25th to 75th percentiles. Considering a binomial distribution, the fraction uncertainty is calculated as $\sigma_f = [f(1 - f)/N_{\rm gal}]^{1/2}$, where $f$ represents $f_{\rm q}(\rm E)$, $f_{\rm q}{\rm(L)}$ and $f_{\rm q} \rm(E+L)$, while $N_{\rm gal}$ represents $N_{\rm E}$, $N_{\rm L}$, and ($N_{\rm E}+N_{\rm L}$), respectively.} \label{fig4_f_q_M}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig5_f_q_env.pdf}\\
\caption{Quiescent factions as a function of local overdensity in four redshift intervals. Similar to Figure \ref{fig4_f_q_M}, the x-axis error bars indicate the range of 25th to 75th percentiles at each mass bin. The error bars of $f_{\rm q}$ indicate the uncertainty based on binomial distribution.} \label{fig5_f_q_env}
\end{figure*}
In each panel of Figure \ref{fig4_f_q_M}, the total quiescent fraction, $f_{\rm q}(E+L)$ (black lines), is first analyzed. With the domination of QGs in the high-mass region, we find a strong mass dependence of star formation quenching.
As redshift decreases, the fraction $f_{\rm q}(E+L)$ grows gradually, whereas the mass dependence is weakened due to a more significant increase of the quiescent fraction in the low-mass region.
The quenching of star formation seems to occur in the high-mass region first, and then shifts to less massive galaxies with cosmic time.
This confirms the ``downsizing'' scenario of galaxy evolution \citep{Cowie+1996,Noeske+2007, Peng+2010, Brammer+2011, Goncalves+2012,Muzzin+2013, Gu+2018}.
The quiescent fractions for ETGs ($f_{\rm q}(E)$, dark red lines) and LTGs ($f_{\rm q}(L)$, red lines) are also presented in Figure \ref{fig4_f_q_M}.
It is clear that $f_{\rm q}(E)$ is much higher than $f_{\rm q}(L)$, showing that the quenching process can be accompanied by the buildup of the spheroidal component. Besides, the mass dependence of $f_{\rm q}(E)$ and $f_{\rm q}(L)$ exists as well, while it becomes less obvious with decreasing redshift, especially for $z<1.0$.
Therefore, it may indicate that the in-situ process (stellar mass) takes greater responsibility for the cessation of star formation at high redshifts, while it becomes less significant over cosmic time.
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig6_fq_M_env.pdf}\\
\caption{The quiescent fractions as a function of stellar mass at fixed local overdensity. The low- and high-environment bins are divided by the median values of local overdensity for ETGs and LTGs at $0.5 \leq z \leq 2.5$.
ETGs (LTGs) in high- or low-environment bins are denoted by solid or hollow symbols in dark red (red). The x-axis errors show the 25th to 75th percentiles. The error bars of $f_{\rm q}$ indicate the uncertainty based on binomial distributions. }\label{fig6_fq_M_env}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig7_fq_env_M.pdf}\\
\caption{The quiescent fractions as a function of local overdensity at fixed stellar mass. The median values of stellar mass for ETGs and LTGs at $0.5 \leq z \leq 2.5$ are used to define the galaxies in low- and high-mass bins.
ETGs (LTGs) in high- or low-mass bins are denoted by solid or hollow symbols in dark red (red). The x-axis errors show the 25th to 75th percentiles. The error bars of $f_{\rm q}$ indicate the uncertainty based on binomial distributions. }\label{fig7_fq_env_M}
\end{figure*}
The quiescent fractions as a function of local overdensity are shown in Figure \ref{fig5_f_q_env}. In a given redshift range, the QGs are more inclined to inhabit denser environments (i.e., $\log(1+\delta_3^\prime) > 0$). The highest environmental density is three times denser than the standard level (i.e., $\log(1+\delta_{3}^{\prime}) \sim 0.5$), which is the typical overdensity of clusters. The fraction $f_{\rm q}(E+L)$ shows an environmental dependence. Over cosmic time, it becomes stronger, especially at $z \sim 0.5$, which is supported by previous works suggesting that the suppression of star formation can be regulated by environment-related events, and that such events come to play a dominant role since $z \sim 0.5$ (e.g., \citealt{BO+1978,Pandya+2017,Gu+2018}).
The trend with environmental density for $f_{\rm q}(E)$ is similar to that of $f_{\rm q}(E+L)$: environment gradually plays a more important role in star formation quenching towards lower redshifts. However, the environmental dependence of $f_{\rm q}(L)$ is less clear over the whole redshift range, which might be attributed to the small number of qLs listed in Table \ref{table1:numbers of four types}.
In Figures \ref{fig4_f_q_M} and \ref{fig5_f_q_env}, we find that QGs tend to locate at higher stellar mass and local overdensity. Since galaxies with higher stellar mass tend to reside in denser environments, this result might be caused by the degeneracy between stellar mass and local overdensity. To check this, we re-calculate the quiescent fractions in ETGs and LTGs as a function of stellar mass at fixed local overdensity bins in Figure \ref{fig6_fq_M_env}. The low- and high-environment bins are divided by the median values of the local environment distributions for ETGs and LTGs at $0.5 \leq z \leq 2.5$. The quiescent fraction as a function of local overdensity at fixed stellar mass bins is shown in Figure \ref{fig7_fq_env_M}. The median values of stellar mass at $0.5 \leq z \leq 2.5$ are used to define the low- and high-mass bins for ETGs and LTGs.
As shown in Figure \ref{fig6_fq_M_env}, a strong mass dependence of $f_{\rm q}$ still exists at fixed local overdensity. Along with cosmic time, the mass dependence becomes less obvious, which is in good agreement with previous works from the local universe out to $z \sim 3$ \citep{Baldry+2006, van_den_Bosch+2008a, van_den_Bosch+2008b,Peng+2010, Balogh+2016, Grutzbauch+2011b, Grutzbauch+2011a, Kawinwanichakij+2017}.
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig8_f_E_M.pdf}\\
\caption{Early-type fractions out of QGs ($f_{\rm E}(\rm q) = \frac{N_{\rm qE}}{N_{\rm q}}$), SFGs ($f_{\rm E}(\rm s) = \frac{N_{\rm sE}}{N_{\rm s}}$) and both populations ($f_{\rm E}(\rm q+s) = \frac{N_{\rm qE}+N_{\rm sE}}{N_{\rm q}+N_{\rm s}}$) at different redshift intervals. The $f_{\rm E}(\rm q)$, $f_{\rm E}(\rm s)$ and $f_{\rm E}(\rm q+s)$ in each panel are colored in green, light green and grey, respectively. Split into four stellar mass bins with similar galaxy numbers, the 25th and 75th percentiles are adopted as the x-axis errors in each stellar mass bin. Assuming a binomial distribution, the statistical uncertainty of each fraction is $\sigma_f = [f(1 - f)/N_{\rm tot}]^{1/2}$, where $f$ represents $f_{\rm E}(\rm q)$, $f_{\rm E}(\rm s)$ and $f_{\rm E}(\rm q+s)$, while $N_{\rm tot}$ is $N_{\rm q}$, $N_{\rm s}$ and $(N_{\rm q}+N_{\rm s})$, respectively.}\label{fig8_f_E_M}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig9_f_E_env.pdf}\\
\caption{Early-type fractions as a function of local overdensity with cosmic time. Same as Figure \ref{fig8_f_E_M}, the x-axis errors indicate the 25th to 75th percentile. The error bars of $f_{\rm E}$ indicate the uncertainty based on binomial distribution.} \label{fig9_f_E_env}
\end{figure*}
In Figure \ref{fig7_fq_env_M}, at fixed mass bins, the QGs are also inclined to locate in denser environments. This proves that the environmental dependence of $f_{\rm q}$ still exists and is not driven by the mass distribution difference between SFGs and QGs, especially at $z<1.0$. \cite{Kawinwanichakij+2017} also showed that QGs are more common in overdense regions compared to SFGs even after taking into account the differences in redshift and stellar mass. However, although a similar star formation-density relation is confirmed at $z<1.2$ in many papers \citep{Baldry+2006,van_den_Bosch+2008b, Peng+2010}, no environmental dependence of
$f_{\rm q}$ is found by \cite{Grutzbauch+2011b, Grutzbauch+2011a}, which may be caused by different environmental density tracers, redshift ranges, and selection criteria of QGs.
In general, the dependence of star formation quenching on stellar mass and environmental density can be found. These results point to an evolutionary trend that the quenching process of star formation is mainly regulated by stellar mass at high redshifts, while the environmental condition begins to dominate this process toward lower redshifts.
\section{Morphological transformation}\label{sec5:morphology transformation}
To figure out the influences of stellar mass and environmental overdensity on morphological transformation, we compute the early-type fractions out of QGs ($f_{\rm E}(\rm q) = \frac{N_{\rm qE}}{N_{\rm q}}$), SFGs ($f_{\rm E}(\rm s) = \frac{N_{\rm sE}}{N_{\rm s}}$) and both populations ($f_{\rm E}(\rm q+s) = \frac{N_{\rm qE}+N_{\rm sE}}{N_{\rm q}+N_{\rm s}}$), colored in green, light green and grey in Figure \ref{fig8_f_E_M} and Figure \ref{fig9_f_E_env}. The error of the early-type fraction is computed assuming binomial statistics.
In Figure \ref{fig8_f_E_M}, the $f_{\rm E}(\rm q)$ is overall much higher than $f_{\rm E}(\rm s)$, which again demonstrates the connection between morphological transformation and star formation quenching. For the total early-type fraction (i.e., $f_{\rm E}(\rm q+s)$), a mass dependence of morphological transformation is identified in the given $z$-bins, and it becomes more evident over cosmic time. This implies that the formation of the spheroidal component is completed earlier at the high-mass end, particularly at low redshifts.
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig10_fE_M_env.pdf}\\
\caption{The early-type fractions as a function of stellar mass at fixed local overdensity and redshift bins. The QGs and SFGs are divided into low- and high-environment bins by the median values of local overdensity at $0.5 \leq z \leq 2.5$. QGs (SFGs) in high- or low-environment bins are denoted by solid or hollow symbols in green (lightgreen) color. The x-axis errors show the 25th to 75th percentiles. The error bars of $f_{\rm E}$ indicate the uncertainty based on binomial distributions.} \label{fig10_fE_M_env}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=18cm]{fig11_fE_env_M.pdf}\\
\caption{The early-type fractions as a function of local overdensity at fixed stellar mass. The low- and high-mass bins are defined by the median values of stellar mass for QGs and SFGs at $0.5 \leq z \leq 2.5$. QGs (SFGs) in high- or low-mass bins are denoted by solid and hollow symbols in green (lightgreen) color. The x-axis errors show the 25th to 75th percentiles. The error bars of $f_{\rm E}$ indicate the uncertainty based on binomial distributions.} \label{fig11_fE_env_M}
\end{figure*}
In the star-forming population, massive galaxies have larger early-type fractions, especially for $z<1$, which demonstrates that the morphological transformation completes more frequently in massive galaxies while they are still actively star-forming. However, this mass dependence of morphological transformation is unclear in QGs. Using the IllustrisTNG simulation, \cite{Tacchella+2019} proposed that galaxies have roughly built up their spheroidal components by the time their star formation is suppressed. Moreover, massive galaxies grow their spheroidal components more rapidly in the star-forming phase, and the mature morphologies are in place once their star formation stops (see Figure 9 in \citealt{Tacchella+2019}). This picture coincides with our findings that a significant mass dependence exists in the star-forming population, while it is not remarkable in QGs.
For the early-type fractions at various local overdensities in Figure \ref{fig9_f_E_env}, the overall early-type fraction (i.e., $f_{\rm E}(\rm q+s)$) exhibits an obvious environmental dependence since $z < 2$, reinforcing the morphology-density relation that the early-type fraction increases with environmental density in the local universe \citep{Norberg+2002,Goto+2003,Wolf+2007,Ball+2008,Joshi+2020}. The highest early-type fraction is located in the densest environment at $z<1$. Ram-pressure stripping by the intra-cluster medium in denser environments can strip cold gas on a short timescale and truncate the star formation of galaxies \citep{Quilis+2000}, which makes the spheroidal component more prominent.
However, the relations between the early-type fractions in QGs and SFGs (i.e., $f_{\rm E}(\rm q)$ and $f_{\rm E}(\rm s)$) and local density are less clear. The absence of a statistically significant environmental difference in $f_{\rm E}$ indicates that the morphological transformation is not strongly affected by environmental conditions at $0.5<z<2.5$.
To disentangle the influences of stellar mass and environment, we consider the early-type fractions in QGs and SFGs as a function of stellar mass at fixed local overdensity bins in Figure \ref{fig10_fE_M_env}, and as a function of local environment at fixed mass bins in Figure \ref{fig11_fE_env_M}. The low and high environment bins are divided by the median values of local overdensity at $0.5 \leq z \leq 2.5$ for QGs and SFGs. Similarly, the median values of stellar mass at $0.5 \leq z \leq 2.5$ are used to define the low and high mass bins for QGs and SFGs.
In Figure \ref{fig10_fE_M_env}, the difference in $f_{\rm E}$ between high- and low-environment bins is negligible. With the domination of the early-type population in the massive regime, we confirm the strong mass dependence of $f_{\rm E}(\rm s)$ for $0.5<z<1.0$. As analyzed in Figure \ref{fig8_f_E_M}, no environmental dependence of $f_{\rm E}(\rm q)$ is found, which might be due to the fact that morphological transformation happens earlier than the star formation quenching process, as proposed by \cite{Tacchella+2019}. Similar results are verified by \cite{Kawinwanichakij+2017} and \cite{Ownsworth+2016}, who used the S\'{e}rsic index $n$ as the morphological tracer. In Figure \ref{fig11_fE_env_M}, no clear environmental dependence of $f_{\rm E}$ is shown at fixed stellar mass. Based on different environment tracers and classifications of QGs, morphological transformation seems not to be obviously influenced by environmental conditions \citep{van_den_Bosch+2008b, Grutzbauch+2011b, Grutzbauch+2011a, Kawinwanichakij+2017}.
In summary, the $f_{\rm E}$ behaviors in Figures \ref{fig8_f_E_M}, \ref{fig9_f_E_env}, \ref{fig10_fE_M_env}, \ref{fig11_fE_env_M} suggest that morphological transformation is mainly regulated by stellar mass at low redshifts, and it seems insensitive to the surrounding environmental densities. Extended to the local universe, our results should still hold according to the morphological analysis in \cite{Liu+2019}.
\section{AGN fractions}\label{sec6:AGN fractions}
The GOODS-N and GOODS-S fields are covered by the 2 Ms Chandra Deep Field-North (CDF-N) and the 7 Ms Chandra Deep Field-South (CDF-S) surveys. Based on these surveys, \cite{Xue+2016} and \cite{Luo+2017} provided their measurements in three standard X-ray bands: 0.5-7 keV (full band), 0.5-2.0 keV (soft band) and 2.0-7.0 keV (hard band), and performed an identification of source type (``AGN'', ``GALAXY'', or ``STAR''). An AGN can be identified by one of the following six criteria: (1) $L_{\rm X,int} \geq 3 \times 10^{42} ~ \rm erg ~ s^{-1}$, where $L_{\rm X,int}$ is the absorption-corrected intrinsic luminosity in the full X-ray band; (2) $\Gamma \leq 1.0$, where $\Gamma$ is the effective photon power-law index, for selecting obscured AGNs; (3) $\log(f_{\rm X}/f_{\rm R}) > -1$, where $f_{\rm X}$ is the flux in the soft, hard or full band, and $f_{\rm R}$ is the R-band flux; (4) $L_{\rm X,int}/L_{\rm 1.4GHz} \geq 2.4 \times 10^{18}$; (5) spectroscopically classified as an AGN; (6) $\log(f_{\rm X}/f_{\rm Ks}) > -1.2$, which is only used for the CDF-S field with Ks-band photometry.
Within a search radius of $1.5''$, we identify the host galaxies of AGNs in the GOODS-N and GOODS-S fields. To test the effects of AGNs on the formation of galaxies, we calculate the AGN fractions for our four galaxy types separately. The AGN fractions and their corresponding errors are shown in Figure \ref{fig12_agn} as a function of redshift. Assuming binomial statistics, the error bars are computed as $\sigma_f = [f_{\rm AGN}(1 - f_{\rm AGN})/N_{\rm tot}]^{1/2}$, where $f_{\rm AGN}$ and $N_{\rm tot}$ are the AGN fraction and the total number of galaxies in each redshift bin for the four subsamples.
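For reference, the binomial uncertainty above amounts to a one-line computation (a small illustration of ours, with hypothetical function name):
\begin{verbatim}
import numpy as np

def agn_fraction_error(n_agn, n_tot):
    # sigma_f = sqrt(f * (1 - f) / N_tot) for f = n_agn / n_tot
    f = n_agn / n_tot
    return f, np.sqrt(f * (1.0 - f) / n_tot)
\end{verbatim}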
Apparently, quiescent late-type galaxies (i.e., qLs), in which star formation is suppressed but the spheroidal component is not yet predominant, exhibit the highest AGN fractions at $z>2$, which then drop rapidly until $z \sim 1.25$. Similarly, higher AGN fractions are also found in the red sequence and green valley at early epochs \citep{Nandra+2007,Salim+2007,Schawinski+2010,Gu+2018}, where star formation is entirely or partially truncated. AGN activity is thought to drive out or heat cold gas, suppressing star formation and maintaining the quiescence of galaxies before the morphological transformation at $z \sim 2$ \citep{Wang+2017,Gu+2018}. In this scenario, negative AGN feedback could play a role in the formation of qLs. Over cosmic time, the decrement of AGN fractions in qLs could be attributed to the reduced AGN activities.
In addition, at $z < 2$, the highest AGN fraction is found in the sEs, and it increases slightly with cosmic time. These galaxies have built up their spheroidal components, but star formation is still prevalent in them. This can be interpreted by the picture that the morphological transformation in sEs is associated with some previous gas-rich violent events (e.g., mergers or disk instabilities) that subsequently triggered starbursts \citep{Toomre+1977,Hopkins+2008}.
In this scenario, AGNs provide positive feedback to star formation.
Hence, given the highest AGN fractions found in this work and the larger asymmetry indices of sEs noted by \cite{Liu+2019}, we also favor the positive AGN feedback scenario, and suggest that the sEs are likely to be the remnants of gas-rich mergers. As for sLs and qEs, their AGN fractions are lower than 15\%, showing no prominent connection between their formation and AGN activity.
\begin{figure}
\centering
\includegraphics[width=7cm]{fig12_agn.pdf}\\
\caption{The AGN fractions in the qEs, qLs, sEs and sLs subsamples as a function of redshift, denoted in red, yellow, blue and light blue, respectively. The shaded regions represent the uncertainty of $f_{\rm AGN}$ for each type of galaxy.}\label{fig12_agn}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{fig13_timescale.pdf}\\
\caption{Redshift evolution of the timescales of morphological transformation (green) and star formation quenching (red). Green and red shaded regions are the propagated errors. The grey shaded region indicates timescales that lie above the age-redshift relation.}\label{fig13_timescale}
\end{figure}
\section{Discussion}\label{sec7:discussion}
\subsection{\rm Redshift Evolution of Transition Timescales} \label{sec7.1:timescales}
To quantify the difference in timescales between star formation quenching and morphological transformation, we estimate the average transition timescales following \cite{Pandya+2017}.
By employing cubic polynomial fits to the observed number densities of blue, green and red populations as a function of redshift, the average transition timescale can be derived at a given redshift.
The average transition timescale between $z_1$ and $z_2$ ($\langle t_{\rm transition} \rangle _{z_1,z_2}$) is evaluated as
\begin{equation}\label{func5_timescale}
\langle t_{\rm transition} \rangle _{z_1,z_2} = \langle n_{\rm transition}\rangle _{z_1,z_2} \times (\frac{dn_{\rm quiescent}}{dt})_{z_1,z_2}^{-1},
\end{equation}
where $\langle n_{\rm transition}\rangle _{z_1,z_2}$ is the average number density of the transition population (i.e., green valley galaxies) and $(\frac{dn_{\rm quiescent}}{dt})_{z_1,z_2}$ is the rate of change of the QG number density with cosmic time from $z_1$ to $z_2$. However, it should be noted that the average quenching timescale gives an upper limit on the quenching timescale because this method is based on the extreme assumption that all green valley galaxies evolve from the star-forming population and transform into the quiescent population monodirectionally. Rejuvenation events or oscillatory excursions from the star formation main sequence have not been taken into consideration.
The massive ($\log M_*/M_{\odot} \geq 10$) galaxies can be classified into blue, green and red populations according to the dust-corrected colors out to $z = 2.5$ (see \citealt{Wang+2017,Gu+2018} for details). We have calculated the average quenching timescale as a function of redshift \citep{Gu+2019}, which is denoted by the red line in Figure \ref{fig13_timescale}. The result explicitly indicates that the quenching process happens on a shorter timescale at high redshifts, and the timescale becomes longer with decreasing redshift, which is in good agreement with previous works \citep{Barro+2013,Pandya+2017}.
Inspired by the method of deriving the average quenching timescale, we also evaluate the average timescale of morphological transformation. The key point is to select a transition population for morphological transformation, analogous to the green valley population in star formation quenching. As mentioned in Section \ref{sec2.2:Early type vs. Late type}, \cite{Huertas-Company+2015} have proposed five morphological classifications (namely, [SPH], [DSPH], [DISK], [DIRR], and [IRR]) with total numbers of 2358, 1258, 1768, 2630, and 930, respectively. We adopt a monodirectional model of morphological transformation, assuming that irregular-dominated galaxies (i.e., [IRR] and [DIRR]) evolve to the disk-dominated population (i.e., [DISK]), and then transform into spheroid-dominated populations (i.e., [DSPH] and [SPH]). We do not take intense merger events or the regrowth of disks into account. Similarly, by adopting the disk-dominated galaxies as the transition population, we can estimate the average timescale of morphological transformation by
\begin{equation}\label{func6_mor_timescale}
\langle t_{\rm transition} \rangle _{z_1,z_2} = \langle n_{\rm [DISK]}\rangle _{z_1,z_2} \times (\frac{dn_{\rm [DSPH]+[SPH]}}{dt})_{z_1,z_2}^{-1}.
\end{equation}
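For concreteness, the procedure in eqns. (\ref{func5_timescale}) and (\ref{func6_mor_timescale}) can be organized as the following numerical sketch (a schematic illustration of ours; the function name and the choice of the Planck15 cosmology are assumptions, not part of our analysis pipeline):
\begin{verbatim}
import numpy as np
from astropy.cosmology import Planck15 as cosmo

def transition_timescale(z_grid, n_transition, n_end, z1, z2):
    # Schematic sketch: n_transition and n_end are comoving number
    # densities of the transition (green valley or [DISK]) and the
    # end-state (quiescent or [DSPH]+[SPH]) populations on z_grid.
    fit_tr = np.poly1d(np.polyfit(z_grid, n_transition, 3))
    fit_end = np.poly1d(np.polyfit(z_grid, n_end, 3))
    zs = np.linspace(z1, z2, 100)
    n_tr_avg = fit_tr(zs).mean()            # <n_transition> over [z1, z2]
    t1, t2 = cosmo.age(z1).value, cosmo.age(z2).value  # cosmic time, Gyr
    dn_dt = (fit_end(z2) - fit_end(z1)) / (t2 - t1)    # d n_end / d t
    return n_tr_avg / dn_dt                 # Gyr; an upper limit, as noted
\end{verbatim}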
In Figure \ref{fig13_timescale}, the redshift evolution of the morphological transformation timescale is shown by the green line. The morphological transition timescale increases slightly with cosmic time. But basically, the morphological transformation takes a shorter time than the quenching process. It is conspicuous that the difference in timescales between morphological transformation and star formation quenching becomes larger over cosmic time.
To verify the reliability of our timescales, we further consider the redshift evolution of the $f_{\rm q}$ and $f_{\rm E}$ in Figure \ref{fig14_frac_z}.
The early-type fractions (shown in dot-dashed lines) increase with cosmic time, while the quiescent fractions (shown in solid lines) do not increase with cosmic time at a given morphological type. The total early-type fraction (colored in grey) is overall higher than the total quiescent fraction (colored in black), and their difference is larger at later epochs.
This result supports the shorter timescale of morphological transformation at lower redshifts, which is consistent with the result in Figure \ref{fig13_timescale}. The shorter timescale of morphological transformation may lead to an increasing number of sEs at lower redshifts. Table \ref{table1:numbers of four types} shows that the number of sEs is greater than that of qLs. And we do find a steeper slope of $f_{\rm E}(\rm s)$ at $z\sim 1$, which is probably due to the larger difference in timescales. It suggests that the morphological transformation might be accomplished much earlier than the suppression of star formation at low redshifts.
\subsection{\rm The Formation of qLs and sEs}\label{sec7.2:The formation of qLs and sEs}
The ``merger hypothesis'' is commonly connected with the explosive quasar or starburst phase, which leads to the morphological transition from rotation-dominated disks into pressure-dominated spheroids \citep{Toomre+1977,Hopkins+2008}. Strong star formation and black hole accretion are both triggered by the inflow of cold gas during the coalescence of gas-rich galaxies \citep{Springel+2005,Kaviraj+2009,George+2017}, which is considered as the so-called positive AGN feedback. However, once a static massive hot halo has formed, the accretion of shock-heated gas towards the central supermassive black hole could result in negative AGN feedback, which prevents gas from further cooling \citep{Croton+2006,Baldry+2008}. Meanwhile, interactions between galaxies and the intra-cluster medium in denser environments \citep{Moran+2007}, galaxy-galaxy harassment \citep{Moore+1999}, and starvation \citep{Larson+1980} can be responsible for halting the supply of cold gas, ceasing star formation, and rapidly building up the spheroidal component.
Star-forming early-type galaxies (sEs), which appear spheroid-dominated while still undergoing star formation, possess the highest AGN fractions, especially at $z<2$. Since the difference in environmental conditions between sEs and sLs is insignificant at $0.5 \leq z \leq 2.5$, we suppose that environment has not played an important role in the formation of sEs. As explained in the scenario of positive AGN feedback, galaxies that experienced violent gas-rich merger events would change their morphologies and trigger star formation activity. In this scenario, the morphological transformation is closely connected with the triggering of AGNs. According to Figure \ref{fig12_agn} and Figure \ref{fig13_timescale}, with respect to the quenching timescale, the morphological transition tends to be completed more efficiently at lower redshifts, which points to a higher AGN fraction in the sEs. An alternative origin of sEs is the rejuvenation of qEs: external acquisition of gas (e.g., through minor gas-rich mergers or intergalactic medium accretion) could drive star formation again \citep{Kannappan+2009,Kim+2018,Liu+2019}.
Hence, we suggest that some sEs might be the remnants of gas-rich mergers driven by positive AGN feedback, or the results of rejuvenated qEs at $z<2$.
\begin{figure}
\centering
\includegraphics[width=7cm]{fig14_frac_z.pdf}\\
\caption{Quenched fractions out of early, late and total types, denoted by dark red, red and black dots. Early-type fractions out of quiescent, star-forming and both populations are shown in green, light green and grey colors. Their error bars are identical to those in Figure \ref{fig4_f_q_M} and Figure \ref{fig8_f_E_M}.}\label{fig14_frac_z}
\end{figure}
As illustrated in Figure \ref{fig12_agn}, the quiescent late-type galaxies (qLs) have the highest AGN fractions at $z>2$. It implies that negative AGN feedback could be effective at truncating star formation and maintaining the quiescence of galaxies at $z>2$. It supports that negative AGN feedback might dominate the formation of qLs before the buildup of the spheroidal component at early epochs. The AGN fraction in qLs is found to decrease with cosmic time, so negative AGN feedback might be of less importance for the formation of qLs at low redshifts. In addition, unlike the sLs at $z<1.5$, the qLs prefer to be located in denser environments. This hints that dense environments might act to halt the cold gas supply at later epochs. In a word, the formation of qLs could be attributed to negative AGN feedback at high redshifts, while it might be driven by environmental quenching since $z \sim 1.5$.
\section{Summary}\label{sec8:summary}
In order to study the properties of star formation quenching and morphological transformation, we select massive galaxies ($\log M_*/M_{\odot} \geq 10$) in five fields of 3D-HST at $0.5 \leq z \leq 2.5$. Our sample is divided into four galaxy types: quiescent early-type (qEs), quiescent late-type (qLs), star-forming early-type (sEs), and star-forming late-type (sLs) galaxies. SFGs and QGs are identified by the $UVJ$ diagnosis, whereas early- and late-type galaxies are identified by whether they possess a spheroid-dominated component, on the basis of \cite{Huertas-Company+2015}. We analyze the stellar mass and local environment distributions of these four populations, and explore the mass and local environment dependence of star formation quenching and morphological transformation. Moreover, to figure out the formation of sEs and qLs, we discuss the AGN fractions in the four subsamples, and estimate the average timescales of morphological transformation and star formation quenching. Our results are summarized as follows:
(1) At a given morphological type, QGs present entirely different mass distributions at $z>1$ compared with their star-forming counterparts.
At a given star formation status (star-forming or quiescent), the difference in mass distribution between early- and late-type galaxies enlarges with cosmic time. Therefore, stellar mass plays an important role in star formation quenching at high redshifts, while it is significant to morphological transformation at later epochs.
(2) At a fixed morphological type, different local environment distributions between QGs and SFGs are found at $z<1.5$. Nevertheless, at a given star formation status, early- and late-type galaxies present similar local overdensity distributions at $0.5<z<2.5$. It indicates that the local overdensity is significant in star formation quenching at later epochs, while it seems not to be responsible for morphological transformation.
(3) Larger quiescent fractions are found at the high-mass and high-density ends. The process of star formation quenching exhibits a strong dependence on stellar mass at early epochs, and the mass dependence of quenching tends to decrease with cosmic time. In addition, a clear local environment dependence of star formation quenching is verified only at $z < 1.0$.
(4) Massive galaxies are likely to possess higher early-type fractions. Morphological transformation shows a clear dependence on stellar mass but not on local environment. And the mass dependence of morphological transformation becomes stronger over cosmic time.
(5) The highest AGN fraction at $z>2$ is exhibited in qLs, which indicates that negative AGN feedback could be responsible for the formation of qLs. At $z<2$, the sEs are identified to have the highest AGN fraction. It suggests that positive AGN feedback might contribute to the formation of sEs.
~\\
This work is based on observations taken by the 3D-HST Treasury Program (GO 12177 and 12328) with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
This work is supported by the National Natural Science Foundation of China (nos. 11873032, 11673004, 11433005) and by the Research Fund for the Doctoral Program of Higher Education of China (no.20133207110006). G.W.F. acknowledges the support from Chinese Space Station Telescope (CSST) Project. G.Y.Z acknowledges the support from China Postdoctoral Science Foundation (2020M681281) and Shanghai Post-doctoral Excellence Program (2020218).
\section{Introduction}
In the Euclidean space $\mathbb R^n$ it is well-known that the Laplace operator $\Delta$ is invariant under orthogonal transformations. Hence $\mathbb R^n$ has the property that the Laplacian of a radial function (a function depending only on the distance to the origin) is still radial. Then, is a Riemannian manifold $M$ with this property necessarily $\mathbb R^n$ or a space form? In regard to this interesting question, the notion of a harmonic manifold is introduced.
A complete Riemannian manifold $M$ is called {\it harmonic} if it satisfies one of the following equivalent conditions:
\begin{enumerate}
\item For any point $p\in M$ and the distance function $r(\cdot):={\rm dist}(p,\cdot)$, $\Delta r^2$ is radial;
\item For any $p\in M$ there exists a nonconstant radial harmonic function in a punctured neighborhood of $p$;
\item Every small geodesic sphere in $M$ has constant mean curvature;
\item Every harmonic function satisfies the mean value property \cite{Wi};
\item For any $p\in M$ the volume density function $\omega_p=\sqrt{{\rm det}g_{ij}}$ in normal coordinates centered at $p$ is radial.
\end{enumerate}
Lichnerowicz conjectured that every harmonic manifold $M^n$ is flat or rank 1 symmetric. This conjecture has been proved to be true for dimension $n\leq5$ \cite{B,L,N,Sz,Wa}. But Damek and Ricci \cite{DR} found that there are many counterexamples if dimension $n\geq7$. Euh, Park and Sekigawa \cite{EPS} provided a new proof of the Lichnerowicz
conjecture for dimension $n = 4, 5$ in a slightly more general setting using universal curvature identities.
In order to further characterize harmonic manifolds, Shah \cite{Sh}, Szab\'o \cite{Sz} and Ramachandran-Ranjan \cite{RR} paid attention to the volume density function $\omega_p(r)$ as defined in the equivalent condition (5) above. Shah proved that a harmonic manifold with the same volume density as $\mathbb R^n$ is flat, Szab\'o showed $\mathbb S^n$ is the only harmonic manifold with $\omega_p(r)=\frac{1}{r^{n-1}}\sin^{n-1}r$ and Ramachandran-Ranjan showed that a noncompact simply connected harmonic manifold $M^n$ with $\omega_p(r)=\frac{1}{r^{n-1}}\sinh^{n-1}r$ is $\mathbb H^n$. Ramachandran-Ranjan also proved that a noncompact simply connected K\"{a}hler harmonic manifold $M^{2n}$ with $\omega_p(r)=\frac{1}{r^{2n-1}}\sinh^{2n-1}r\cosh r$ is isometric to the complex hyperbolic space. A similar theorem was proved for the quaternionic hyperbolic space as well.
In this paper we remark on the fact that the Laplacians of specific radial functions are very simple in space forms. It is well known that in $\mathbb R^n$
\begin{equation}\label{1}
\Delta r^{2-n}=0\,\,\,\,{\rm and}\,\,\,\,\Delta r^2=2n;
\end{equation}
in $\mathbb S^n$ and $\mathbb H^n$ \cite{CG},
\begin{equation}\label{2}\Delta\cos r=-n\cos r\,\,\,\,{\rm and}\,\,\,\,\Delta \cosh r=n\cosh r,\,\,\,\,{\rm respectively};\end{equation}
and for some hypergeometric function $f$ on $\mathbb CH^n$ and $\mathbb QH^n$,
\begin{equation}\label{3}\Delta f=4(n+1)f\,\,\,\,{\rm and} \,\,\,\,\Delta f=8(n+1)f,\,\,\,\,{\rm respectively}.\end{equation}
Motivated by this fact, we characterize harmonic manifolds in terms of these radial functions. It will be proved that if a radial harmonic function defined in a punctured neighborhood of a harmonic manifold $M$, as in the equivalent condition (2) above, is the same as the radial Green's function of a space form, $\mathbb CH^n$ or $\mathbb QH^n$, then $M$ is the space form, $\mathbb CH^n$ or $\mathbb QH^n$, respectively. We also prove that if a radial function on a harmonic manifold $M$ satisfies \eqref{1}, \eqref{2} or \eqref{3}, then $M$ must be $\mathbb R^n$, $\mathbb S^n$, $\mathbb H^n$, $\mathbb CH^n$ or $\mathbb QH^n$. Finally, we show that if the mean curvature of a geodesic sphere in a harmonic manifold $M$ is the same as that in a space form, $\mathbb CH^n$ or $\mathbb QH^n$, then $M$ is the space form, $\mathbb CH^n$ or $\mathbb QH^n$, respectively.
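As a quick consistency check of \eqref{1} and \eqref{2} (a routine computation which we include for the reader's convenience), recall that the Laplacian of a radial function $f(r)$ is $\Delta f=f''+\frac{n-1}{r}f'$ in $\mathbb R^n$ and $\Delta f=f''+(n-1)\cot r\,f'$ in $\mathbb S^n$. Hence
$$\Delta r^{2-n}=(2-n)(1-n)r^{-n}+\frac{n-1}{r}(2-n)r^{1-n}=0$$
and
$$\Delta\cos r=-\cos r-(n-1)\cot r\sin r=-n\cos r.$$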
\section{Laplacian}
The radial Green's functions of $\mathbb R^n$, $\mathbb S^n$ and $\mathbb H^n$ are $\frac{1}{(2-n)n\omega_n}r^{2-n}$ ($\frac{1}{2\pi}\log r$ if $n=2$; here $\omega_n$ denotes the volume of the unit ball in $\mathbb R^n$), and $G(r)$ such that $G'(r)=\frac{1}{n\omega_n}\sin^{1-n}r$ and $G'(r)=\frac{1}{n\omega_n}\sinh^{1-n}r$, respectively.
\begin{thm}\label{thm:1}
Let $G_p(r)$ be a nonconstant radial harmonic function on a punctured neighborhood of $p$ in a simply connected harmonic manifold $M^n$ with $r(\cdot)={\rm dist}(p,\cdot)$. If $G_p(r)$ is the same as the radial Green's function of a space form, $\mathbb CH^n$ or $\mathbb QH^n$ at every point $p\in M$, then $M$ is the space form, $\mathbb CH^n$ or $\mathbb QH^n$, respectively.
\end{thm}
\begin{proof}
Let $\delta_p$ be the Dirac delta function centered at $p\in M$. Integrate $\Delta G_p(r)=\delta_p$ over a geodesic ball $D_r$ of radius $r$ with center at $p$:
$$1=\int_{D_r}\Delta G_p(r)=\int_{\partial D_r}G_p'(r).$$
Hence $${\rm vol}(\partial D_r)=\frac{1}{G_p'(r)}\,\,\,\,{\rm and}\,\,\,\,{\rm vol}(D_r)=\int_0^r\frac{ds}{G_p'(s)}=\int_{{\rm exp}_p^{-1}(D_r)}\omega_p(r).$$
Then $M$ should have the same volume density $\omega_p(r)$ as a space form, $\mathbb CH^n$, or $\mathbb QH^n$. Therefore by Shah \cite{Sh}, Szab\'o \cite{Sz}, Ramachandran-Ranjan \cite{RR}, $M$ is $\mathbb R^n, \mathbb S^n, \mathbb H^n$, $\mathbb CH^n$ or $\mathbb QH^n$, respectively.
\end{proof}
\begin{cor}
If $\Delta r^2=2n$ for $r(\cdot)={\rm dist}(p,\cdot)$ at any point $p$ of a harmonic manifold $M^n$, then $M$ is flat.
\end{cor}
\begin{proof}
It is known that $$\Delta f^k=k(k-1)f^{k-2}|\nabla f|^2+kf^{k-1}\Delta f.$$
Setting $f=r^2$ and $k=1-n/2$, $n\neq2$, one can compute that
$$\Delta r^{2-n}=0.$$
Hence $M$ has a radial harmonic function $ \frac{1}{(2-n)n\omega_n}r^{2-n}$ which is the same as Green's function of $\mathbb R^n$. Therefore the conclusion follows from Theorem \ref{thm:1}. The proof for $n=2$ is similar.
\end{proof}
The condition (3) in the Introduction says that the mean curvature of a small geodesic sphere in a harmonic manifold is constant. The following theorem characterizes a harmonic manifold in terms of the mean curvature.
\begin{thm}
Let $H(r)$ be the mean curvature of a geodesic sphere of radius $r$ in a simply connected harmonic manifold $M$. If $H(r)$ is the same as that in a space form, $\mathbb CH^n$ or $\mathbb QH^n$ for any point $p\in M$ with $r(\cdot)={\rm dist}(p,\cdot)$, then $M$ is the space form, $\mathbb CH^n$ or $\mathbb QH^n$, respectively.
\end{thm}
\begin{proof}
Let $\gamma$ be a geodesic from $p$ parametrized by arclength $r$ with $\gamma(0)=p$ in a Riemannian manifold $M^n$. Let $\{e_1,\ldots,e_n\}$ be an orthonormal frame at $\gamma(0)$ with $e_1=\gamma'(0)$ and extend it to a parallel orthonormal frame field $\{e_1(r),\ldots,e_n(r)\}$ along $\gamma(r)$ with $e_i(0)=e_i$. Define $Y_i(r),i=2,\ldots,n,$ to be the Jacobi field along $\gamma(r)$ satisfying $Y_i(0)=0$ and $Y_i'(0)=e_i$.
If $M$ is harmonic, then
\begin{equation}\label{Y}
\omega_p(r)=\frac{1}{r^{n-1}}\sqrt{{\rm det}\langle Y_i(r),Y_j(r)\rangle}:=\frac{1}{r^{n-1}}\Theta(r).
\end{equation}
In other words, the volume form $dV$ of $M$ in normal coordinates $x_1,\ldots,x_n$ becomes
$$dV=\omega_p(r)dx_1\cdots dx_n=\Theta(r)\, dr\,dA,$$
where $dA$ is the volume form on the unit sphere in $\mathbb R^n$. Since the volume of a geodesic sphere $\partial D_r$ is $\int_{S}\Theta(r)$ ($S$: unit sphere in $\mathbb R^n$), the first variation of area on the geodesic sphere $\partial D_r$ yields
\begin{equation}\label{H}
H(r)=\frac{\Theta'(r)}{\Theta(r)}.
\end{equation}
As $H(r)$ is the same as that of a space form, $\Theta(r)$ must be the same as that of the space form, and so $\omega_p(r)$ is the same as the volume density function of the space form. Similarly for $\mathbb CH^n$ and $\mathbb QH^n$ with $n$ replaced by $2n$ and $4n$, respectively. Therefore Shah, Szab\'o and Ramachandran-Ranjan's theorems complete the proof.
\end{proof}
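As a sanity check of \eqref{H} (a remark of ours), in $\mathbb S^n$ one has $\Theta(r)=\sin^{n-1}r$, so $H(r)=(n-1)\cot r$, which is the familiar mean curvature of a geodesic sphere of radius $r$ in the round sphere.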
\section{Eigenfunctions}
In \eqref{Y} $Y_i(r)$ has a Taylor series expression
$$Y_i(r)=e_i(r)r-\frac{1}{6}R(e_i(r),e_1(r))e_1(r)r^3+o(r^3).$$
Hence
$$\langle Y_i(r),Y_j(r)\rangle=r^2(\delta_{ij}-\frac{1}{3}\langle R(e_i(r),e_1(r))e_1(r),e_j(r)\rangle r^2+o(r^2))$$
and
$${\rm det}\langle Y_i(r),Y_j(r)\rangle=r^{2n-2}{\rm det}\left(I_{n-1}-\frac{1}{3}R_{i11j}(\gamma(r))r^2+o(r^2)\right).$$
If $M$ is harmonic, then
\begin{equation}\label{Ledger}
\frac{d^2}{dr^2}|_{r=0}\,\omega_p(r)=\frac{d^2}{dr^2}|_{r=0}\,\left(\frac{1}{r^{n-1}}\sqrt{{\rm det}\langle Y_i(r),Y_j(r)\rangle}\right)=-\frac{1}{3}Ric(p),
\end{equation}
which is called {\it Ledger's formula} (\cite{B}, p.161).
This formula implies that harmonic manifolds are Einstein.
\begin{thm}\label{thm:3}
{\rm a)} If $\Delta\cos r=-n\cos r$ on a complete simply connected harmonic manifold $M^n$ at any point $p\in M$ with $r(\cdot)={\rm dist}(p,\cdot)$, then $M=\mathbb S^n$.\\
{\rm b)} If $\Delta\cosh r=n\cosh r$ on a complete simply connected harmonic manifold $M^n$ at any point $p\in M$ with $r(\cdot)={\rm dist}(p,\cdot)$, then $M=\mathbb H^n$.
\end{thm}
\begin{proof}
a) Since $\Delta\cos r=-n\cos r$, it is not difficult to show
\begin{equation}\label{r}
\Delta r=(n-1)\cot r.
\end{equation}
Let $G_p(r)$ be the radial function on $M$ such that $G_p'(r)=\frac{1}{n\omega_n}\sin^{1-n}r$.
Then
\begin{eqnarray*}
\Delta G_p(r)&=&{\rm div}\nabla G_p(r)={\rm div}(\frac{1}{n\omega_n}\sin^{1-n}r\nabla r)\\
&=&\frac{(1-n)}{n\omega_n}\sin^{-n}r\cos r|\nabla r|^2+\frac{1}{n\omega_n}\sin^{1-n}r\Delta r\\&=&0.\,\,\,\,\,\,\,({\rm by} \,\,\eqref{r})
\end{eqnarray*}
Theorem \ref{thm:1} completes the proof.
({\it Another proof~}) It is easy to show that for a radial function $f$ on a harmonic manifold $M$
\begin{equation}\label{laplacian}
\Delta f=\frac{d^2f}{dr^2}+H(r)\frac{df}{dr},
\end{equation}
where $H(r)$ is the mean curvature of $\partial D_r$. Hence from \eqref{H} and \eqref{r} one gets for $f(r):=r$
$$\frac{\Theta'(r)}{\Theta(r)}=H=(n-1)\cot r.$$
Therefore $$\Theta(r)=\sin^{n-1}r\,\,\,\,{\rm and}\,\,\,\,\omega_p(r)=\frac{1}{r^{n-1}}\sin^{n-1}r.$$
Then
\begin{eqnarray*}
\omega_p'(r)&=&(n-1)\left(\frac{\sin r}{r}\right)^{n-2}\left(\frac{\sin r}{r}\right)',\\
\omega_p''(r) &=& (n-1)(n-2)\left(\frac{\sin r}{r}\right)^{n-3}\left(\left(\frac{\sin
r}{r}\right)'\right)^2\\
&&+(n-1)\left(\frac{\sin r}{r}\right)^{n-2}\left(\frac{\sin
r}{r}\right)''.
\end{eqnarray*}
Hence Ledger's formula \eqref{Ledger} implies
$$Ric(p)=-3\,\frac{d^2}{dr^2}|_{r=0}~\omega_p(r)=n-1$$
for any $p\in M$.
Using the Riccati equation for the second fundamental form $h$ on the geodesic sphere, one obtains
\begin{eqnarray*}
Ric(M)&=&-{\rm tr}h'-{\rm tr}h^2\\
&\leq&(n-1)\csc^2r-(n-1)\cot^2r\,\,\,(\because {\rm tr}h^2\geq\frac{1}{n-1}({\rm tr}h)^2) \\
&=&n-1.
\end{eqnarray*}
Since equality holds above, one should have ${\rm tr}h^2=\frac{1}{n-1}({\rm tr}h)^2$. Hence the linear operator $h$ is a multiple of the identity, meaning that every geodesic sphere is umbilic. So the sectional curvature is constant on the geodesic sphere. Therefore $M=\mathbb S^n$ as $M$ is Einstein.
Proof of b) is similar to a).
\end{proof}
\begin{thm}
{\rm a)} Let $f(r):=1+\frac{n+1}{n}\sinh^2r$ be a radial function on a complete simply connected K\"ahler harmonic manifold $M^{2n}$. If $\Delta f=4(n+1)f$ at any point $p\in M$ with $r(\cdot)={\rm dist}(p,\cdot)$, then $M$ is isometric to the complex hyperbolic space $\mathbb CH^n$.\\
{\rm b)} Let $f(r):=1+\frac{n+1}{n}\sinh^2r$ be a radial function on a complete simply connected quaternionic K\"ahler harmonic manifold $M^{4n}$. If $\Delta f=8(n+1)f$ at any point $p\in M$ with $r(\cdot)={\rm dist}(p,\cdot)$, then $M$ is isometric to the quaternionic hyperbolic space $\mathbb QH^n$.
\end{thm}
\begin{proof}
a) \eqref{laplacian} and \eqref{H} yield
$$\Delta f=f''+\frac{\Theta'}{\Theta}f'=4(n+1)f.$$
Hence for $f(r)=1+\frac{n+1}{n}\sinh^2r$ one can compute
$$\frac{\Theta'(r)}{\Theta(r)}=(2n-1)\coth r +\tanh r.$$
Therefore
$$\Theta(r)=\sinh^{2n-1}r\cosh r\,\,\,\, {\rm and} \,\,\,\,\omega_p(r)=\frac{1}{r^{2n-1}}\sinh^{2n-1}r\cosh r.$$
Thus the theorem follows from Ramachandran-Ranjan's theorem \cite{RR}.
b) For $f(r)=1+\frac{n+1}{n}\sinh^2r$
$$\frac{\Theta'(r)}{\Theta(r)}=(4n-1)\coth r +3\tanh r \,\,\,\,{\rm and} \,\,\,\, \Theta(r)=\sinh^{4n-1}r\cosh^3 r .$$
Hence $\omega_p(r)=\frac{1}{r^{4n-1}}\sinh^{4n-1}r\cosh^3r$, which is the same as the volume density of $\mathbb QH^n$.
\end{proof}
\section{Particle on a segment}
The quantum stationary states and possible values of energy
of a particle on a segment of a line are given by the Schr\"odinger equation
\begin{equation} \label{sche}
-\frac{\hbar^2}{2m}\frac{{\rm d}^2\psi(x)}{{\rm d}x^2}=E\psi(x)
\end{equation}
and boundary conditions
\begin{equation} \label{bc}
\psi(0)=\psi(a)=0,
\end{equation}
where $m$ is the mass of the particle and $a$ is the length of the segment. It is well known
that the conditions (\ref{sche}) and (\ref{bc}) lead to the energy eigenfunctions
\begin{equation} \label{eef}
\psi_n(x)=\sqrt{\frac{2}{a}}\sin\left(\frac{n\pi x}{a}\right),
\quad \mbox{where} \quad n\in\{ 1,2,3,\dots \}
\end{equation}
and energy eigenvalues
\begin{equation} \label{eev}
E_n=\frac{\pi^2\hbar^2}{2ma^2}n^2\sim n^2.
\end{equation}
A particle on the segment $[0,a]$ can be modeled as a particle moving on the line
$(-\infty,\infty)$ in the ``intuitive'' potential energy
\begin{equation} \label{poten}
V(x)=\left\{ \begin{array}{l}
0, \ x\in[0,a], \\
\infty, \ x\notin[0,a].
\end{array} \right.
\end{equation}
This kind of potential energy can also be intuitively imagined, as plotted in
Figure \ref{potenfig}, as ``the limit'' of the following sequence
of potential energy functions
\begin{equation} \label{potenseq}
V_k(x)=A_0^{(k)}x^{2k}, \quad k\in\{ 1,2,3,\dots \}
\end{equation}
with some positive constants $A_0^{(k)}$.
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=5cm]{poten.eps}
\caption{The first six functions (\ref{potenseq}) with $A_0^{(k)}=1$ are plotted in the panel.}
\label{potenfig}
\end{figure}
Our task is to show that the energy levels (\ref{eev}) can be
approximated by the energy levels of the particle moving in the
potentials (\ref{potenseq}). The tool we are going to use for this
purpose is the well-known Bohr-Sommerfeld quantization rule.
\section{Approximate expression for the energy levels of the particle in the potential well $A_0^{(k)}x^{2k}$}
The Hamiltonian of the particle in the potential well $V=A_0^{(k)}x^{2k}$ is given by
\begin{equation} \label{ham}
H(p,x)=\frac{p^2}{2m}+A_0^{(k)}x^{2k} .
\end{equation}
Let $\mathcal{C}(E)$ be the classical periodic trajectory of the particle with the energy $E$
in the phase space (there are no trajectories of other kind in our case). The
Bohr-Sommerfeld quantization rule chooses from the set of all classical trajectories only
the trajectories for which
\begin{equation} \label{bsrule}
\oint_{\mathcal{C}(E)}p{\rm d}x=2\pi\hbar(n+\gamma_n),
\end{equation}
where $n$ is a natural number and $\gamma_n$ is a quantity of order unity.
For our purpose, it is sufficient to write down the rule in less accurate form
\begin{equation} \label{bsrule1}
\oint_{\mathcal{C}(E)}p{\rm d}x=2\pi\hbar n.
\end{equation}
In this way, the Bohr-Sommerfeld quantization rule gives us a method to determine
approximately the energy levels of the particle by solving equation (\ref{bsrule1})
for the unknown $E$ at a given $n$. The accuracy of this formula grows with increasing $n$.
In our case, the particle with the energy $E$ moves
in the range $[-x_M(E),x_M(E)]$, where the energy conservation law determines $x_M(E)$ as
\begin{equation} \label{xm}
x_M(E)=\left(\frac{E}{A_0^{(k)}}\right)^{\frac{1}{2k}} .
\end{equation}
Therefore, the left-hand side of Eq. (\ref{bsrule1}) reads
\begin{eqnarray*}
& &
\oint_{\mathcal{C}(E)}p{\rm d}x=4\int_0^{x_M}p(x){\rm d}x=4\sqrt{2m}\int_0^{x_M}
\sqrt{E-A_0^{(k)}x^{2k}}{\rm d}x= \\
& &
4x_M\sqrt{2mE}\int_0^1\sqrt{1-y^{2k}}{\rm d}y=
\frac{4x_M\sqrt{2mE}}{2k}\int_0^1(1-t)^{1/2}t^{\frac{1}{2k}-1}{\rm d}t= \\
& &
\frac{2}{k}(2m)^{1/2}\left(A_0^{(k)}\right)^{-\frac{1}{2k}}E^{\frac{1}{2}+\frac{1}{2k}}
\mathcal{B}\left(\frac{3}{2},\frac{1}{2k}\right),
\end{eqnarray*}
where $\mathcal{B}$ is the Euler beta function. The equalities written
above, together with (\ref{bsrule1}), allow us to express the
energy levels of the system in closed form
\begin{equation} \label{spect}
E_n^{(k)} =\left[\frac{\pi\hbar
k}{(2m)^{1/2}}\right]^{\frac{2k}{k+1}}
\left(A_0^{(k)}\right)^{\frac{1}{k+1}}
\left[\frac{1}{\mathcal{B}\left(\frac{3}{2},\frac{1}{2k}\right)}\right]^{\frac{2k}{k+1}}
n^{\frac{2k}{k+1}} .
\end{equation}
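As a consistency check of (\ref{spect}) (a remark of ours), set $k=1$ and $A_0^{(1)}=\frac{1}{2}m\omega^2$, i.e. the harmonic oscillator. Using $\mathcal{B}\left(\frac{3}{2},\frac{1}{2}\right)=\frac{\pi}{2}$ one finds
\begin{displaymath}
E_n^{(1)}=\frac{\pi\hbar}{\sqrt{2m}}\left(\frac{m\omega^2}{2}\right)^{1/2}\frac{2}{\pi}\,n=\hbar\omega n,
\end{displaymath}
which reproduces the exact oscillator spectrum $\hbar\omega(n+1/2)$ up to the constant $\gamma_n$ neglected in (\ref{bsrule1}).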
We are interested in the asymptotic behavior of
$E_n^{(k)}$ at fixed $n$ as $k$ runs to infinity. By making
use of the facts
\begin{displaymath}
\mathcal{B}(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)} \quad
\mbox{and} \quad \Gamma(x)=\frac{1}{x}-\gamma_E+\mathcal{O}(x) \ \mbox{for}\ x\to 0^+ ,
\end{displaymath}
where $\gamma_E$ is Euler's constant, and expressing the
constant $A_0^{(k)}$ in a form appropriate to keep the width of
the well equal to a constant $2a$
\begin{displaymath}
A_0^{(k)}=\frac{V_0}{a^{2k}}, \quad V_0=const, \quad V_0>0,
\end{displaymath}
we can derive that for $k\to\infty$
\begin{displaymath}
E^{(k)}_n=\left[\frac{\pi^2\hbar^2}{2ma^2}k^2+k+\mathcal{O}(1)\right]
\left[\Gamma\left(\frac{1}{2k}\right)\right]^{-\frac{2k}{k+1}}\left[
n^2+\mathcal{O}\left(\frac{1}{k}\right)\right]
\end{displaymath} and therefore
\begin{equation} \label{limen}
\lim_{k\to\infty}E_n^{(k)}=\frac{\pi^2\hbar^2}{2m\left(2a\right)^2}n^2
\end{equation}
as it should be.
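The convergence (\ref{limen}) can also be verified numerically. The following sketch (an illustration of ours, not part of the derivation) evaluates (\ref{spect}) in units $\hbar=m=a=V_0=1$ and compares with the limiting value $\pi^2n^2/8$:
\begin{verbatim}
import numpy as np
from scipy.special import beta

hbar = m = a = V0 = 1.0

def E(n, k):
    # Bohr-Sommerfeld level (spect) for V_k(x) = (V0/a**(2k)) * x**(2k)
    A0 = V0 / a**(2 * k)
    p = 2.0 * k / (k + 1)
    return ((np.pi * hbar * k / np.sqrt(2 * m))**p * A0**(1.0 / (k + 1))
            * beta(1.5, 0.5 / k)**(-p) * n**p)

E_box = lambda n: np.pi**2 * hbar**2 * n**2 / (2 * m * (2 * a)**2)
for k in (1, 10, 100, 1000):
    print(k, E(1, k) / E_box(1))  # ratio approaches 1 as k grows
\end{verbatim}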
\end{document}
\section{Introduction}
In the last decade, gravitational waves from a binary black hole merger were
successfully detected by LIGO and Virgo \cite{Abbott:2016blz}, and
subsequently the first image of a supermassive black hole at the center of
galaxy M87 was photographed by the Event Horizon Telescope (EHT)
\cite{Akiyama:2019cqa,Akiyama:2019brx,Akiyama:2019sww,Akiyama:2019bqs,Akiyama:2019fyp,Akiyama:2019eap
, which opens a new era of black hole physics. Due to the event horizon, a
black hole is a dissipative system, eigenmodes of which are quasinormal modes.
The final stage of a binary black hole merger is ringdown, in which the
gravitational waveforms are described by a superposition of quasinormal modes
\cite{Berti:2007dg}. Quasinormal modes with complex frequencies have vast
applications in black hole physics
\cite{Ferrari:1984zz,Nollert:1999ji,Horowitz:1999jd,Berti:2003ud,Myung:2008pr,Berti:2009kk,Konoplya:2011qq,Yang:2012he,Cook:2016fge,Myung:2018jvi,Konoplya:2019hlu,Blazquez-Salcedo:2020jee
. In particular, a linear perturbation can induce a discrete set of
quasinormal modes, whose imaginary part is related to the damping time scale.
The spectrum of quasinormal modes therefore provides a perspective on the
study of stability of the background spacetime
\cite{Doneva:2010ke,Blazquez-Salcedo:2018jnn,Myung:2018vug,Myung:2019oua,Zou:2020zxq,Guo:2021zed
. Moreover, the most dominant quasinormal modes can be used to check the
validity of the strong cosmic censorship conjecture
\cite{Cardoso:2017soq,Gan:2019jac,Gan:2019ibg}.
Since exotic horizonless objects, e.g., wormholes and ultra-compact objects
(UCOs), have optical appearances similar to those of black holes, they have recently
attracted great attention
\cite{Lemos:2008cv,Cunha:2017wao,Cunha:2018acu,Shaikh:2018oul,Huang:2019arj,Wielgus:2020uqz,Yang:2021diz,Peng:2021osd
. Interestingly, a reflecting boundary in the wormhole or UCO spacetime can
produce a set of time-delay echoes, which are characterized by quasinormal
modes \cite{Mark:2017dnq,Bueno:2017hyj,Cardoso:2019rvt,Ou:2021efv}.
Furthermore, UCOs have been conjectured to suffer from instabilities due to
the existence of a family of long-lived quasinormal modes, which appear in the
neighborhood of a stable circular null geodesic
\cite{Cardoso:2014sna,Keir:2014oka,Guo:2021bcw}. For instance, a linear
ergoregion instability associated with long-lived modes may occur for a
spinning object with a sufficiently high rotation speed
\cite{Friedman1978ErgosphereI,Chirenti:2008pf,Pani:2008bzt}.
Intriguingly, unstable null geodesics have been revealed to be closely related
to a class of quasinormal modes of perturbations in the black hole spacetime
\cite{Ferrari:1984zz,Nollert:1999ji,Cardoso:2008bp,Yang:2012he,Konoplya:2017wot,Jusufi:2019ltj,Cuadros-Melgar:2020kqn,Qian:2021aju
. In \cite{Ferrari:1984zz}, null geodesics were first found to be connected
with quasinormal modes in Schwarzschild and slowly rotating Kerr black holes.
Using the WKB approximation in the eikonal limit, the authors of
\cite{Cardoso:2008bp} elaborated the connection in the static, spherically
symmetric and asymptotically flat black hole background. To be more specific,
it was verified that the real part of quasinormal modes is proportional to the
angular velocity of the corresponding unstable circular null geodesic, while
the imaginary part is determined by the Lyapunov exponent of the orbit.
Furthermore, the relation between null geodesics and quasinormal modes was
generalized to Kerr black holes of arbitrary spin in \cite{Yang:2012he}, which
showed an extra precession modification in the real part compared to
non-rotating black holes.
The No-hair theorem asserts that a black hole is uniquely determined by its
three parameters, i.e., mass, electric charge and angular momentum
\cite{Israel:1967wq,Carter:1971zc,Ruffini:1971bza}. However, hairy black holes
with extra freedom have been constructed in various models, which provide
counter-examples to the no-hair theorem
\cite{Volkov:1989fi,Bizon:1990sr,Greene:1992fw,Luckock:1986tr,Droz:1991cx,Kanti:1995vq,Mahapatra:2020wym
. Recently, a class of Einstein-Maxwell-scalar (EMS) models with a non-minimal
coupling between the scalar and electromagnetic fields have been extensively
studied in the literature
\cite{Herdeiro:2018wub,Myung:2018vug,Astefanesei:2019pfq,Fernandes:2019kmh,Fernandes:2019rez,Peng:2019cmm,Zou:2019bpt,Astefanesei:2020qxk,Blazquez-Salcedo:2020nhs,Fernandes:2020gay,Guo:2020zqm,Myung:2020dqt,Myung:2020ctt,Wang:2020ohb,Gan:2021pwu,Gan:2021xdl,Guo:2021zed,Guo:2021ere
. In the EMS models, the non-minimal coupling destabilizes scalar-free black
holes and induces the onset of spontaneous scalarization to form hairy black
holes with a scalar hair \cite{Herdeiro:2018wub}. In
\cite{Myung:2019oua,Zou:2020zxq,Myung:2020etf}, the stability of the hairy
black holes was analyzed by calculating their quasinormal modes of various
perturbations. Decaying quasinormal modes may suggest that the hairy black
holes are the endpoints of the dynamic evolution from unstable scalar-free
black hole solutions.
Surprisingly, it was shown that in a certain parameter regime of the hairy black
holes, there exist two unstable and one stable circular null geodesics on
the equatorial plane, which indicates three photon spheres of different sizes
outside the event horizon \cite{Gan:2021pwu,Gan:2021xdl}. Due to a double-peak
structure appearing in the potential of the photon radial motion, the
existence of two unstable photon spheres can remarkably affect the optical
appearance of black holes illuminated by the surrounding accretion disk, e.g.,
leading to bright rings of different radii in the black hole image
\cite{Gan:2021pwu} and significantly increasing the flux of the observed image
\cite{Gan:2021xdl}. The relation between null geodesics and quasinormal modes
has been rarely reported for black holes with more than one photon sphere.
Moreover, multiple photon spheres appearing in some spacetime signal the
existence of long-lived modes, which may render the spacetime unstable
\cite{Cardoso:2014sna,Keir:2014oka,Guo:2021bcw}. Therefore, it is of great
interest to study quasinormal modes of the hairy black holes endowed with
three photon spheres. Note that multiple photon spheres have recently been
reported in different black hole models
\cite{Liu:2019rib,Brihaye:2021mqk,Huang:2021qwe}.
In this paper, we use the WKB method to calculate quasinormal modes localized
at circular null geodesics of the hairy black holes with three photon spheres.
The rest of the paper is organized as follows. In Section
\ref{Circular Orbits in the EMS Model}, we study null circular geodesics of
hairy black holes in the EMS model, as well as the orbital stability by
evaluating the Lyapunov exponent. Subsequently, quasinormal modes trapped at
different circular null geodesics are obtained in Section
\ref{sec:QNMs at CRs}. We conclude our main results in Section \ref{Sec:Conc}.
The Appendix \ref{sec:appd} is devoted to derivations of some WKB formulas. We
set $16\pi G=1$ throughout this paper.
\section{Hairy Black Holes}
\label{Circular Orbits in the EMS Model}
In this section, we first briefly review spherically symmetric hairy black
hole solutions in the EMS model. Subsequently, we study circular geodesics for
photons around the hairy black holes and compute the corresponding Lyapunov exponents.
\subsection{Black Hole Solution}
In the EMS model, the action is given by
\begin{equation}
S=\int d^{4}x\sqrt{-g}\left[ R-2\partial_{\mu}\phi\partial^{\mu}
\phi-e^{\alpha\phi^{2}}F^{\mu\nu}F_{\mu\nu}\right] , \label{eq:Action}
\end{equation}
where the scalar field $\phi$ is minimally coupled to the metric field and
non-minimally coupled to the electromagnetic field $A_{\mu}$. Here, $F_{\mu
\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ is the electromagnetic field
strength tensor, and $e^{\alpha\phi^{2}}$ is the coupling function between
$\phi$ and $A_{\mu}$. Following \cite{Herdeiro:2018wub,Guo:2021zed}, we
restrict our attention to static, spherically symmetric and asymptotically
flat black hole solutions with the generic ansatz
\begin{align}
ds^{2} & =-N\left( r\right) e^{-2\delta\left( r\right) }dt^{2}+\frac
{1}{N\left( r\right) }dr^{2}+r^{2}\left( d\theta^{2}+\sin^{2}\theta
d\varphi^{2}\right) ,\nonumber\\
A_{\mu}dx^{\mu} & =V\left( r\right) dt\text{ and }\phi=\phi\left(
r\right) . \label{eq:ansatz}
\end{align}
The equations of motion are then given by
\begin{align}
N^{\prime}\left( r\right) & =\frac{1-N\left( r\right) }{r}-\frac{Q^{2}
}{r^{3}e^{\alpha\phi^{2}\left( r\right) }}-rN\left( r\right) \phi
^{\prime2}\left( r\right) ,\nonumber\\
\left[ r^{2}N\left( r\right) \phi^{\prime}\left( r\right) \right]
^{\prime} & =-\frac{\alpha\phi\left( r\right) Q^{2}}{e^{\alpha\phi
^{2}\left( r\right) }r^{2}}-r^{3}N\left( r\right) \phi^{\prime3}\left(
r\right) ,\nonumber\\
\delta^{\prime}\left( r\right) & =-r\phi^{\prime2}\left( r\right)
,\label{eq:NLEqs}\\
V^{\prime}\left( r\right) & =\frac{Q}{r^{2}e^{\alpha\phi^{2}\left(
r\right) }}e^{-\delta\left( r\right) },\nonumber
\end{align}
where primes denote derivatives with respect to $r$, and the integration
constant $Q$ is interpreted as the electric charge of the black hole solution.
To find black hole solutions from the non-linear ordinary differential
equations $\left( \ref{eq:NLEqs}\right) $, one needs to impose proper
boundary conditions at the event horizon $r_{h}$ and the spatial infinity,
\begin{align}
N(r_{h}) & =0\text{, }\delta(r_{h})=\delta_{0}\text{, }\phi(r_{h})=\phi
_{0}\text{, }V(r_{h})=0\text{,}\nonumber\\
N(\infty) & =1\text{, }\delta(\infty)=0\text{, }\phi(\infty)=0\text{,
}V(\infty)=\Phi\text{,}\label{eq:BC}
\end{align}
where $\Phi$ is the electrostatic potential. The two parameters $\delta_{0}$
and $\phi_{0}$ determine the asymptotic behavior of the solutions in the
vicinity of the horizon. Moreover, the black hole mass $M$, which is related
to the ADM mass, can be obtained via $M=\lim\limits_{r\rightarrow\infty
}r\left[ 1-N(r)\right] /2$. In this paper, we set $M=1$ and use a shooting
method to numerically solve eqn. $\left( \ref{eq:NLEqs}\right) $ for black
hole solutions matching the boundary conditions $\left( \ref{eq:BC}\right)
$. It is manifest that the scalar-free solutions with $\phi=0$ (i.e.,
Reissner-Nordstr\"{o}m black holes) can exist in the EMS model. Nevertheless,
we focus on hairy black holes with the non-trivial profile of the scalar field
$\phi$. For instance, we exhibit the profile of the metric functions for the
hairy black hole solution with $\alpha=0.9$ and $Q=1.066$ in the left panel of
FIG. \ref{Hbh-plot}.
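For concreteness, we sketch the shooting procedure below (a schematic illustration of ours; the function names, the fixed horizon radius, the outer radius and the root bracket are all hypothetical, and for simplicity the sketch shoots at fixed $r_{h}$ rather than fixed $M$). Near the horizon, regularity of eqn. $\left( \ref{eq:NLEqs}\right) $ fixes $N^{\prime}(r_{h})$ and $\phi^{\prime}(r_{h})$ in terms of $\phi_{0}$, and $\phi_{0}$ is tuned until $\phi(\infty)=0$; $\delta$ is integrated with $\delta(r_{h})=0$ and shifted afterwards so that $\delta(\infty)=0$, while $V(r)$ decouples and can be integrated separately.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

alpha, Q, rh, r_inf = 0.9, 1.066, 1.0, 1e4  # rh, r_inf illustrative only

def rhs(r, y):
    # First-order form of eqns. (NLEqs); y = (N, phi, psi, delta), psi = phi'
    N, phi, psi, delta = y
    eA = np.exp(alpha * phi**2)
    dN = (1 - N) / r - Q**2 / (r**3 * eA) - r * N * psi**2
    src = -alpha * phi * Q**2 / (eA * r**2) - r**3 * N * psi**3
    dpsi = (src - (2 * r * N + r**2 * dN) * psi) / (r**2 * N)
    return [dN, psi, dpsi, -r * psi**2]

def phi_at_infinity(phi0):
    # Launch slightly off the horizon with the regular series data
    eA = np.exp(alpha * phi0**2)
    dN_h = (1 - Q**2 / (rh**2 * eA)) / rh              # N'(rh)
    psi0 = -alpha * phi0 * Q**2 / (eA * rh**4 * dN_h)  # phi'(rh)
    eps = 1e-6
    y0 = [dN_h * eps, phi0 + psi0 * eps, psi0, 0.0]
    sol = solve_ivp(rhs, (rh + eps, r_inf), y0, rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]                                # phi at large r

phi0 = brentq(phi_at_infinity, 0.01, 1.0)  # hypothetical bracket; scan in practice
\end{verbatim}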
\subsection{Circular Null Geodesics}
Owing to strong gravity near a black hole, photons are forced to travel in
circular null geodesics on photon spheres, which play an important role in
determining properties of the black hole image seen by a distant observer
(e.g., the size of the black hole shadow). Here, circular null geodesics of
the spherically symmetric hairy black hole are studied. Without loss of
generality, we consider a photon moving on the equatorial plane with
$\theta=\pi/2$. To obtain equatorial geodesics, we start from the Lagrangian
\begin{equation}
\mathcal{L}=\frac{1}{2}\left( -N\left( r\right) e^{-2\delta\left(
r\right) }\dot{t}^{2}+\frac{1}{N\left( r\right) }\dot{r}^{2}+r^{2}
\dot{\varphi}^{2}\right) , \label{eq:nullLag}
\end{equation}
where dots denote derivatives with respect to the affine parameter $\tau$. The
generalized canonical momenta for this Lagrangian are defined as
\begin{align}
-p_{t} & =N\left( r\right) e^{-2\delta\left( r\right) }\dot
{t}=E,\nonumber\\
p_{\varphi} & =r^{2}\dot{\varphi}=L,\label{eq:CM}\\
p_{r} & =\frac{1}{N}\dot{r}.\nonumber
\end{align}
Note that the metric of the hairy black hole spacetime is independent of $t$
and $\varphi$. So the spacetime admits two Killing vectors, which are
associated with the conserved energy $E$ and momentum $L$, respectively, in
eqn. $\left( \ref{eq:CM}\right) $. Varying the Lagrangian $\left(
\ref{eq:nullLag}\right) $ with respect to $r$ yields the radial equation of
motion for the photon,
\begin{equation}
\frac{d}{d\tau}\frac{\partial\mathcal{L}}{\partial\dot{r}}=\frac
{\partial\mathcal{L}}{\partial r}. \label{eq:radial Eq}
\end{equation}
With the help of eqn. $\left( \ref{eq:CM}\right) $, eqn. $\left(
\ref{eq:radial Eq}\right) $ becomes
\begin{equation}
e^{-2\delta\left( r\right) }\dot{r}^{2}=E^{2}-\frac{e^{-2\delta\left(
r\right) }N\left( r\right) }{r^{2}}L^{2}, \label{eq:radial eq}
\end{equation}
which describes a null geodesic. For later use, one can introduce the
geometric potential as
\begin{equation}
V_{\text{geo}}\left( r\right) =\frac{e^{-2\delta\left( r\right) }N\left(
r\right) }{r^{2}}. \label{eq: Veff}
\end{equation}
Accordingly, a null circular geodesic at $r=r_{c}$ can appear, provided that
the conditions $V_{\text{geo}}\left( r_{c}\right) =E^{2}/L^{2}$ and
$V_{\text{geo}}^{\prime}\left( r_{c}\right) =0$ are satisfied.
\begin{figure}[ptb]
\begin{centering}
\includegraphics[scale=0.85]{Hbh}
\par\end{centering}
\caption{Metric functions and geometric potential for hairy black hole
solutions, where we take $\alpha=0.9$. \textbf{Left}: The metric functions
$N\left( r\right) $ (solid line), $\phi\left( r\right) $ (dashed line) and
$\delta\left( r\right) $ (dotted line) for the hairy black hole solution
with $Q=1.066$ are plotted outside the event horizon (vertical dashed line).
\textbf{Right}: The geometric potential for different hairy black holes with
$Q=1.066$ (red line), $Q=1.064$ (orange line) and $Q=1.059$ (blue line). For a
large value of charge (e.g., the red and orange lines), the geometric
potential possesses a double-peak structure with one local minimum and two
local maxima. By contrast, the double-peak structure disappears as the charge
decreases (e.g., the blue line), leaving only a single maximum of the
geometric potential.}
\label{Hbh-plot}
\end{figure}
In FIG. \ref{Hbh-plot}, we present the geometric potential for hairy black
holes with $\alpha=0.9$ for $Q=1.066,\text{ }1.064\text{ and }1.059$ in the
right panel. When $Q=1.059$ (the blue line), the geometric potential possesses
a single maximum outside the event horizon. Intriguingly for $Q=1.066$ and
$1.064$ (the red and orange lines), it displays a double-peak structure with
one minimum and two maxima, which implies that there exist three null circular
geodesics located at the extrema.
\subsection{Lyapunov Exponent}
The Lyapunov exponent is proposed to characterize the rate of separation of
adjacent trajectories in the phase space \cite{Cornish:2003ig,Cardoso:2008bp}.
In a dynamical system, the sign of the Lyapunov exponent can be used to
determine whether adjacent trajectories converge or not. Specifically,
positive Lyapunov exponents correspond to the divergent trajectories, while
negative ones to the convergent trajectories. Therefore, we can study the
stability of null circular orbits around a hairy black hole by evaluating
their Lyapunov exponents. Moreover, the Lyapunov exponent can be closely
related to quasinormal modes of black holes. Indeed, it was shown in
\cite{Cardoso:2008bp} that, if the geometric potential has a single maximum,
the imaginary part of quasinormal modes is determined by the Lyapunov exponent
of the unstable circular orbit at the maximum in the eikonal regime.
To describe circular orbits in spherically symmetric spacetime, we focus on a
two dimensional phase space spanned by $X_{i}\left( t\right) \equiv\left(
p_{r},r\right) $. The equations of motion in this phase space can be
schematically written as
\begin{equation}
\frac{dX_{i}}{dt}=H_{i}\left( X_{j}\right) . \label{eq:PhaseEOM}
\end{equation}
To obtain the Lyapunov exponent of a given orbit, we need to linearize eqn.
$\left( \ref{eq:PhaseEOM}\right) $ around the orbit,
\begin{equation}
\frac{d\delta X_{i}\left( t\right) }{dt}=K_{ij}\left( t\right) \delta
X_{j}\left( t\right) , \label{eq:Leq}
\end{equation}
where the linear stability matrix $K_{ij}\left( t\right) $ is
\begin{equation}
K_{ij}\left( t\right) =\left. \frac{\partial H_{i}}{\partial X_{j
}\right\vert _{X_{i}\left( t\right) }=\left(
\begin{array}
[c]{cc}
0 & d\left( \dot{t}^{-1}\dot{p_{r}}\right) /dr\\
\dot{t}^{-1}N & 0
\end{array}
\right) . \label{eq:SMat}
\end{equation}
The solution to the linearized equation $\left( \ref{eq:Leq}\right) $ can be
expressed as
\begin{equation}
\delta X_{i}\left( t\right) =L_{ij}\left( t\right) \delta X_{j}\left(
0\right) ,
\end{equation}
where the evolution matrix $L_{ij}\left( t\right) $ satisfies
\begin{equation}
\frac{dL_{ij}\left( t\right) }{dt}=K_{im}\left( t\right) L_{mj}\left(
t\right) ,
\end{equation}
and $L_{ij}\left( 0\right) =\delta_{ij}$. The principal Lyapunov exponent is
then extracted from the eigenvalues of $L_{ij}$, i.e.,
\begin{equation}
\lambda=\lim_{t\rightarrow\infty}\frac{1}{t}\log\frac{L_{jj}\left( t\right)
}{L_{jj}\left( 0\right) }.
\end{equation}
From eqn. $\left( \ref{eq:SMat}\right) $, the principal Lyapunov exponent
can be written as
\begin{equation}
\lambda=\pm\sqrt{\dot{t}^{-1}N\frac{d}{dr}\left( \dot{t}^{-1}\dot{p_{r}
}\right) },
\end{equation}
where we choose the $+$ sign for the Lyapunov exponent \cite{Cardoso:2008bp}.
Specifically, the Lyapunov exponent of a circular orbit at $r=r_{c}$ can be
expressed in terms of the geometric potential,
\begin{equation}
\left. \lambda\right\vert _{r=r_{c}}=\left. \sqrt{-\frac{L^{2}e^{2\delta}
}{2\dot{t}^{2}}V_{\text{geo}}^{\prime\prime}}\right\vert _{r=r_{c}}=\left.
\sqrt{-\frac{1}{2V_{\text{geo}}}\frac{d^{2}}{dx^{2}}V_{\text{geo}}}\right\vert
_{r=r_{c}}, \label{eq:Lya}
\end{equation}
where the tortoise coordinate $x$ is defined by $dx/dr\equiv e^{\delta\left(
r\right) }N^{-1}\left( r\right) $. For a circular null orbit located at a
local maximum of the geometric potential with $V_{\text{geo}}^{\prime\prime
}<0$, the Lyapunov exponent $\lambda$ is positive, which implies that the
orbit is unstable under small perturbations. On the other hand, a circular
orbit located at a local minimum has $V_{\text{geo}}^{\prime\prime}>0$ and
hence has a\ purely imaginary value of the Lyapunov exponent, which implies
that the orbit is stable. Therefore, a black hole with a double-peak geometric
potential (e.g., $Q=1.066$ and $1.064$ in FIG. \ref{Hbh-plot}) has two
unstable circular null geodesics at the local maxima and a stable one at the
local minimum. In addition, we introduce the angular velocity $\Omega$ of the
circular orbit at $r=r_{c}$ via
\begin{equation}
\left. \Omega\right\vert _{r=r_{c}}=\left. \frac{\dot{\varphi}}{\dot{t}
}\right\vert _{r=r_{c}}=\left. \sqrt{V_{\text{geo}}}\right\vert _{r=r_{c}},
\label{eq:angular velocity}
\end{equation}
which can be related to the real part of quasinormal modes
\cite{Cardoso:2008bp}.
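Given a numerical solution for $N(r)$ and $\delta(r)$, eqns. $\left( \ref{eq: Veff}\right) $, $\left( \ref{eq:Lya}\right) $ and $\left( \ref{eq:angular velocity}\right) $ are straightforward to evaluate. The sketch below is our own schematic illustration (the function name and inputs are hypothetical): it locates the extrema of $V_{\text{geo}}$ on a radial grid and returns $\Omega$ and $\lambda$ at each circular orbit, with the second derivative taken with respect to the tortoise coordinate.
\begin{verbatim}
import numpy as np

def circular_orbits(r, N, delta):
    # Schematic sketch: r is a radial grid outside the horizon;
    # N and delta are the numerical metric functions on that grid.
    V = np.exp(-2.0 * delta) * N / r**2       # geometric potential (Veff)
    dV_dr = np.gradient(V, r)
    # d/dx = N e^{-delta} d/dr converts radial into tortoise derivatives
    dV_dx = N * np.exp(-delta) * dV_dr
    d2V_dx2 = N * np.exp(-delta) * np.gradient(dV_dx, r)
    orbits = []
    for i in np.where(np.diff(np.sign(dV_dr)) != 0)[0]:  # extrema of V
        Omega = np.sqrt(V[i])                 # eqn. (angular velocity)
        lam = np.sqrt(-d2V_dx2[i] / (2.0 * V[i]) + 0j)   # eqn. (Lya)
        orbits.append((r[i], Omega, lam))     # imaginary lam => stable orbit
    return orbits
\end{verbatim}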
\section{Quasinormal Modes}
\label{sec:QNMs at CRs}
In this section, we consider the perturbation of the scalar field and compute
its quasinormal modes that are associated with the circular null geodesics in
the hairy black hole background $\left( \ref{eq:ansatz}\right) $. For a
scalar perturbation of mode
\begin{equation}
\delta\phi=e^{-i\omega t}\frac{\Psi_{\omega l}\left( r\right) Y_{lm}\left(
\theta,\phi\right) }{r},
\end{equation}
we can separate the angular variables and express the linearized equation for
$\Psi_{\omega l}\left( r\right) $ in the following general form,
\begin{equation}
\left( \frac{d^{2}}{dx^{2}}+\omega^{2}-V_{l}\left( r\right) \right)
\Psi_{\omega l}\left( r\right) =0, \label{eq: Pert eq}
\end{equation}
where the effective potential is
\begin{equation}
V_{l}\left( r\right) =\frac{e^{-2\delta\left( r\right) }N\left( r\right)
}{r^{2}}\left[ l\left( l+1\right) +1-N\left( r\right) -\frac{Q^{2}}
{r^{2}e^{\alpha\phi\left( r\right) ^{2}}}-\left( \alpha+2\alpha^{2}
\phi\left( r\right) ^{2}\right) \frac{Q^{2}}{r^{2}e^{\alpha\phi\left(
r\right) ^{2}}}\right] .
\end{equation}
In the eikonal regime $\left( l\gg1\right) $, the effective potential
reduces to
\begin{equation}
V_{l}\left( r\right) \simeq l^{2}\frac{e^{-2\delta\left( r\right)
}N\left( r\right) }{r^{2}}=l^{2}V_{\text{geo}}\left( r\right) ,
\label{eq:effgeoPotential}
\end{equation}
where $V_{\text{geo}}\left( r\right) $ is the aforementioned geometric
potential. It is worth emphasizing that the full perturbed fields around the
hairy black hole background were considered in \cite{Myung:2019oua}. In the
eikonal limit, the scalar perturbation decouples from other perturbations, and
the scalar perturbative equation then reduces to eqn. $\left(
\ref{eq: Pert eq}\right) $ with the effective potential $\left(
\ref{eq:effgeoPotential}\right) $.
\begin{figure}[ptb]
\begin{centering}
\includegraphics{Veffx}
\par\end{centering}
\caption{The effective potential $V_{l}$ as a function of the tortoise
coordinate $x$. Here we present a local region of the effective potential for
the hairy black hole with $\alpha=0.9$ and $Q=1.066$. Note that the effective
potential tends to vanish as $x\rightarrow\pm\infty$. The horizontal dashed
lines represent the real parts of quasinormal modes $\omega^{2}$ in different
cases, which intersect the effective potential at the turning points labelled
by $x_{1}$, $x_{2}$, $x_{3}$ and $x_{4}$. The black points denote the extrema
of $V_{l}$ at $x_{a}$, $x_{b}$ and $x_{c}$, respectively, which correspond to
the circular null geodesics in the eikonal limit. Around the extrema, the
effective potential can be approximated by a parabola between the blue points
$x_{-}$ and $x_{+}$.}
\label{Veffx}
\end{figure}
In FIG. \ref{Veffx}, we plot the effective potential $V_{l}$ with $l=3$ for
the hairy black hole solution with $\alpha=0.9$ and $Q=1.066$. Like the
geometric potential, $V_{l}$ displays a double-peak structure, i.e., two local
maxima and one minimum. The effective potential $V_{l}$ with $l>3$ is found to
have a similar profile. Since quasinormal modes are determined by the
effective potential $V_{l}$, multiple local extrema of $V_{l}$ could lead to
distinct types of quasinormal modes. In the rest of the section, we apply the
WKB approximation to studying quasinormal modes trapped at the extrema of the
effective potential with a double-peak structure. These modes can be
interpreted as photons moving in the circular null geodesics determined by
$V_{\text{geo}}\left( r\right) $, which leak out slowly. Since we are
interested in the large $l$ limit, the WKB approximation can be accurate
enough for computing quasinormal modes. For comparison, we perform a direct
integration to numerically solve eqn. $\left( \ref{eq: Pert eq}\right) $ to
obtain the exact quasinormal modes of interest for the hairy black hole with
$\alpha=0.9$ and $Q=1.066$. Note that quasinormal modes occur only when
appropriate boundary conditions are imposed, i.e., purely ingoing modes at
$x=-\infty$ (at the event horizon) and purely outgoing modes at $x=+\infty$
(at the spatial infinity). Without causing any ambiguity, we denote the
geometric potential, the effective potential and the metric functions with
respect to the tortoise coordinate $x$ by $V_{\text{geo}}\left( x\right) $,
$V_{l}\left( x\right) $, $N\left( x\right) $ and $\delta\left( x\right)
$, respectively.
\subsection{Modes at Global Maximum}
We first discuss the quasinormal modes of frequency $\omega_{a}$ trapped at
the global maximum of the effective potential at $x=x_{a}$. For such modes,
the $\left( \omega_{a}^{2}\right) ^{R}$ line lies close to the global
maximum and intersects the potential at the turning points $x_{1}$ and $x_{2}
$, around which the WKB approximation fails. There exists some small positive
constant $\delta$ such that the WKB approximation is valid for $x<x_{1
-\delta$ and $x>x_{2}+\delta$. Therefore, the solution of eqn. $\left(
\ref{eq: Pert eq}\right) $ can be approximated by the WKB expansions in
$\left( -\infty,x_{1}-\delta\right) $ and $\left( x_{2}+\delta
,+\infty\right) $, which match the ingoing and outgoing boundary conditions,
respectively. In the vicinity of $x=x_{a}$, the potential can be approximated
by a parabola. If $\left( \omega_{a}^{2}\right) ^{R}$ is close enough to the
global maximum, there exist $x_{-}$ and $x_{+}$ with $x_{-}<x_{1}-\delta
<x_{2}+\delta<x_{+}$, such that the effective potential is well
approximated by a parabola for $x_{-}<x<x_{+}$ (see FIG. \ref{Veffx}). In
$\left( x_{-},x_{+}\right) $, eqn. $\left( \ref{eq: Pert eq}\right) $ with
an approximated parabolic potential can be exactly solved in terms of
parabolic cylinder functions. Furthermore, a complete solution requires that
the two WKB expansions should be smoothly connected by the exact solution in
the overlapping regions $\left( x_{-},x_{1}-\delta\right) $ and $\left(
x_{2}+\delta,x_{+}\right) $. The matching procedure then gives
\cite{Schutz:1985km}
\begin{equation}
\frac{\omega_{a}^{2}-V_{l}\left( x_{a}\right) }{\sqrt{-2V_{l}^{(2)}\left(
x_{a}\right) }}=i\left( n+\frac{1}{2}\right) \text{ with }n=0,1,2\cdots,
\label{eq:QNMcond1}
\end{equation}
where the superscript of $V_{l}^{(2)}\left( x_{a}\right) $ denotes the
second derivative over the tortoise coordinate $x$.
The matching condition $\left( \ref{eq:QNMcond1}\right) $ leads to a set of
discrete quasinormal modes $\omega_{a}$, labelled by the integer $n$. In the
eikonal regime $\left( l\gg1\right) $, the quasinormal modes reduce to
\begin{equation}
\omega_{a}=l\sqrt{V_{\text{geo}}\left( x_{a}\right) }-i\left( n+\frac{1}
{2}\right) \sqrt{-\frac{V_{\text{geo}}^{(2)}\left( x_{a}\right)
}{2V_{\text{geo}}\left( x_{a}\right) }}, \label{eq:QNM1}
\end{equation}
where $x_{a}$ becomes the global maximum of the geometric potential in the
eikonal limit. Using eqns. $\left( \ref{eq:Lya}\right) $ and $\left(
\ref{eq:angular velocity}\right) $, one can express the quasinormal modes
$\left( \ref{eq:QNM1}\right) $ as
\begin{equation}
\omega_{a}=\Omega_{a}l-i\left( n+\frac{1}{2}\right) \lambda_{a},
\label{eq:gmQC}
\end{equation}
where $\Omega_{a}$ and $\lambda_{a}$ are the angular velocity and the Lyapunov
exponent of the unstable circular orbit at the global maximum of the geometric
potential, respectively. Interestingly, since the Lyapunov exponent of the
unstable circular orbit describes the instability timescale of the geodesic
motion, the Lyapunov exponent contributes to the imaginary part of the
quasinormal modes. Note that the case with a single maximum of the geometric
potential was found to have the same relation between quasinormal modes and
circular null geodesics as eqn. $\left( \ref{eq:gmQC}\right) $
\cite{Cardoso:2008bp}.
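As a rough numerical illustration of eqn. $\left( \ref{eq:gmQC}\right) $
(not part of the original computation), the sketch below reads off the
angular velocity and Lyapunov exponent at the global maximum of a geometric
potential sampled on a tortoise-coordinate grid; the Gaussian double-peak
profile is a hypothetical stand-in for $V_{\text{geo}}\left( x\right) $.
\begin{verbatim}
import numpy as np

x = np.linspace(-15.0, 15.0, 30001)
Vgeo = 1.2*np.exp(-(x - 2.0)**2) + 1.0*np.exp(-(x + 2.0)**2)

ia = np.argmax(Vgeo)                        # global maximum -> x_a
d2V = np.gradient(np.gradient(Vgeo, x), x)  # numerical second derivative
Omega_a = np.sqrt(Vgeo[ia])                 # angular velocity
lam_a = np.sqrt(-d2V[ia]/(2.0*Vgeo[ia]))    # Lyapunov exponent

l, n = 100, 0
omega_a = Omega_a*l - 1j*(n + 0.5)*lam_a
print("eikonal QNM estimate:", omega_a)
\end{verbatim}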
\subsection{Long-lived Modes at Minimum}
In this subsection, the quasinormal modes of low and high excitations trapped
at the minimum of the effective potential are derived via the WKB
approximation, respectively. We find that there exist long-lived quasinormal
modes, which are related to the stable circular null geodesic with a purely
imaginary Lyapunov exponent. It is noteworthy that the existence of long-lived
modes has been reported in the spacetime of ECOs with an unstable photon orbit
\cite{Cardoso:2014sna}.
\begin{figure}[ptb]
\begin{centering}
\includegraphics[scale=0.6]{LLmodes}
\par\end{centering}
\caption{ Real (\textbf{Left}) and imaginary (\textbf{Right}) parts of the
lowest-lying quasinormal modes ($n=0$) trapped at the stable circular null
geodesic. Blue dashed lines represent the exact modes obtained by a numerical
method of direct integration, while gray dots denote the WKB results. The real
part $\omega_{b}^{R}$ increases monotonically with $l$. For a large $l$, the
ratio between $\omega_{b}^{R}$ and $l$ is the angular velocity of the circular
geodesic. The magnitude of $\omega_{b}^{I}$ decreases exponentially as $l$
increases, which indicates a class of long-lived modes.
\label{Long-lived modes}
\end{figure}
We first discuss the low-lying quasinormal modes of frequency $\omega_{bl}$.
As illustrated in FIG. \ref{Veffx}, the $\left( \omega_{bl}^{2}\right) ^{R}$
line intersects the effective potential at the turning points $x_{1}$,
$x_{2}$, $x_{3}$ and $x_{4}$. Suppose the WKB approximation is valid at a distance
$\delta$ away from these turning points, where $\delta>0$ is a small constant.
If the $\left( \omega_{bl}^{2}\right) ^{R}$ line is close enough to the
local minimum at $x=x_{b}$, both $x_{2}-\delta$ and $x_{3}+\delta$ can lie in
the interval $\left( x_{-},x_{+}\right) $, in which the effective potential
is well approximated by a parabolic expansion. With the approximated parabolic
potential, the exact solution of eqn. $\left( \ref{eq: Pert eq}\right) $ can
be expressed in terms of parabolic cylinder functions for $x\in\left(
x_{-},x_{+}\right) $. In addition, the effective potential can be
approximated by a linear function near $x_{1}$ and $x_{4}$, which leads to the
exact solutions in the neighborhoods of $x_{1}$ and $x_{4}$. The WKB
expansions are required to match the exact solution in $\left( x_{-}
,x_{2}-\delta\right) $, $\left( x_{3}+\delta,x_{+}\right) $ and the
neighborhoods of $x_{1}$ and $x_{4}$. This matching strategy then gives a
family of quasinormal modes \cite{Karnakov2013},
\begin{align}
\left( \omega_{bl}^{2}\right) ^{R} & =V_{l}\left( x_{b}\right) +\left(
n+\frac{1}{2}\right) \sqrt{2V_{l}^{(2)}\left( x_{b}\right) },\nonumber\\
\left( \omega_{bl}^{2}\right) ^{I} & =-\frac{\gamma_{n}^{2}}{2\int_{x_{2}
}^{x_{3}}\frac{dx}{\sqrt{\left( \omega_{bl}^{R}\right) ^{2}-V_{l}\left(
x\right) }}}\left( e^{-2\int_{x_{1}}^{x_{2}}\sqrt{V_{l}\left( x\right)
-\left( \omega_{bl}^{R}\right) ^{2}}dx}+e^{-2\int_{x_{3}}^{x_{4}}\sqrt
{V_{l}\left( x\right) -\left( \omega_{bl}^{R}\right) ^{2}}dx}\right) ,
\label{eq:mode-b}
\end{align}
where
\begin{equation}
\gamma_{n}=\pi^{1/4}\frac{1}{2^{n/2}\sqrt{n!}}\left( \frac{2n+1}{e}\right)
^{\frac{1}{2}\left( n+\frac{1}{2}\right) }.
\end{equation}
Moreover, eqn. $\left( \ref{eq:mode-b}\right) $ indicates $\left(
\omega_{bl}^{2}\right) ^{R}-V_{l}\left( x_{b}\right) $ $\propto n+\frac
{1}{2}$, which means that a large $n$ can drive the $\left( \omega_{bl}
^{2}\right) ^{R}$ line away from the local minimum, making $x_{2}\in\left(
x_{-},x_{+}\right) $ and $x_{3}\in\left( x_{-},x_{+}\right) $ impossible.
Therefore, eqn. $\left( \ref{eq:mode-b}\right) $ is only applicable to the
low-lying modes with small $n$. In FIG. \ref{Long-lived modes}, eqn. $\left(
\ref{eq:mode-b}\right) $ is used to evaluate the lowest-lying modes $\left(
n=0\right) $ of the hairy black hole with $\alpha=0.9$ and $Q=1.066$, which
are denoted by gray dots. Additionally, a direct integration numerically
solves eqn. $\left( \ref{eq: Pert eq}\right) $ for exact modes, which are
represented by blue dashed lines. We plot the real part $\omega_{bl}^{R}$
against $l$ in the left panel, which shows that the WKB results match well
with the numerical ones. As for the imaginary part $\omega_{bl}^{I}$ in the
right panel, the WKB results differ slightly from the numerical ones for
small $l$. Nevertheless, the WKB results tend to approach the numerical ones
as $l$ increases, which demonstrates that the WKB approximation is accurate
enough when $l$ is large enough.
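A minimal sketch of how eqn. $\left( \ref{eq:mode-b}\right) $ can be
evaluated numerically is given below, again with a hypothetical double-peak
potential in place of $V_{l}\left( x\right) $; the turning-point
neighborhoods are trimmed slightly to keep the quadrature finite.
\begin{verbatim}
import numpy as np
from math import factorial

x = np.linspace(-15.0, 15.0, 200001)
Vl = 1.2*np.exp(-(x - 2.0)**2) + 1.0*np.exp(-(x + 2.0)**2)

# the minimum x_b between the two maxima
i_left = np.argmax(np.where(x < 0, Vl, -np.inf))
i_right = np.argmax(np.where(x > 0, Vl, -np.inf))
i_b = i_left + np.argmin(Vl[i_left:i_right])

n = 0
d2V = np.gradient(np.gradient(Vl, x), x)
w2R = Vl[i_b] + (n + 0.5)*np.sqrt(2.0*d2V[i_b])    # (omega^2)^R

# turning points x1 < x2 < x3 < x4
idx = np.where(np.diff((Vl > w2R).astype(int)) != 0)[0]
x1, x2, x3, x4 = x[idx[:4]]

def integral(f, a, b, eps=1e-2):
    m = (x >= a + eps) & (x <= b - eps)
    return np.trapz(f(Vl[m]), x[m])

gamma_n = (np.pi**0.25/(2**(n/2)*np.sqrt(factorial(n)))
           *((2*n + 1)/np.e)**(0.5*(n + 0.5)))
period = integral(lambda V: 1.0/np.sqrt(w2R - V), x2, x3)
S12 = integral(lambda V: np.sqrt(V - w2R), x1, x2)
S34 = integral(lambda V: np.sqrt(V - w2R), x3, x4)
w2I = -(gamma_n**2/(2.0*period))*(np.exp(-2*S12) + np.exp(-2*S34))
print("long-lived mode estimate:", np.sqrt(w2R + 1j*w2I))
\end{verbatim}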
In the large $l$ limit, the real part $\omega_{bl}^{R}$ can be approximated
by,
\begin{equation}
\omega_{bl}^{R}\sim\Omega_{b}l-i\left( n+\frac{1}{2}\right) \lambda_{b}
\sim\Omega_{b}l,\qquad l\gg1,
\end{equation}
where $\Omega_{b}$ is the angular velocity of the stable circular orbit at
$x=x_{b}$, and $\lambda_{b}$ is the corresponding Lyapunov exponent. Unlike
the global maximum case, the Lyapunov exponent $\lambda_{b}$ is purely
imaginary for the stable circular null geodesic, and hence contributes to the
real part of quasinormal modes. Nevertheless, the correction to $\omega
_{bl}^{R}$ due to $\lambda_{b}$ is negligible for a small value of $n$ in the
large $l$ limit. Hence, the real part of the low-lying quasinormal modes
trapped at the stable circular orbit is proportional to the corresponding
angular velocity. Schematically in the eikonal limit, the imaginary part
$\omega_{bl}^{I}$ can be expressed as
\begin{equation}
\omega_{bl}^{I}\sim-d\left( e^{-c_{1}l}+e^{-c_{2}l}\right) ,\qquad l\gg1,
\label{eq:reduced I of modes b}
\end{equation}
where $d$, $c_{1}$ and $c_{2}$ are positive constants. It is observed that
$\omega_{bl}^{I}$ decays exponentially with respect to $l$ due to the double
potential barriers. In fact, $\omega_{bl}^{I}$ is related to the flux density
of leaking modes outside the double potential barriers. An exponentially small
value of $\omega_{bl}^{I}$ indicates that the double potential barriers trap
these modes in the potential valley with an exponentially large damping time.
The quasinormal modes living in the vicinity of the stable null geodesic are
thus dubbed the long-lived modes. Since the long-lived modes can accumulate
around the stable null geodesic, their backreaction onto spacetime may render
the hairy black holes with a double-peak structure unstable
\cite{Cardoso:2014sna,Keir:2014oka,Guo:2021bcw}.
To study quasinormal modes at high excitation $\left( n\gg1\right) $, we
consider the $\left( \omega_{bh}^{2}\right) ^{R}$ line at some distance away
from the local minimum, which is illustrated in FIG. \ref{Veffx}. In the
vicinity of each turning point, the effective potential can be approximated by
a linear function. Then in the neighborhoods of the turning points, eqn.
$\left( \ref{eq: Pert eq}\right) $ can be exactly solved in terms of Airy
functions. Away from the turning points, WKB solutions provide a good
approximation. To obtain a complete solution, the WKB solutions should be
smoothly glued up by the exact solutions near the turning points, which leads
to the generalized Born-Sommerfeld quantization rule
\cite{Bender1999,Karnakov2013},
\begin{equation}
\int_{x_{2}}^{x_{3}}\sqrt{\omega_{bh}^{2}-V_{l}\left( x\right) }dx-\frac
{i}{4}\left( e^{-2\int_{x_{1}}^{x_{2}}\sqrt{V_{l}\left( x\right)
-\omega_{bh}^{2}}dx}+e^{-2\int_{x_{3}}^{x_{4}}\sqrt{V_{l}\left( x\right)
-\omega_{bh}^{2}}dx}\right) =\pi\left( n+\frac{1}{2}\right) .
\label{eq:BSq}
\end{equation}
In the large $l$ limit, we extract the real part of quasinormal modes from the
quantization rule $\left( \ref{eq:BSq}\right) $, which reads
\begin{equation}
\int_{x_{2}}^{x_{3}}\sqrt{\left( \omega_{bh}^{R}\right) ^{2}-V_{l}\left(
x\right) }dx=\pi\left( n+\frac{1}{2}\right) . \label{eq:real eq}
\end{equation}
After the real part $\omega_{bh}^{R}$ is obtained, the imaginary part
$\omega_{bh}^{I}$ is then given by
\begin{equation}
\omega_{bh}^{I}=-\frac{1}{4\omega_{bh}^{R}\int_{x_{2}}^{x_{3}}\frac
{\sigma\left( x\right) }{\sqrt{\left( \omega_{bh}^{R}\right) ^{2}
-V_{l}\left( x\right) }}dx}\left( e^{-2\int_{x_{1}}^{x_{2}}\sqrt
{V_{l}\left( x\right) -\left( \omega_{bh}^{R}\right) ^{2}}dx}
+e^{-2\int_{x_{3}}^{x_{4}}\sqrt{V_{l}\left( x\right) -\left( \omega
_{bh}^{R}\right) ^{2}}dx}\right) , \label{eq:imag eq}
\end{equation}
where
\begin{equation}
\sigma\left( x\right) =2\cos^{2}\left( -\frac{\pi}{4}+\int_{x_{2}}^{x}
\sqrt{\left( \omega_{bh}^{R}\right) ^{2}-V_{l}\left( x\right) }dx\right)
.
\end{equation}
Since the $\left( \omega_{bh}^{2}\right) ^{R}$ line is not close to the
local minimum, the left-hand side of eqn. $\left( \ref{eq:real eq}\right) $
becomes large in the eikonal limit, leading to a large $n$. Thus, eqns.
$\left( \ref{eq:real eq}\right) $ and $\left( \ref{eq:imag eq}\right) $
describe the quasinormal modes at high excitation with $n\gg1$. In this case,
$\sigma\left( x\right) $ oscillates dramatically between $x_{2}$ and
$x_{3}$, and hence one has $\sigma\left( x\right) \approx1$ under the integration
of eqn. $\left( \ref{eq:imag eq}\right) $. Consequently, the imaginary part
of high excitation modes $\left( \ref{eq:imag eq}\right) $ can also be
schematically written as eqn. $\left( \ref{eq:reduced I of modes b}\right) $
in the eikonal limit. So for $l\gg1$, the quasinormal modes at high excitation
are also long-lived modes.
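The following sketch illustrates how eqns. $\left( \ref{eq:real eq}\right)
$ and $\left( \ref{eq:imag eq}\right) $ with $\sigma\left( x\right)
\approx1$ can be solved in practice, using bisection on the quantization
integral; the potential is again a hypothetical stand-in, here scaled by
$l^{2}$ so that the valley supports highly excited levels.
\begin{verbatim}
import numpy as np

l = 10.0
x = np.linspace(-15.0, 15.0, 200001)
Vl = l**2*(1.2*np.exp(-(x - 2.0)**2) + 1.0*np.exp(-(x + 2.0)**2))

def action(w2):
    # integral of sqrt(w2 - V) over the valley between the two peaks
    m = (np.abs(x) < 2.0) & (Vl < w2)
    return np.trapz(np.sqrt(w2 - Vl[m]), x[m])

def solve_w2(n, lo, hi, iters=60):      # bisection on eq. (real eq)
    target = np.pi*(n + 0.5)
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if action(mid) < target else (lo, mid)
    return 0.5*(lo + hi)

n = 5                                    # a highly excited level
w2R = solve_w2(n, Vl[np.abs(x) < 2.0].min(), 0.99*l**2)
wR = np.sqrt(w2R)

idx = np.where(np.diff((Vl > w2R).astype(int)) != 0)[0]
x1, x2, x3, x4 = x[idx[:4]]              # the four turning points

def S(a, b):                             # barrier penetration integral
    m = (x >= a) & (x <= b)
    return np.trapz(np.sqrt(np.clip(Vl[m] - w2R, 0.0, None)), x[m])

m23 = (x > x2 + 1e-2) & (x < x3 - 1e-2)  # trim integrable endpoints
T = np.trapz(1.0/np.sqrt(w2R - Vl[m23]), x[m23])
wI = -(np.exp(-2*S(x1, x2)) + np.exp(-2*S(x3, x4)))/(4.0*wR*T)
print("high-excitation mode:", wR + 1j*wI)
\end{verbatim}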
\subsection{Sub-long-lived Modes at Local Maximum}
\begin{figure}[ptb]
\begin{centering}
\includegraphics[scale=0.61]{LGmodes}
\par\end{centering}
\caption{Real part (\textbf{Left}), imaginary part (\textbf{Middle}) and
$\left\vert \nu+1/2\right\vert /l$ (\textbf{Right}) of the quasinormal modes,
which are trapped at the local maximum of the effective potential, for the
branches of $n=0$ (\textbf{Upper}) and $n=1$ (\textbf{Lower}). The left column
shows that the real parts of the WKB results (gray dots) agree well with those
of the exact modes (blue dashed lines). In the middle column, the imaginary parts
of WKB and exact results match well for small $l$, and start to deviate from
each other when $l$ becomes larger. As shown in the right column, the value of
$\left\vert \nu+1/2\right\vert /l$ grows as $l$ increases, which means that
the WKB approximation may not be accurate for a large $l$.
\label{LGmodes}
\end{figure}
Finally, the quasinormal modes of frequency $\omega_{c}$, which live near the
smaller local maximum (i.e., the local maximum smaller than the global
maximum), are investigated. As illustrated in FIG. \ref{Veffx}, we consider
the $\left( \omega_{c}^{2}\right) ^{R}$ line lying in the vicinity of the
local maximum at $x=x_{c}$, which is associated with the outer unstable
circular null geodesic. Following the analogous strategy as before, the
effective potential is approximated with a parabola in the interval $\left(
x_{-},x_{+}\right) $, and a linear function near the turning points $x_{1}$
and $x_{2}$. In addition, the turning points $x_{3}$ and $x_{4}$ are assumed
to lie in $\left( x_{-},x_{+}\right) $, which means that the WKB expansions
near $x_{3}$ and $x_{4}$ can match the exact solution with the parabolic
potential. Considering the boundary conditions and matching the WKB expansions
with the exact solutions near the turning points, we find the frequency of the
quasinormal modes is determined by (see Appendix \ref{sec:appd} for the
derivation)
\begin{equation}
\int_{x_{2}}^{x_{3}}\sqrt{\omega_{c}^{2}-V_{l}\left( x\right) }dx+\xi
+\frac{i}{4}e^{-2\int_{x_{1}}^{x_{2}}\sqrt{V_{l}\left( x\right) -\omega
_{c}^{2}}dx}=\left( n+\frac{1}{4}\right) \pi, \label{eq:QNMmatchingc}
\end{equation}
where $\xi$ is defined by
\begin{equation}
e^{-2i\xi}=e^{-i\pi\nu+\left( \nu+1/2\right) }\left( \nu+\frac{1}
{2}\right) ^{-\left( \nu+\frac{1}{2}\right) }\frac{\sqrt{2\pi}}
{\Gamma\left( -\nu\right) },\quad\nu+\frac{1}{2}=i\frac{\omega_{c}^{2}
-V_{l}\left( x_{c}\right) }{\sqrt{-2V_{l}^{(2)}\left( x_{c}\right) }}.
\label{eq:xi}
\end{equation}
Roughly speaking, the trapping in the potential valley and the tunneling
through the left potential barrier result in the first and last terms of eqn.
$\left( \ref{eq:QNMmatchingc}\right) $, respectively.
Since the distance between $x_{3}$ and $x_{4}$ is assumed to be small, the
value of $\nu+1/2$ should be tiny compared to $l$. In fact, the matching
condition $\left( \ref{eq:QNMmatchingc}\right) $ is valid to evaluate
quasinormal modes when the condition $\left\vert \nu+1/2\right\vert \ll l$ is
satisfied \cite{Karnakov2013}. Note that the valley of the effective potential
becomes deeper/shallower as $l$ increases/decreases. For a given $n$, a large
$l$ would drive the $\left( \omega_{c}^{2}\right) ^{R}$ line away from the
local maximum at $x=x_{c}$, thus making the first term in the left-hand side
of eqn. $\left( \ref{eq:QNMmatchingc}\right) $ small enough to satisfy eqn.
$\left( \ref{eq:QNMmatchingc}\right) $. However, if $\left( \omega_{c
^{2}\right) ^{R}$ is not close enough to the local maximum, the condition
$\left\vert \nu+1/2\right\vert \ll l$ can be violated, which indicates that
the WKB result $\left( \ref{eq:QNMmatchingc}\right) $ is not a good
approximation when $l$ is too large. In the case with large $l$, the effective
potential can be approximated by a linear function near the turning points
$x_{3}$ and $x_{4}$, which leads to the WKB results $\left( \ref{eq:real eq}
\right) $ and $\left( \ref{eq:imag eq}\right) $. For an even larger $l$,
the $\left( \omega_{c}^{2}\right) ^{R}$ line can lie close to the minimum of
the potential, which corresponds to the aforementioned long-lived modes at low
excitations. On the other hand, when $l$ decreases, the first term on the
left-hand side of eqn. $\left( \ref{eq:QNMmatchingc}\right) $ requires that
the $\left( \omega_{c}^{2}\right) ^{R}$ line move toward the local maximum
when $n$ is fixed. Interestingly, a too small $l$ can make $\left( \omega
_{c}^{2}\right) ^{R}$ exceed the local maximum, so that the turning points
$x_{3}$ and $x_{4}$ no longer exist. The WKB result $\left(
\ref{eq:QNMmatchingc}\right) $ may therefore cease to exist when $l$ is too small.
In FIG. \ref{LGmodes}, two branches of quasinormal modes are obtained using
eqns. $\left( \ref{eq:QNMmatchingc}\right) $ and $\left( \ref{eq:xi}
\right) $, i.e., $n=0$ in the upper row and $n=1$ in the lower row. Moreover,
we also plot the value of $\left\vert \nu+1/2\right\vert /l$ as a function of
$l$ for each branch of the quasinormal modes in the right column, which checks
the validity of the WKB approximation. Compared with exact modes (blue dashed
lines), the real part of the quasinormal modes (gray dots) is well
approximated by the WKB method. By contrast, the imaginary part of the WKB
result matches that of exact modes well except when $l$ is too large, for
which, as displayed in the right column, the condition $\left\vert
\nu+1/2\right\vert \ll l$ is not well satisfied. It also shows that there
exists a lowest $l$ for each branch (e.g., $l=6$ for $n=0$ and $l=32$ for
$n=1$), below which the WKB results do not exist. Note that the exact modes of
$n=0$ presented in FIGs. \ref{Long-lived modes} and \ref{LGmodes} are
precisely the same. As expected, the exact modes are accurately described by
the WKB results $\left( \ref{eq:QNMmatchingc}\right) $ and $\left(
\ref{eq:mode-b}\right) $ for small and large $l$, respectively.
\begin{table}[ptb]
\begin{centering}
\includegraphics[scale=0.48]{LPModes}
\par\end{centering}
\caption{Pairs of sub-long-lived modes for different values of $n$. These
quasinormal modes are evaluated near the local maximum of the effective
potential, corresponding to an unstable circular null geodesic. The exact
modes are numerically obtained using a direct integration, which can be
approximated well by the WKB method for small magnitude of $2\pi\left(
n-n^{\ast}\right) /I$.
\label{LPmodes}
\end{table}
To find the relation between the quasinormal modes and the parameters of
circular null geodesics, we focus on the limit $\left\vert \nu+1/2\right\vert
\ll1$. In this limit, the matching condition $\left( \ref{eq:QNMmatchingc}
\right) $ is further simplified (see Appendix \ref{sec:appd} for the
derivation),
\begin{align}
\left( \omega_{c}^{2}\right) ^{R} & \approx V_{l}\left( x_{c}\right)
+\frac{\sqrt{-2V_{l}^{(2)}(x_{c})}2\pi I}{I^{2}+\pi^{2}/4}\left( n-n^{\ast
}-\frac{e^{-2\int_{x_{1}}^{x_{2}}\sqrt{V_{l}\left( x\right) -\omega_{c}^{2}
}dx}}{8I}\right) ,\nonumber\\
\left( \omega_{c}^{2}\right) ^{I} & \approx-\frac{\log2}{2I}\sqrt
{-2V_{l}^{(2)}\left( x_{c}\right) }-\frac{\sqrt{-2V_{l}^{(2)}(x_{c})}\pi
^{2}}{I^{2}+\pi^{2}/4}\left( n-n^{\ast}-\frac{Ie^{-2\int_{x_{1}}^{x_{2}}
\sqrt{V_{l}\left( x\right) -\omega_{c}^{2}}dx}}{2\pi^{2}}\right) ,
\label{eq:Re Im-c}
\end{align}
where $I$ is
\begin{equation}
I=\log\left( \sqrt{-2V_{l}^{(2)}\left( x_{c}\right) }\left( x_{c}
-x_{2}\right) ^{2}\right) +2\int_{x_{2}}^{x_{c}}\left( \frac{\sqrt
{-2V_{l}^{(2)}\left( x_{c}\right) }}{2\sqrt{V_{l}\left( x_{c}\right)
-V_{l}\left( x\right) }}-\frac{1}{\left( x_{c}-x\right) }\right)
dx+\left( \gamma+\log2\pi\right) , \label{eq:I}
\end{equation}
with the Euler--Mascheroni constant $\gamma\approx0.5772$. Here, the number $n^{\ast}$ is defined as
\begin{equation}
n^{\ast}\equiv\frac{1}{\pi}\int_{x_{2}}^{x_{c}}\sqrt{V_{l}\left(
x_{c}\right) -V_{l}\left( x\right) }dx+\frac{\log2}{8I}-\frac{1}{2},
\end{equation}
which can be interpreted as the number of resonances filling the potential
well until $\left( \omega_{c}^{2}\right) ^{R}=V_{l}\left( x_{c}\right) $
in the eikonal limit. Moreover, the condition $|\nu+1/2|\ll1$ requires
\begin{equation}
\frac{2\pi|n-n^{\ast}|}{I}\ll1, \label{eq: delta n}
\end{equation}
which provides a constraint on $n$ and $l$. For quasinormal modes of given
$n$, there appears to exist a pair of adjacent integers $l$ that well satisfy
the constraint $\left( \ref{eq: delta n}\right) $. For $n\leq5$, TABLE.
\ref{LPmodes} displays the quasinormal modes with such adjacent $l$. In this
case, it shows that the exact (obtained by a numerical direct integration
method) and WKB (obtained from eqn. $\left( \ref{eq:Re Im-c}\right) $)
results agree well with each other.
When $n$ is large enough, the value of $l$ satisfying the constraint $\left(
\ref{eq: delta n}\right) $ can be arbitrarily large. In the large $l$ limit,
eqn. $\left( \ref{eq:Re Im-c}\right) $ reduces to
\begin{align}
\omega_{c}^{R} & \sim l\sqrt{V_{\text{geo}}\left( x_{c}\right) }
=\Omega_{c}l,\nonumber\\
\omega_{c}^{I} & \sim-\frac{\log2}{2\log l}\lambda_{c}, \label{eq:QNMc}
\end{align}
where we use $I\sim\log l$ for $l\gg1$. Here $\Omega_{c}$ is the angular
velocity of the unstable circular null geodesic at $x=x_{c}$, and $\lambda
_{c}$ is the corresponding Lyapunov exponent. For these quasinormal modes, the
turning points $x_{3}$ and $x_{4}$ are very close to $x_{c}$, and hence their
real part $\omega_{c}^{R}$ is proportional to the angular velocity $\Omega
_{c}$. Similar to the global maximum case, the Lyapunov exponent of the outer
unstable circular orbit contributes to the imaginary part of the quasinormal
modes. However, these quasinormal modes can be temporarily trapped in the
potential valley, which gives a logarithmically decaying factor $1/\log l$ in
their imaginary part. For this reason, these quasinormal modes are dubbed
sub-long-lived modes.
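Numerically, the scaling $\left( \ref{eq:QNMc}\right) $ is trivial to
tabulate; the snippet below does so for placeholder values of $\Omega_{c}$
and $\lambda_{c}$ (not those of the present hairy black hole), exhibiting
the slow logarithmic suppression of the damping.
\begin{verbatim}
import numpy as np

Omega_c, lam_c = 1.0, 0.5      # placeholder geodesic data
for l in (10, 100, 1000):
    omega_c = Omega_c*l - 1j*(np.log(2.0)/(2.0*np.log(l)))*lam_c
    print(l, omega_c)
\end{verbatim}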
\section{Conclusions}
\label{Sec:Conc}
In this paper, we studied quasinormal modes of a scalar field in hairy black
hole spacetime, where the scalar field is minimally coupled to the gravity
sector and non-minimally coupled to the electromagnetic field with an
exponential coupling function. Intriguingly, the hairy black holes have been
demonstrated to possess two unstable and one stable circular null geodesics on
the equatorial plane outside the event horizon, corresponding to two maxima
and one minimum of the geometric potential for null geodesic motion,
respectively. It was shown that, apart from a constant prefactor, the effective
potential governing quasinormal modes of the scalar perturbation can be well
approximated by the geometric potential in the eikonal regime. To explore the
relation between quasinormal modes and the parameters of the circular null
geodesics, we used the WKB method to compute quasinormal modes living near the
global maximum, the smaller local maximum and the minimum of the effective potential.
In the large $l$ limit, the real part of these quasinormal modes was shown to
be proportional to the angular velocity of the corresponding circular null
geodesics, which implies that the quasinormal modes can be related to
particles traveling along the circular null geodesics. However, since the
imaginary part $\omega_{I}$ describes the perturbation decay timescale,
$\omega_{I}$ was found to have distinct behaviors, depending on the location
of the quasinormal modes. For the quasinormal modes near the global maximum,
$\omega_{I}$ can be interpreted as the slow leakage of particles trapped at
the unstable circular null geodesic, and is related to the Lyapunov exponent,
which reflects the instability timescale of geodesic motion. On the other
hand, the effective potential valley between two maxima plays a key role in
determining the behavior of $\omega_{I}$ of quasinormal modes near the local
maximum and the minimum in the eikonal regime. When $l\gg1$, the depth of the
potential valley was found to be proportional to $l^{2}$. The quasinormal
modes living at the bottom of the potential valley (i.e., the minimum) are
metastable states with tunneling out through the high potential barriers,
which implies that $\omega_{I}$ decays exponentially with respect to $l$. There
appeared to be two contributions to $\omega_{I}$ of the quasinormal modes near
the local maximum, i.e., classical leaking out from the unstable circular null
geodesics, which makes $\omega_{I}$ proportional to the Lyapunov exponent, and
tunneling out through the global maximum barrier, which makes $\omega_{I}$
inversely proportional to $\log l$. Due to the exponential and logarithmic
suppressions in $\omega_{I}$, the quasinormal modes in the neighborhoods of
the minimum and local maximum can live for a long time, and hence were dubbed
long-lived and sub-long-lived modes, respectively.
The long-lived modes may accumulate along the stable circular orbit, and
eventually develop a non-linear instability. Moreover, for a spinning object,
the existence of long-lived modes may also trigger an ergoregion instability
at the linear level in the static limit \cite{Cardoso:2014sna}. These
instabilities imply that long-lived modes trapped at the stable circular orbit
could destabilize the background spacetime by their backreaction. In
future studies, it is of great interest to further address the instabilities
of long-lived and sub-long-lived modes in a hairy black hole, and explore the
end point of a hairy black hole possessing three circular null geodesics in a
dynamic evolution.
\begin{acknowledgments}
We are grateful to Yiqian Chen and Qingyu Gan for useful discussions and
valuable comments. This work is supported in part by NSFC (Grant No. 12105191,
11947225 and 11875196). Houwen Wu is supported by the International Visiting
Program for Excellent Young Scholars of Sichuan University.
\end{acknowledgments}
\section{Introduction}
\label{sect:intro}
\vspace{-1mm}
With the progress in recent years of quenched simulations of QCD,
deviations of the quenched hadron spectrum from experiment
are being uncovered.
For heavy quark systems precise calculations with NRQCD have shown
that the fine structure of quarkonium spectra can be reproduced only if sea
quark effects are taken into account \cite{NRQCD97}.
For the light hadron sector several reports have been made
that the strange quark mass cannot be set consistently from
pseudo scalar and vector meson channels in quenched QCD
\cite{UKQCDJ,LosAlamos96,UKQCDKphi,YoshieCCP,UKQCDkenway}.
Most recently results of an extensive quenched
simulation on the CP-PACS indicate that there is a systematic disagreement
in the spectrum of baryons \cite{YoshieCCP}.
Clearly the time has come to bolster efforts toward full QCD simulations.
We have recently started an attempt in this direction using the CP-PACS
computer \cite{cppacs}.
Full QCD simulations are, however, extremely computer time consuming
compared to those of quenched QCD.
Even with the TFLOPS-class computers that are
becoming available, high statistics studies, indispensable for reliable
results, will be difficult for lattice sizes exceeding $32^3\times 64$.
Since a physical lattice size of
$L\approx 2.5$-3.0fm is needed to avoid finite-size effects
\cite{FukugitaAoki,MILC_fullKS,GF11},
the smallest lattice spacing one can reasonably reach will be
$a^{-1}\approx 2$GeV. Hence lattice discretization errors have to
be controlled with simulations carried out
at an inverse lattice spacing smaller than this value.
This will be a difficult task with the standard plaquette
and Wilson quark actions
since discretization errors are of order 20-30\% even at
$a^{-1}\approx 2$GeV \cite{Sloan97}.
This leads us to consider improved actions for our simulation of full QCD.
Studies of improved actions have been widely pursued in the last few
years.
Detailed tests of improvement for hadron spectrum,
however, have been mostly carried out within
quenched QCD (see, {\it e.g.}, Refs.~\cite{cornell,scri,bock,ImQQcdMILC}),
and only a few studies are available
for the case of full QCD \cite{ImFQcdSCRI}.
In particular a systematic investigation
of how various terms added to the gauge and quark actions, taken separately,
affect light hadron observables has not been carried out in full QCD.
We have decided to undertake such a study as the first subject of our full
QCD program. In this article we report preliminary results of this
on-going attempt.
\section{Choice of action and simulation parameters}
\label{sect:parameters}
\vspace{-1mm}
\begin{table} [t]
\setlength{\tabcolsep}{0.3pc}
\caption{Simulation parameters of full QCD runs on a $12^3\times32$
lattice.
Numbers in parenthesis denote number of configurations stored for
static quark potential analyses.
}
\label{tab:parameters}
\vspace{-0mm}
\begin{center}
\begin{tabular}{clllll}
\hline
& $\beta$ & $K$ & $C_{SW}$ &
\#config & $m_\pi/m_\rho$ \\
\hline
{\sf P-W} & 4.8 & .1846 & & 222 (0) & .83 \\
& & .1874 & & 200 (0) & .77 \\
& & .1891 & & 200 (0) & .70 \\
& 5.0 & .1779 & & 300 (100) & .85 \\
& & .1798 & & 301 (101) & .79 \\
& & .1811 & & 301 (101) & .71 \\
\hline
{\sf R-W} & 1.9 & .1632 & & 200 (0) & .90 \\
& & .1688 & & 200 (0) & .80 \\
& & .1713 & & 200 (0) & .69 \\
& 2.0 & .1583 & & 300 (100) & .90 \\
& & .1623 & & 300 (100) & .83 \\
& & .1644 & & 300 (100) & .74 \\
\hline
{\sf P-C} & 5.2 & .139 & 1.69 & 208 (100) & .83 \\
& & .141 & 1.655 & 207 (100) & .79 \\
& & .142 & 1.64 & 200 (100) & .73 \\
& 5.25& .139 & 1.637 & 198 (0) & .83 \\
& & .141 & 1.61 & 194 (0) & .76 \\
\hline
{\sf R-C} & 2.0 & .1300 & 1.54 & 201 (191) & .90 \\
& & .1370 & 1.52 & 200 (190) & .79 \\
& & .1388 & 1.515 & 138 (138) & .71 \\
\hline
\end{tabular}
\end{center}
\vspace{-5mm}
\end{table}
Improving the standard plaquette action for gluons requires the addition of
Wilson loops of six links or more in length. The precise forms of the added
terms and their coefficients differ depending on the principle one follows
for improvement. In our study we choose an action given by
\begin{equation}
S_g^{R} = {\beta \over 6}\left(c_0 \sum W_{1\times 1}
+ c_1 \sum W_{1\times 2}\right),
\label{eq:Raction}
\end{equation}
with $c_0=1-8c_1$ and $c_1=-0.331$, which was obtained
by a renormalization group treatment \cite{Iwasaki83}.
The quenched static quark potential calculated with this action exhibits
good rotational symmetry and scaling already at
$a^{-1}\approx 1$GeV\cite{IwasakiPot97},
similar to those observed for tadpole-improved and fixed
point actions \cite{TIPot,FPPot}.
For improving the quark action we take the clover improvement due to
Sheikholeslami and Wohlert \cite{clover} defined by
\begin{equation}
D_{xy}^{C} = D_{xy}^{W}
+ \, \delta_{xy} c_{SW} K \sum_{\mu,\nu}
\frac{i}{4} \sigma_{\mu,\nu} F_{\mu,\nu}
\label{eq:Caction}
\end{equation}
with the standard Wilson matrix $D_{xy}^{W}$ given by
\begin{eqnarray}
D_{xy}^{W} & = & \delta_{xy}
- K \sum_\mu \{(1-\gamma_\mu)U_{x,\mu} \delta_{x+\hat\mu,y} \nonumber \\
& & +\, (1+\gamma_\mu)U_{x,\mu}^{\dag} \delta_{x,y+\hat\mu} \},
\label{eq:Waction}
\end{eqnarray}
For the clover coefficient $c_{SW}$, we adopt the mean-field
improved value \cite{meanfield}:
$c_{SW} = P^{-3/4}$ with $P$ the plaquette average.
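As a trivial numerical illustration of this prescription (the plaquette
value below is a placeholder, not one of our measured averages):
\begin{verbatim}
P = 0.55                     # average plaquette from the Monte Carlo run
c_sw = P**(-0.75)            # mean-field improved clover coefficient
print(f"c_SW = {c_sw:.3f}")
\end{verbatim}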
\begin{figure*}[t]
\vspace{-1mm}
\begin{center}
\leavevmode
\epsfxsize=7.5cm \epsfbox{Figs/EPN.ps}
\hspace{3mm}
\epsfxsize=7.5cm \epsfbox{Figs/EPD.ps}
\end{center}
\vspace{-17mm}
\caption{
(a) $m_N/m_\rho$ and (b) $m_\Delta/m_\rho$ as a function of
$(m_\pi/m_\rho)^2$ for various combinations of the action.
Stars in (b) are experimental points corresponding to
$\Delta(1232)/\rho(770)$ and $\Omega(1672)/\phi(1020)$.
}
\label{fig:EPN}
\vspace{-4mm}
\end{figure*}
We carry out a comparative study of the light hadron spectrum and static quark
potential for four action combinations, choosing either the plaquette ({\sf P})
or the rectangular action above ({\sf R}) for gluons and either the Wilson ({\sf W})
or clover action ({\sf C}) for quarks.
We expect the extent of improvement to
be clearer at a coarser lattice spacing. We therefore attempt to
tune the coupling constant $\beta$ so that the inverse lattice spacing
is $a^{-1} \sim 1$GeV.
Our simulations are carried out for two flavors of quarks,
mostly on a $12^3\times 32$ lattice with
additional runs on an $8^3\times 16$ lattice to estimate parameters,
including a self-consistent value of $c_{SW}$.
We employ the hybrid
Monte Carlo algorithm to generate full QCD configurations at two or three
values of $K$ corresponding to $m_\pi/m_\rho \approx 0.7$-0.9.
The molecular dynamics step size is chosen
to yield an acceptance of 80-90\%.
After thermalizing for 100-200 trajectories we generate 1000-1500
trajectories.
We measure hadron propagators every 5 trajectories, using point and smeared
sources and point sinks following the method of our quenched
study \cite{YoshieCCP}.
The static quark potential is calculated on a subset of
configurations used for hadron propagator measurement.
The smearing technique of Ref.~\cite{StdST} is employed
choosing the number of smearing steps and fitting ranges from experience
in Ref.~\cite{IwasakiPot97}.
Errors are estimated by a single-elimination jackknife procedure.
Simulation parameters of our runs and
the number of configurations used for the spectrum and potential
measurements are summarized in Table~\ref{tab:parameters}.
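For completeness, a minimal Python sketch of the single-elimination
jackknife used for the error estimates is shown below; the
per-configuration mass values are synthetic stand-ins for the actual fit
results.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
masses = rng.normal(0.85, 0.02, size=200)   # synthetic per-block fits

N = len(masses)
jk = np.array([np.delete(masses, i).mean() for i in range(N)])
mean = jk.mean()
err = np.sqrt((N - 1)/N*np.sum((jk - mean)**2))
print(f"m a = {mean:.4f} +/- {err:.4f}")
\end{verbatim}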
\begin{figure*}[t]
\vspace{-1mm}
\centerline{
\epsfxsize=6.8cm \epsfbox{Figs/VvsR.PW.b50K1798.mrhophys}
\hspace{3mm}
\epsfxsize=6.8cm \epsfbox{Figs/VvsR.RW.b20K1623.mrhophys}
}
\vspace{-1mm}
\centerline{
\epsfxsize=6.8cm \epsfbox{Figs/VvsR.PC.b52K1410.mrhophys}
\hspace{3mm}
\epsfxsize=6.8cm \epsfbox{Figs/VvsR.RC.b20K1370.mrhophys}
}
\vspace{-12mm}
\caption{
Static quark potential for the {\sf P-W}, {\sf R-W}, {\sf P-C}, and {\sf R-C}\ actions
at $m_\pi/m_\rho \approx 0.8$. Scales are normalized by
the lattice spacing determined from $m_\rho$ in the
chiral limit.
}
\label{fig:pot}
\vspace{-4mm}
\end{figure*}
\section{Light hadron masses}
\label{sect:mass}
\vspace{-1mm}
Our main results for the effect of improved actions on hadron masses
are displayed in Fig.~\ref{fig:EPN} in which the ratios $m_N/m_\rho$
and $m_\Delta/m_\rho$ are plotted
as a function of $(m_\pi/m_\rho)^2$ for the four
action combinations.
The solid curves represent the well-known phenomenological mass
formula \cite{ONO}.
The inverse lattice spacing estimated from the $\rho$ meson mass in the
chiral limit is in the
range $a^{-1}\approx 1.1$-1.3GeV (see Table~\ref{tab:results}).
For the standard action combination {\sf P-W}, the ratios are well above the
phenomenological curve as may be expected at such a large lattice spacing.
When we improve the gauge action (the {\sf R-W}\ case),
the data points come closer to the curve.
By far the most conspicuous change, however, is observed
when we introduce the clover term to the quark action.
For both the {\sf P-C}\ and {\sf R-C}\ cases, the data points drop significantly
and lie on top of the phenomenological curve within errors.
The same trend is seen both for $m_N/m_\rho$ and $m_\Delta/m_\rho$.
\begin{table} [t]
\caption{Results of $a^{-1}$ and $J$ determined by
a linear fit of $m_\rho a$ in terms of $(m_\pi a)^2$
using data at $(m_\pi a)^2 \approx 0.3$-1.1
(see discussions in the text).
Errors are statistical only.
}
\label{tab:results}
\vspace{-0mm}
\begin{center}
\begin{tabular}{clll}
\hline
action & $\beta$ & $a^{-1}$[GeV] & $J$ \\
\hline
{\sf P-W}\ & 4.8 & 1.00(1) & 0.35(1) \\
& 5.0 & 1.08(2) & 0.35(1) \\
{\sf R-W}\ & 1.9 & 1.15(1) & 0.34(1) \\
& 2.0 & 1.29(1) & 0.34(1) \\
{\sf P-C}\ & 5.2 & 1.30(3) & 0.42(1) \\
& 5.25 & 1.48(6) & 0.41(2) \\
{\sf R-C}\ & 2.0 & 1.35(1) & 0.44(1) \\
\hline
\end{tabular}
\end{center}
\vspace{-5mm}
\end{table}
\begin{figure*}[t]
\vspace{-1mm}
\centerline{
\epsfxsize=6.8cm \epsfbox{Figs/a_vs_mpi2.PW12x32b50.eps}
\hspace{3mm}
\epsfxsize=6.8cm \epsfbox{Figs/a_vs_mpi2.RW12x32b20.eps}
}
\vspace{-1mm}
\centerline{
\epsfxsize=6.8cm \epsfbox{Figs/a_vs_mpi2.PC12x32b52.eps}
\hspace{3mm}
\epsfxsize=6.8cm \epsfbox{Figs/a_vs_mpi2.RC12x32b20.eps}
}
\vspace{-12mm}
\caption{
Lattice spacing $a$ in GeV$^{-1}$ as a function of $(m_\pi a)^2$,
for {\sf P-W}, {\sf R-W}, {\sf P-C}, and {\sf R-C}\ actions at
$m_\pi/m_\rho \approx 0.8$.
Note the difference in the scale of horizontal axis for {\sf R-C}.
}
\label{fig:a}
\vspace{-4mm}
\end{figure*}
A similar effect has been observed in quenched QCD
by the UKQCD Collaboration in simulations with the {\sf P-C}\ combination
at $\beta=5.7$-6.2 ($a^{-1}\approx 1.1$-2.5GeV) \cite{UKQCDKphi},
and also in a study with improved gluon actions and the clover and D234
quark actions at coarse lattice spacings of $a^{-1}\approx 0.5$-0.7GeV
\cite{bock}. It is therefore more natural to attribute the origin of
the effect to valence quarks rather than to dynamical sea quarks in full QCD.
Nonetheless, changing the gluon and quark actions one at a time, we have
been able to see clearly a decisive role played by the clover term in
improving the
spectrum in quantitative detail in the context of full QCD. In this
regard, improving the gluon action has much less effect.
In Table~\ref{tab:results} we compile our results for the
$J$ parameter, $J=m_V \, dm_V / dm_{PS}^2$
at $m_V/m_{PS}=1.8$ \cite{UKQCDJ}.
Here again the clover term brings the values
into better agreement with the experimental value of 0.48(2).
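Schematically, the extraction of $J$ from a linear fit of $m_\rho a$ in
$(m_\pi a)^2$ can be coded as follows; the three data points below are
placeholders, not our measured masses, and SciPy is assumed.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

mpi2 = np.array([0.60, 0.45, 0.33])      # (m_pi a)^2, placeholders
mrho = np.array([0.86, 0.80, 0.75])      # m_rho a,   placeholders
b, a0 = np.polyfit(mpi2, mrho, 1)        # m_V = a0 + b*m_PS^2

# locate m_PS^2 where m_V/m_PS = 1.8, then J = m_V * dm_V/dm_PS^2
f = lambda m2: (a0 + b*m2) - 1.8*np.sqrt(m2)
m2_star = brentq(f, 1e-4, 1.0)
J = (a0 + b*m2_star)*b
print(f"J = {J:.3f}")
\end{verbatim}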
Another interesting feature in our hadron mass data is that
they exhibit a negative curvature in terms of $1/K$ toward the chiral limit.
This is in contrast to quenched QCD where hadron masses (mass squared for
pseudo scalar mesons) are generally well described by a linear function
of $1/K$ up to quite heavy quark. The curvature is reduced
if hadron masses are plotted against $m_\pi^2$, but still remains at a
significant level, especially for the {\sf R-C}\ combination. This is possibly
a full QCD effect due to sea quarks, which increasingly order the gauge
configurations toward the chiral limit.
For results with clover quark actions the
trend may be enhanced by the dependence of $c_{SW}$ on $K$
due to tadpole improvement.
The curvature causes a practical difficulty in the chiral extrapolation of
hadron masses since our runs have so far been made with at most three
values of $K$. For the estimates of $a^{-1}$ in Table~\ref{tab:results}
we employed a linear fit of hadron masses in the measured values of
$m_\pi^2$, excluding for the {\sf R-C}\ case the point of heaviest quark mass.
In this connection we note that a comparison such as in
Fig.~\ref{fig:EPN} and for
$J$ in Table~\ref{tab:results} should be made
at the same lattice spacing in physical units.
As we see in Table~\ref{tab:results} our estimate of $a^{-1}$ shows a
spread of some 20-30\% depending on the actions. Additional runs are
being conducted in an effort to match the lattice spacing more precisely.
Another point to note is that an agreement with the phenomenological
formula of Ref.~\cite{ONO} examined in Fig.~\ref{fig:EPN} is not an
improvement criterion that follows
from theoretical principles, albeit a reasonable one in view of the
success of the formula for describing the experimental spectrum.
Stability of results toward smaller lattice spacing has to be
checked, which we hope to pursue in future simulations.
\section{Static quark potential}
\label{sect:potential}
\vspace{-1mm}
We plot typical results for the static quark potential in Fig.~\ref{fig:pot}
for which $m_\pi/m_\rho \approx 0.8$.
Conversion to physical units is made with the lattice scale
given in Table~\ref{tab:results}.
Different symbols correspond to potential data measured in different spatial
directions along the vector given in the figure.
For the {\sf P-W}\ action rotational symmetry is badly violated.
Improving the quark action ({\sf P-C}) we observe that the potential exhibits a
much better rotational symmetry.
At present how much of the improvement is due to the change of the quark
action is not clear since the lattice spacing for the {\sf P-C}\
case ($a^{-1}\approx 1.3$GeV) is about 20\% smaller than for the case of
{\sf P-W}\ ($a^{-1}\approx 1.1$GeV), from which one expects a 40\% reduction in the
violation of rotational symmetry.
The best improvement is achieved with the {\sf R-W}\ and
{\sf R-C}\ actions. It is known that the
gauge action {\sf R}\ significantly improves rotational
symmetry already for the quenched case \cite{IwasakiPot97}.
The improvement naturally carries over to the present case of full QCD.
With our present statistics the potential can be measured with small
errors up to a distance of $r \approx 6a \approx 1$fm. In this
distance range we do not observe flattening of the potential due to
pair creation and annihilation effects. In fact the potential can be
well described by a linear plus Coulomb form $\sigma r +\alpha/r$.
We extract the string tension $\sigma$ by fitting potential data to
this form.
We then use the phenomenological value
$\sigma=(440{\rm MeV})^2$ to convert results to an estimate of $a$
for each value of $K$.
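The following sketch illustrates this two-step procedure, a fit of
synthetic potential data to the linear plus Coulomb form followed by the
conversion $a^{-1}=0.440\,{\rm GeV}/\sqrt{\sigma a^2}$; SciPy is assumed.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

r = np.arange(1.0, 7.0)                  # r/a; data below are synthetic
rng = np.random.default_rng(1)
V = 0.60 + 0.16*r - 0.28/r + 0.003*rng.normal(size=r.size)

def vfit(r, v0, sigma, alpha):           # V(r) = v0 + sigma*r + alpha/r
    return v0 + sigma*r + alpha/r

(v0, sigma, alpha), _ = curve_fit(vfit, r, V, p0=(0.5, 0.1, -0.3))
a_inv = 0.440/np.sqrt(sigma)             # GeV, from sqrt(sigma) = 440 MeV
print(f"sigma*a^2 = {sigma:.3f},  a^-1 = {a_inv:.2f} GeV")
\end{verbatim}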
In Fig.~\ref{fig:a} we compare the value of $a$ obtained in this
way (filled circles) with that from $m_\rho=770$MeV,
also calculated for each value of $K$ (open circles).
For the latter quantity, results obtained after an extrapolation
of $m_\rho a$ to the chiral limit are also shown.
This extrapolation is linear for $a$ when we take
$(m_\pi a)^2$ as the horizontal axis, as adopted in Fig.~\ref{fig:a}.
We observe that the two estimates converge to a consistent value
in the chiral limit for the {\sf P-C}\ and {\sf R-C}\ combinations, while
apparent deviations of order 40\% and 20\% are indicated
for the cases of {\sf P-W}\ and {\sf R-W}, respectively.
Of course we expect the discrepancy observed for the last two cases
with the Wilson quark action to disappear in the continuum limit.
The agreement found for the {\sf P-C}\ and {\sf R-C}\ cases shows that the clover
term helps improve the consistency of the two determinations of the
physical scale of lattice spacing already at $a^{-1}\approx 1.3$GeV.
The static quark potential at short distances is directly
relevant to the spectrum
of heavy quark systems. A recent NRQCD calculation carried out
on gauge configurations generated with the Kogut-Susskind action for sea
quark and the plaquette gluon action at $\beta=5.6$ shows a significant
deviation between the scale determined from the heavy and light hadron
spectrum\cite{Sloan97}. It would be interesting to check
for our {\sf P-C}\ and {\sf R-C}\ cases if the lattice scale determined from heavy
quark systems yield results consistent with those from $\sigma$
and $m_\rho$ .
\section{Conclusions}
\vspace{-1mm}
Our comparative study has shown that the clover term of
Sheikholeslami and Wohlert
drastically improves light hadron spectrum already
at $a^{-1} \approx 1.1$-1.3GeV. While improving the gauge action
has much less effect in this regard, we expect good rotational
symmetry of static quark potential, which is achieved with the action {\sf R},
to be important for heavy quark systems.
Our results lead us to believe that a significant step forward towards
a realistic simulation of full lattice QCD, encompassing
both heavy and light hadrons, can be
achieved with the current generation of dedicated parallel computers with
the application of improved actions. We are particularly encouraged to pursue
the use of the combination {\sf R-C}\ toward this goal.
This work is supported in part by the Grant-in-Aid
of Ministry of Education, Science and Culture
(Nos.\ 08NP0101, 08640349, 08640350, 08640404, 08740189,
and 08740221). Three of us (GB, RB, and TK) are supported by
the Japan Society for the Promotion of Science.
\section{Introduction}
Recently, opposite deformations of the proton
and neutron densities in C isotopes were suggested theoretically
\cite{Enyo1,Enyo3} by the method of antisymmetrized molecular dynamics
(AMD): The proton density has an oblate deformation, while the neutron
density has a prolate deformation, and the symmetry axis of proton is
perpendicular to that of neutron in $^{10}$C and $^{16}$C.
Based on this picture,
the author also gave a qualitative explanation \cite{Enyo2}
for unusually small electric transition strength
$B(E2; 2_1^+ \to 0_1^+) = 0.63 \pm 0.19\ e^2 {\rm fm}^4$
in $^{16}$C, which is derived from life time measurement
\cite{Imai}.
According to Ref.\cite{Enyo2}, the 2$_1^+$ state is a rotational
excited state, and the rotational axis is perpendicular
to the neutron symmetry axis. In this excitation mechanism,
the proton transition strength is reduced due to the difference
of deformation between the proton and neutron distributions
mentioned above, and therefore, the $0_1^+ \to 2_1^+$ transition
is dominated by the neutron excitation.
In order to search for the possible difference of proton and neutron
contributions to excitation of the $2_1^+$ state in $^{16}$C,
an inelastic scattering experiment of $^{16}$C on a $^{208}$Pb target was
performed \cite{Elekes} applying the Coulomb-nuclear interference
method. The analysis was carried out by using
the deformed potential model, and the proton and neutron transition
matrix elements, $M_p$ and $M_n$, were extracted.
In Ref.\cite{Elekes}, it is mentioned that the experimental
transition probability is inconsistent with theoretical ones
(the AMD, extended AMD and shell-model calculations are cited).
However, the phenomenological analysis done in Ref.\cite{Elekes}
contains some assumptions, and hence,
it seems inappropriate to compare the $M_p$ and $M_n$ values
evaluated in Ref.\cite{Elekes} with those calculated theoretically.
To test the $^{16}$C internal wave function, we should directly
link the cross section with the wave function by calculating
the differential cross sections of the inelastic scattering of
$^{16}$C on a $^{208}$Pb target with the microscopic coupled-channels
(MCC) method, and compare the calculated result with the experimental
reaction data measured in Ref.\cite{Elekes}.
The MCC method has been applied for studying
reactions of stable nuclei, such as the $^{6,7}$Li and $^9$Be elastic
and inelastic scattering \cite{Sakuragi,Hirabay1},
the resonance reactions of the
$^{12}$C+$^{12}$C system leading to inelastic
\cite{Hirabay2,Ito} and $^8$Be+$^{16}$O
$\alpha$-transferred channels \cite{Takashina},
the rainbow scattering of $^{16}$O+$^{16}$O system \cite{Katsuma},
etc., adopting the microscopic cluster model wave functions of
$^{6,7}$Li \cite{Sakuragi}, $^9$Be \cite{Okabe1},
$^{12}$C \cite{Fukushima}, and $^{16}$O \cite{Okabe2}.
Because the microscopic cluster model wave functions reproduce well
the measured charge form factors not only of elastic
electron scattering but also of inelastic scattering, the wave functions
are reliable and have been adopted for studying nuclear reaction
mechanisms. The MCC calculations successfully reproduce
the experimental reaction data. The reliability of the method has
already been established. Hence, we think it is possible to
examine inversely the validity of calculated internal wave functions
by comparing the result of MCC with experimental reaction data.
In this paper, we adopt the AMD internal wave function of $^{16}$C.
The reason why we think it is worthy to test the validity of the AMD
wave function in the present study is as follows.
(1) Because no inert cores and no clusters are assumed,
the AMD wave function is flexible.
Therefore, AMD is suited for structure study of general unstable nuclei.
The applicability has been proved in many works \cite{Enyo3}.
(2) Deformations of proton and neutron densities are obtained dynamically.
In other words, electromagnetic transition probability can be calculated
without introducing effective charge.
(3) In AMD, it is easy to carry out the spin-parity projection, which is
necessary for microscopic calculation of transition density.
It should be noted that
any nuclear structure model is applicable to MCC, if the diagonal
density and transition ones to excited states can be calculated from
a model wave function that gives no spurious center-of-mass component.
In the next section, we briefly describe the MCC method.
More detailed description is made
in Refs.\cite{Katsuma,Ito}.
\section{Formalism}
\subsection{Coupled-channels formalism}
The coupled-channels equation describing the collision of two nuclei
for a total angular momentum of the system $J$ is written as
\begin{eqnarray}
\left [\ T_R
+ U^{(J)}_{\alpha L, \alpha L}(R) - E_\alpha
\right ] \chi_{\alpha L}^{(J)}(R) \hspace{2.5cm} \nonumber \\
= - \sum_{(\alpha' L') \neq (\alpha L)} U^{(J)}_{\alpha L, \alpha' L'}(R)
\ \chi_{\alpha' L'}^{(J)}(R),
\label{eq:cc}
\end{eqnarray}
where $T_R$ denotes the kinetic-energy operator, and $\alpha$ and $L$
denote a channel and the orbital angular momentum associated to
the relative coordinate ${\mathbf R}$.
In the present study, we take into account the elastic and
$^{16}$C excitation channels, while only the ground state ($0^+$)
is considered for the target $^{208}$Pb nucleus.
Thus, the channel $\alpha$ is designated by the spin $I$ and
the excitation energy $\epsilon_\alpha$ of $^{16}$C.
$E_\alpha$ represents the center-of-mass energy
for the projectile-target relative motion in channel $\alpha$
($E_\alpha=E_{\rm c.m.}-\epsilon_\alpha$).
$\chi_{\alpha' L'}^{(J)}(R)$ is the relative wave function and is
obtained by solving Eq.(\ref{eq:cc}) numerically.
In Eq.(\ref{eq:cc}), $U^{(J)}_{\alpha L, \alpha' L'}(R)$
represents the diagonal $(\alpha, L) = (\alpha', L')$ or the coupling
$(\alpha, L) \neq (\alpha', L')$ potential, which is composed of
the nuclear part $V^{N (J)}_{\alpha L, \alpha' L'}(R)$ and
the Coulomb part $V^{C (J)}_{\alpha L, \alpha' L'}(R)$.
The nuclear part is given by the double-folding model
and defined as
\begin{widetext}
\begin{eqnarray}
V^{N (J)}_{\alpha L, \alpha' L'}(R)
& = & \frac1{4\pi} \sum_\lambda \hat{L} \hat{L}'
\ i^{L'-L}\ (-1)^{J-I}\ W(ILI'L';J\lambda)\ (L0L'0|\lambda 0)
\nonumber \\
& \times &
\frac{\hat{I}}{\hat{\lambda}} \int d \hat{\mathbf R}\ d{\mathbf r}_1
d{\mathbf r}_2
\ \left [ v_{00}({\mathbf x})
\left ( \rho^{n(\lambda)}_{II'}(r_1)+\rho^{p(\lambda)}_{II'}(r_1) \right )
\left ( \rho^{n(0)}_{00}(r_2)+\rho^{p(0)}_{00}(r_2) \right ) \right .
\nonumber \\
& + & \left . v_{01}({\mathbf x})
\left ( \rho^{n(\lambda)}_{II'}(r_1)-\rho^{p(\lambda)}_{II'}(r_1) \right )
\left ( \rho^{n(0)}_{00}(r_2)-\rho^{p(0)}_{00}(r_2) \right ) \right ]
\left [\ Y_{\lambda}(\hat{\mathbf r}_1)
\ \otimes Y_{\lambda}(\hat{\mathbf R})
\ \right ]_{00},
\label{eq:fold}
\\[0.3cm]
&& ({\mathbf x}={\mathbf r}_1 + {\mathbf R} - {\mathbf r}_2) \nonumber
\end{eqnarray}
\end{widetext}
where $W(ILI'L';J\lambda)$ represents the ordinary Racah
coefficient and $\hat{I}$ is $\sqrt{2I+1}$.
$\rho^{p(\lambda)}_{II'}(r_1)$ and $\rho^{n(\lambda)}_{II'}(r_1)$
($\rho^{p(0)}_{00}(r_2)$ and $\rho^{n(0)}_{00}(r_2)$) are the
radial components of proton and neutron transition densities of $^{16}$C
($^{208}$Pb), respectively, which will be mentioned
in the next subsection in detail. $v_{00}({\mathbf x})$ represents
the spin- and isospin-scalar ($S=T=0$) component of an effective
nucleon-nucleon interaction, while $v_{01}({\mathbf x})$ represents
the spin-scalar, isospin-vector ($S=0$, $T=1$) component.
For this effective interaction,
we adopt the DDM3Y (density-dependent Michigan three-range Yukawa)
\cite{DDM3Y1,DDM3Y3}, which is defined by
\begin{equation}
v_{00(01)}(E, \rho; {\mathbf r}) = g_{00(01)}(E, {\mathbf r}) f(E, \rho),
\label{eq:DDM3Y}
\end{equation}
where ${\mathbf r}$ is the internucleon separation, and
$f(E, \rho)$ is a density dependent factor
\begin{equation}
f(E, \rho) = C(E)\ [ 1 + \alpha(E)\ e^{-\beta(E) \rho} ] .
\end{equation}
Here, $E$ denotes the incident energy per nucleon in the laboratory
system. The coefficients $C(E)$, $\alpha(E)$, and $\beta(E)$ in the
density-dependent factor $f(E, \rho)$ were determined at each energy
by fitting a volume integral of the $v_{00}(E, \rho; {\mathbf r})$
to the real part of the optical potential felt by a nucleon in
the nuclear matter \cite{JLM}.
$g_{00(01)}(E, {\mathbf r})$ in Eq.(\ref{eq:DDM3Y})
is the original M3Y interaction \cite{M3Y1,M3Y2}:
\begin{equation}
g_{00}(E, {\mathbf r}) = 7999 \frac{e^{-4r}}{4r} -
2134 \frac{e^{-2.5r}}{2.5r} + \hat{J}_{00}(E) \delta({\mathbf r})
\quad {\rm MeV},
\label{eq:M3Y}
\end{equation}
with
\begin{equation}
\hat{J}_{00}(E)\ =\ -\ 276\ (1-0.005 E)\quad {\rm MeV\ fm^3},
\end{equation}
and
\begin{equation}
g_{01}(E, {\mathbf r}) = -4886 \frac{e^{-4r}}{4r} +
1176 \frac{e^{-2.5r}}{2.5r} + \hat{J}_{01}(E) \delta({\mathbf r})
\quad {\rm MeV},
\label{eq:M3Y2}
\end{equation}
with
\begin{equation}
\hat{J}_{01}(E)\ =\ 228.4\ (1-0.005 E)\quad {\rm MeV\ fm^3}.
\end{equation}
The units for $E$ and $r$ are MeV/nucleon and fm, respectively.
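A minimal sketch of the direct ($S=T=0$) part of this interaction is given
below; the density-dependence parameters $C$, $\alpha$ and $\beta$ are
placeholders, since in the actual analysis they are fixed at each energy as
described above, and the zero-range knock-on term enters the folding
integral separately.
\begin{verbatim}
import numpy as np

def g00(E, r):
    # finite-range part of eq. (M3Y); the zero-range term
    # J00(E)*delta(r) must be added separately in the folding
    return 7999.0*np.exp(-4.0*r)/(4.0*r) \
         - 2134.0*np.exp(-2.5*r)/(2.5*r)

def J00hat(E):
    return -276.0*(1.0 - 0.005*E)        # MeV fm^3

def f(E, rho, C=0.3, alpha=3.0, beta=3.0):
    # density-dependent factor; C, alpha, beta are placeholders here
    return C*(1.0 + alpha*np.exp(-beta*rho))

def v00(E, rho, r):
    return g00(E, r)*f(E, rho)

print(v00(33.0, 0.17, np.array([0.5, 1.0, 2.0])), J00hat(33.0))
\end{verbatim}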
The Coulomb part $V^{C (J)}_{\alpha L, \alpha' L'}(R)$ is also
given by the double-folding model. The double-folded Coulomb potential
is written in the same form as Eq.(\ref{eq:fold}), by replacing
the neutron densities and the nucleon-nucleon
interaction as
\begin{eqnarray*}
\rho^{n(\lambda)}_{II'}(r_1),\ \rho^{n(0)}_{00}(r_2) \to 0, \\
v_{00}({\mathbf x}) \to \frac{e^2}x, \\
v_{01}({\mathbf x}) \to 0.
\end{eqnarray*}
Since DDM3Y has no imaginary part, we add the imaginary
potential $W^{N (J)}_{\alpha L, \alpha' L'}(R)$ to the nuclear part,
which is assumed as $W^{N (J)}_{\alpha L, \alpha' L'}(R) =
N_I \cdot V^{N (J)}_{\alpha L, \alpha' L'}(R)$,
where $N_I$ is the only phenomenological parameter of
the present MCC formalism.
The simple assumption for the imaginary part should be valid in
the present case, since we only discuss the cross sections at very
forward scattering angles, which are not sensitive to the detailed
shape of the potential over the whole radial range.
Hence, the interaction potential has the form as
\begin{equation}
U^{N (J)}_{\alpha L, \alpha' L'}(R)
= (1+ i N_I) V^{N (J)}_{\alpha L, \alpha' L'}(R)
+V^{C (J)}_{\alpha L, \alpha' L'}(R).
\end{equation}
\subsection{Transition density \label{ssec:trd}}
\begin{table}
\caption{ $B(E2)$, $M^{(2)}_p$ and $M^{(2)}_n$ values of $^{16}$C
calculated
by AMD, in which the strength of the spin-orbit force is set to
(i) $u_{ls}$=900 MeV, (ii) $u_{ls}$=1500 MeV,
and (iii) $u_{ls}$=2000 MeV.
The experimental value of $B(E2)$ is taken from Ref.\cite{Imai}.
}
\label{tab:be2}
\begin{ruledtabular}
\begin{tabular}{c c c c c}
& (i) & (ii) & (iii) & exp. \\ \hline
$B(E2; 2_1^+ \to 0_1^+) (e^2 {\rm fm}^4) $
& 1.9 & 1.4 & 0.9 & 0.63 $\pm 0.19$ \\
$M_p^{(2)}(2_1^+ \to 0_1^+)$ (fm$^2$) & 3.1 & 2.6 & 2.2 & - \\
$M_n^{(2)}(2_1^+ \to 0_1^+)$ (fm$^2$) & 13.0 & 12.2 & 8.9 & - \\
\end{tabular}
\end{ruledtabular}
\end{table}
The diagonal or transition density of the proton at a position
${\mathbf r}$ with respect to the center-of-mass of the nucleus can
be expanded into multipole components:
\begin{eqnarray}
\rho^p_{I \nu, I' \nu'} ({\mathbf r}) =
\langle\ \psi^p_{I \nu}(\xi)\ | \sum_{i=1}^Z
\ \delta({\mathbf r} - {\mathbf r}_i)\ |\ \psi^p_{I' \nu'}(\xi)\ \rangle
\nonumber \\
= \sum_{\lambda, \mu}\ (I' \nu' \lambda \mu\ | I \nu)
\ \rho_{I I'}^{p(\lambda)}(r)
\ Y_{\lambda \mu}^* (\hat{\mathbf r}),
\label{eq:den}
\end{eqnarray}
where $\psi^p_{I \nu}(\xi)$ represents the proton wave function in
the nucleus.
$\rho_{I I'}^{p(\lambda)}(r)$ represents the radial component of
the transition density, which is used in Eq.(\ref{eq:fold}).
The radial component of the neutron transition density
$\rho_{I I'}^{n(\lambda)}(r)$ is obtained in the same manner
as the proton case in terms of the neutron wave function
$\psi^n_{I \nu}(\xi)$.
The proton or neutron matrix element of rank $\lambda$
is defined as
\begin{equation}
M_\tau^{(\lambda)}(I' \to I) = \hat{I}
\int \rho_{I I'}^{\tau(\lambda)}(r)\ r^{\lambda+2}\ dr,
\end{equation}
where $\tau$ represents $p$ or $n$. The proton matrix element is
related with the electric transition strength $B(E\lambda)$ as
\begin{equation}
B(E\lambda; I' \to I) = \frac{|M_p^{(\lambda)}(I' \to I)|^2}
{\hat{I}'^2}\ e^2.
\end{equation}
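As a small numerical illustration of these definitions (with a toy Gaussian
transition density in place of the AMD one), the matrix element and the
corresponding $B(E2)$ can be evaluated by straightforward quadrature:
\begin{verbatim}
import numpy as np

lam = 2
Ii, If = 2, 0                      # initial and final spins (2 -> 0)
r = np.linspace(0.0, 10.0, 2001)
rho = 0.02*r**2*np.exp(-(r/2.0)**2)  # placeholder transition density

M = np.sqrt(2*If + 1)*np.trapz(rho*r**(lam + 2), r)  # Ihat * integral
BE2 = M**2/(2*Ii + 1)              # e^2 fm^4 if rho is the proton part
print(f"M = {M:.2f} fm^2,  B(E2) = {BE2:.2f} e^2 fm^4")
\end{verbatim}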
Here, we use the AMD wave function for $\psi^{p}_{I \nu}(\xi)$ and
$\psi^{n}_{I \nu}(\xi)$ to calculate the transition densities
defined in Eq.(\ref{eq:den}).
We consider the ground state (0$_1^+$) and first excited
2$_1^+$ state.
In Ref.\cite{Enyo2}, two versions of the $^{16}$C internal wave function
were obtained in the variation before projection (VBP) formalism
by changing the strength of the spin-orbit force, (i) $u_{ls}$=900 MeV
and (ii) $u_{ls}$=1500 MeV. The $B(E2)$ values as well as
the $M_p^{(2)}$ and $M_n^{(2)}$ ones obtained with AMD
wave functions (i) and (ii) are summarized in Table \ref{tab:be2}
with the experimental data of $B(E2)$.
With the spin-orbit forces (i) and (ii), the AMD calculation
reproduces well the systematic behavior of the $B(E2)$ value and
root-mean-square radius of the C isotopes as shown
in Ref.\cite{Enyo2}. In particular, the systematic feature
that the $B(E2)$ value of $^{16}$C is abnormally small compared
with other C isotopes ($^{10}$C, $^{12}$C and $^{14}$C)
is well reproduced by AMD with the spin-orbit
(i) and (ii), although the $B(E2)$ value of $^{16}$C is slightly
overestimated. In addition to the above two, we also use the AMD
wave function for which the strength of the spin-orbit force is set
to (iii) $u_{ls}$=2000 MeV so as to reduce the $B(E2)$ value and
to be close to the experimental value. The $B(E2)$ value as well
as the $M_p^{(2)}$ and $M_n^{(2)}$ ones of
the case (iii) is also shown in Table \ref{tab:be2}.
One might think that the case (iii) gives the best wave function.
However, increasing the strength of the spin-orbit force to
reduce the $B(E2)$ value of only $^{16}$C as done in the case
(iii) may lead to an unrealistic situation,
because the systematic behavior of the other C isotopes
is not reproduced with such strong spin-orbit force.
Hereafter, we refer to the three versions of the AMD wave function
as AMD(i), AMD(ii) and AMD(iii).
\begin{figure}
\begin{center}
\includegraphics[width=60mm,keepaspectratio]{c16_e_den.ps}
\caption{ The radial components of the diagonal and transition
densities obtained with AMD(i) for $^{16}$C.
(A) $\lambda$=0 component. The solid curves represent
the proton and neutron diagonal density of the ground state (0$_1^+$),
while the dashed curves represent those of the 2$_1^+$ state.
The proton density of the $2_1^+$ state is almost identical to that of
the $0_1^+$ state and the difference cannot be seen in the figure.
(B) $\lambda$=2 component. The dotted and solid curves represent
the proton and neutron transition densities from 2$_1^+$ to 0$_1^+$.
The double-dot-dashed and dashed curves represent the $\lambda$=2
components of the proton and neutron diagonal densities for
the 2$_1^+$ state. These two curves are displayed with opposite sign.
}
\label{fig:trd1}
\end{center}
\end{figure}
Figure \ref{fig:trd1} shows the radial components
of the diagonal and transition densities obtained
with AMD(i). In the upper panel (A), the $\lambda$=0 components
of the diagonal density are shown. The solid curves represent
the proton and neutron
densities of the ground state ($0_1^+$), and the dashed curves
represent those of the 2$_1^+$ state.
It is found that the shapes of the $2_1^+$ diagonal densities are
almost the same as those of $0_1^+$, except for the region around
the origin. The proton density of the $2_1^+$ state is almost identical
to that of the $0_1^+$ state and the difference cannot be seen
in the figure. In the lower panel (B), the $\lambda$=2
components of the diagonal and transition density are shown.
The dotted and solid curves represent the proton and neutron
transition densities, respectively, for the $2_1^+ \to 0_1^+$
transition. The transition is found to be dominated by the neutron
component especially in the surface region. The double-dot-dashed and
dashed curves are the $\lambda$=2 components of the 2$_1^+$ diagonal
density for the proton and neutron. These two curves are displayed
with opposite sign in the figure.
The shapes of the double-dot-dashed and dashed curves
are similar to the dotted and solid curves, respectively.
It should be noted that the proton part of $\lambda$=2 component of
the diagonal density is proportional to the electric quadrupole moment
of the 2$_1^+$ state.
We neglect the $\lambda$=4 component of the 2$_1^+$ diagonal density,
because this component is very small and is expected to have a small
contribution to the inelastic scattering.
\begin{figure}
\begin{center}
\includegraphics[width=60mm,keepaspectratio]{c16_efh_212.ps}
\caption{ Comparison of the proton (A) and neutron (B) transition
densities for $2_1^+ \to 0_1^+$. The solid, dotted and dashed curves
represent the transition densities of AMD(i),
AMD(ii) and AMD(iii), respectively.
The solid curves in (A) and (B) are the same as the dotted and
solid curves in Fig.\ref{fig:trd1}(B), respectively.
}
\label{fig:trd2}
\end{center}
\end{figure}
In Fig.\ref{fig:trd2}(A), the proton transition density of AMD(i)
(solid curve) is compared with those of AMD(ii) and AMD(iii),
which are represented by the dotted and dashed curves, respectively.
The transition densities of AMD(ii) and AMD(iii) are smaller than that of
AMD(i), and the transition density of AMD(iii) is slightly
shifted to the small-$r$ side. The difference in the proton transition
density causes the difference of the electric transition strength,
as shown in Table \ref{tab:be2}.
In Fig.\ref{fig:trd2}(B), the neutron transition densities
of AMD(i), (ii) and (iii) are shown by the solid, dotted and dashed
curves, respectively. Although the overall
shapes of transition densities of (i) and (ii) are almost the same,
the magnitudes are found to be slightly different.
The magnitude of the transition density of AMD(iii) is suppressed
compared with those of (i) and (ii), especially in the inner region.
The proton density distribution of the $^{208}$Pb nucleus is
obtained by unfolding the charge density \cite{pb:cff},
which was obtained by a model-independent analysis of
an electron-scattering experiment, with the realistic proton
charge form factor \cite{p:cff}. The neutron density distribution
is obtained by assuming that the shape of
the neutron density is the same as that of the proton one, namely
$\rho_n({\mathbf r}) = (N/Z) \rho_p({\mathbf r})$.
This assumption is known to be valid for stable nuclei.
In the next section, we show the results of the MCC calculation
using the AMD transition densities described above.
\section{Results}
\begin{figure}
\begin{center}
\includegraphics[width=60mm,keepaspectratio]{fp5efb1r_p+n.ps}
\caption{
The diagonal (A) and coupling (B) potentials of
the $^{16}$C+$^{208}$Pb system calculated with AMD(i) are represented
by the solid curves. The dotted and dashed curves are
the contributions of the proton and neutron components of $^{16}$C,
respectively, where both the proton and neutron components
of $^{208}$Pb(0$_1^+$) are included.
}
\label{fig:foldpn}
\end{center}
\end{figure}
In Fig.\ref{fig:foldpn}, we show the double-folding model
potential using the densities obtained by AMD(i)
described in subsection \ref{ssec:trd}.
The solid curve represents the nuclear potential Eq.(\ref{eq:fold}).
The dotted and dashed curves are the
contributions of proton and neutron components of $^{16}$C, respectively,
where both the proton and neutron components of
$^{208}$Pb(0$_1^+$) are included. In Fig.\ref{fig:foldpn}(A),
the diagonal potential
of $^{16}$C(0$_1^+$) + $^{208}$Pb elastic channel is shown.
Since the diagonal density of the 2$_1^+$ state in $^{16}$C closely
resembles that of the 0$_1^+$ state as shown in Fig.\ref{fig:trd1}(A),
the diagonal potential of $^{16}$C(2$_1^+$) + $^{208}$Pb is almost
the same as that of $^{16}$C(0$_1^+$) + $^{208}$Pb, and therefore,
not shown here. In Fig.\ref{fig:foldpn}(B), the coupling potential
of $^{16}$C(0$_1^+ \to 2_1^+$)+$^{208}$Pb is shown.
It is found that the neutron component has a dominant contribution
to the total potential, especially in the vicinity of the strong absorption
radius $r_{\rm SA} \sim 11$ fm.
\begin{figure}
\begin{center}
\includegraphics[width=60mm,keepaspectratio]{fp5efb1r_ispd.ps}
\caption{The isoscalar (dotted) and isovector (dashed) components
of the diagonal (A) and coupling (B) potentials.
The solid curves are the same as those in Fig.\ref{fig:foldpn}.
}
\label{fig:foldisv}
\end{center}
\end{figure}
In order to see the effect of the isovector component,
we also decompose the diagonal and coupling potentials
into the isoscalar and
isovector components. The result is shown in Fig.\ref{fig:foldisv}.
The solid curves are the same as the solid ones in Fig.\ref{fig:foldpn},
and the dotted and dashed curves are the isoscalar (IS) and
isovector (IV) components, respectively.
Compared with the isoscalar component, the magnitude of the
isovector component is about 5$\%$, and its sign is opposite.
This 5$\%$ reduction of the coupling potential leads to an
approximately 10$\%$ reduction of the inelastic scattering cross
sections, which is not negligible in the present case. Therefore,
we include the isovector component throughout the present
calculations.
\begin{figure}
\begin{center}
\includegraphics[width=60mm,keepaspectratio]{xsc5_efb1r.ps}
\caption{Angular distribution of $^{16}$C($0_1^+ \to 2_1^+$) inelastic
scattering on $^{208}$Pb target at E/A=52.7 MeV. The crosses are the
experimental data and the solid curve is the result of our calculation.
The dotted and dashed curves are the nuclear and Coulomb components,
respectively.
}
\label{fig:xsc}
\end{center}
\end{figure}
We perform a coupled-channels calculation using the diagonal
and coupling potentials shown in Fig.\ref{fig:foldpn}.
Because of the high incident energy, the coupled-channels
equations are solved numerically with the relativistic kinematics,
which has a non-negligible effect on the forward-angle cross sections.
The parameter for the imaginary potential $N_I$ is set to 1.2.
Following the procedure of Ref.\cite{Elekes}, the calculated
cross sections are smoothed by Gaussian functions according
to the experimental angular uncertainty of 0.28$^\circ$.
The result is shown in Fig.\ref{fig:xsc}.
The differential cross sections are shown as
a function of scattering angle $\theta$ in the laboratory system.
The crosses are the experimental data \cite{Elekes} and the solid
curve represents the result of the coupled-channels calculation.
It is found that the MCC calculation with AMD(i) reproduces
the experimental data fairly well, although it slightly
underestimates the magnitude of the cross sections at large angles.
While the oscillatory shape of the angular distribution is formed
by interference between the nuclear and Coulomb components,
which are represented by the dotted and dashed curves,
respectively, in Fig.\ref{fig:xsc},
the average strength of the calculated cross section
is determined by the nuclear excitation.
Particularly, the neutron component dominates
the nuclear excitation as understood from Fig.\ref{fig:foldpn}(B).
Therefore, the present result indicates that AMD(i) slightly
underestimates the neutron excitation strength
by about 10\%, while it overestimates the proton
excitation strength as shown in Table \ref{tab:be2}.
\begin{figure}
\begin{center}
\includegraphics[width=60mm,keepaspectratio]{xsc5_efb1r_idep.ps}
\caption{$N_I$ dependence of the calculated differential cross sections.
The dotted, dashed, solid, dot-dashed and
double-dot-dashed curves are the results of the MCC calculations
with $N_I$=1.0, 1.1, 1.2, 1.3 and 1.4, respectively.
}
\label{fig:idep}
\end{center}
\end{figure}
The parameter $N_I$ cannot be determined theoretically in the present
MCC framework. In order to see the $N_I$ dependence of the calculated
result, we perform the same calculation with different $N_I$ values.
The results are shown in Fig.\ref{fig:idep}.
The dotted, dashed, solid, dot-dashed and double-dot-dashed curves
are the results of the MCC calculations with $N_I$=1.0, 1.1, 1.2, 1.3
and 1.4, respectively. It is seen that the $N_I$ dependence is very weak,
although the cross sections at very forward angles change slightly
with $N_I$. The angular distribution around
$\theta_{\rm lab}$ = 3 - 4 degrees is seen to be independent
of the $N_I$ value. Because the calculation with $N_I$=1.2 reproduces
the shape of the data a little better than the others,
we choose $N_I$=1.2 in the present calculation.
\begin{figure}
\begin{center}
\includegraphics[width=60mm,keepaspectratio]{fp5efhfb1r.ps}
\includegraphics[width=60mm,keepaspectratio]{xsc5_efhfb1r.ps}
\caption{
(A) The coupling potentials calculated with AMD(i), (ii) and (iii)
are shown by the solid, dotted and dashed curves, respectively.
(B) The results of the MCC calculations with AMD(i), (ii) and (iii)
are shown by the solid, dotted and dashed curves, respectively.
The crosses are the experimental data.
}
\label{fig:hikaku}
\end{center}
\end{figure}
Next, we perform the MCC calculations using AMD(ii) and (iii).
In Fig.\ref{fig:hikaku}(A), the coupling potential with
AMD(i) represented by the solid curve is compared with
those with (ii) and (iii), which are represented by the dotted and
dashed curves, respectively. It is noticed that the strength of
the coupling potential is almost proportional to
the $M_p^{(2)}+M_n^{(2)}$ value; 16.1 fm$^2$ for (i),
14.8 fm$^2$ for (ii) and 11.1 fm$^2$ for (iii).
The results of the MCC calculations with AMD(ii) and (iii) are
shown in Fig.\ref{fig:hikaku}(B) by the dotted and dashed curves,
compared with the result of AMD(i) shown by the solid curve which
is the same as that in Fig.\ref{fig:xsc}, and the experimental data.
As expected from the strength of the coupling potential,
the differential cross sections with AMD(ii) are slightly smaller
than those calculated with AMD(i), and those with AMD(iii) severely
underestimate the magnitude of the measured cross sections.
The magnitude of the differential cross section of inelastic scattering
directly reflects the electric and hadronic transition strengths of
the $^{16}$C nucleus. For AMD(iii), the proton transition seems good
because it gives a $B(E2)$ value close to the measured
one. However, the nuclear excitation strength of (iii) is too small
as shown in Fig.\ref{fig:hikaku}(B). Since the nuclear excitation
is dominated by the neutron component, this result indicates that
AMD(iii) fails to describe the neutron excitation correctly.
This fact cannot be known from the experimental data of an
electromagnetic probe alone. Therefore, we think it is very important
that the internal wave function of a nucleus obtained theoretically
be tested against the experimental data of a hadronic probe
to investigate the behavior of the neutron component.
For further investigation, it is very interesting to analyze
the experimental data of the $^{16}$C inelastic scattering on
a proton target measured at RIKEN \cite{Jin},
which is more sensitive to the neutron excitation.
\begin{figure}
\begin{center}
\includegraphics[width=60mm,keepaspectratio]{c16_g_den.ps}
\includegraphics[width=60mm,keepaspectratio]{xsc5_gfb1r.ps}
\caption{(A) The proton transition density when the symmetry
axis is artificially rotated to be parallel to the neutron one
is shown by
the dotted curve. The neutron transition density shown by
the solid curve is the same as the dotted curve in Fig.\ref{fig:trd2}(B).
(B) The result of the MCC calculation when the aligned proton
density shown in (A) is used.
}
\label{fig:align}
\end{center}
\end{figure}
Finally, we examine how the angular
distribution changes if the symmetry axis of the proton density is
aligned with that of the neutron density. In the original AMD wave
function, the symmetry axis of the protons is perpendicular to that
of the neutrons.
We artificially make the aligned proton density from
the AMD wave function (ii) by rotating the proton wave function to set
its symmetry axis to be parallel to the neutron one.
The aligned proton density is shown in Fig.\ref{fig:align}(A)
by the dotted curve.
The solid curve is the neutron transition density which is
the same as the dotted curve in Fig.\ref{fig:trd2}(B).
The proton transition
density has the opposite sign to the neutron one in the surface
region. The result of the MCC calculation using this aligned proton
transition density is shown in Fig.\ref{fig:align}(B).
It is found that the calculated
angular distribution is out-of-phase compared with the experimental
data. This result indicates that the proton transition density
should have the same sign as the neutron transition density
in the surface region.
\section{Summary and Conclusion}
In order to test the $^{16}$C internal wave function,
we studied the $^{16}$C($0_1^+ \to 2_1^+$)
inelastic scattering on $^{208}$Pb target at $E/A$=52.7 MeV
\cite{Elekes} by the microscopic coupled-channels (MCC) method
using the internal wave function of the $^{16}$C nucleus obtained
by the antisymmetrized molecular dynamics (AMD) \cite{Enyo2}.
In Ref.\cite{Enyo2}, two versions of wave function
are obtained with the strength of spin-orbit force
(i) $u_{\ell s}$=900 MeV and (ii) $u_{\ell s}$=1500 MeV.
It was shown in Ref.\cite{Enyo2} that these AMD calculations
reproduced the systematic behavior of the $B(E2)$ values and
root-mean-square (RMS) radii of the C isotopes. The MCC calculations using
these wave functions of $^{16}$C reproduce well the measured
differential cross sections, although they slightly underestimate
the magnitude of the cross sections at large angles.
Especially, the shape is rather well reproduced around
$\theta_{\rm lab}$= 3 - 4 degrees,
where the angular distribution is independent of the strength
parameter $N_I$ of the imaginary potential.
While the shape of differential cross section
due to the interference between the nuclear and Coulomb excitation
components is sensitive to the strength of the proton excitation,
the magnitude of the cross section
is sensitive to the strength of neutron excitation,
because the nuclear excitation is dominated by the neutron one
in the present case.
Therefore, we can conclude that the AMD wave function of (i)
predicts the neutron excitation strength of $^{16}$C reasonably well,
although the strength may be slightly underestimated by about 10 \%.
We also performed a coupled-channels calculation using AMD
wave function, for which the strength of the spin-orbit force
is set to (iii) $u_{\ell s}$=2000 MeV. Although this wave function
gives a $B(E2)$ value of $^{16}$C close to the measured one,
the systematic behavior of the $B(E2)$ values and RMS radii of the other
C isotopes fails to be reproduced due to the unrealistic
strength of the spin-orbit force. The MCC calculation using AMD(iii)
severely underestimates the differential cross sections, which
indicates that the neutron excitation is not properly described
when the spin-orbit force (iii) is used.
It can be said that testing the validity of a calculated
wave function only with electromagnetic experimental data,
such as the $B(E2)$ value, may be insufficient.
Especially for neutron-rich nuclei, the proton density is expected
to differ largely from the neutron one.
Therefore, we think it is very important that the internal wave
functions of a nucleus obtained by any nuclear structure theory
be tested not only against the experimental data of electromagnetic
probes but also against those of hadronic probes, particularly to
investigate the behavior of the neutron component, as done in the
present paper.
We showed that the MCC calculation is a useful tool to link
the inelastic scattering data with the internal wave functions
obtained theoretically.
Note that since the diagonal density is also reflected in the behavior
of the calculated differential cross section through the diagonal
potential as a distorting effect,
the overall features of the diagonal and transition densities
of a nucleus can be tested in a consistent procedure.
By applying the MCC method, nuclear reaction data themselves become
available for nuclear structure studies, similarly to the RMS radius,
electromagnetic transition strength, charge form factor, etc.
\section*{Acknowledgment}
The authors would like to thank Prof. T. Motobayashi for valuable
comments. They are also thankful to Dr. Z. Elekes for providing us
with the experimental data. One of the authors (M.T.) is grateful
for the financial assistance
from the Special Postdoctoral Researchers Program of RIKEN.
This work is partially performed in
the "Research Project for Study of Unstable
Nuclei from Nuclear Cluster Aspects" sponsored by RIKEN.
\section{Introduction}
Reading comprehension (RC) is a widely studied topic in Natural Language Processing (NLP) due to its value in human-machine interaction.
In past relevant research, a variety of large-scale RC datasets were proposed, e.g., \textsc{CNN/DailyMail}~\cite{cnndaily}, \textsc{SQuAD}~\cite{squad}, \textsc{NewsQA}~\cite{newsqa}, \textsc{CoQA}~\cite{coqa} and \textsc{DROP}~\cite{drop}. With a large number of annotations, these datasets make training end-to-end deep neural models possible~\cite{rnet,qanet}. Recent studies also show that the BERT~\cite{bert} model achieves higher answer accuracy than humans on \textsc{SQuAD}.
However, only unlabeled data is available in many real-world applications. A common challenge is whether a machine can learn knowledge well enough in one domain to answer questions in other domains without any labels. Unfortunately, the generalization capabilities of some existing RC neural models were proven to be weak across different datasets~\cite{yogatama_general}. In fact, the same conclusion can be drawn for BERT according to our experiment, e.g., the performance drops on the \textsc{CNN} dataset using the model trained on \textsc{SQuAD}. Therefore, studies to eliminate such performance gaps between various datasets deserve effort.
A potential direction to handle it is transferring knowledge from a labeled source domain to a different unlabeled target domain, which is known as unsupervised domain adaptation~\cite{transfersurvey}, leveraging data from both domains.
However, very few works have studied unsupervised domain adaptation on RC tasks. Although \citeauthor{qa_unsupervised} adapted models using vanilla self-training, its self-labeling approach cannot ensure the labeling accuracy on a target domain that differs much from the source domain. Besides, it was only applied to some small RC datasets, so its effectiveness on large-scale datasets remains unclear and no general representation is learned. Research on large datasets is more meaningful, since they contain more diverse patterns than small ones. They pose a greater challenge and better fit realistic conditions, being the basis for building strong deep neural models. In addition, analyzing the possible influential factors for transfer is also necessary, as it provides a guide for adaptation. Nevertheless, very limited work contributes to it~\cite{multiqa}.
In this paper, to make use of numerous unlabeled samples in real applications, we focus on unsupervised domain adaptation on large RC datasets.
We propose a novel adaptation method, named as Conditional Adversarial Self-training (CASe).
A fine-tuned BERT model is first obtained on the source domain. Then, in the adaptation stage, an alternated training strategy is applied, containing self-training and conditional adversarial learning in each epoch. The pseudo-labeled samples of the target dataset generated by the latest model, together with low-confidence filtering, are used for self-training. Compared to the method in~\cite{qa_unsupervised}, the filtering prevents the model from learning an erroneous target-domain distribution, especially on large datasets. Conditional adversarial learning, whose discriminator input combines BERT features and final output logits, is utilized because the conditioning provides more comprehensive information than features alone. It encourages the model to learn generalized representations and avoids overfitting on the pseudo-labeled data.
Moreover, we test the generalization of BERT among 6 large RC datasets to prove the importance of adaptation, since it fails under most conditions.
The influential factors that cause the failure are also illustrated via analysis.
We validate the proposed method on different pairs of these 6 datasets, and demonstrate the baseline performance.
Our contributions can be summarized as:
\begin{itemize}
\item We propose a new unsupervised domain adaptation method on RC, which is alternated-staged including self-training with low-confidence filtering and conditional adversarial learning.
\item We experimentally evaluate the method on 6 popular datasets, and it shows a comparable performance to models trained on target datasets, which can be regarded as a pioneer study and a baseline for future work.\footnote{Code available at: \url{https://github.com/caoyu1991/CASe}}
\item We show the transferability among different datasets not only depends on corpora, but also is affected by question forms significantly.
\end{itemize}
\section{Related Work}
Numerous models have been proposed for RC tasks. R-NET integrates mutual attention and self-attention into an RNN encoder to refine the representation~\cite{rnet}. QANET~\cite{qanet} leverages similar attention in a stacked convolutional encoder to promote performance. BERT~\cite{bert} stacks multiple transformers~\cite{transformer}. By applying unsupervised pre-training tasks and then fine-tuning on a specific dataset, it achieves state-of-the-art performance in various NLP tasks including RC. However, none of them explores the model generalizability across different datasets, and their transferabilities still remain unknown.
Prior work on domain adaptation has been done for several NLP tasks.
Some works apply instance weighting on statistical machine translation (SMT)~\cite{mtda_discriminative} or cross-language text classification~\cite{da_textclassification}. Cross-entropy based method is used to select out-domain sentences for training SMT~\cite{mtda_pseudo}. There are also attempts for RC, showing that the performance of RC models on small datasets can be improved by supervised transferring from a large dataset~\cite{qa_transfer,da_qa} using annotations from both domains. MultiQA~\cite{multiqa} strengthens the generalizability of RC model by training on samples from various datasets. Though some studies concentrate on the generalization of RC models and analyze their performance on multiple datasets~\cite{yogatama_general,multitask_nlu}, they do not analyse the influential factors in detail. A parallel work for RC unsupervised domain adaptation~\cite{qa_unsupervised} utilizes a simple self-labeling for re-training, and it is evaluated on 3 small datasets containing thousands of samples.
Many relevant works focus on unsupervised domain adaptation for general CV tasks. Co-training~\cite{co-training} uses two classifiers and two data views to generate labels for unlabeled samples. Both tri-training~\cite{tri-training} and asymmetric tri-training~\cite{asymtri} extend co-training by using three classifiers to generate labels, i.e., labels will be added if two classifiers make an agreement. Some approaches try to learn domain-invariant representations by selecting similar instances between domains or adding a classifier to distinguish domains~\cite{disinvariantfeature,dabackprop}. ADDA~\cite{adda} leverages the Generative Adversarial Networks (GANs) loss on domain label to train a new network. CDAN~\cite{cdan} applies conditional adversarial learning which combines features and labels using a multilinear mapping.
Our work is part of research on unsupervised domain adaptation as well as generalization analysis, with an emphasis on large-scale reading comprehension datasets.
\section{Problem Definition}
We first describe a standard text-span-based RC task such as SQuAD~\cite{squad}. Given a supporting paragraph $\mathcal{P} = \left\langle {p_1},{p_2},...,{p_M}\right\rangle$ with $M$ tokens and a query $\mathcal{Q} = \left\langle {q_1},{q_2},...,{q_L}\right\rangle$ with $L$ tokens, the answer $\mathcal{A} = \left\langle {p_{a^{s}}},{p_{{a^{s}} + 1}},...,{p_{a^{e}}}\right\rangle$ is a text piece in the original paragraph. This task aims to find out the correct answer span $(a^{s},a^{e}), 0 \le a^{s} \le a^{e} \le M$. It means that a model needs to predict two values: the start index and the end index of the answer span.
Unsupervised domain adaptation for RC then is formally defined as follows. There is a source domain with labeled data and a target domain with unlabeled data. We have $n$ labeled samples $\{ (x_i,y_i)\} _{i = 1}^{{n}}$ in the source domain, in which text $x_i=(\mathcal{P}_i, \mathcal{Q}_i)$ and label $y_i=(a^s_i,a^e_i)$, and $n'$ unlabeled target domain samples $\{ (x_j')\} _{j = 1}^{{n'}}$, sharing the same standard RC task as described above. We assume that the data in the source domain is sampled from distribution $\mathcal{D}({x},{y})$ and the data in the target domain is sampled from distribution $\mathcal{D}'({x'},{y'})$, $\mathcal{D} \ne \mathcal{D}'$. Our goal is to find a deep neural model that can reduce the distribution shift and achieve the optimal performance on the target domain.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.94\textwidth]{new_CASE_framework.pdf}
\caption{Framework of CASe. (Solid boxes: parameters will be updated. Dashed boxes: parameters will not be updated)}
\label{fig:framework}
\end{figure*}
\section{Domain Adaptation Method}
The main purpose of our approach is to provide a way to transfer a model trained on labeled data from the source domain to the unlabeled target domain.
Generally, a model with good generalization can reduce the discrepancy of intermediate states generated from different distributions~\cite{da_theory}. We use the BERT model~\cite{bert}, which is a pre-trained contextual model based on unsupervised NLP tasks with a huge 3.3-billion-word corpus. Its model depth and huge training data size ensure that it can generate universal feature representations under a variety of linguistic conditions. We also apply adversarial learning to minimize the cross-domain discrepancy between $\mathcal{D}({x},{y})$ and $\mathcal{D}'({x'},{y'})$~\cite{adda}. Moreover, pseudo-label based self-training~\cite{self-training} with low-confidence filtering is also utilized to further leverage unlabeled data in the target domain.
The framework of the proposed Conditional Adversarial Self-training (CASe) approach for unsupervised domain adaptation on RC is illustrated in Figure~\ref{fig:framework}. Our model has three components: a BERT feature network, an output network, and a discriminator network. There are 3 steps in CASe. Firstly, we fine-tune the BERT feature model and output network on the source domain. Secondly, we use self-training on the target domain to get a distribution-shifted model. Thirdly, we apply conditional adversarial learning on both domains to further reduce the feature distribution divergence. The second and third steps proceed iteratively.
\subsection{Training on the Source Domain}
Since we have the labeled data in the source domain, we extend and fine-tune the unsupervised pre-trained base BERT model on these samples.
The BERT feature ${\overline{\bf{f}}} \in {\mathbb{R}^{m \times d}}$ is first obtained, in which $m$ and $d$ are the maximum input sequence length and the hidden state dimension in BERT, respectively. Then a single-layer linear output network with a 2-dimensional output vector is added following BERT. One of its output values is used as the answer start logits ${\bf g}^s \in {\mathbb{R}^m}$ and the other is used as the answer end logits ${\bf g}^e \in {\mathbb{R}^m}$.
Finally, the supervised pre-trained BERT model and output network can be obtained by optimizing the following loss function:
\begin{equation}
\mathcal{L} = \frac{1}{2}\left({f_{CE}({{\bf{g}}^s},{a^s}) + f_{CE}({{\bf{g}}^e},{a^e})}\right),
\end{equation}
where $f_{CE}$ is the cross entropy loss function, $a^s$ and $a^e$ are labels for the answer start and end indices, respectively.
To further enhance the regularization of BERT, we add a batch normalization layer~\cite{batchnorm} between the BERT feature ${\overline{\bf{f}}} \in {\mathbb{R}^{m \times d}}$ and the output network.
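As an illustration, a minimal PyTorch sketch of this output head and source-domain loss is given below. The module and variable names are our own, and the BERT encoder producing the $(B, m, d)$ feature tensor is assumed to come from a standard library.
\begin{verbatim}
import torch
import torch.nn as nn

class SpanHead(nn.Module):
    """Batch normalization over the BERT features followed by a
    linear layer producing answer-start and answer-end logits."""
    def __init__(self, hidden=768):
        super().__init__()
        self.bn = nn.BatchNorm1d(hidden)
        self.out = nn.Linear(hidden, 2)

    def forward(self, bert_feat):                # (B, m, d)
        f = self.bn(bert_feat.transpose(1, 2)).transpose(1, 2)
        logits = self.out(f)                     # (B, m, 2)
        g_s, g_e = logits.split(1, dim=-1)
        return g_s.squeeze(-1), g_e.squeeze(-1), f

def source_loss(g_s, g_e, a_s, a_e):
    """L = (CE(g_s, a_s) + CE(g_e, a_e)) / 2, cf. Eq. (1)."""
    ce = nn.CrossEntropyLoss()
    return 0.5 * (ce(g_s, a_s) + ce(g_e, a_e))
\end{verbatim}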
\subsection{Self-training on the Target Domain}
After obtaining the pre-trained model from the source domain, we use it to predict sample labels in the target domain. Although the data distributions possibly differ between domains, we can still assume that different domains share some similar characteristics. That is, some predicted answers will be similar to or the same as the correct answer spans even in a new domain. These predictions, combined with the corresponding samples $x'=(\mathcal{P}, \mathcal{Q})$ in the target domain and named pseudo-labeled samples, can be used to teach the model about a new distribution.
Similar to the method in asymmetric tri-training~\cite{asymtri}, to avoid significant error propagation, we select predictions of high confidence as pseudo labels. Since our model generates probabilities for every predicted answer start and end index, a threshold ${T_{prob}}$ will be employed to filter low-confidence samples.
Normally, we would apply a softmax function to all output logits and regard the generated values as probabilities for indices being the answer start or end index. However, the passage length is usually very large in RC tasks, leading to a very small probability value for each index. This reduces the numerical distinctions between probabilities and brings more noise, which affects the effectiveness of threshold-based filtering. We thus first select a set $\mathcal{U}$ of $n_{best}$ start and end index pairs. These pairs have the top-$n_{best}$ sums of start index logits $g_{i}^s$ and end index logits $g_{j}^e$, $0 \le i \le j \le M$, for the corresponding answer spans in the target domain, i.e.,
\begin{equation}
\mathcal{U}=\{ {(i,j)_1,...,(i,j)_{n_{best}}}\} = \mathop {\arg {{\max }_{{n_{best}}}}}\limits_{(i,j)} (g_{{i}}^s + g_{{j}}^e).
\end{equation}
A softmax function then is applied to these $n_{best}$ sums. The span with the highest value after softmax will be regarded as the predicted span and its value is defined as the generating probability $p^{g}$ for current sample, i.e.,
\begin{equation}
{p^{g}} = \max ({\rm{softmax}}(\{ g_i^s + g_j^e\} )),(i,j) \in \mathcal{U}.
\end{equation}
Samples with $p^{g} \ge T_{prob}$ will be put into the pseudo-labeled sample set, using the predicted start and end indices as their labels, $\widehat {{a^s}'}$ and $\widehat {{a^e}'}$. The model is trained as in (1), but with $a^s$ and $a^e$ replaced by $\widehat {{a^s}'}$ and $\widehat {{a^e}'}$, respectively.
In each epoch during adaptation, pseudo-labeled samples are always generated by the last model and previous ones will be abandoned, while ${T_{prob}}$ keeps the same.
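A minimal sketch of this $n_{best}$-based confidence estimate, Eqs.~(2)--(3), is shown below. The array names and shapes are our assumptions.
\begin{verbatim}
import numpy as np

def pseudo_label(g_s, g_e, n_best=20, t_prob=0.4):
    """Return (start, end) if the best span is confident enough,
    otherwise None. g_s, g_e are float logits over the m positions."""
    m = len(g_s)
    scores = g_s[:, None] + g_e[None, :]        # all span sums
    scores[np.tril_indices(m, k=-1)] = -np.inf  # forbid end < start
    flat = scores.ravel()
    top = np.argpartition(flat, -n_best)[-n_best:]
    p = np.exp(flat[top] - flat[top].max())     # softmax over n_best
    p /= p.sum()
    if p.max() >= t_prob:
        return divmod(top[np.argmax(p)], m)     # (start, end)
    return None
\end{verbatim}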
\subsection{Conditional Adversarial Learning}\label{adv}
Adversarial learning leverages a discriminator to predict domain classes. But most models only use feature representations for prediction~\cite{adda,dabackprop}, which may be insufficient because the joint distribution of features and labels is not identical across domains.
Since our span-based RC tasks can be regarded as a multi-class classification problem and the span properties vary across domains, it poses more challenges for discriminators based only on features. Inspired by the Conditional Adversarial Network (CDAN)~\cite{cdan}, we utilize conditional adversarial learning fusing feature ${\bf{f}}$ and output logits ${\bf{g}}$ for a comprehensive representation, whose network architecture is illustrated in Figure~\ref{fig:discriminator}. It is noted that ${{\bf{f}} \in {\mathbb{R}^{m \times d}}}$ is the BERT feature after the batch normalization layer.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Discriminator.pdf}
\caption{Architecture of the conditional adversarial network used in CASe.}
\label{fig:discriminator}
\end{figure}
One approach to conditioning the discriminator $D$ on ${\bf{g}}$ is using a multilinear map, i.e., the outer product ${\bf{x}} \otimes {\bf{y}}$ of two vectors, which is superior to concatenation~\cite{multilinearmap}. However, it results in a dimension explosion: the output dimension is $m \times d \times 2m$ in our application, which is impossible to embed. Following CDAN, we tackle it in a randomized approach. The multilinear map of two pairs of features and outputs can be approximated by
\begin{equation}
\left\langle {{\bf{f}} \otimes {\bf{g}},{\bf{f}}' \otimes {\bf{g}}'} \right\rangle \approx \left\langle {{Z_R}({\bf{f}},{\bf{g}}),{Z_R}({\bf{f}}',{\bf{g}}')} \right\rangle,
\end{equation}
where $Z_R$ is a randomly sampled multilinear map and generates a vector of dimension $d_R \ll m \times d \times 2m$. Given two randomly initialized matrices fixed during training ${\bf{R}}_{\bf{f}} \in \mathbb{R}^{d_R \times m}$ and ${\bf{R}}_{\bf{g}} \in \mathbb{R}^{d_R \times 2m}$, $Z_R$ can be defined as
\begin{equation}
{Z_R}({\bf{f}},{\bf{g}}) = \frac{1}{\sqrt{d_R}}\left( {\bf{R}}_{\bf{f}}\, {\rm avg}_{\rm col}({\bf{f}}) \right) \circ \left( {\bf{R}}_{\bf{g}}\, {\bf{g}} \right).
\end{equation}
Here, ${\bf{g}}={\bf{g}}^s \oplus {\bf{g}}^e \in \mathbb{R}^{2m}$, ${\rm avg}_{\rm col}$ denotes averaging along columns, transforming the feature matrix into a vector in ${\mathbb{R}^m}$, and $\circ$ is element-wise multiplication.
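A minimal PyTorch sketch of the randomized map $Z_R$ in Eq.~(5) follows; the fixed random matrices are drawn once at construction and registered as buffers so that they stay fixed during training. All names are ours.
\begin{verbatim}
import torch
import torch.nn as nn

class RandomizedMultilinear(nn.Module):
    """Z_R(f, g) = (R_f avg_col(f)) o (R_g g) / sqrt(d_R)."""
    def __init__(self, m=512, d_R=768):
        super().__init__()
        self.d_R = d_R
        self.register_buffer("R_f", torch.randn(d_R, m))
        self.register_buffer("R_g", torch.randn(d_R, 2 * m))

    def forward(self, f, g):
        # f: (B, m, d) BERT features; g: (B, 2m) start/end logits
        f_vec = f.mean(dim=-1)          # average along columns
        a = f_vec @ self.R_f.t()        # (B, d_R)
        b = g @ self.R_g.t()            # (B, d_R)
        return a * b / self.d_R ** 0.5
\end{verbatim}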
The discriminator is a 3-layer linear network, whose final layer has a 1-dimension output with sigmoid as the activation function to get a scalar between 0 and 1. And we directly adopt $Z_R({\bf{f}},{\bf{g}})$ as its input for computation efficiency.
All 3 components, BERT feature network, output network, and discriminator network, are jointly optimized in this stage because discriminator conditions both features and outputs. The loss function is the binary cross entropy loss
\begin{equation}
{\mathcal{L}_{adv}} = -\left[\, y^d\log \widehat{y}^d + (1 - y^d)\log (1 - \widehat{y}^d)\,\right],
\end{equation}
where $\widehat{y}^d$ is the prediction from $D$ for the domain label, while $y^d \in \{0,1\}$ is the ground-truth label; 0 stands for the source domain and 1 for the target domain. Samples $x, x'$ from both domains will be used for joint training.
However, such an optimization imposes equal importance to different samples, while samples that are hard to transfer will pose negative effect on domain adaptation. We quantify the uncertainty of a sample using entropy $E({\bf{p}}) = - \sum\nolimits_{i = 1}^M ({p_i^s} \log p_i^s + p_i^e\log p_i^e$), to ensure a more effective transfer. $p_i^s$ and $p_i^e$ are probabilities for $i$-th token being the answer start or end index, which can be obtained by applying softmax to whole output logits ${{\bf{g}}^s}$ and ${{\bf{g}}^e}$. We encourage the discriminator to place a higher priority for samples that are easy to transfer. In other words, samples with lower entropy will have higher weights during the conditional adversarial learning (CASe+E). The adversarial loss function can be reformed using the weight $w$ derived from entropy, i.e.,
\begin{equation}
{\mathcal{L}_{adv-E}} = {w} \cdot {\mathcal{L}_{adv}}, {w}=1 + {e^{ - E({\bf{p}})}}.
\end{equation}
No matter which loss is employed, the conditional adversarial learning makes the feature model and the output model more transferable and generalizable.
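A minimal sketch of the entropy weight in Eq.~(7), computed from the span logits, is given below; the function names are ours.
\begin{verbatim}
import torch
import torch.nn.functional as F

def entropy_weight(g_s, g_e):
    """w = 1 + exp(-E(p)), with E(p) the summed start/end entropy."""
    p_s = F.softmax(g_s, dim=-1)
    p_e = F.softmax(g_e, dim=-1)
    ent = -(p_s * p_s.clamp_min(1e-12).log()).sum(-1) \
          - (p_e * p_e.clamp_min(1e-12).log()).sum(-1)
    return 1.0 + torch.exp(-ent)

def adv_loss(d_out, domain_label, g_s, g_e, weighted=True):
    """Domain BCE loss, optionally entropy-weighted (CASe+E)."""
    loss = F.binary_cross_entropy(d_out, domain_label,
                                  reduction="none")
    if weighted:
        loss = entropy_weight(g_s, g_e) * loss
    return loss.mean()
\end{verbatim}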
\subsection{Algorithm}
The entire procedure of CASe is shown in Algorithm 1. It is noted that no adversarial learning is included in the last epoch of domain adaptation. This aims to make the final model better fit the target domain, because adversarial learning enhances generalization while hindering fitting in a specific domain. In step 16, we balance the numbers of samples from the two domains by randomly removing samples from the larger dataset during merging to avoid unbalanced training.
\begin{table}[htbp]
\centering
\begin{tabular}{l}
\toprule[1.5pt]
\specialrule{0em}{0pt}{0pt}
\textbf{Algorithm 1:CASe.} Given a BERT feature network $F$,\\
\specialrule{0em}{-0.5pt}{-0.5pt}
an output network $G$, and a discriminator $D$. Pre-\\
\specialrule{0em}{-0.5pt}{-0.5pt}
training epoch number is $N_{pre}$ and domain adaptation\\
\specialrule{0em}{-0.5pt}{-0.5pt}
training epoch number is $N_{da}$\\
\specialrule{0em}{0pt}{0pt}
\midrule
\textbf{Input:} data in the source domain $\mathcal{S}=\{(\mathcal{P}_i,\mathcal{Q}_i,a^s_i,$\\
$a^e_i)\}_{i = 1}^n$, data in the target domain $\mathcal{S}'=\{(\mathcal{P}_i',\mathcal{Q}_i')\}_{i = 1}^{n'}$. \\
\textbf{Output:} Optimal model $F$, $G$ in the target domain\\
1 \ \ \textbf{for} j=1 \textbf{to} $N_{pre}$ \textbf{do}\\
2 \ \ \quad Train $F$ and $G$ with mini-batch from $\mathcal{S}$ \\
3 \ \ \textbf{end for} \\
4 \ \ \textbf{for} j=1 \textbf{to} $N_{da}$ \textbf{do}\\
5 \ \ \quad Pseudo labeled set $\mathcal{S}^P=\emptyset$ \\
6 \ \ \quad \textbf{for} k=1 \textbf{to} $n'$ \textbf{do} \\
7 \ \ \qquad Use $F$, $G$ to predict the label $\widehat {{a_k^s}'}$ and $\widehat {{a_k^e}'}$ for \\
\ \ \ \ \ \quad \qquad $(\mathcal{P}_k',\mathcal{Q}_k')$ and get probability $p^g_k$\\
8 \ \ \qquad \textbf{if} $p^g_k \ge T_{prob}$ \textbf{do} \\
9 \ \ \quad \qquad Put $(\mathcal{P}_k',\mathcal{Q}_k',\widehat {{a_k^s}'},\widehat {{a_k^e}'})$ into $\mathcal{S}^P$ \\
10 \qquad \textbf{end if} \\
11 \quad \textbf{end for} \\
12 \quad \textbf{for} mini-batch $\mathcal{B}$ \textbf{in} $\mathcal{S}^P$ \\
13 \qquad Train $F$ and $G$ with mini-batch $\mathcal{B}$ \\
14 \quad \textbf{end for} \\
15 \quad \textbf{if} j $< N_{da}$ \textbf{do} \\
16 \qquad $\mathcal{R}=(\{(\mathcal{P}_i,\mathcal{Q}_i)\}_{i = 1}^{n}) \cup \mathcal{S}'$ \\
17 \qquad \textbf{for} mini-batch $\mathcal{B}$ \textbf{in} $\mathcal{R}$ \\
18 \quad \qquad Train $F$,$G$,$D$ with $\mathcal{B}$ and domain labels \\
19 \qquad \textbf{end for} \\
20 \quad \textbf{end if} \\
21 \textbf{end for} \\
\bottomrule[1.5pt]
\end{tabular}
\label{tab:my_label}
\vspace{-0.2cm}
\end{table}
\section{Experiments}
We first evaluate the generalization of BERT among 6 recently released RC datasets and analyze influential factors. Then we show the performance of proposed CASe for unsupervised domain adaptation on these datasets, along with ablation study and the effects of hyperparameters.
\subsection{Data}
\textsc{\textbf{SQuAD}}~\cite{squad} contains 87k training samples and 11k validation (dev) samples, with questions in natural language given by workers based on paragraphs from Wikipedia. Answers are in text span forms.
\noindent \textsc{\textbf{CNN}} and \textsc{\textbf{DailyMail}}~\cite{cnndaily} contain 374k training and 4k dev samples, and 872k training and 64k dev samples, respectively. Their questions are in cloze forms and answers are masked entities in passages.
\noindent \textsc{\textbf{NewsQA}}~\cite{newsqa} contains 120k samples in total, in which QA pairs were generated by crowded workers in natural forms with text spans based on stories from \textbf{CNN}.
\noindent \textsc{\textbf{CoQA}}~\cite{coqa} contains 109k training samples and 8k dev samples, questions are given as conversation forms with multiple turns and answers are in various types including text spans and yes/no.
\noindent \textsc{\textbf{DROP}}~\cite{drop} contains 77k training samples and 9.5k dev samples, given by workers on Wikipedia. It mainly focuses on numerical reasoning and involves answers in numbers or dates except text spans.
Since \textsc{CNN} and \textsc{DailyMail} are much larger than the other datasets, we uniformly sampled subsets from these two datasets as data sources to speed up experiments. The keep ratios are 1/4 and 1/10, respectively, resulting in scales similar to the others.
In addition, we pre-processed samples to conduct answer spans for several datasets.
The answers in \textsc{CNN} and \textsc{DailyMail} are mask symbols such as "\textit{@entity1}" which may appear several times in the text. We use a heuristic method to extract spans: 1) find all position indices $\{a_i\}$ of answer masks in a passage; 2) find all position indices $\{\{e^1_i\},...,\{e^K_i\}\}$ of all $K$ question entities in the passage; 3) calculate, for each answer appearance $a_j$, the sum of absolute index distances to the nearest occurrence of every question entity, and use the $a_j$ with the smallest sum as the answer index; a sketch of this heuristic is given below.
All masks in these two datasets are also replaced with the corresponding original tokens.
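A minimal Python sketch of this distance-based heuristic follows; tokenization by whitespace and the example sample are our own simplifications.
\begin{verbatim}
def extract_answer_index(tokens, answer, question_entities):
    """Pick the answer occurrence with the smallest summed distance
    to the nearest occurrence of each question entity."""
    ans_pos = [i for i, t in enumerate(tokens) if t == answer]
    ent_pos = [[i for i, t in enumerate(tokens) if t == e]
               for e in question_entities]
    best, best_sum = None, float("inf")
    for a in ans_pos:
        total = sum(min(abs(a - i) for i in pos)
                    for pos in ent_pos if pos)
        if total < best_sum:
            best, best_sum = a, total
    return best

toks = "@entity1 met @entity2 in @entity3 then @entity1 left".split()
print(extract_answer_index(toks, "@entity1",
                           ["@entity2", "@entity3"]))   # -> 0
\end{verbatim}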
\textsc{CoQA} contains answers not in text span form. We follow the F1-score-based method in the original paper to obtain the best answer spans. The concatenation of all previous QA pairs along with the original question in the current turn is used as the new question. Samples with yes/no answers or for which no answer span can be found are discarded. Similarly, we only retain answerable questions with text spans as answers in \textsc{NewsQA} and \textsc{DROP}.
The characterizations of 6 processed datasets are shown in Table~\ref{datasets}. \textsc{DROP} is significantly smaller than others because answers of quantitive reasoning samples are not extractive.
\begin{table}[]
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline \bf Dataset & Train & Dev & Corpus & Question \\
\hline \textsc{SQuAD} & 87,599 & 10,570 & Wikipedia & crowd \\
\textsc{CNN} & 93,627 & 3,833 & CNN news & cloze \\
DailyMail & 87,253 & 6,372 & Daily mail & cloze \\
\textsc{NewsQA} & 76,341 & 4,327 & CNN news & crowd \\
\textsc{CoQA} & 86,077 & 6,272 & Multiple$^*$ & crowd \\
\textsc{DROP} & 28,267 & 3,389 & Wikipedia & crowd \\
\hline
\end{tabular}
\end{center}
\caption{\label{datasets} Characterizations of datasets {\bf after processing}. (*Including corpus from MCTest, CNN, Wikipedia etc.)}
\end{table}
\subsection{Implementation Detail}
We implement CASe based on the BERT implementation in PyTorch by Hugging Face,
using the \textit{base-uncased} pre-trained model with 12 layers and a 768-dim hidden state. The maximum input length $m$ is 512, in which the maximum query length is 40. The random sampling dimension $d_R$ is 768. The input dimension of the first layer in the adversarial network is 768, and its intermediate dimension is 512, using ReLU as the activation function in the first two layers. The generating probability threshold $T_{prob}$ is set to 0.4 and $n_{best}=20$. The Adam optimizer~\cite{adam} is employed with learning rate $3\times10^{-5}$ in the source domain training, $2\times10^{-5}$ in the self-training and $10^{-5}$ in the adversarial learning, with batch size 12. A dropout with rate 0.2 is applied on both the BERT feature network and the discriminator. We set the epoch number $N_{pre}=3$ in pre-training and $N_{da}=4$ in domain adaptation.
Besides, since the input length may be larger than $m$, we truncate a passage using a sliding window with a moving step of 128 to fit the input length, as sketched below. Text pieces that do not contain the answer are discarded in training.
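A minimal sketch of this sliding-window chunking, with the stride of 128 stated above, is given here; the names and the simplified answer filter are our assumptions.
\begin{verbatim}
def sliding_windows(tokens, max_len=512, stride=128,
                    answer_span=None):
    """Split a long passage into overlapping windows. During
    training, windows not containing the answer are dropped."""
    windows, start = [], 0
    while True:
        keep = True
        if answer_span is not None:
            a_s, a_e = answer_span
            keep = start <= a_s and a_e < start + max_len
        if keep:
            windows.append((start, tokens[start:start + max_len]))
        if start + max_len >= len(tokens):
            break
        start += stride
    return windows
\end{verbatim}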
\subsection{Generalization and Influential Factors}
We first test the generalization capability of BERT by fine-tuning it on one dataset and directly applying it to another dataset without any change. We call such models \textbf{zero-shot} models. The performance on dev sets for transferring among the 6 datasets is shown in Table~\ref{generalization}.
In a high-level observation, the performance of zero-shot models drops significantly in most cases except the transferring between \textsc{CNN} and \textsc{DailyMail}. The average 55.8\% reduction in exact match (EM) and 50.0\% reduction in F1 compared to models trained on the target dataset (\textsc{Self}) prove that BERT cannot generalize well to unseen datasets, despite the huge corpus used in unsupervised pre-training.
\begin{table*}[t!]
\begin{center}
\begin{tabular}{|l|cccccc|}
\hline
Datasets & \textsc{SQuAD} & \textsc{CNN} & \textsc{DailyMail} & \textsc{NewsQA} & \textsc{CoQA} & \textsc{DROP} \\
\hline
\textsc{SQuAD} & - & 16.72\,/\,26.42 & 21.12\,/\,21.70 & \bf 40.03\,/\,57.42 & \bf 29.58\,/\,39.58 & \bf 19.06\,/\,29.73 \\
\textsc{CNN} & 18.97\,/\,24.34 & - & \bf 81.53\,/\,83.59 & 9.38\,/\,15.36 & 7.10\,/\,10.26 & 4.40\,/\,7.50 \\
\textsc{DailyMail} & 9.72\,/\,14.76 & \bf 77.22\,/\,79.73 & - & 5.89\,/\,10.69 & 5.68\,/\,8.75 & 4.69\,/\,8.02 \\
\textsc{NewsQA} & 64.80\,/\,78.32 & 25.10\,/\,34.66 & 28.41\,/\,38.44 & - & 27.14\,/\,38.75 & 12.36\,/\,21.00\\
\textsc{CoQA} & \bf 65.25\,/\,74.92 & 18.21\,/\,24.76 & 22.65\,/\,28.12 & 37.74\,/\,53.85 & - & 14.75\,/\,21.60\\
\textsc{DROP} & 55.53\,/\,68.36 & 14.32\,/\,22.26 & 17.44\,/\,25.78 & 28.36\,/\,44.35 & 16.15\,/\,24.82 & -\\
\hline
\textsc{Self} & 79.85\,/\,87.46 & 82.76\,/\,84.73 & 81.37\,/\,83.33 & 52.05\,/\,67.41 & 48.98\,/\,63.99 & 44.67\,/\,52.51\\
\hline
\end{tabular}
\end{center}
\caption{\label{generalization} Performance of zero-shot models on dev set when transferring among datasets. Rows correspond to source datasets and columns to target datasets. \textsc{Self} means training and testing on the same dataset. Left value in each cell is for {\bf exact match (EM)} while the right one is for \bf{F1 score}.}
\end{table*}
\begin{table*}[t!]
\begin{center}
\begin{tabular}{|l|cccccc|}
\hline
Datasets & \textsc{SQuAD} & \textsc{CNN} & \textsc{DailyMail} & \textsc{NewsQA} & \textsc{CoQA} & \textsc{DROP} \\
\hline
\textsc{SQuAD} & - & \bf 80.64\,/\,82.24 & 80.78\,/\,82.77 & \bf 52.69\,/\,68.15 & \bf 52.38\,/\,67.56 & \bf 50.34\,/\,57.53 \\
\textsc{CNN} & 79.86\,/\,87.65 & - & \bf 84.26\,/\,86.01 & 48.37\,/\,63.47 & 51.71\,/\,67.09 & 45.59\,/\,53.57 \\
\textsc{DailyMail} & 79.04\,/\,87.07 & 78.06\,/\,80.36 & - & 50.13\,/\,65.90 & 50.06\,/\,65.76 & 41.69\,/\,50.07 \\
\textsc{NewsQA} & \bf 80.17\,/\,88.14 & 79.60\,/\,81.57 & 80.93\,/\,82.99 & - & 50.05\,/\,66.49 & 47.36 / 56.42\\
\textsc{CoQA} & 78.38\,/\,85.93 & 74.75\,/\,76.65 & 76.87\,/\,78.88 & 51.21\,/\,65.83 & - & 42.08\,/\,50.07\\
\textsc{DROP} & 74.03\,/\,83.35 & 77.09\,/\,79.03 & 80.34\,/\,82.49 & 51.91\,/\,66.95 & 48.90\,/\,64.29 & -\\
\hline
\textsc{SQuAD} & - & 80.20\,/\,81.93 & 79.91\,/\,82.06 & \bf 51.56\,/\,66.79 & \bf 50.77\,/\,65.94 & \bf 48.45\,/\,57.33 \\
\textsc{CNN} & 78.59\,/\,86.39 & - & \bf 83.40\,/\,85.06 & 48.95\,/\,64.45 & 49.38\,/\,64.57 & 44.15\,/\,51.87 \\
\textsc{DailyMail} & 78.07\,/\,86.22 & \bf 82.44\,/\,84.36 & - & 50.91\,/\,65.90 & 48.64\,/\,63.80 & 41.58\,/\,47.74 \\
\textsc{NewsQA} & \bf 78.87\,/\,87.06 & 80.49\,/\,82.43 & 80.93\,/\,82.99 & - & 48.01\,/\,64.30 & 45.06\,/\,54.34\\
\textsc{CoQA} & 78.24\,/\,85.80 & 76.34\,/\,78.22 & 78.12\,/\,79.88 & 50.80\,/\,65.55 & - & 41.43\,/\,49.40\\
\textsc{DROP} & 74.81\,/\,83.67 & 80.38\,/\,82.21 & 80.78\,/\,82.96 & 50.01\,/\,65.16 & 46.27\,/\,62.67 & -\\
\hline
\textsc{Self} & 79.85\,/\,87.46 & 82.76\,/\,84.73 & 81.37\,/\,83.33 & 52.05\,/\,67.41 & 48.98\,/\,63.99 & 44.67\,/\,52.51\\
\hline
\end{tabular}
\end{center}
\caption{\label{domain_adaptation} Domain adaptation performance of CASe on dev sets of datasets. The top of the table shows results for CASe+E (Entropy-weighted loss), while the bottom for standard CASe. Rows are source datasets and columns are target datasets. The left value in each cell is \textbf{exact match(EM)}, while right one is \textbf{F1 score}. \textsc{Self} stands for training and testing on the same dataset.}
\end{table*}
Taking a closer look, we find that the reductions vary across different dataset pairs. The drops of transferring among 4 datasets, \textsc{SQuAD}, \textsc{NewsQA}, \textsc{CoQA} and \textsc{DROP}, are smaller than those of transferring to/from the remaining 2 datasets, especially from the latter 3 ones to \textsc{SQuAD}. And the transferring between \textsc{CNN} and \textsc{DailyMail} achieves performance equivalent to \textsc{Self}. \textsc{CNN} and \textsc{NewsQA} share the same corpus but the transferring fails due to different question forms (natural vs. cloze), and the corpus discrepancy between \textsc{SQuAD} and \textsc{NewsQA} leads to an analogous result. On the other hand, the same question forms and similar corpora of \textsc{CNN} and \textsc{DailyMail} make the transferring successful. Therefore, it can be concluded that not only the corpus but also the question form affects the generalization. It is also observed that the different focus as well as reasoning types affect the transfer between datasets even with the same corpus and question form, e.g., simple single-sentence reasoning in \textsc{SQuAD} vs. complex reasoning (comparison, selection) in \textsc{DROP}.
We visualize the relations between the 6 datasets using a force-directed graph in Figure~\ref{fig:visualization}. The force between every two datasets is calculated via $F_{ij} = P_{ij}/P_j + P_{ji}/P_i$, where $P_{ij}$ is the average of the EM and F1 performance from source dataset $i$ to target dataset $j$, and $P_{i}$ is the average performance of the \textsc{Self} model on dataset $i$. Edge widths are positively correlated with the force $F$ between nodes, while the size of each node reflects the dataset scale. It is noted that the datasets cluster more significantly according to {\bf question forms} (node shapes) than according to {\bf corpora} (node colors), although the latter also have an effect.
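The pairwise force can be computed from the performance matrix in a few lines; the sketch below uses values rounded from Table~\ref{generalization} for \textsc{SQuAD}, \textsc{CNN} and \textsc{DailyMail} only, with the \textsc{Self} averages on the diagonal.
\begin{verbatim}
import numpy as np

# P[i, j]: average of EM and F1 from dataset i to dataset j;
# diagonal holds the Self performance P_i (SQuAD, CNN, DailyMail).
P = np.array([[83.7, 21.6, 21.4],
              [21.7, 83.7, 82.6],
              [12.2, 78.5, 82.4]])

n = len(P)
F = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            F[i, j] = P[i, j] / P[j, j] + P[j, i] / P[i, i]
print(F.round(2))
\end{verbatim}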
\begin{figure}[htbp]
\centering
\includegraphics[width=0.35\textwidth]{dataset_visualization.pdf}
\caption{Visualization of relations between datasets based on performance. Node shape represents question form (rectangle:cloze, circle:natural). Node color represents corpus (red: Wikipedia, blue: news, purple: multiple).}
\label{fig:visualization}
\end{figure}
\subsection{Domain Adaptation Results}
We now evaluate the performance of the proposed CASe method for unsupervised domain adaptation on RC datasets, including standard CASe and CASe with entropy-weighted loss in adversarial learning (CASe+E). The results are shown in Table~\ref{domain_adaptation}. Generally speaking, no matter which loss function is used in adversarial learning, CASe achieves significant performance improvements compared to zero-shot models. Although annotated data is unavailable in the target domain, most results are comparable to \textsc{Self} models, and some of them are even better. In conclusion, CASe transfers knowledge from one domain to another successfully.
\begin{figure*}[!ht]
\centering
\subfigure[Performance varies with $T_{prob}$ (Upper: EM, lower: F1).]{
\includegraphics[width=0.28\textwidth]{performance_with_th.pdf}
\label{fig:performance_tprob}
}\quad
\subfigure[Numbers of pseudo-labeled samples generated in each epoch under different $T_{prob}$.]{
\includegraphics[width=0.32\textwidth]{self_number_with_th.pdf}
\label{fig:number_tprob}
}\quad
\subfigure[Performance varies with adaptation stages and epoch numbers when $T_{prob}=0.4$.]{
\includegraphics[width=0.3\textwidth]{performance_with_epoch.pdf}
\label{fig:performance_epoch}
}
\caption{Influence of hyperparameters on adaptation performance of CASe and CASe+E under CNN to SQuAD (\textbf{C$\rightarrow$S}).}
\end{figure*}
Domain-adapted models between two very similar datasets, \textsc{CNN} and \textsc{DailyMail}, show a higher EM than \textsc{Self}. They are similar in both corpora and question forms, which means more valid data can be utilized for self-training to get a model with deeper comprehension. Zero-shot models perform poorly when transferring between natural-question-based datasets and cloze-question-based datasets, e.g., \textsc{SQuAD} to \textsc{CNN}. But CASe can nearly eliminate such gaps between the transferred models and the \textsc{Self} models due to the new distribution learned in self-training and the generalized representations optimized in adversarial learning. The performance of most adaptations on \textsc{CoQA} and \textsc{DROP} is better than \textsc{Self} because they benefit from more extra data.
Entropy-based loss weighting also shows its effectiveness because it makes learning focus on samples that are easy to transfer, so as to obtain more correct knowledge in the target domain. And CASe+E shows 0.5\% to 2\% higher EM accuracy than CASe under most conditions, except for some specific dataset pairs such as \textsc{DailyMail} to \textsc{CNN}.
\subsubsection{Ablation Study}
We perform ablation tests on 4 domain adaptation dataset pairs, which are \textsc{CNN} to \textsc{SQuAD} (\textbf{C$\rightarrow$S}), \textsc{DailyMail} to \textsc{CNN} (\textbf{D$\rightarrow$C}), \textsc{CNN} to \textsc{NewsQA} (\textbf{C$\rightarrow$N}) and \textsc{SQuAD} to \textsc{CoQA} (\textbf{S$\rightarrow$Co}), including adaptation between datasets with the same/different question forms and/or corpora. The EM results on the ablated models are shown in Table~\ref{ablation}, in which - \textit{conditional} means using unconditional adversarial learning instead of the conditional one, while - \textit{Adv learning} means removing the whole adversarial learning, - \textit{Self-training} removing the self-training and - \textit{Batch norm} removing the batch normalization, all based on CASe. It is observed that self-training plays the most important role under all configurations. The performance drops without the discriminator conditioning on the output or without the whole adversarial learning. Batch normalization has a slight effect: removing it improves the results under two configurations while having the opposite effect under the others.
\subsubsection{Generalization after Domain Adaptation}
We test the performance of the transferred models on the source datasets to check their generalization, as shown in Table~\ref{after_adaptation}. The 4 dataset pairs in the ablation study are involved, plus \textsc{NewsQA} to \textsc{DROP} (\textbf{N$\rightarrow$Dr}). There are performance declines compared to models trained on the source datasets, except for \textbf{D$\rightarrow$C}, in which the datasets have very similar properties. It means our CASe method results in a good transferred model but at the same time leads to knowledge loss in the source domain.
\subsubsection{Impact of $T_{prob}$}
Figure~\ref{fig:performance_tprob} demonstrates the performance of CASe and CASe+E on \textbf{C$\rightarrow$S} in terms of EM and F1 scores as the generation probability threshold $T_{prob}$ varies. CASe+E shows higher stability and performance than CASe under different $T_{prob}$. CASe and CASe+E reach their peaks at 0.3 and 0.4 respectively, while both of them show descending trends when $T_{prob} \ge 0.4$.
The numbers of pseudo-labeled samples generated in each epoch on \textbf{C$\rightarrow$S} with different $T_{prob}$ are shown in Figure~\ref{fig:number_tprob}. Obviously, a lower threshold results in more samples and longer training time. While CASe stably generates more samples than in the previous epoch, the number of samples generated by CASe+E may decrease in the 2nd epoch, though more samples are generated later compared to CASe. Thus CASe+E achieves better results under most conditions because more valid samples are utilized. Considering the overall performance as well as the trade-off between EM accuracy and complexity, we set $T_{prob}$ to 0.4 in our experiments.
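As a minimal sketch of this thresholding step (a simplified stand-in for the real pipeline, which operates on BERT span logits; the product-of-probabilities scoring rule and all names here are illustrative assumptions, not our exact implementation):
\begin{verbatim}
import numpy as np

def filter_pseudo_labels(start_probs, end_probs, t_prob=0.4):
    """Keep a target-domain sample only if its most confident answer
    span reaches probability >= t_prob (here scored as the product of
    the start- and end-position probabilities)."""
    kept = []
    for i, (sp, ep) in enumerate(zip(start_probs, end_probs)):
        s, e = int(np.argmax(sp)), int(np.argmax(ep))
        if s <= e and sp[s] * ep[e] >= t_prob:
            kept.append((i, s, e))  # pseudo-label: sample index + span
    return kept
\end{verbatim}
A lower \texttt{t\_prob} admits more (but noisier) samples per epoch, matching the trade-off discussed above.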
\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
& \textbf{C$\rightarrow$S} & \textbf{D$\rightarrow$C} & \textbf{C$\rightarrow$N} & \textbf{S$\rightarrow$Co} \\
\hline
CASe+E & 66.46 & 78.06 & 48.37 & 52.38\\
CASe & 65.24 & 82.44 & 48.95 & 50.77\\
\quad- \textit{conditional} & 64.47 & 82.26 & 47.31 & 50.25\\
\quad- \textit{Adv learning} & 65.05 & 81.21 & 47.89 & 49.05\\
\quad- \textit{Self-training} & 16.55 & 77.07 & 14.26 & 23.81\\
\quad- \textit{Batch norm} & 65.97 & 81.91 & 48.27 & 51.08\\
\hline
\end{tabular}
\caption{EM results of CASe ablation test on 4 dataset pairs.}
\label{ablation}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
& \textbf{C$\rightarrow$S} & \textbf{D$\rightarrow$C} & \textbf{C$\rightarrow$N} & \textbf{S$\rightarrow$Co} & \textbf{N$\rightarrow$Dr} \\
\hline
CASe+E & 66.37 & 82.19 & 64.65 & 52.97 & 40.07\\
CASe & 68.61 & 81.61 & 65.43 & 51.48 & 40.17\\
\hline
\textsc{Self} & 80.77 & 80.85 & 80.77 & 66.51 & 52.05\\
\hline
\end{tabular}
\caption{EM results on source datasets after adaptation.}
\label{after_adaptation}
\end{table}
\subsubsection{Impact of Epoch Number}
In Figure~\ref{fig:performance_epoch}, we present the performance of CASe and CASe+E after each stage of every epoch on \textbf{C$\rightarrow$S}. E.g., \textit{1s} means the result after the self-training stage in the 1st epoch, and \textit{2a} means the result after the conditional adversarial learning stage in the 2nd epoch. CASe+E shows obvious fluctuations between the self-training and adversarial learning stages compared to CASe. No matter whether CASe or CASe+E is used, the performance tends to saturate after 3 complete epochs. That is why we set $N_{da}$ to 4.
\section{Conclusion}
In this paper, we explore the possibility of transferring knowledge for unsupervised domain adaptation on Reading Comprehension.
Our experiments show that even the BERT model cannot generalize well across domains, and that the divergence in both corpora and question forms causes this failure.
Then we propose a new unsupervised domain adaptation method, Conditional Adversarial Self-training (CASe). After fine-tuning a BERT model on labeled data from the source domain, it uses self-training and conditional adversarial learning alternately in every epoch to make the model better fit the target domain and reduce the domain distribution discrepancy.
The experimental results on 6 RC datasets demonstrate the effectiveness of CASe. It improves performance remarkably over zero-shot models, approaching the performance of models supervised-trained on the target domain.
\section*{Acknowledgements}
We thank Boqing Gong and the anonymous reviewers for insightful comments and feedback.
\fontsize{9.5pt}{10.5pt}
\selectfont
\bibliographystyle{aaai}
\section{Introduction} \label{sec:intro}
Variational Inference (VI) \cite{blei2017variational, wainwright2008graphical, zhang2017advances} is a method to do approximate inference on a target distribution $p(z) = \unnorm p(z) / Z$ that is only known up to the normalization constant $Z$. The basic insights are, first, that the evidence lower bound (ELBO) $\mathbb{E}_{q(z)}[\log \unnorm p(z) - \log q(z)]$ lower-bounds $\log Z$ and, second, that maximizing the ELBO is equivalent to minimizing the KL-divergence from $q$ to $p$. The simplest VI method chooses a parameterized family for $q$ and optimizes its parameters to maximize the ELBO.
A recent direction involves combining VI with Markov chain Monte Carlo (MCMC) \cite{salimans2015markov, wolf2016variational}. These methods can be seen as an instance of the auxiliary VI framework \cite{agakov2004auxiliary} -- they create an augmented variational distribution that represents all intermediate random variables generated during the MCMC procedure. An augmented target distribution that attempts to capture the inverse MCMC dynamics is optimized jointly with this variational distribution. However, it has been observed that capturing inverse dynamics is challenging \cite[\S5.4]{wolf2016variational} (further discussion in Section \ref{sec:relwork}).
Annealed Importance Sampling (AIS) \cite{jarzynski1997equilibrium, neal2001ais} is a powerful technique used to build augmented distributions without the need to learn inverse dynamics. While it was originally proposed to estimate expectations using importance sampling, it can be easily used to build lower bounds on normalization constants of intractable densities \cite{grosse2015sandwiching, wu2016quantitative}. AIS creates a sequence of densities that bridge from a tractable initial approximation $q$ to the target $\unnorm p$. Then, the augmented variational distribution is given by a sequence of MCMC kernels targeting each bridging density, while the augmented target uses the {\em reversals} of those kernels. It turns out that the ratio of these augmented distributions can be computed using only evaluations of the bridging densities. Combining Hamiltonian MCMC kernels with AIS has been observed to produce strong lower bounds \cite{sohl2012hamiltonian, wu2016quantitative}.
However, these bounds are sensitive to numerous parameters, such as the initial distribution, bridging schedule, and parameters of the MCMC kernels. It would be desirable to optimize these parameters to tighten the bound. Unfortunately, the presence of Metropolis-Hastings acceptance steps means that the final estimator is non-differentiable, and thus reparameterization gradients cannot be used.
In this work, we propose {\em Uncorrected} Hamiltonian Annealing (UHA), a differentiable alternative to Hamiltonian AIS. We define an augmented variational distribution using Hamiltonian MCMC kernels, but dropping the accept-reject steps. This is motivated by the fact that Hamiltonian dynamics sometimes have high acceptance rates. Since these uncorrected MCMC kernels do not exactly hold the bridging densities invariant, an augmented target distribution cannot be defined in terms of reversals. Instead, we define our augmented target by deriving an algorithm for the exact reversal of the original (corrected) MCMC kernel and dropping the accept-reject step. Surprisingly, this yields a very simple expression for the resulting lower bound.
We use reparameterization gradients to tune various parameters involved in the lower bound produced by UHA, including the initial approximation $q$, parameters of the uncorrected MCMC kernel, and the bridging densities. Experimentally, tuning all these leads to large gains. For example, in several inference tasks we observe that tuning UHA\ with $K = 64$ bridging densities gives better results than traditional Hamiltonian AIS with $K = 512$.
Finally, we use UHA\ to train VAEs \cite{vaes_welling, rezende2014stochastic}. In this case we observe that using UHA\ leads to higher ELBOs. In addition, we observe that increasing the number of bridging densities with UHA\ consistently leads to better results, and that for a large enough number of bridging densities the variational gap (difference between ELBO and true log-likelihood) becomes small, and models with higher log-likelihood are obtained.
\section{Discussion}
Since UHA\ yields a differentiable lower bound, one could tune other parameters not considered in this work. For instance, a different momentum distribution per bridging density could be used, that is, $\unnorm \pi_m(z, \rho) = \unnorm \pi_m(z) S_m(\rho)$. We believe additions such as this may yield further gains.
Also, our method can be used to get tight and differentiable upper bounds on $\log Z$ using the reversed AIS procedure described by Grosse et al. \cite{grosse2015sandwiching}.
Finally, removing accept-reject steps might sometimes lead to instabilities during optimization if the step-size $\epsilon$ becomes large. We observed this effect when training VAEs on some datasets for the larger values of $K$. We solved this by constraining the range of $\epsilon$ (previously done by Caterini et al. \cite{caterini2018hamiltonian}). While this simple solution works well, we believe that other approaches (e.g. regularization, automatic adaptation) could work even better. We leave the study of such alternatives for future work.
\section{Short Version of Story}
We have the estimator
\begin{equation}
R(z_1, \rho_1, \hdots, z_M, \rho_M) = \frac{\tilde p(z_M, \rho_M)}{q_w(z_1, \rho_1)} \prod_{m=1}^{M-1} \frac{U_m(z_m, \rho_m\vert z_{m+1}, \rho_{m+1})}{T_m(z_{m+1}, \rho_{m+1}\vert z_m, \rho_m)}. \label{eq:R2}
\end{equation}
\subsection{Hamiltonian AIS}
We need to find a way of defining the forward transitions $T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m)$ and the reverse transitions $U_m(z_{m}, \rho_{m} \vert z_{m+1}, \rho_{m+1})$. Hamiltonian AIS provides one way of doing this. It sets the forward transition $T_m$ to be a Hamiltonian MCMC iteration (which leaves $\pi_m(z, \rho) = \pi_m(z) S(\rho)$ invariant), and the reverse transition $U_m$ to be the reversal of $T_m$ with respect to $\pi_m(z, \rho)$. Specifically, the processes defining these transitions are given by:
\small\begin{equation}
\left . \begin{array}{l}
T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m): \\
\hspace{0.3cm} 1.\quad \rho' \sim s(\cdot \vert \rho_m)\\
\hspace{0.3cm} 2.\quad (z'', \rho'') = \mathcal{T}_m(z_m, \rho')\\
\hspace{0.3cm} 3.\quad \rho''' = -\rho''\\
\hspace{0.3cm} 4.\quad (z_{m+1}, \rho'''') = \mathrm{AR}((z'', \rho'''), (z_m, \rho'))\\
\hspace{0.3cm} 5.\quad \rho_{m+1} = -\rho''''
\end{array} \right .
\left . \begin{array}{l}
U_m(z_m, \rho_m \vert z_{m+1}, \rho_{m+1}): \\
\hspace{0.3cm} 1.\quad \rho' = -\rho_{m+1}\\
\hspace{0.3cm} 2.\quad (z'', \rho'') = \mathcal{T}_m(z_{m+1}, \rho')\\
\hspace{0.3cm} 3.\quad \rho''' = -\rho''\\
\hspace{0.3cm} 4.\quad (z_{m}, \rho'''') = \mathrm{AR}((z'', \rho'''), (z_{m+1}, \rho'))\\
\hspace{0.3cm} 5.\quad \rho_{m} \sim s(\cdot \vert \rho'''')\\
\end{array} \right .
\label{eq:TUhais}\end{equation}\normalsize
In order to compute the estimator $R$ from eq.~\ref{eq:R2} we need to compute the ratio $U_m / T_m$. While at first glance this may appear complex, the main advantage of AIS comes from the fact that setting $U_m$ to be the reversal of $T_m$ leads to a simplification. By definition of the reversal of a Markov transition kernel we get that
\begin{equation}
\frac{U_m(z_m, \rho_m \vert z_{m+1}, \rho_{m+1})}{T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m)} = \frac{\tilde \pi_m(z_{m+1}, \rho_{m+1})}{\tilde \pi_m(z_m, \rho_m)},
\end{equation}
and thus the estimator $R$ from eq.~\ref{eq:R2} becomes
\begin{equation}
R(z_1, \rho_1, \hdots, z_M, \rho_M) = \frac{\tilde p(z_M, \rho_M)}{q_w(z_1, \rho_1)} \prod_{m=1}^{M-1} \frac{\tilde \pi_m(z_m, \rho_m)}{\tilde \pi_m(z_{m+1}, \rho_{m+1})}.
\end{equation}
Using $\tilde p(z, \rho) = \tilde p(z) S(\rho)$, $q_w(z, \rho) = q_w(z) S(\rho)$ and $\tilde \pi_m(z, \rho) = \tilde \pi_m(z) S(\rho)$ we get
\begin{equation}
R(z_1, \rho_1, \hdots, z_M, \rho_M) = \frac{\tilde p(z_M)}{q_w(z_1)} \prod_{m=1}^{M-1} \frac{\tilde \pi_m(z_m)}{\tilde \pi_m(z_{m+1})}.
\end{equation}
\subsection{Uncorrected Hamiltonian AIS}
We propose Uncorrected Hamiltonian AIS, which uses the transitions from eq.~\ref{eq:TUuhais}
\small\begin{equation}
\left . \begin{array}{l}
T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m): \\
\hspace{0.3cm} 1.\quad \rho' \sim s(\cdot \vert \rho_m)\\
\hspace{0.3cm} 2.\quad (z_{m+1}, \rho_{m+1}) = \mathcal{T}_m(z_m, \rho')\\
\end{array} \right .
\left . \begin{array}{l}
U_m(z_m, \rho_m \vert z_{m+1}, \rho_{m+1}): \\
\hspace{0.3cm} 1.\quad \rho' = -\rho_{m+1}\\
\hspace{0.3cm} 2.\quad (z_m, \rho'') = \mathcal{T}_m(z_{m+1}, \rho')\\
\hspace{0.3cm} 3.\quad \rho''' = -\rho''\\
\hspace{0.3cm} 4.\quad \rho_{m} \sim s(\cdot \vert \rho''')\\
\end{array} \right .
\label{eq:TUuhais}\end{equation}\normalsize
These are just the transitions from eq.~\ref{eq:TUhais} without the accept/reject step.
Since $\mathcal{T}_m^{-1}$ amounts to negating the momentum, running $\mathcal{T}_m$, and negating the momentum again, the transitions from eq.~\ref{eq:TUuhais} can be written more compactly as
\small\begin{equation}
\left . \begin{array}{l}
T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m): \\
\hspace{0.3cm} 1.\quad \rho_{m+1/2} \sim s(\cdot \vert \rho_m)\\
\hspace{0.3cm} 2.\quad (z_{m+1}, \rho_{m+1}) = \mathcal{T}_m(z_m, \rho_{m+1/2})\\
\end{array} \right .
\left . \begin{array}{l}
U_m(z_m, \rho_m \vert z_{m+1}, \rho_{m+1}): \\
\hspace{0.3cm} 1.\quad (z_m, \rho_{m+1/2}) = \mathcal{T}^{-1}_m(z_{m+1}, \rho_{m+1})\\
\hspace{0.3cm} 2.\quad \rho_{m} \sim s(\cdot \vert \rho_{m+1/2})\\
\end{array} \right .
\label{eq:TUuhais2}\end{equation}\normalsize
For these choices for $T_m$ and $U_m$ the ratio $U_m / T_m$ takes the form
\small\begin{equation}
\frac{U_m(z_m, \rho_m \vert z_{m+1}, \rho_{m+1})}{T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m)} = \frac{s(\rho_m\vert \rho_{m+1/2})}{s(\rho_{m+1/2}\vert \rho_m)}
, \quad \mbox{ and } \quad R = \frac{\tilde p(z_M) S(\rho_M)}{q_w(z_1) S(\rho_1)} \prod_{m = 1}^{M - 1} \frac{s(\rho_m\vert \rho_{m+1/2})}{s(\rho_{m+1/2}\vert \rho_m)}. \label{eq:R2first}
\end{equation}\normalsize
If, in addition, we consider a re-sampling distribution $s(\rho\vert \rho')$ that satisfies detailed balance with respect to $S(\rho)$ (satisfied by full and partial re-sampling), we can further simplify the expressions to
\small\begin{equation}
\frac{U_m(z_m, \rho_m \vert z_{m+1}, \rho_{m+1})}{T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m)} = \frac{S(\rho_m)}{S(\rho_{m+1/2})}
, \quad \mbox{ and } \quad R = \frac{\tilde p(z_M)}{q_w(z_1)} \prod_{m = 1}^{M - 1} \frac{S(\rho_{m+1})}{S(\rho_{m+1/2})}. \label{eq:R2simple}
\end{equation}\normalsize
\section{Uncorrected Hamiltonian Annealing} \label{sec:uhais}
The contribution of this paper is the development of {\em uncorrected} Hamiltonian Annealing (UHA). This method is similar to Hamiltonian AIS (eq. \ref{eq:haisest}), but yields a differentiable lower bound. The main idea is simple. For any transitions $T_m$ and $U_m$, by the same logic as in eq.~\ref{eq:AISnormal}, we can define the ratio
\begin{equation}
\frac{\unnorm p(z_{1:M}, \rho_{1:M})}{q(z_{1:M}, \rho_{1:M})} = \frac{\unnorm p(z_M, \rho_M)}{q(z_1, \rho_1)} \prod_{m=1}^{M-1} \frac{U_m(z_m, \rho_m\vert z_{m+1}, \rho_{m+1})}{T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m)}.
\label{eq:haisobv}
\end{equation}
Hamiltonian AIS defines $T_m$ as a Hamiltonian MCMC kernel that holds $\pi_m$ invariant, and $U_m$ as the reversal of $T_m$ with respect to $\pi_m$. While this leads to a nice simplification, there is no {\em requirement} that these choices be made. We can use {\em any} transitions as long as the ratio $U_m / T_m$ is tractable.
We propose to use the "uncorrected" versions of the transitions $T_m$ and $U_m$ used by Hamiltonian AIS, obtained by dropping the accept-reject steps. To get an expression for the uncorrected $U_m$ we first derive the reversal $U_m$ used by Hamiltonian AIS (Alg.~\ref{alg:correctedum}). These uncorrected transitions are no longer reversible with respect to the bridging densities $\pi_m(z, \rho)$, and thus we cannot use the simplification used by AIS to get eq.~\ref{eq:haisest}. Despite this, we show that the ratio $U_m / T_m$ for the uncorrected transitions can still be easily computed (Thm. \ref{thm:ratioUT}). This produces a differentiable estimator, meaning the parameters can be tuned by stochastic gradient methods designed to maximize the ELBO.
We start by deriving the process that defines the transition $U_m$ used by Hamiltonian AIS. This is shown in Alg.~\ref{alg:correctedum}. It can be observed that $U_m$ follows the same three steps as $T_m$ (resample momentum, Hamiltonian simulation with accept-reject, momentum negation), but in reverse order.
\begin{algorithm}[ht]
\caption{Corrected $U_m(z_m, \rho_m \vert z_{m+1}, \rho_{m+1})$}
\label{alg:correctedum}
\begin{algorithmic}
\State 1. Set \small$(z'''_m, \rho'''_m) \leftarrow (z_{m+1}, -\rho_{m+1})$\normalsize.
\vspace{0.08cm}
\State 2. Simulate Hamiltonian dynamics as \small$(z''_m, \rho''_m) \leftarrow \T_m(z'''_m, \rho'''_m)$\normalsize.
\State \hspace{0.3cm} Calculate an acceptance probability \small$\alpha = \min\left(1, \unnorm \pi_m(z''_m, \rho''_m) / \unnorm \pi_m(z'''_m, \rho'''_m)\right)$\normalsize.
\State \hspace{0.3cm} With probability $\alpha$, set \small$(z'_m,\rho'_m) \leftarrow (z''_m,\rho''_m)$\normalsize. Otherwise, set \small$(z'_m, \rho'_m) \leftarrow (z'''_m, \rho'''_m)$\normalsize.
\vspace{0.08cm}
\State 3. Sample $\rho_m$ from $s_\mathrm{rev}(\rho_m\vert \rho'_m)$, the reversal of $s(\rho'_m\vert \rho_m)$ with respect to $S(\rho_m)$. Set $z_m \leftarrow z'_m$.
\State \Return $(z_{m}, \rho_{m})$
\end{algorithmic}
\end{algorithm}
\begin{lemma} \label{lemma:reversal}
The corrected $U_m$ (Alg.~\ref{alg:correctedum}) is the reversal of the corrected $T_m$ (Alg.~\ref{alg:correctedtm}) with respect to $\pi_m$.
\end{lemma}
\vspace{-0.3cm}
\begin{proof}[(Proof Sketch)]
First, we claim the general result that if $T_1$, $T_2$ and $T_3$ have reversals $U_1$, $U_2$ and $U_3$, respectively, then the composition $T = T_1 \circ T_2 \circ T_3$ has reversal $U = U_3 \circ U_2 \circ U_1$ (all reversals with respect to same density $\pi$). Then, we apply this to the corrected $T_m$ and $U_m$: $T_m$ is the composition of three steps that hold $\pi_m$ invariant. Thus, its reversal $U_m$ is given by the composition of the reversals of those steps, applied in reversed order. A full proof is in Appendix \ref{app:proofrev}.
\end{proof}
We now define the "uncorrected" transitions used by UHA, shown in Algs.~\ref{alg:uncorrectedtm} and \ref{alg:uncorrectedum}. These are just the transitions used by Hamiltonian AIS but without the accept-reject steps. (If Hamiltonian dynamics are simulated exactly, the acceptance rate is one and the uncorrected and corrected transitions are equivalent.) We emphasize that, for the "uncorrected" transitions, $T_m$ does not exactly hold $\pi_m$ invariant and $U_m$ is not the reversal of $T_m$. Thus, their ratio does not give a simple expression in terms of $\unnorm \pi_m$ as in eq.~\ref{eq:haisest}. Nevertheless, the following result shows that their ratio has a simple form.
\begin{algorithm}[ht]
\caption{Uncorrected $T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m )$}
\label{alg:uncorrectedtm}
\begin{algorithmic}
\State 1. Sample $\rho'_m$ from some $s(\rho'_m \vert \rho_m)$ that leaves $S(\rho)$ invariant. Set $z'_m \leftarrow z_m$.
\State 2. Simulate Hamiltonian dynamics as \small$(z''_m, \rho''_m) \leftarrow \T_m(z'_m, \rho'_m)$\normalsize.
\State 3. Reverse the momentum as \small$(z_{m+1}, \rho_{m+1}) \leftarrow (z''_m, -\rho''_m)$\normalsize.
\State \Return $(z_{m+1}, \rho_{m+1})$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[ht]
\caption{Uncorrected $U_m(z_m, \rho_m \vert z_{m+1}, \rho_{m+1})$}
\label{alg:uncorrectedum}
\begin{algorithmic}
\State 1. Set \small$(z''_m, \rho''_m) \leftarrow (z_{m+1}, -\rho_{m+1})$\normalsize.
\State 2. Simulate Hamiltonian dynamics as \small$(z'_m, \rho'_m) \leftarrow \T_m(z''_m, \rho''_m)$\normalsize.
\State 3. Sample $\rho_m$ from $s_\mathrm{rev}(\rho_m\vert \rho'_m)$, the reversal of $s(\rho'_m\vert \rho_m)$ with respect to $S(\rho_m)$. Set $z_m \leftarrow z'_m$.
\State \Return $(z_{m}, \rho_{m})$
\end{algorithmic}
\end{algorithm}
\begin{theorem} \label{thm:ratioUT}
Let $T_m$ and $U_m$ be the uncorrected transitions defined in Algs.~\ref{alg:uncorrectedtm} and \ref{alg:uncorrectedum}, and let the dynamics simulator $\T_m(z, \rho)$ be volume preserving and self inverting. Then,
\begin{equation}
\frac{U_m(z_m, \rho_m \vert z_{m+1}, \rho_{m+1})}{T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m)} = \frac{S(\rho_m)}{S(\rho'_m)}, \label{eq:R2thm1}
\end{equation}
where $\rho'_{m}$ is the second component of $\T_m(z_{m+1},-\rho_{m+1})$. (That is, $\rho'_{m}$ from Algs.~\ref{alg:uncorrectedtm} and \ref{alg:uncorrectedum}.)
\end{theorem}
\vspace{-0.4cm}
\begin{proof}[(Proof Sketch.)]
We consider variants of Algs.~\ref{alg:uncorrectedtm} and \ref{alg:uncorrectedum} in which each time $z$ is assigned we add Gaussian noise with some variance $aI$. We then derive the densities for $T_m$ and $U_m$ using the rule for transformation of densities under invertible mappings, using that $\mathcal{T}_m$ is self-inverting and volume preserving. Taking the ratio gives eq.~\ref{eq:R2thm1}. Since this is true for arbitrary $a$, we take the stated result as the limit as $a \to 0$. A full proof is in Appendix \ref{app:proofthm}.
\end{proof}
\vspace{-0.2cm}
As an immediate corollary of eq. \ref{eq:haisobv} and Theorem \ref{thm:ratioUT} we get that for UHA
\begin{equation}
\frac{\unnorm p(z_{1:M}, \rho_{1:M})}{q(z_{1:M}, \rho_{1:M})} = \frac{\unnorm p(z_M)}{q(z_1)} \prod_{m = 1}^{M - 1} \frac{S(\rho_{m+1})}{S(\rho'_{m})}. \label{eq:estuhais1}
\end{equation}
This ratio can be used to get an expression for the lower bound $\ELBO(q(z_{1:M}, \rho_{1:M}), \unnorm p(z_{1:M}, \rho_{1:M}))$ obtained with UHA. As mentioned in Section \ref{sec:preliminaries}, the parameters of the augmented distributions are tuned to maximize the ELBO, equivalent to minimizing the KL-divergence from $q$ to $\unnorm p$. While computing this ELBO exactly is typically intractable, an unbiased estimate can be obtained using a sample from $q(z_{1:M}, \rho_{1:M})$ as shown in Alg.~\ref{alg:sampleuhais}. If sampling is done using reparameterization, then unbiased reparameterization gradients may be used together with stochastic optimization algorithms to optimize the lower bound. In contrast, the variational lower bound obtained with Hamiltonian AIS (see Alg.~\ref{alg:sampleais} in Appendix \ref{app:HAISboundalg}) does not allow the computation of unbiased reparameterization gradients.
\begin{algorithm}[ht]
\caption{Generating the (differentiable) uncorrected Hamiltonian annealing variational bound.}
\label{alg:sampleuhais}
\begin{algorithmic}
\State Sample $z_1 \sim q$ and $\rho_1 \sim S$.
\State Initialize estimator as $\mathcal{L} \leftarrow -\log q(z_1)$.
\For{$m = 1, 2, \cdots , M-1$}
\State Run uncorrected $T_m$ (Alg.~\ref{alg:uncorrectedtm}) on input $(z_m, \rho_m)$, storing $\rho'_m$ and the output $(z_{m+1}, \rho_{m+1})$.
\State Update estimator as $\mathcal{L} \leftarrow \mathcal{L} + \log \left( S(\rho_{m+1}) / S(\rho'_m) \right)$.
\EndFor
\State Update estimator as $\mathcal{L} \leftarrow \mathcal{L} + \log \unnorm p(z_M)$.
\State \Return $\mathcal{L}$
\end{algorithmic}
\end{algorithm}
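As a minimal sketch of Alg.~\ref{alg:sampleuhais} (plain Python; \texttt{log\_q}, \texttt{log\_p\_unnorm}, \texttt{log\_S}, \texttt{leapfrog} and \texttt{resample} are placeholder names for the user-supplied densities, integrator and momentum refresh, not code from our implementation):
\begin{verbatim}
def uha_bound_sample(z1, rho1, log_q, log_p_unnorm, log_S,
                     leapfrog, resample, M):
    """One unbiased sample of the UHA lower bound.
    leapfrog(m, z, rho) simulates dynamics targeting pi_m and negates
    the momentum; resample(rho) refreshes it, holding S invariant."""
    z, rho = z1, rho1
    L = -log_q(z)                           # -log q(z_1)
    for m in range(1, M):
        rho_prime = resample(rho)           # rho'_m (uncorrected T_m, step 1)
        z, rho = leapfrog(m, z, rho_prime)  # (z_{m+1}, rho_{m+1})
        L += log_S(rho) - log_S(rho_prime)  # log S(rho_{m+1}) / S(rho'_m)
    return L + log_p_unnorm(z)              # + log p~(z_M)
\end{verbatim}
If the initial draw of $(z_1, \rho_1)$ and all sampling inside \texttt{resample} are reparameterized, this function is differentiable end-to-end.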
\vspace{-0.3cm}
\subsection{Algorithm Details}
\textbf{Simulation of dynamics.} We use the leapfrog operator with step-size $\epsilon$ to simulate Hamiltonian dynamics. This has unit Jacobian and satisfies $\mathcal{T}_m = \mathcal{T}_m^{-1}$ (if the momentum is negated after the simulation), which are the properties required for eq.~\ref{eq:estuhais1} to be correct (see Theorem \ref{thm:ratioUT}).
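As a sketch (assuming \texttt{grad\_log\_pi} returns $\nabla_z \log \unnorm \pi_m(z)$ and a diagonal momentum covariance \texttt{Sigma\_diag}; names are illustrative), one leapfrog step with the final momentum negation is:
\begin{verbatim}
def leapfrog_step(z, rho, grad_log_pi, eps, Sigma_diag):
    """One leapfrog step targeting pi_m, then a momentum negation,
    making the map self-inverting with unit Jacobian."""
    rho = rho + 0.5 * eps * grad_log_pi(z)
    z = z + eps * rho / Sigma_diag   # dz/dt = Sigma^{-1} rho
    rho = rho + 0.5 * eps * grad_log_pi(z)
    return z, -rho
\end{verbatim}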
\textbf{Momentum distribution and resampling.} We set the momentum distribution $S(\rho) = \mathcal{N}(\rho \vert 0, \Sigma)$ to be a Gaussian with mean zero and covariance $\Sigma$. The resampling distribution $s(\rho'\vert \rho)$ must hold this distribution invariant. As is common, we use $s(\rho' \vert \rho) = \mathcal{N}(\rho' \vert \eta \rho, (1-\eta^2) \Sigma)$, where $\eta \in [0, 1)$ is the damping coefficient. If $\eta = 0$, the momentum is completely replaced with a new sample from $S$ in each iteration (as in HMC and Langevin dynamics \cite{neal2011mcmc, welling2011bayesian}). For larger $\eta$, the momentum becomes correlated between iterations, which may help suppress random walk behavior and encourage faster mixing \cite{cheng2018underdamped} (as in the under-damped variants of HMC and Langevin dynamics \cite{neal2011mcmc}).
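A minimal sketch of this partial refresh (NumPy; names are illustrative):
\begin{verbatim}
import numpy as np

def resample_momentum(rho, eta, Sigma_diag, rng):
    """Partial refresh s(rho'|rho) = N(eta*rho, (1-eta^2)*Sigma);
    eta = 0 gives full resampling, eta -> 1 keeps the momentum.
    Holds S(rho) = N(0, Sigma) invariant for any eta in [0, 1)."""
    noise = rng.normal(size=rho.shape) * np.sqrt(Sigma_diag)
    return eta * rho + np.sqrt(1.0 - eta ** 2) * noise
\end{verbatim}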
\textbf{Bridging densities.} We set $\unnorm \pi_m(z, \rho) = q(z, \rho)^{1 - \beta_m} \unnorm p(z, \rho)^{\beta_m}$, where $\beta_m \in[0, 1]$ and $\beta_m < \beta_{m+1}$.
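In code, each bridging log-density is just a convex combination of the endpoint log-densities (a direct transcription of the definition above; function names are placeholders):
\begin{verbatim}
def log_pi(beta_m, z, log_q, log_p_unnorm):
    """Geometric bridge:
    log pi~_m(z) = (1 - beta_m) log q(z) + beta_m log p~(z)."""
    return (1.0 - beta_m) * log_q(z) + beta_m * log_p_unnorm(z)
\end{verbatim}
Its gradient, needed by the leapfrog step, interpolates the endpoint score functions in the same way.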
\textbf{Computing gradients.} We set the initial distribution $q(z_1)$ to be a Gaussian, and perform all sampling operations in Alg.~\ref{alg:sampleuhais} using reparameterization \cite{vaes_welling, rezende2014stochastic, doublystochastic_titsias}. Thus, the whole procedure is differentiable and reparameterization-based gradients may be used to tune parameters by maximizing the ELBO. These parameters include the initial distribution $q(z_1)$, the covariance $\Sigma$ of the momentum distribution, the step-size $\epsilon$ of the integrator, the damping coefficient $\eta$ of the momentum resampling distribution, and the parameters of the bridging densities (including $\beta$), among others. As observed in Section \ref{sec:tuningmore} tuning all of these parameters may lead to considerable performance improvements.
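As a sketch of the reparameterized setup (JAX; \texttt{uha\_bound} stands for Alg.~\ref{alg:sampleuhais} written with all randomness pushed into a fixed \texttt{noise} argument -- a placeholder, not verbatim code from our implementation):
\begin{verbatim}
import jax
import jax.numpy as jnp

def sample_z1(params, eps):
    """Reparameterized draw from the Gaussian q(z_1)."""
    return params["mu"] + jnp.exp(params["log_sigma"]) * eps

# With sampling expressed this way, the bound is a deterministic,
# differentiable function of the parameters, so e.g.
#   grads = jax.grad(lambda p: -uha_bound(p, noise))(params)
# yields unbiased reparameterization gradients for Adam.
\end{verbatim}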
\section{Related Work} \label{sec:relwork}
UHA\ and slight variations have been proposed in concurrent work by Thin et al. \cite{thin2021monte}, who use uncorrected Langevin dynamics together with the uncorrected reversal to build variational lower bounds, and by Zhang et al. \cite{zhang2021differentiable}, who proposed UHA\ with under-damped Langevin dynamics together with a convergence analysis for linear regression models.
There are three other lines of work that produce differentiable variational bounds integrating Monte Carlo methods. One is Hamiltonian VI (HVI) \cite{salimans2015markov, wolf2016variational}. It uses eq.~\ref{eq:haisobv} to build a lower bound on $\log Z$, with $T_m$ set to an uncorrected Hamiltonian transition (like UHA\ but without bridging densities) and $U_m$ set to conditional Gaussians parameterized by learnable functions. Typically, a single transition is used, and the parameters of the transitions are learned by maximizing the resulting ELBO.\footnote{The formulation of HVI allows the use of more than one transition. However, this leads to an increased number of reverse models that must be learned, and is thus not typically used in practice. Indeed, experiments by Salimans et al. \cite{salimans2015markov} use only one HMC step while varying the number of leapfrog integration steps, and results from Wolf et al. \cite{wolf2016variational} show that increasing the number of transitions may actually yield worse bounds (they conjecture that this is due to the difficulty of learning inverse dynamics).}
A second method is given by Hamiltonian VAE (HVAE) \cite{caterini2018hamiltonian}, based on Hamiltonian Importance sampling \cite{neal2005hamiltonianis}. They augment the variational distribution with momentum variables, and use the leapfrog integrator to simulate Hamiltonian dynamics (a deterministic invertible transformation with unit Jacobian) with a tempering scheme as a target-informed flow \cite{rezende2015variational, tabak2013family}.
The third method is Importance Weighting (IW) \cite{IWVAE, cremer2017reinterpreting, domke2018importance}. Here, the idea is that $\ELBO(q(z), \unnorm p(z)) \leq \mathbb{E} \log \frac{1}{K} \sum_k \unnorm p(z_k) / q(z_k)$, and that the latter bound can be optimized, rather than the traditional ELBO. More generally, other Monte-Carlo estimators can be used \cite{domke2019divide}.
Some work defines novel contrastive-divergence-like objectives in terms of the final iteration of an MCMC chain \cite{ruiz2019contrastive, li2017approximate}. These do not provide an ELBO-like variational bound. While in some cases the initial distribution can be optimized to minimize the objective \cite{ruiz2019contrastive}, gradients do not flow through the MCMC chains, meaning MCMC parameters cannot be optimized by gradient methods.
For latent variable models, Hoffman \cite{hoffman2017MCMCvae} suggested to run a few MCMC steps after sampling from the variational distribution before computing gradients with respect to the model parameters, which is expected to "debias" the gradient estimator to be closer to the true likelihood gradient. The variational distribution is simultaneously learned to optimize a standard ELBO. (AIS can also be used \cite{ding2019vaeAIS}.)
\section{Experiments and Results} \label{sec:exps}
This section presents results using UHA\ for Bayesian inference problems on several models of varying dimensionality and for VAE training. We compare against Hamiltonian AIS, IW, HVI and HVAE. We report the performance of each method for different values of $K$, the number of likelihood evaluations required to build the lower bound (e.g. number of samples used for IW, number of bridging densities plus one for UHA). Note that, for a fixed $K$, all methods have the same oracle complexity (i.e., the same number of target and target-gradient evaluations), and that for $K=1$ they all reduce to plain VI.
For UHA\ and Hamiltonian AIS we use under-damped Langevin dynamics, that is, we perform just one leapfrog step per transition and partially resample momentum. We implement all algorithms using Jax \cite{jax2018github}.
\subsection{Toy example}
This section compares results obtained with UHA\ and IW when the target is set to a factorized Student-t with mean zero, scale one, and three degrees of freedom. We tested three different dimensionalities: $20$, $200$ and $500$. In all cases we have $\log Z = 0$, so we can exactly analyze the tightness of the bounds obtained by the methods. We set the initial approximation to be a mean-field Gaussian, and optimize the objective using Adam \cite{adam} with a step-size of $0.001$ for $5000$ steps. For UHA\ we tune the initial approximation $q(z)$, the integrator's step-size $\epsilon$ and the damping coefficient $\eta$.
We ran UHA\ for $K\in\{4, 16, 64, 128\}$ and IW for $K\in\{128, 1024\}$. Table \ref{tab:toy} shows the results for the three dimensionalities considered. It can be observed that UHA\ performs significantly better than IW as the dimensionality increases; for the target with dimension $500$, UHA\ with $K=16$ yields better bounds than IW with $K=1024$. On the other hand, the methods perform similarly for the low dimensional target. Finally, in this case both methods have similar time costs. For instance, for $K=128$ UHA\ takes $14.2$ seconds to optimize and IW takes $13.9$.
\begin{table}[]
\caption{\textbf{Our method (UHA) yields better bounds than importance weighting (IW) for moderate or high dimensions.} ELBO achieved by different methods when using a Student-t target distribution of varying dimensionality, higher is better. Since the target is normalized, a perfect inference algorithm would achieve the true value of $\log Z = 0$.}
\label{tab:toy}
\centering
\begin{tabular}{llllllll}
\toprule
Target & Plain VI & \multicolumn{4}{c}{UHA} & \multicolumn{2}{c}{IW} \\
\cmidrule(l{2pt}r{2pt}){2-2} \cmidrule(l{2pt}r{2pt}){3-6} \cmidrule(l{2pt}r{2pt}){7-8}
dimension &$K=1$& $K=4$ & $K=16$ & $K=64$ & $K=128$ & $K=128$ & $K=1024$ \\
\midrule
$20$ & $-0.82$ & $-0.55$ & $-0.36$ & $-0.19$ & $-0.14$ & $-0.14$ & $-0.088$ \\
$200$ & $-8.1$ & $-5.5$ & $-3.5$ & $-1.9$ & $-1.4$ & $-3.7$ & $-2.9$ \\
$500$ & $-20.5$ & $-13.9$ & $-9.0$ & $-5.2$ & $-3.8$ & $-12.0$ & $-10.4$ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Inference tasks} \label{sec:exps_real}
This section shows results using UHA\ for Bayesian inference tasks. For this set of experiments, for UHA\ we tune the initial distribution $q(z)$, the integrator's step-size $\epsilon$ and the damping coefficient $\eta$. We include detailed results tuning more parameters in Section \ref{sec:tuningmore}.
\textbf{Models.} We consider four models: \textit{Brownian motion} ($d = 32$), which models a Brownian Motion process with a Gaussian observation model; \textit{Convection Lorenz bridge} ($d = 90$), which models a nonlinear dynamical system for atmospheric convection; and \textit{Logistic regression} with the a1a ($d = 120$) and madelon ($d = 500$) datasets. The first two are obtained from the ``Inference gym'' \cite{inferencegym2020}.
\textbf{Baselines.} We compare UHA\ against IW, HVAE, a simple variant of HVI, and Hamiltonian AIS (HAIS). For all methods which rely on HMC (i.e. all except IW) we use a single integration step-size $\epsilon$ common to all dimensions and fix the momentum distribution to a standard Gaussian. For HVI we learn the initial distribution $q(z)$, integration step-size $\epsilon$ and the reverse dynamics $U_m$ (set to a factorized Gaussian with mean and variance given by affine functions), and for HVAE we learn $q(z)$, $\epsilon$ and the tempering scheme (we use the quadratic scheme parameterized by a single parameter).
\textbf{Training details.} We set $q(z)$ to be a mean-field Gaussian initialized to a maximizer of the ELBO, and tune the parameters of each method by running Adam for $5000$ steps. We repeat all simulations for different step-sizes in $\{10^{-3}, 10^{-4}, 10^{-5}\}$, and select the best one for each method. Since Hamiltonian AIS' parameters cannot be tuned by gradient descent, we find a good pair $(\epsilon, \eta)$ by grid search. We consider $\eta \in \{0.5, 0.9, 0.99\}$ and three values of $\epsilon$ that correspond to three different rejection rates: $0.05, 0.25$ and $0.5$. We tested all 9 possible combinations and selected the best one.
Results are shown in Fig.~\ref{fig:comparison1}.
Our method yields better lower bounds than all other competing approaches for all models considered, and increasing the number of bridging densities consistently leads to better results. The next best performing method is Hamiltonian AIS. IW also shows good performance for the lower dimensional model \textit{Brownian motion}. However, for models of higher dimensionality IW leads to bounds that are several nats worse than the ones achieved by UHA. Finally, HVI and HVAE yield bounds that are much worse than those achieved by the other three methods, and do not appear to improve consistently for larger $K$. For HVAE, these results are consistent with the ones in the original paper \cite[\S4]{caterini2018hamiltonian}, in that higher $K$ may sometimes hurt performance. For HVI, we believe this is related to the use of just one HMC step and suboptimal inverse dynamics.
Optimization times for Plain VI, IW and UHA\ (the latter two with $K=32$) are $2.4, 3.4 \mbox{ and } 4.4$ seconds for the \textit{Brownian motion} dataset, $2.5, 6.8 \mbox{ and } 6.9$ seconds for \textit{Lorenz convection}, $2.8, 8.3 \mbox{ and } 19.9$ seconds for \textit{Logistic regression (A1A)}, and $4.6, 16.6 \mbox{ and } 121.2$ seconds for \textit{Logistic regression (Madelon)}. While IW and UHA\ have the same oracle complexity for the same $K$, we see that the difference between their time cost depends on the specific model under consideration. All other methods that use HMC have essentially the same time cost as UHA.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.29, trim = {0 2.1cm 0 0}, clip]{./figures/comparisonHVAE_IW_HVIbrownian.pdf}
\includegraphics[scale=0.29, trim = {1.3cm 2.1cm 0 0}, clip]{./figures/comparisonHVAE_IW_HVIlorenz.pdf}
\includegraphics[scale=0.29, trim = {0 0 0 0}, clip]{./figures/comparisonHVAE_IW_HVIlog_a1a.pdf}
\includegraphics[scale=0.29, trim = {1.3cm 0 0 0}, clip]{./figures/comparisonHVAE_IW_HVIlog_madelon.pdf}
\caption{\textbf{Our method achieves much better bounds than other competing methods.} $K$ represents the number of likelihood evaluations to build the lower bound required by each method. The leftmost point of all lines coincide because, for $K = 1$, all methods reduce to plain VI. Vertical bars indicate one standard deviation obtained by running simulations with four different random seeds.
}
\label{fig:comparison1}
\end{figure}
\subsubsection{Tuning More Parameters with UHA} \label{sec:tuningmore}
A basic version of UHA\ involves fitting a variational distribution using plain VI, and then tuning the integration step-size $\epsilon$ and the damping coefficient $\eta$. However, more parameters could be tuned:
\vspace{-0.2cm}
\begin{itemize}[leftmargin=0.5cm]
\setlength\itemsep{-0.06cm}
\item Momentum distribution covariance $\Sigma$: We propose to learn a diagonal matrix instead of using the identity.
\item Bridging densities' coefficients $\beta_m$: Typically $\beta_m = m / M$. We propose to learn the sequence $\beta$, with the restrictions $\beta_0 = 0$, $\beta_M = 1$, $\beta_m < \beta_{m+1}$ and $\beta_m \in [0, 1]$ (a sketch of one such parameterization follows this list).
\item Initial distribution $q(z)$: Instead of fixing $q(z)$ to be a maximizer of the typical ELBO, we propose to learn it to maximize the augmented ELBO obtained using UHA.
\item Integrator's step-size $\epsilon$: Instead of learning a unique step-size $\epsilon$, we propose to learn a step-size that is a function of $\beta$, i.e. $\epsilon(\beta)$. In our experiments we use an affine function.
\item Bridging densities parameters $\psi$: Instead of setting the $m$-th bridging density to be $q^{1 - \beta_m} p^{\beta_m}$, we propose to set it to $q_{\psi(\beta_m)}^{1 - \beta_m} \, p^{\beta_m}$, where $q_{\psi(\beta_m)}$ is a mean-field Gaussian with a mean and diagonal covariance specified as affine functions of $\beta$.
\end{itemize}
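One way to satisfy the monotonicity constraints on $\beta$ while keeping the parameterization unconstrained and differentiable is a cumulative sum of softmax weights (a standard construction; the text above does not fix a specific choice, so this is an illustrative assumption):
\begin{verbatim}
import jax.numpy as jnp
from jax.nn import softmax

def betas_from_logits(logits):
    """Unconstrained logits -> strictly increasing schedule with
    beta_0 = 0 and beta_M = 1."""
    increments = softmax(logits)  # positive, sums to one
    return jnp.concatenate([jnp.zeros(1), jnp.cumsum(increments)])
\end{verbatim}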
We consider the four models described previously and compare three methods: UHA\ tuning all parameters described above, UHA\ tuning only the pair $(\epsilon, \eta)$, and Hamiltonian AIS with parameters $(\epsilon, \eta)$ obtained by grid-search. We perform the comparison for $K$ ranging from $2$ to $512$. (For $K \geq 64$ we tune UHA's parameters using $K = 64$ and extrapolate them as explained in Appendix \ref{sec:extrapol}.)
Results are shown in Fig.~\ref{fig:tuning_more_1}. It can be observed that tuning all parameters with UHA\ leads to significantly better lower bounds than those obtained by Hamiltonian AIS (or UHA\ tuning only $\epsilon$ and $\eta$). Indeed, for the Logistic regression models, UHA\ tuning all parameters for $K = 64$ leads to results comparable to the ones obtained by Hamiltonian AIS with $K = 512$.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.2, trim = {0 0 0 0}, clip]{./figures/brownian_compareextrapolea.pdf}\hfill
\includegraphics[scale=0.2, trim = {1.5cm 0 0 0}, clip]{./figures/lorenz_compareextrapolea.pdf}\hfill
\includegraphics[scale=0.2, trim = {1.5cm 0 0 0}, clip]{./figures/log_a1a_compareextrapolea.pdf}\hfill
\includegraphics[scale=0.2, trim = {1.5cm 0 0 0}, clip]{./figures/log_madelon_compareextrapolea.pdf}
\caption{\textbf{UHA\ tuning all parameters leads to better performance than other methods.}}
\label{fig:tuning_more_1}
\end{figure}
To verify what parameters lead to larger performance improvements, we tested UHA\ with $K = 64$ tuning different subsets of $\{\epsilon, \eta, \Sigma, \beta, q(z), \epsilon(\beta), \psi(\beta)\}$. Fig.~\ref{fig:tuning_more_1_fig} shows the results. It can be observed that tuning the bridging parameters $\beta$ and the initial approximation $q(z)$ leads to the largest gains in performance, and that tuning all parameters always outperforms tuning smaller subsets of parameters. We show a more thorough analysis, including more subsets and values of $K$ in Appendix \ref{app:tunemoreuha}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.21, trim = {0 0 0 0}, clip]{./figures/Brownian_motion.pdf}\hfill
\includegraphics[scale=0.21, trim = {0 0 0 0}, clip]{./figures/Lorenz_convection.pdf}\hfill
\includegraphics[scale=0.21, trim = {0 0 0 0}, clip]{./figures/Log_reg_a1a.pdf}\hfill
\includegraphics[scale=0.21, trim = {0 0 0 0}, clip]{./figures/Log_reg_madelon.pdf}
\caption{\textbf{Tuning all parameters leads to better results than tuning subsets of them. Largest gains are obtained by tuning bridging coefficients $\beta$ and initial distribution $q$.} ELBO achieved as a function of parameters tuned (x-axis), for $K = 64$. The subsets are ordered in terms of increasing performance (same ordering is used for all four models). Parameters are step-size $\epsilon$, damping coefficient $\eta$, momentum covariance $\Sigma$, bridging densities parameters $\beta$ and $\psi$, initial distribution $q$.}
\label{fig:tuning_more_1_fig}
\end{figure}
Finally, Appendix \ref{app:comparehmc} shows results comparing UHA\ (tuning several parameters) against HMC, mean field VI and IW in terms of the approximation accuracy achieved on a logistic regression model with a fixed computational budget.
\subsection{VAE training}
Our method can be used to train latent variable models, such as Variational Auto-encoders (VAE) \cite{vaes_welling, rezende2014stochastic}. In this case the initial approximation $q(z|x)$ and the model $p(x, z)$ are parameterized by two neural networks (encoder and decoder), whose parameters are trained by maximizing the ELBO. UHA\ can be used to train VAEs by augmenting these two distributions as described in Section \ref{sec:uhais}.
\textbf{Datasets.} We use three datasets: mnist \cite{lecun1998gradient} (digits 0-9), emnist-letters \cite{cohen2017emnist} (letters A-Z), and kmnist \cite{clanuwat2018deep} (cursive Kuzushiji). All consist of greyscale images of $28\times 28$ pixels. In all cases we use stochastic binarization \cite{salakhutdinov2008quantitative} and a training set of $50000$ samples, a validation set of $10000$ samples, and a test set of $10000$ samples. All datasets are available in tensorflow-datasets \cite{TFDS}.
\textbf{Baselines.} We compare against Importance Weighted Auto-encoders \cite{IWVAE} and plain VAE training \cite{vaes_welling}.
\textbf{Architecture details.} We set $q(z|x)$ to a diagonal Gaussian, $p(z)$ to a standard Normal, and $p(x\vert z)$ to a Bernoulli. We consider two architectures for the encoder and decoder: (1) Feed forward networks with one hidden layer of size $450$ and Relu non-linearities, with a latent space dimensionality of $64$; (2) Architecture used by Burda et al. \cite{IWVAE}, feed forward networks with two hidden layers of size $200$ with tanh non-linearities, with a latent space dimensionality of $50$.
\textbf{Training details.} In all cases the encoder and decoder are initialized to parameters that maximize the ELBO. For IW we tune the encoder and decoder parameters (using the doubly-reparameterized estimator \cite{doublyrep}), and for UHA we tune the integration step-size $\epsilon$, damping coefficient $\eta$, bridging parameters $\beta$, momentum covariance $\Sigma$ (diagonal), and the decoder parameters. Following Caterini et al. \cite{caterini2018hamiltonian} we constrain $\epsilon \in (0, 0.05)$ to avoid unstable behavior of the leapfrog discretization. We use Adam with a step-size of $10^{-4}$ to train for $100$ epochs and use the validation set for early stopping. We repeated all simulations for three different random seeds. In all cases the standard deviation of the results was less than $0.1$ nats (not shown in tables).
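A differentiable way to keep $\epsilon$ in such a range (one common choice; the exact parameterization is not spelled out above, so this is an assumption) is to learn an unconstrained scalar and squash it:
\begin{verbatim}
import jax

def step_size(raw_eps, eps_max=0.05):
    """Constrain the leapfrog step-size to (0, eps_max)."""
    return eps_max * jax.nn.sigmoid(raw_eps)
\end{verbatim}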
All methods achieved better results using the architecture with one hidden layer. These results are shown in Tables \ref{table:resultsVAEELBO} and \ref{table:resultsVAENMLL}. The first one shows the ELBO on the test set achieved for different values of $K$, and the second one the log-likelihood on the test set estimated with AIS \cite{wu2016quantitative}. It can be observed that UHA\ leads to higher ELBOs, higher log-likelihoods, and smaller variational gaps (difference between ELBO and log-likelihood) than IW for all datasets, with the difference between both methods' performance increasing for increasing $K$. Notably, for $K = 64$, the variational gap for UHA\ becomes quite small, ranging from $0.8$ to $1.4$ nats depending on the dataset.
Results for the architecture from Burda et al. \cite{IWVAE} (two hidden layers) are shown in Tables \ref{table:resultsVAEELBOA} and \ref{table:resultsVAENMLLA} (Appendix \ref{app:resultsIWAE}). Again, we observe that UHA\ consistently leads to higher ELBOs and the best test log-likelihood was consistently achieved by UHA with $K = 64$. However, for smaller $K$, IW sometimes had better log-likelihoods than UHA (despite worse ELBOs).
\renewcommand{\arraystretch}{1}
\begin{table}[ht]
\caption{ELBO on the test set (higher is better). For $K = 1$ both methods reduce to plain VI.}
\label{table:resultsVAEELBO}
\centering
\begin{tabular}{lllllll}
\toprule
& & $K = 1$ & $K = 8$ & $K = 16$ & $K = 32$ & $K = 64$ \\
\midrule
\multirow{2}{1cm}{mnist} & UHA & $-93.4$ & $-89.8$ & $-88.8$ & $-88.1$ & $-87.6$ \\
& IW & $-93.4$ & $-90.5$ & $-89.9$ & $-89.4$ & $-89.0$ \\
\midrule
\multirow{2}{1cm}{letters} & UHA & $-137.9$ & $-133.5$ & $-132.3$ & $-131.5$ & $-130.9$ \\
& IW & $-137.9$ & $-134.6$ & $-133.9$ & $-133.2$ & $-132.7$ \\
\midrule
\multirow{2}{1cm}{kmnist} & UHA & $-184.2$ & $-176.6$ & $-174.6$ & $-173.2$ & $-171.6$ \\
& IW & $-184.2$ & $-179.7$ & $-178.7$ & $-177.8$ & $-177.0$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[ht]
\caption{Log-likelihood on the test set (higher is better). This is estimated using AIS with under-damped HMC using $2000$ bridging densities, $1$ HMC iteration with $16$ leapfrog steps per bridging density, integration step-size $\epsilon = 0.06$, and damping coefficient $\eta = 0.8$.}
\label{table:resultsVAENMLL}
\centering
\begin{tabular}{lllllll}
\toprule
& & $K = 1$ & $K = 8$ & $K = 16$ & $K = 32$ & $K = 64$ \\
\midrule
\multirow{2}{1cm}{mnist} & UHA & $-88.5$ & $-87.5$ & $-87.2$ & $-87.0$ & $-86.9$ \\
& IW & $-88.5$ & $-87.6$ & $-87.5$ & $-87.3$ & $-87.2$ \\
\midrule
\multirow{2}{1cm}{letters} & UHA & $-131.9$ & $-130.7$ & $-130.3$ & $-130.1$ & $-129.9$ \\
& IW & $-131.9$ & $-130.9$ & $-130.7$ & $-130.6$ & $-130.4$ \\
\midrule
\multirow{2}{1cm}{kmnist} & UHA & $-174.3$ & $-172.2$ & $-171.6$ & $-171.2$ & $-170.2$ \\
& IW & $-174.3$ & $-173.0$ & $-172.6$ & $-172.4$ & $-172.2$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Preliminaries} \label{sec:preliminaries}
\textbf{Variational inference and augmentation.} Suppose that $p(z) = \frac{1}{Z}\unnorm p(z)$ is some target density, where $\unnorm p$ is unnormalized and $Z=\int \unnorm p(z) dz$ is the corresponding normalizer, and let
\begin{equation}
\ELBO(q(z), \unnorm p(z)) = \E_{q(z)} \log \frac{\unnorm p(z)}{q(z)}
\end{equation}
be the "ELBO operator". Variational inference (VI) is based on the fact that for any $q(z)$ we have \cite{blei2017variational}
\begin{equation}
\log Z = \ELBO(q(z), \unnorm p(z)) + \mathrm{KL}(q(z) \Vert p(z)).
\end{equation}
In VI, the parameters of $q$ are tuned to maximize the "evidence lower bound" (ELBO). Since the KL-divergence is non-negative, this is always a lower bound on $\log Z$. Also, maximizing the ELBO is equivalent to minimizing the KL-divergence from $q$ to $p$.
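As a minimal sketch (NumPy; \texttt{sample\_q}, \texttt{log\_q} and \texttt{log\_p\_unnorm} are placeholders for the user-supplied distributions), the ELBO can be estimated by simple Monte Carlo:
\begin{verbatim}
import numpy as np

def elbo_estimate(sample_q, log_q, log_p_unnorm, n=1000,
                  rng=np.random.default_rng(0)):
    """Monte Carlo estimate of E_q[log p~(z) - log q(z)]."""
    zs = [sample_q(rng) for _ in range(n)]
    return float(np.mean([log_p_unnorm(z) - log_q(z) for z in zs]))
\end{verbatim}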
To get tighter bounds and better approximations recent work has made use of {\em augmented} distributions \cite{agakov2004auxiliary, huang2018improving}. Let $z_{1:M}=(z_1, \cdots, z_M)$
and suppose that $\unnorm p(z_{1:M}) = \unnorm p(z_M) p(z_{1:M-1}\vert z_M)$ augments the original target density while preserving its normalization constant. Then, for any $q(z_{1:M})$ we have
\begin{equation}
\log Z = \ELBO(q(z_{1:M}), \unnorm p(z_{1:M})) + \mathrm{KL}(q(z_{1:M}) \Vert p(z_{1:M})). \label{eq:augELBO}
\end{equation}
The first term is called the "augmented" ELBO and again lower bounds $\log Z$. By the chain rule of KL-divergence \cite{cover1999elements}, the KL-divergence from $q$ to $p$ over $z_{1:M}$ upper-bounds the KL-divergence over $z_M$. This justifies using the marginal of $q$ over $z_M$ to approximate the original target distribution.
\textbf{Annealed Importance Sampling.} A successful approach for creating augmented distributions is Annealed Importance Sampling (AIS) \cite{neal2001ais}. It creates an augmented proposal distribution $q$ by applying a sequence of transition densities $T_m(z_{m+1} \vert z_m)$, and an augmented target by defining transition densities $U_m(z_m \vert z_{m+1})$. This gives the augmented densities
\begin{equation}
q(z_{1:M}) = q(z_1) \prod_{m=1}^{M-1} T_m(z_{m+1} \vert z_m) \quad \mbox{ and } \quad \unnorm p(z_{1:M}) = \unnorm p(z_M) \prod_{m=1}^{M-1} U_m(z_m \vert z_{m + 1}).
\end{equation}
Naively, the ratio of these densities is
\begin{equation}
\frac{\unnorm p(z_{1:M})}{q(z_{1:M})} = \frac{\unnorm p(z_M)}{q(z_1)} \prod_{m=1}^{M-1} \frac{U_m(z_m\vert z_{m+1})}{T_m(z_{m+1}\vert z_m)}. \label{eq:AISnormal}
\end{equation}
To define the transitions $T_m$ and $U_m$, AIS creates a sequence of unnormalized densities $\unnorm \pi_1, \hdots, \unnorm \pi_{M-1}$ that ``bridge'' from a starting distribution $q$ to the target $\unnorm p$, meaning that $\unnorm \pi_1$ is close to $q$ and $\unnorm \pi_{M-1}$ is close to $\unnorm p$.
Then, for each intermediate distribution, $T_m(z_{m+1} \vert z_m)$ is chosen to be a Markov kernel that holds $\pi_m$ invariant, and $U_m$ to be the reversal of $T_m$ with respect to $\pi_m$, defined as
\begin{equation}
U_m(z_m \vert z_{m+1}) = T_m(z_{m+1} \vert z_m) \frac{\pi_m (z_m)}{\pi_m(z_{m+1})}.
\end{equation}
This choice produces a simplification so that eq.~\ref{eq:AISnormal} becomes
\begin{equation}
\frac{\unnorm p(z_{1:M})}{q(z_{1:M})} = \frac{\unnorm p(z_M)}{q(z_1)} \prod_{m=1}^{M-1} \frac{\unnorm \pi_m(z_m)}{\unnorm \pi_{m}(z_{m+1})}. \label{eq:AISest}
\end{equation}
This can be easily evaluated without needing to evaluate the transition densities. The ratio from eq.~\ref{eq:AISest} can be used to get an expression for the lower bound $\ELBO(q(z_{1:M}), \unnorm p(z_{1:M}))$. Research has shown that the AIS augmentation may lead to extremely tight lower bounds \cite{grosse2015sandwiching, grosse2016measuring, sohl2012hamiltonian, wu2016quantitative}.
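As a sketch (assuming \texttt{log\_pi(m, z)} evaluates $\log \unnorm \pi_m(z)$ and \texttt{mcmc\_step(m, z)} applies a kernel holding $\pi_m$ invariant; both are placeholders), the log of the ratio in eq.~\ref{eq:AISest} accumulates along one chain as:
\begin{verbatim}
def ais_log_weight(z1, log_q, log_p_unnorm, log_pi, mcmc_step, M):
    """Accumulate log[p~(z_{1:M}) / q(z_{1:M})] for one AIS chain."""
    z = z1
    logw = -log_q(z)
    for m in range(1, M):
        logw += log_pi(m, z)   # numerator:   pi~_m(z_m)
        z = mcmc_step(m, z)    # transition T_m
        logw -= log_pi(m, z)   # denominator: pi~_m(z_{m+1})
    return logw + log_p_unnorm(z)
\end{verbatim}
Averaging \texttt{exp(logw)} over chains gives an unbiased estimate of $Z$, while the average of \texttt{logw} lower-bounds $\log Z$.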
\textbf{Hamiltonian Dynamics.} Many MCMC methods used to sample from $p(z)$ are based on Hamiltonian dynamics \cite{betancourt2017geometric, chen2014stochastic, neal2011mcmc, welling2011bayesian}. The idea is to create an augmented distribution $p(z, \rho) = p(z) S(\rho)$, where $S(\rho)$ is a distribution over a momentum variable $\rho$ (e.g. a Multivariate Gaussian). Then, one can define numerical integration schemes where $z$ and $\rho$ evolve while nearly holding $p(z, \rho)$ constant. When corrected by a Metropolis-Hastings acceptance step, this can be made to exactly hold $p(z, \rho)$ invariant. This is alternated with a scheme that resamples the momentum $\rho$ while holding $S(\rho)$ invariant. When Hamiltonian dynamics work well, $z$ can quickly move around, suppressing random-walk behavior.
There are a variety of different Hamiltonian MCMC methods, corresponding to different integration schemes, momentum distributions, and ways of resampling the momentum. For instance, HMC and Langevin dynamics use the leapfrog integrator, a Gaussian for the momentum variables and a full resampling of the momentum variables at each step \cite{neal2011mcmc, welling2011bayesian}. On the other hand, if the momentum variables are only partially resampled, the under-damped variants of HMC and Langevin dynamics are recovered \cite{neal2011mcmc}. It was observed that partial resampling may lead to improved performance \cite{cheng2018underdamped}.
It is easy to integrate Hamiltonian dynamics into AIS. First, define an augmented target $\unnorm p(z,\rho)=\unnorm p(z) S(\rho)$ and an augmented starting distribution $q(z,\rho)=q(z)S(\rho)$. Then, create a series of augmented densities $\unnorm \pi_1(z, \rho), \hdots, \unnorm \pi_{M-1}(z, \rho)$ bridging the two as $\unnorm \pi_m(z,\rho) = \unnorm \pi_m(z) S(\rho)$. Finally, define the forward transition $T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m)$ to be an iteration of a Hamiltonian MCMC method that leaves $\pi_m(z, \rho)$ invariant. We will describe a single transition $T_m$ as a sequence of three steps: (1) resample the momentum; (2) simulate Hamiltonian dynamics and apply an accept-reject step; and (3) negate the momentum. The precise process that defines the transition is shown in Alg.~\ref{alg:correctedtm}. Note that this algorithm is quite general, and compatible with HMC, Langevin dynamics and their underdamped variants (by selecting an appropriate integrator and resampling method).
\begin{algorithm}[ht]
\caption{Corrected $T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m )$}
\label{alg:correctedtm}
\begin{algorithmic}
\State 1. Sample $\rho'_m$ from some $s(\rho'_m\vert \rho_m)$ that leaves $S(\rho)$ invariant. Set $z'_m \leftarrow z_m$.
\vspace{0.08cm}
\State 2. Simulate Hamiltonian dynamics as \small$(z''_m, \rho''_m) \leftarrow \T_m(z'_m, \rho'_m)$\normalsize.
\State \hspace{0.3cm} Calculate an acceptance probability \small$\alpha = \min\left(1, \unnorm \pi_m(z''_m, \rho''_m) / \unnorm \pi_m(z'_m, \rho'_m)\right)$\normalsize.
\State \hspace{0.3cm} With probability $\alpha$, set \small$(z'''_m,\rho'''_m) \leftarrow (z''_m,\rho''_m)$\normalsize. Otherwise, set \small$(z'''_m, \rho'''_m) \leftarrow (z'_m, \rho'_m)$\normalsize.
\vspace{0.08cm}
\State 3. Reverse the momentum as \small$(z_{m+1}, \rho_{m+1}) \leftarrow (z'''_m, -\rho'''_m)$\normalsize.
\State \Return $(z_{m+1}, \rho_{m+1})$
\end{algorithmic}
\end{algorithm}
Representing $T_m$ this way makes it easy to show it holds the density $\pi_m(z,\rho)$ invariant. The overall strategy is to show that each of the steps 1-3 holds $\pi_m$ invariant, and so does their composition \cite[\S3.2]{neal2011mcmc}. For steps 1 and 3 this is trivial, provided that $S(\rho)=S(-\rho)$. For step 2, we require that the simulation $\T_m$ has unit Jacobian and satisfies $\T_m^{-1} = \T_m$. Then, $\T_m$ can be interpreted as a symmetric Metropolis-Hastings proposal, meaning the Metropolis-Hastings acceptance probability $\alpha$ is as given. A typical choice for $\T_m$ that satisfies these requirements is the leapfrog integrator with a momentum reversal at the end. (This reversal then gets "un-reversed" in step 3 for accepted moves.)
Since $T_m$ holds $\pi_m$ invariant, we can define $U_m$ as the reversal of $T_m$ with respect to $\pi_m$. Then, eq.~\ref{eq:AISest} becomes
\begin{equation}
\frac{\unnorm p(z_{1:M}, \rho_{1:M})}{q(z_{1:M}, \rho_{1:M})} = \frac{\unnorm p(z_M, \rho_M)}{q(z_1, \rho_1)} \prod_{m=1}^{M-1} \frac{\unnorm \pi_m(z_m, \rho_m)}{\unnorm \pi_m(z_{m+1}, \rho_{m+1})}. \label{eq:haisest}
\end{equation}
Using this ratio we get an expression for the lower bound $\ELBO(q(z_{1:M}, \rho_{1:M}), \unnorm p(z_{1:M}, \rho_{1:M}))$ obtained with Hamiltonian AIS. While this method has been observed to yield strong lower bounds on $\log Z$ \cite{sohl2012hamiltonian, wu2016quantitative} (see also Section \ref{sec:exps_real}), its performance depends on many parameters: initial distribution $q(z)$, momentum distribution $S$, momentum resampling scheme, simulator $\mathcal{T}_m$, and bridging densities. We would like to tune these parameters by maximizing the ELBO using reparameterization-based estimators. However, due to the accept-reject step required by the Hamiltonian MCMC transition, the resulting bound is not differentiable, and thus reparameterization gradients are not available.
\section{Generating the Hamiltonian AIS bound} \label{app:HAISboundalg}
\begin{algorithm}[H]
\caption{Generating the (non-differentiable) Hamiltonian AIS variational bound.}
\label{alg:sampleais}
\begin{algorithmic}
\State Sample $z_1 \sim q$ and $\rho_1 \sim S$.
\State Initialize estimator as $\mathcal{L} \leftarrow -\log q(z_1, \rho_1)$.
\For{$m = 1, 2, \cdots , M-1$}
\State Run corrected $T_m$ (Alg.~\ref{alg:correctedtm}) on input $(z_m, \rho_m)$, storing the output $(z_{m+1}, \rho_{m+1})$.
\State Update estimator as $\mathcal{L} \leftarrow \mathcal{L} + \log \left( \unnorm \pi_m(z_m, \rho_m) / \unnorm \pi_m(z_{m+1}, \rho_{m+1}) \right)$.
\EndFor
\State Update estimator as $\mathcal{L} \leftarrow \mathcal{L} + \log \unnorm p(z_M, \rho_M)$.
\State \Return $\mathcal{L}$
\end{algorithmic}
\end{algorithm}
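As a hedged sketch of how Alg.~\ref{alg:sampleais} translates to code, the following assumes callables for the joint initial log-density, the unnormalized bridging log-densities, the unnormalized target log-density, and the corrected transition of Alg.~\ref{alg:correctedtm} (the names are placeholders):
\begin{verbatim}
def hais_lower_bound(log_q, log_pi, log_p, transition, sample_init, M):
    # log_q(z, rho): joint initial log-density log q(z) + log S(rho)
    # log_pi(m, z, rho): unnormalized log-density of bridging density pi_m
    # log_p(z, rho): unnormalized target log-density
    z, rho = sample_init()                    # z_1 ~ q, rho_1 ~ S
    L = -log_q(z, rho)
    for m in range(1, M):
        z2, rho2 = transition(m, z, rho)      # corrected T_m (Alg. 1)
        L += log_pi(m, z, rho) - log_pi(m, z2, rho2)
        z, rho = z2, rho2
    return L + log_p(z, rho)                  # adds log p(z_M, rho_M)
\end{verbatim}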
\section{More results tuning more subsets of parameters for UHA} \label{app:tunemoreuha}
We tested UHA\ tuning different subsets of $\{\epsilon, \eta, \Sigma, \beta, q(z), \epsilon(\beta), \psi(\beta)\}$. Fig.~\ref{fig:tuning_more} shows the results. The first row shows the results obtained by tuning the pair $(\epsilon, \eta)$ together with each other parameter individually for different values of $K$, and the second row shows the results obtained by tuning increasingly many parameters. It can be observed that tuning $\beta$ and $q(z)$ leads to the largest gains in performance.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4, trim = {0 1.5cm 0 0}, clip]{./figures/each_indbrownian_smallcompare.pdf}\hfill
\includegraphics[scale=0.4, trim = {1.1cm 1.5cm 0 0}, clip]{./figures/each_indlorenz_smallcompare.pdf}\hfill
\includegraphics[scale=0.4, trim = {1.3cm 1.5cm 0 0}, clip]{./figures/each_indlog_a1a_smallcompare.pdf}\hfill
\includegraphics[scale=0.4, trim = {1.3cm 1.5cm 0 0}, clip]{./figures/each_indlog_madelon_smallcompare.pdf}
\hspace{0.8cm}
\vspace{0.2cm}
\includegraphics[scale=0.4, trim = {0 0 0 0.86cm}, clip]{./figures/one_ata_timebrownian_smallcompare.pdf}\hfill
\includegraphics[scale=0.4, trim = {1.1cm 0 0 0.87cm}, clip]{./figures/one_ata_timelorenz_smallcompare.pdf}\hfill
\includegraphics[scale=0.4, trim = {1.3cm 0 0 0.9cm}, clip]{./figures/one_ata_timelog_a1a_smallcompare.pdf}\hfill
\includegraphics[scale=0.4, trim = {1.3cm 0 0 0.9cm}, clip]{./figures/one_ata_timelog_madelon_smallcompare.pdf}
\caption{\textbf{Tuning more parameters leads to significantly better results.} Legends indicate what parameters are being trained. Parameters are step-size $\epsilon$, damping coefficient $\eta$, moment covariance $\Sigma$, bridging densities parameters $\beta$ and $\psi$, initial distribution $q$. $\epsilon(\beta)$ indicates we are learning the step-size as an affine function of $\beta$.}
\label{fig:tuning_more}
\end{figure}
\section{Results using architecture from Burda et al. \cite{IWVAE}} \label{app:resultsIWAE}
In this section we show the results achieved for VAE training using the architecture from Burda et al. \cite{IWVAE} (with $1$ stochastic layer). In this case the encoder and decoder consist of feed-forward neural networks with two hidden layers of size $200$ with \textit{Tanh} non-linearities, and a latent space dimensionality of $50$. All training details are the same, but with the constraint $\epsilon \in (0, 0.04)$. Tables \ref{table:resultsVAEELBOA} and \ref{table:resultsVAENMLLA} show the results.
\renewcommand{\arraystretch}{1}
\begin{table}[ht]
\caption{ELBO on the test set (higher is better). For $K = 1$ both methods reduce to plain VI.}
\label{table:resultsVAEELBOA}
\centering
\begin{tabular}{lllllll}
\toprule
& & $K = 1$ & $K = 8$ & $K = 16$ & $K = 32$ & $K = 64$ \\
\midrule
\multirow{2}{1cm}{mnist} & UHA & $-92.4$ & $-89.2$ & $-88.5$ & $-88.1$ & $-87.1$ \\
& IW & $-92.4$ & $-89.9$ & $-89.3$ & $-88.8$ & $-88.5$ \\
\midrule
\multirow{2}{1cm}{letters} & UHA & $-139.0$ & $-134.3$ & $-133.3$ & $-132.6$ & $-131.2$ \\
& IW & $-139.0$ & $-135.5$ & $-134.7$ & $-134.0$ & $-133.4$ \\
\midrule
\multirow{2}{1cm}{kmnist} & UHA & $-197.5$ & $-189.5$ & $-188.1$ & $-187.1$ & $-180.3$ \\
& IW & $-197.5$ & $-191.8$ & $-190.2$ & $-188.8$ & $-187.6$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[ht]
\caption{Log-likelihood on the test set (higher is better). This is estimated using AIS with under-damped HMC using $2000$ bridging densities, $1$ HMC iteration with $16$ leapfrog steps per bridging density, integration step-size $\epsilon = 0.05$, and damping coefficient $\eta = 0.8$.}
\label{table:resultsVAENMLLA}
\centering
\begin{tabular}{lllllll}
\toprule
& & $K = 1$ & $K = 8$ & $K = 16$ & $K = 32$ & $K = 64$ \\
\midrule
\multirow{2}{1cm}{mnist} & UHA & $-88.3$ & $-87.6$ & $-87.4$ & $-87.3$ & $-86.3$ \\
& IW & $-88.3$ & $-87.3$ & $-87.0$ & $-86.8$ & $-86.6$ \\
\midrule
\multirow{2}{1cm}{letters} & UHA & $-133.0$ & $-131.8$ & $-131.4$ & $-131.2$ & $-129.9$ \\
& IW & $-133.0$ & $-131.6$ & $-131.2$ & $-130.9$ & $-130.6$ \\
\midrule
\multirow{2}{1cm}{kmnist} & UHA & $-188.3$ & $-186.3$ & $-185.8$ & $-185.3$ & $-177.4$ \\
& IW & $-188.3$ & $-184.4$ & $-183.2$ & $-182.1$ & $-181.2$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Extrapolating optimal parameters for UHA} \label{sec:extrapol}
Some results in Section \ref{sec:tuningmore} (and Appendix \ref{app:tunemoreuha}) use a number of bridging densities $K$ up to 512. As mentioned previously, for those simulations, if $K_1 \geq 64$ bridging densities were used, we optimized the parameters for $K_2 = 64$ and extrapolated them to work with $K_1$. We now explain this procedure.
From the parameters considered, $\{\epsilon, \eta, \Sigma, \beta, q(z), \epsilon(\beta), \psi(\beta)\}$, the only ones that need to be ``extrapolated'' are the step-size $\epsilon$ and the bridging parameters $\beta$. All other parameters are tuned for $K_2 = 64$ bridging densities and the values obtained are directly used with $K_1$ bridging densities.
For $\beta$ we use a simple interpolation. Define $f(x)$ to be the piecewise linear function (with $K_2 = 64$ ``pieces'') that satisfies $f(x_k) = \beta_k$, for $x_k = k / K_2$ and $k = 0, \cdots, K_2$ (this is a bijection from $[0, 1]$ to $[0, 1]$). Then, when using $K_1$, we simply define $\beta_k = f(x_k)$, where $x_k = k / K_1$ and $k = 0, \cdots, K_1$.
For $\epsilon$, we use the transformation $\epsilon_{K_1} = \epsilon_{K_2} \frac{\log K_2}{\log K_1}$. While other transformations could be used (e.g. without the $\log$), we observed this to work best in practice. (In fact, we obtained this rule by analyzing the dependence of the optimal $\epsilon$ on $K$ for several tasks and values of $K$.)
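The two rules above can be written in a few lines; the sketch below (with illustrative names) uses \texttt{np.interp} for the piecewise-linear map $f$ and the $\log$-ratio scaling for the step-size:
\begin{verbatim}
import numpy as np

def extrapolate_beta(beta, K1):
    # beta: optimized schedule (length K2 + 1) defining the piecewise-linear
    # bijection f: [0, 1] -> [0, 1] with f(k / K2) = beta_k.
    K2 = len(beta) - 1
    return np.interp(np.arange(K1 + 1) / K1, np.arange(K2 + 1) / K2, beta)

def extrapolate_step_size(eps_K2, K1, K2=64):
    # Empirical rule from the text: eps_K1 = eps_K2 * log(K2) / log(K1).
    return eps_K2 * np.log(K2) / np.log(K1)
\end{verbatim}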
\section{Approximation accuracy} \label{app:comparehmc}
We study the accuracy of the approximation provided by UHA by analyzing the posterior moment errors: We estimate the mean and covariance of the target distribution using UHA and compute the mean absolute error of these estimates. (We get the ground truth values using approximate samples obtained by running NUTS \cite{hoffman2014no} for $500000$ steps.) We consider a logistic regression model with the \textit{sonar} dataset ($d = 61$), and compare against mean field VI, IW, and HMC. We give each method the same computational budget $B$, measured as the total number of model (or gradient) evaluations, and perform simulations for $B\in \{10^5, 5\times 10^5, 10^6\}$.
For HMC, we use half of the budget for the warm-up phase and half to draw samples. For mean field VI we use the whole budget for optimization, and use the final mean and variance parameters for the approximation. For UHA\ and IW we train using $K=32$ for $3000$ steps, and use the remaining budget of model evaluations to draw samples (used to estimate posterior moments) using $K=256$.\footnote{For UHA\ we use the extrapolation explained in Appendix \ref{sec:extrapol}.} For UHA\ we tune the step-size $\epsilon$, the damping coefficient $\eta$, the momentum distribution covariance (diagonal), the bridging densities coefficients $\beta$, and the parameters of the initial distribution $q(z)$.
Fig. \ref{fig:posterior_approx} shows the results for the posterior covariance. We do not include the results for the posterior mean because all methods perform similarly. It can be observed that HMC achieves the lowest error, followed by UHA. Both mean field VI and IW yield significantly worse results.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4, trim = {0 0 0 0}, clip]{./figures/sonar.pdf}
\caption{Mean absolute error for posterior covariance approximation. Standard errors computed by repeating the simulations using five different random seeds.}
\label{fig:posterior_approx}
\end{figure}
\section{Proof of Lemma \ref{lemma:reversal}} \label{app:proofrev}
We begin with the following result.
\begin{lemma}
Let $T_1$, $T_2$ and $T_3$ be three transitions that leave some distribution $\pi$ invariant and satisfy $T_i(z'\vert z) \pi(z) = U_i(z\vert z') \pi(z')$ (i.e. $U_i$ is the reversal of $T_i$ with respect to $\pi$), and let $T = T_3 \circ T_2 \circ T_1$ denote their composition (applying $T_1$ first). Then the reversal of $T$ with respect to $\pi$ is given by $U = U_1 \circ U_2 \circ U_3$ (applying $U_3$ first).
\end{lemma}
\begin{proof}
\begin{align}
T(z'\vert z) \pi(z) & = \int T_3(z' \vert z_2) T_2(z_2 \vert z_1) T_1(z_1 \vert z) \pi(z) \,dz_1\,dz_2\\
& = \int T_3(z' \vert z_2) T_2(z_2 \vert z_1) \pi(z_1 ) U_1(z \vert z_1) \,dz_1\,dz_2\\
& = \int T_3(z' \vert z_2) \pi(z_2) U_2(z_1 \vert z_2) U_1(z \vert z_1) \,dz_1\,dz_2\\
& = \pi(z') \int U_3(z_2 \vert z') U_2(z_1 \vert z_2) U_1(z \vert z_1) \,dz_1\,dz_2\\
& = \pi(z') \int U_1(z \vert z_1) U_2(z_1 \vert z_2) U_3(z_2 \vert z') \,dz_1\,dz_2\\
& = \pi(z') U(z \vert z').
\end{align}
\end{proof}
The rest of the proof is straightforward. Let the three steps from the corrected version of $T_m$ (Alg.~\ref{alg:correctedtm}) be denoted $T_m^1$, $T_m^2$ and $T_m^3$. The latter two (Hamiltonian simulation with accept-reject step and momentum negation) satisfy detailed balance with respect to $\pi_m(z, \rho)$ \cite[\S3.2]{neal2011mcmc}. Thus, for these two, $U_m^i$ is defined by the same process as $T_m^i$. For $T_m^1$ (momentum resampling), its reversal is given by the reversal of $s(\rho'\vert \rho)$ with respect to $S(\rho)$. We call this $s_{\mathrm{rev}}(\rho\vert \rho')$, and it satisfies
\begin{equation}
s_{\mathrm{rev}}(\rho\vert \rho') = s(\rho'\vert \rho) \frac{S(\rho)}{S(\rho')}.
\end{equation}
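For concreteness, consider the common partial momentum refreshment $s(\rho'\vert \rho) = \mathcal{N}(\rho'; \eta\rho, (1-\eta^2)\Sigma)$ with $S(\rho) = \mathcal{N}(\rho; 0, \Sigma)$ (used here purely as an illustrative example). The joint density $s(\rho'\vert \rho)S(\rho)$ is then a zero-mean Gaussian over $(\rho, \rho')$ with $\mathrm{Var}(\rho) = \mathrm{Var}(\rho') = \Sigma$ and $\mathrm{Cov}(\rho, \rho') = \eta\Sigma$, and is therefore symmetric under swapping $\rho$ and $\rho'$. In that case $s$ satisfies detailed balance with respect to $S$, and $s_{\mathrm{rev}} = s$.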
\section{Proof of Theorem \ref{thm:ratioUT}} \label{app:proofthm}
To deal with delta functions, whenever the transition states [Set $z' \leftarrow z$], we use $z' \sim \mathcal{N}(z, a)$, and take the limit $a \to 0$. We use $g_a(z)$ to denote the density of a Gaussian with mean zero and variance $a$ evaluated at $z$, and $\gamma(z, \rho) = (z, -\rho)$ (operator that negates momentum).
We first compute $T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m)$. We have that $\rho'_m \sim s(\cdot\vert \rho_m)$ and $z'_m \sim \mathcal{N}(z_m, a)$. Thus, the density of the intermediate state $(z'_m, \rho'_m)$ given $(z_m, \rho_m)$, which we denote $T'_m$, is
\begin{equation}
T_m'(z_{m}', \rho_{m}' \vert z_m, \rho_m) = s(\rho_m'\vert \rho_m) g_a(z'_m - z_m).
\end{equation}
Also, we have $(z_{m+1}, \rho_{m+1}) = (\gamma \circ \mathcal{T}_m)(z'_m, \rho'_m)$. Since $\gamma \circ \mathcal{T}_m$ is an invertible transformation with unit Jacobian and inverse $(\gamma \circ \mathcal{T}_m)^{-1} = \mathcal{T}_m \circ \gamma$, we get that
\begin{align}
T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m) & = T_m'((\mathcal{T}_m \circ \gamma) \, (z_{m+1}, \rho_{m+1}) \vert z_m, \rho_m)\\
& = s(\mathcal{T}_m^\rho(z_{m+1}, -\rho_{m+1})\vert \rho_m) \, g_a(\mathcal{T}_m^z(z_{m+1}, -\rho_{m+1}) - z_m),
\end{align}
where $\mathcal{T}_m^\rho$ is the operator that applies $\mathcal{T}_m$ and returns the second component of the result (and similarly for $\mathcal{T}_m^z$).
Now, we compute $U_m(z_m, \rho_m \vert z_{m+1}, \rho_{m+1})$. We have that $(z'_m, \rho'_m) = (\mathcal{T}_m \circ \gamma) \, (z_{m+1}, \rho_{m+1})$. Thus,
\begin{align}
U_m(z_m, \rho_m \vert z_{m+1}, \rho_{m+1}) & = U_m(z_m, \rho_m \vert z_{m}', \rho_{m}')\\
& = s_\mathrm{rev}(\rho_m \vert \rho_m') \, g_a(z_m - z_m')\\
& = s_\mathrm{rev}(\rho_m \vert \mathcal{T}_m^\rho(z_{m+1}, -\rho_{m+1})) \, g_a(z_m - \mathcal{T}_m^z(z_{m+1}, -\rho_{m+1})).
\end{align}
When taking the ratio $U_m(z_m, \rho_m \vert z_{m+1}, \rho_{m+1}) / T_m(z_{m+1}, \rho_{m+1} \vert z_m, \rho_m)$, the factors involving the Gaussian pdf cancel (the density of a mean-zero Gaussian is symmetric), and using the fact that
\begin{equation}
s_\mathrm{rev}(\rho_m\vert \rho'_m) S(\rho'_m) = s(\rho'_m\vert \rho_m) S(\rho_m) \longrightarrow
\frac{s_\mathrm{rev}(\rho_m\vert \rho'_m)}{s(\rho'_m\vert \rho_m)} = \frac{S(\rho_m)}{S(\rho'_m)}
\end{equation}
yields the desired result.
\section{Introduction}
\label{intro}
Face recognition has many applications in the fields of image processing and computer vision; advancements in related technologies allow its efficient and accurate integration in many areas, from individual face recognition for unlocking a mobile device to crowd surveillance. Companies have also invested heavily in this field; Google's facial recognition in the Google Glass project~\cite{mandal2014wearable}, Facebook's DeepFace technology~\cite{macaulay2016queen}, and Apple's patented face identification system~\cite{bhagavatula2015biometric} are examples of the growing number of facial identification systems. Existing face recognition technologies and the widespread use of biometrics introduce a serious threat to individuals' privacy, exacerbated by the fact that biometric identification is often done quietly, without proper consent from the observed people. For example, the UK uses an estimated 4.2 million surveillance cameras to monitor public areas~\cite{erkin2009privacy}, and it is not feasible to obtain explicit consent from such an extremely large number of persons being watched. Moreover, facial images directly reflect their owners' identity, and they can be easily linked to other sensitive information such as health and financial records, raising privacy concerns. Biometric data analysis systems often need to employ high-performance third-party servers to conduct complex computational operations on large numbers of biometric data inputs. However, these third-party servers can be accessed by untrusted parties, raising further privacy issues.
Among different definitions, information privacy can be defined as the ``controlled information release'' that permits an anticipated level of utility via a private function that protects the identity of the data owners~\cite{chamikara2019efficient}. Privacy-preserving face recognition involves at least two main parties: one needs to recognize an image (party 1), and the other holds the database of images (party 2). Data encryption would allow party 1 to learn the result without learning the execution of the recognition algorithm or its parameters, whereas party 2 would not learn the input image or the result of the recognition process~\cite{erkin2009privacy}. However, the high computational complexity and the need to trust the parties for their respective responsibilities can be major issues. Proposed in this paper is data perturbation, which is significantly less computationally complex, but incurs a certain level of utility loss. Data perturbation allows all parties to be untrusted~\cite{chamikaraprocal}: the parties learn only the classification result (e.g. the name/tag of the image) with a certain level of confidence, but have no access to the original image. The literature identifies two major application scenarios of recognition technologies in which a third-party server is used: (1) the use of biometric data such as face images and fingerprints to identify and authenticate a person (e.g. at border crossings), and (2) the deployment of surveillance cameras in public places to automatically match or identify faces (offender tracking/criminal investigations~\cite{chamikara2016fuzzy}). There are a few methods that are based on encryption to provide privacy-preserving face recognition~\cite{erkin2009privacy,sadeghi2009efficient,xiang2016privacy}, which need one or more trusted third parties in a server-based setting (e.g. cloud servers). However, in an environment where no trusted party is present, such semi-honest approaches raise privacy concerns, as the authorized trusted parties are still allowed to access the original image data (raw or encrypted). Moreover, an encryption-based mechanism for scenarios that process millions of faces would be extremely inefficient and difficult to maintain. Methods such as $k$-same~\cite{newton2005preserving}, which preserve privacy by de-identifying face images, can avoid the necessity of a trusted third party. However, such methods introduce utility issues in large-scale scenarios with millions of faces, due to the limitations of the underlying privacy models used (e.g. $k$-anonymity)~\cite{chamikaraprocal}. We identify five main types of issues (TYIS) with the existing privacy-preserving approaches for face recognition. They are as follows. TYIS 1: face biometrics should not be linkable to other sensitive data; TYIS 2: the method should be scalable and resource-friendly; TYIS 3: face biometrics should not be accessible by anyone (i.e. use a one-way transformation); TYIS 4: face biometrics of the same person from two different applications should not be linkable; and TYIS 5: face biometrics should be revocable (if data is leaked, the application should have a way of revoking them to prevent any malicious use).
This paper proposes a method to control privacy leakage from face recognition, addressing the five TYIS better than the existing privacy-preserving face recognition approaches. We propose an approach that stores data in a perturbed form. The method utilizes differential privacy to devise a novel technique (named PEEP: \underline{P}rivacy using \underline{E}ig\underline{E}nface \underline{P}erturbation) for privacy-preserving face recognition. PEEP uses the properties of local differential privacy to perturb the input image data, limiting potential privacy leaks due to the involvement of untrusted third-party servers and users. To avoid the necessity of a trusted third party, we apply randomization to the data used for training and testing. Due to its extremely low complexity, PEEP can be easily implemented on resource-constrained devices, allowing perturbation at the input end. The ability to control the level of privacy by adjusting the privacy budget is an additional advantage of the proposed method. The privacy budget signifies the level of privacy provided by a privacy-preserving algorithm; the higher the privacy budget, the lower the privacy. PEEP utilizes local differential privacy at the cost of a drop in accuracy as low as 6 percentage points (e.g. from $85\%$ to $79\%$) with a privacy budget of $\varepsilon=8$. A mechanism with a privacy budget ($\varepsilon$) of $0<\varepsilon\leq 9$ is considered to provide an acceptable level of privacy~\cite{abadi2016deep,arachchige2019local}. Consequently, PEEP is capable of adjusting the privacy-accuracy trade-off by changing the privacy budget, which controls the added noise.
The rest of the paper is organized as follows. Section \ref{relwork} provides a summary of existing related work. The foundations of the proposed work are briefly discussed in Section \ref{fndatins}. Section \ref{ourapprch} provides the technical details of the proposed approach. The results are discussed in Section \ref{resdis}. The paper is concluded in Section \ref{concls}.
\section{Related Work}
\label{relwork}
The literature shows vast advancements in the area of face recognition, employing different approaches such as input image preprocessing~\cite{heseltine2003face}, statistical approaches~\cite{tsalakanidou2003use,delac2005appearance}, and deep learning~\cite{parkhi2015deep}. Continuous improvements in the field have significantly increased the accuracy of face recognition, making it a widely used approach in many fields~\cite{parkhi2015deep}. Furthermore, approaches such as the one proposed by Cendrillon et al. show the dynamic capabilities of face recognition systems that allow real-time processing~\cite{cendrillon2000real}. However, biometric data analysis is a vast area not limited to face recognition. With biometric data, a major threat is privacy violation~\cite{bhargav2007privacy}. Biometric data are almost always non-revocable and can be used to identify a person easily in a large set of individuals; hence, it is essential to apply a privacy-preserving mechanism when using biometrics, e.g. for identification and authentication~\cite{bringer2013privacy}. The literature shows a few approaches to address privacy issues in face recognition. Zekeriya Erkin et al. (ZEYN)~\cite{erkin2009privacy} introduced a privacy-preserving face recognition method based on a cryptographic protocol for comparing two Paillier-encrypted values. Their solution focuses on a two-party scenario where one party holds the privacy-preserving algorithm and the database of face images, and the other party wants to recognize/classify a facial image input. ZEYN requires $O(\log M)$ rounds and needs computationally expensive operations on homomorphically encrypted data to recognize a face in a database of images, hence it is not suitable for large-scale scenarios. Ahmad-Reza Sadeghi et al. (ANRA)~\cite{sadeghi2009efficient} introduced a relatively efficient method based on homomorphic encryption with garbled circuits. Nevertheless, the complexity of ANRA likewise prevents it from addressing large-scale scenarios. Xiang et al. tried to overcome the computational complexities of the previous methods by introducing another cryptographic mechanism that uses the cloud~\cite{xiang2016privacy} for outsourced computations. However, being a semi-honest model, introducing another untrusted module such as the cloud increases the possibility of a privacy leak. PE-MIU (Privacy-Enhancing face recognition approach based on Minimum Information Units)~\cite{terhorst2020pe} and POR (a lightweight privacy-preserving adaptive boosting (AdaBoost) classification framework for face recognition)~\cite{ma2019lightweight} are two other recently developed privacy-preserving face recognition approaches. PE-MIU is based on the concept of minimum information units, whereas POR is based on additive secret sharing. PE-MIU is also a semi-honest approach, which lacks a proper privacy definition in its mechanism. Moreover, the scalability of PE-MIU can be limited due to the exponential number of template comparisons necessary during the execution of the proposed algorithm. POR provides a relatively efficient approach compared to the previous encryption-based approaches. However, being a semi-honest approach, POR inherits the issues of any semi-honest approach discussed above. The proposed cryptographic methods cannot work without a trusted third party, and these trusted parties may later behave maliciously. Newton et al. proposed a de-identification approach for face images (named $k$-same), which does not need complex cryptographic operations~\cite{newton2005preserving}. The proposed method is based on $k$-anonymity~\cite{chamikaraprocal,chamikara2019infosci}. However, $k$-anonymity tends to reduce accuracy and increase information leakage when applied to high-dimensional data~\cite{chamikaraprocal}. The same problem can occur when using $k$-same for large-scale scenarios involving the surveillance of millions of people. In addition to these works, researchers have looked at complementary techniques such as developing privacy-friendly surveillance cameras~\cite{dufaux2006scrambling,yu2008privacy}, but these methods do not provide sufficient accuracy for privacy-preserving face recognition.
Fingerprint data and iris data are two other heavily used biometrics for identification and authentication. Privacy-preserving finger code authentication~\cite{barni2010privacy} and privacy-preserving key generation for iris biometrics~\cite{rathgeb2010privacy} are two approaches that apply cryptographic methods to maintain the privacy of fingerprint and iris data. However, these solutions also need more efficient procedures, as cryptographic approaches are computationally inefficient~\cite{rathgeb2010privacy, gai2016privacy}. Privacy-preserving fingerprint and iris analysis can be possible future applications for PEEP, but this needs further investigation. Classification is the most commonly applied data mining technique used in biometric systems~\cite{brady1999biometric}. Encryption and data perturbation are the two main approaches used for privacy-preserving data mining (PPDM)~\cite{yang2017efficient}. Data perturbation often entails lower computational complexity than encryption at the expense of utility; hence, data perturbation is better at producing high efficiency in large-scale data mining. Noise addition, geometric transformation, randomization, condensation, and hybrid perturbation are a few of the perturbation approaches~\cite{zhong2012mu,chamikaraprocal}. As data perturbation methods do not change the original input data formats, they may concede some privacy leakage~\cite{machanavajjhala2015designing}. A privacy model defines the constraints on the level of privacy of a particular perturbation mechanism~\cite{machanavajjhala2015designing}; $k$-anonymity, $l$-diversity, $(\alpha, k)$-anonymity, $t$-closeness, and differential privacy (DP) are some such privacy models~\cite{chamikaraprocal}. DP was developed to provide a better privacy guarantee than previous privacy models, which are vulnerable to different privacy attacks~\cite{dwork2009differential,9000905}. The Laplace mechanism, Gaussian mechanism~\cite{chanyaswad2018mvg}, geometric mechanism, randomized response~\cite{qin2016heavy}, and staircase mechanism~\cite{kairouz2014extremal} are a few of the fundamental mechanisms used to achieve DP. There are many practical examples where these fundamental mechanisms have been used to build differentially private algorithms/methods; LDPMiner~\cite{qin2016heavy}, PINQ~\cite{mcsherry2009privacy}, RAPPOR~\cite{erlingsson2014rappor}, and Deep Learning with DP~\cite{abadi2016deep} are a few such practical applications of DP.
\section{Foundations of Differential Privacy and Eigenface recognition}
\label{fndatins}
In this section, we describe the background of the techniques used in the proposed solution. PEEP conducts privacy-preserving face recognition utilizing the concepts of differential privacy and eigenface recognition.
\subsection{Differential Privacy (DP)}
DP is a privacy model that is known to provide a strong privacy guarantee by minimizing the chance of individual record identification~\cite{kairouz2014extremal}. In principle, DP defines bounds on how much information can be revealed to a third party/adversary about someone's data being present in a particular database. Conventionally, $\varepsilon$ (epsilon) is used to denote the level of privacy rendered by a randomized privacy-preserving algorithm ($\mathcal{M}$) over a particular database ($\mathcal{D}$); $\varepsilon$ is called the privacy budget, and it provides an insight into the privacy loss of a DP algorithm. The higher the value of $\varepsilon$, the higher the privacy loss.
Let $x$ and $y$ be two adjacent datasets of $\mathcal{D}$, where $y$ differs from $x$ by only one person's record. Then $\mathcal{M}$ satisfies $\varepsilon$-DP if Equation \eqref{dpeq} holds. Assume that datasets $x$ and $y$ are collections of records from a universe $\mathcal{X}$, and let $\mathbb{N}$ denote the set of all non-negative integers including zero.
\begin{mydef}
A randomized algorithm $\mathcal{M}$ with domain $\mathbb{N}^{|\mathcal{X}|}$ and
range $\mathcal{R}$ is $\varepsilon$-differentially private if for every adjacent $x, y \in \mathbb{N}^{|\mathcal{X}|}$
and for any subset $\mathcal{S} \subseteq \mathcal{R}$,
\label{difpriv}
\end{mydef}
\begin{equation}
\Pr[\mathcal{M}(x) \in \mathcal{S}] \leq \exp(\varepsilon)~\Pr[\mathcal{M}(y) \in \mathcal{S}]
\label{dpeq}
\end{equation}
\subsection{Global vs. Local Differential Privacy}
Global differential privacy (GDP) and local differential privacy (LDP) are the two main approaches to DP. In the GDP setting, a trusted curator applies carefully calibrated random noise to the real values returned for a particular query; the GDP setting is hence also called the trusted curator model~\cite{chan2012differentially}. The Laplace mechanism and the Gaussian mechanism~\cite{dwork2014algorithmic} are two of the most frequently used noise generation methods in GDP. A randomized algorithm $\mathcal{M}$ provides $\varepsilon$-GDP if Equation \eqref{dpeq} holds. LDP randomizes data before the curator can access them, removing the need for a trusted curator; LDP is therefore also called the untrusted curator model~\cite{kairouz2014extremal}. LDP can also be used by a trusted party to randomize all records in a database at once. LDP algorithms often produce very noisy data, as noise is applied to achieve the privacy of each individual record. LDP is considered a strong and rigorous notion of privacy that provides plausible deniability, and it is deemed a state-of-the-art approach for privacy-preserving data collection and distribution. A randomized algorithm $\mathcal{A}$ provides $\varepsilon$-LDP if Equation \eqref{ldpeq} holds~\cite{erlingsson2014rappor}.
\begin{mydef}
A randomized algorithm $\mathcal{A}$ satisfies $\varepsilon$-LDP if for all pairs of users' inputs $v_1$ and $v_2$, for all $\mathcal{Q} \subseteq Range(\mathcal{A})$, and for $\varepsilon \geq 0$, Equation \eqref{ldpeq} holds. $Range(\mathcal{A})$ is the set of all possible outputs of the randomized algorithm $\mathcal{A}$.
\end{mydef}
\begin{equation}
\Pr[\mathcal{A}(v_1) \in \mathcal{Q}] \leq \exp(\varepsilon)~\Pr[\mathcal{A}(v_2) \in \mathcal{Q}]
\label{ldpeq}
\end{equation}
\subsection{Sensitivity}
Sensitivity is defined as the maximum influence that a single individual can have on the result of a numeric query. Consider a function $f$; its sensitivity $\Delta f$ can be given as in Equation \eqref{seneq}, where $x$ and $y$ are two neighboring databases (or, in LDP, adjacent records) and $\lVert \cdot \rVert_1$ represents the $L_1$ norm of a vector~\cite{wang2016using}.
\begin{equation}
\Delta f=\max\{\lVert f(x)-f(y) \rVert_1\}
\label{seneq}
\end{equation}
\subsection{Laplace Mechanism}
The Laplace mechanism is considered to be one of the most generic approaches to achieve DP~\cite{dwork2014algorithmic}.
Laplace noise can be added to a function output ($\mathcal{F}(\mathcal{D})$) as given in Equation \eqref{diffeq} to produce a differentially private output; the corresponding Laplace density is given in Equation \eqref{diffeq2}. $\Delta f$ denotes the sensitivity of the function $f$. In the local differentially private setting, the scale of the Laplacian noise is equal to $\Delta f/\varepsilon$, and its position is the current input value ($\mathcal{F}(\mathcal{D})$).
\begin{equation}
\mathcal{PF}(\mathcal{D})= \mathcal{F}(\mathcal{D})+\mathrm{Lap}\left(\frac{\Delta f}{\varepsilon}\right)
\label{diffeq}
\end{equation}
\begin{equation}
p_{\mathcal{PF}(\mathcal{D})}(x)= \frac{\varepsilon}{2\Delta f}~e^{-\frac{|x-\mathcal{F}(\mathcal{D})|\varepsilon}{\Delta f}}
\label{diffeq2}
\end{equation}
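As a minimal illustration (not part of the original PEEP pipeline), the Laplace mechanism of Equation \eqref{diffeq} can be sketched in Python as follows, where all names are illustrative:
\begin{verbatim}
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    # Releases f(D) under epsilon-DP: noise scale = sensitivity / epsilon.
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon,
                        size=np.shape(true_value))
    return true_value + noise
\end{verbatim}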
\subsection{Eigenfaces and Eigenface recognition}
The process of face recognition involves data classification where the input data are images, and the output classes are persons' names. A face recognition algorithm first needs to be trained with an existing database of faces; the trained model is then used to recognize a person's name from an image input. The training algorithm often needs various images to achieve high accuracy, and when the model needs to be trained to recognize a large number of persons, the training algorithm also needs a large number of training images. Image data are often large, and the higher the number of faces to be trained, the slower the algorithm. However, facial recognition systems need high efficiency, as many of them are employed in real-time systems such as citizen surveillance~\cite{zhang1997face}. When an artificial neural network (ANN) is used for face recognition, the input images need to be flattened into 1-d vectors; an image with dimensions $m\times n$ results in an $mn\times 1$ vector. High-resolution images result in extremely long 1-d vectors, which leads to slow training and testing of the corresponding ANN. Dimensionality reduction methods can be used to avoid such complexities, allowing face recognition to concentrate on the essential features and to ignore the noise in the input images. In dimensionality reduction, the points are projected onto a lower-dimensional subspace called a hyperplane. Principal component analysis (PCA) is a dimensionality reduction technique that finds the hyperplane capturing the maximum variance. This hyperplane can be determined using eigenvectors, which can be computed from the covariance matrix of the input data~\cite{zhang1997face}.
\begin{center}
\scalebox{0.9}{
\begin{minipage}{1.1\linewidth}
\let\@latex@error\@gobble
\begin{algorithm}[H]
\caption{Generating Eigenfaces}
\label{eigen_algo}
\KwIn{
\begin{tabular}{l c l}
$\{x^c_1,\dots, x^c_n\} $ & $\gets $ & normalized and centered examples\\
$nc$ & $\gets $ & expected number of PCA components\\
\end{tabular}
}
\KwOut{
\begin{tabular}{ l c l }
$\mathcal{EIMAT}$ & $ \gets $ & matrix of eigenfaces \\
\end{tabular}
}
\For{each $x^c_i$}{
flatten $x^c_i$ to produce vector $t_i$}
compute the mean face vector ($\mathcal{F}_m$), $\mathcal{F}_m=\frac{1}{n}\sum_{i=1}^{n}t_i$\;
\For{each $x^c_i$}{
$s_i=t_i - \mathcal{F}_m$\;
}
generate covariance matrix, $\mathcal{C}$,\newline
$\mathcal{C}=\frac{1}{n}\sum_{i=1}^{n}s_i s_i^{\mathcal{T}}=\mathcal{A}\mathcal{A}^{\mathcal{T}}$, where $\mathcal{A}=[s_1\,s_2\dots s_n]$\;
calculate the eigenvectors $e_i$ of $\mathcal{A}\mathcal{A}^{\mathcal{T}}$\newline
since $\mathcal{A}\mathcal{A}^{\mathcal{T}}$ can be extensive, derive $e_i$ from the eigenvectors $u_i$ of $\mathcal{A}^{\mathcal{T}}\mathcal{A}$, where $e_i=\mathcal{A}u_i$\;
compute the $n$ best eigenvectors $e_i$ such that $\left\|e_{i}\right\|=1$\;
return the $nc$ eigenvectors which correspond to the $nc$ largest eigenvalues
\end{algorithm}
\end{minipage}%
}
\end{center}
Algorithm \ref{eigen_algo} shows the steps for generating eigenfaces. As shown in the algorithm, an eigenface~\cite{turk1991eigenfaces} utilizes PCA to represent a dimensionality-reduced version of an input image. A particular eigenface representation considers a predefined number of the largest eigenvectors as the principal axes that the data are projected onto, hence producing reduced dimensions~\cite{zhang1997face}. We can reduce the dimensions of an $m\times n$ image into a $k$-dimensional eigenface representation, where $k$ is the number of the largest eigenvectors retained. By doing this, we can consider only the most essential characteristics of an input image and increase the speed of a facial recognition algorithm while preserving high accuracy. Equation \ref{eigeq} provides the mathematical representation of a face in the eigenface basis, where $\mathcal{F}$ is a new face, $\mathcal{F}_m$ is the mean (average) face, $\mathcal{F}_i$ is an eigenface, and $\alpha_i$ are scalar multipliers which we have to choose in order to create new faces.
\begin{equation}
\mathcal{F}=\mathcal{F}_m+\sum_{i=1}^{n}\alpha_i\mathcal{F}_i
\label{eigeq}
\end{equation}
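As a minimal numpy sketch of Algorithm \ref{eigen_algo} (assuming the images are already flattened into the rows of a matrix and intensity-normalized; all names are illustrative):
\begin{verbatim}
import numpy as np

def eigenfaces(images, nc):
    # images: n x (irw * irh) matrix of flattened, normalized face images
    mean_face = images.mean(axis=0)          # F_m
    A = images - mean_face                   # centered examples s_i (rows)
    gram = A @ A.T / len(images)             # small n x n matrix A A^T
    eigvals, U = np.linalg.eigh(gram)        # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:nc]     # nc largest eigenvalues
    E = A.T @ U[:, idx]                      # e_i = A^T u_i (pixel space)
    E /= np.linalg.norm(E, axis=0)           # normalize so ||e_i|| = 1
    return mean_face, E                      # columns of E are eigenfaces
\end{verbatim}
Projecting a centered image onto the columns of \texttt{E} then gives the scalar multipliers $\alpha_i$ of Equation \ref{eigeq}.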
\section{Our Approach: PEEP}
\label{ourapprch}
In this section, we discuss the steps employed in the proposed privacy-preserving face recognition approach (named PEEP). We utilize DP to provide confidentiality in face recognition. PEEP applies randomization upon the eigenfaces to create privacy-preserving versions of the input images. We assume that any input device used to capture the facial images uses PEEP to apply randomization before sending the images to the storage devices/servers.
\begin{figure}[H]
\centering
\scalebox{0.48}{
\includegraphics[width=1\textwidth, trim=0cm 0cm 0cm 0cm]{facerecog_eigenfaces.pdf}
}
\caption{Privacy-preserving face recognition using PEEP. The figure shows the placement of PEEP in a face recognition system. As shown, PEEP randomizes both training and testing images so that the untrusted third-party servers do not leak any private data to untrusted users. The callout figure on the left-hand side shows the basic flow of randomization inside PEEP, which applies Laplacian noise over eigenfaces.}
\label{peepprivacymodel}
\end{figure}
As depicted by the callout box in Figure \ref{peepprivacymodel}, PEEP involves three primary steps to enforce privacy on face recognition: (1) accepting original face images, (2) generating eigenfaces, and (3) adding Laplacian noise to randomize the images. In the proposed setting, the face recognition model (e.g. MLPClassifier) is trained solely using randomized data. In this setup, an untrusted server holds only a privacy-preserving version of the face recognition model.
\subsection{Distributed eigenface generation}
When the number of input faces grows large, it is important that the eigenface calculation can be distributed in order to maintain efficiency. Algorithm \ref{eigen_algo_dist} shows an incremental calculation approach for eigenfaces, in which a central computer (CC) in the local edge contributes to the calculation of eigenfaces in a distributed fashion. As shown in step \ref{globmean} of Algorithm \ref{eigen_algo_dist}, the mean face vectors $\mathcal{F}_m^i$, generated for each partition of input data, are collected and merged (using Equation \ref{meancalc}) by the CC to generate the global mean face vector $\mathcal{F}_m^{glob}$. Similarly, the CC generates the global covariance matrix $\mathcal{C}^{glob}$ (refer to step \ref{globcov} of Algorithm \ref{eigen_algo_dist}) from the covariance matrices generated for each partition using Equation \ref{matcovupdate}. In this way, PEEP manages to maintain the efficiency of eigenface generation for extensive datasets.
\begin{equation}
\mathcal{F}_m^{glob}=
\begin{bmatrix}
\frac{m_{1}\times\overline{y_{11}}+m_{2}\times\overline{y_{12}}+\hdots+m_{k}\times\overline{y_{1k}}}{m_{1}+m_{2}+\hdots +m_{k}}\\
\frac{m_{1}\times\overline{y_{21}}+m_{2}\times\overline{y_{22}}+\hdots+m_{k}\times\overline{y_{2k}}}{m_{1}+m_{2}+\hdots +m_{k}}\\
\vdots\\
\frac{m_{1}\times\overline{y_{n1}}+m_{2}\times\overline{y_{n2}}+\hdots+m_{k}\times\overline{y_{nk}}}{m_{1}+m_{2}+\hdots +m_{k}}\\
\end{bmatrix}_{n\times 1}
\label{meancalc}
\end{equation}
In Equation \ref{meancalc}, $m_{i}$ refers to the number of face vectors in the $i^{th}$ partition, whereas $\overline{y_{ij}}$ refers to the mean of the $j^{th}$ index of the $i^{th}$ partition. To merge the covariance matrices, the pairwise covariance update formula introduced in \cite{bennett2009numerically} is adapted as shown in Equation \ref{matcovupdate}~\cite{chamikara2020privacy}. The pairwise covariance update formula for two merged two-column ($u$ and $v$) data partitions, $A$ and $B$, can be written as shown in Equation \ref{covupdate}, where the merged dataset is denoted as $X$.
\begin{equation}
\resizebox{0.8\textwidth}{!}{$\displaystyle
Cov(X)=\frac{\frac{C_A}{(m_A-1)}+\frac{C_B}{(m_B-1)}+(\mu_{u,A}-\mu_{u,B})(\mu_{v,A}-\mu_{v,B}).\frac{m_A.m_B}{m_X}}{(m_X-1)}
$}
\label{covupdate}
\end{equation}
where $\mu_{u,A}, \mu_{u,B}, \mu_{v,A}, \mu_{v,B}$ are the means of $u$ and $v$ in the two data partitions $A$ and $B$, respectively. $C_A$ and $C_B$ are the co-moments of the two data partitions $A$ and $B$, where the co-moment of a two-column ($u$ and $v$) dataset $D$ is represented as,
\begin{equation}
C_D=\sum_{(u,v)\in D} (u-\mu_u)(v-\mu_v)
\end{equation}
Therefore, the variance-covariance matrix update formula for the two data partitions $D_g$ and $D_i$ can be written as shown in Equation \ref{matcovupdate},
\begin{equation}
\resizebox{0.9\textwidth}{!}{$\displaystyle
\mathcal{C}^{glob}=\frac{\frac{\mathcal{C}^{glob}}{(m_{D_g}-1)}+\frac{\mathcal{C}_i}{(m_{D_i}-1)}+(\mu_{D_g}(MI_g)-\mu_{D_i}(MI_g))(\mu_{D_g}(MI_i)-\mu_{D_i}(MI_i)).\frac{m_{D_g}.m_{D_i}}{m_{D_{new}}}}{(m_{D_{new}}-1)}
$}
\label{matcovupdate}
\end{equation}
In Equation \ref{matcovupdate}, assume that $\mathcal{C}^{glob}$ and $\mathcal{C}_i$ are the covariance matrices returned for the data partitions $D_g$ and $D_i$ respectively, where $D_g$ represents the global partition (the concatenation of all former partitions) and $D_i$ represents the new partition introduced to the calculation. $D_{new}$ is the merged dataset of the data partitions $D_g$ and $D_i$. $\mu_{D_g}$ and $\mu_{D_i}$ are the mean vectors of $D_g$ and $D_i$ respectively, and $m_D$ represents the number of face vectors in the corresponding dataset. Equation \ref{matcovupdate} is calculated iteratively over all the data partitions to generate the final value of $\mathcal{C}^{glob}$, which is initialized with the first partition while $D_i$ starts from the second partition, and
\begin{equation}
MI_i=
\begin{bmatrix}
[1]_n\\
[2]_n\\
[3]_n\\
\vdots\\
[n]_n\\
\end{bmatrix}_{n\times n}
\end{equation}
We can also run Algorithm \ref{eigen_algo_dist} in distributed computing nodes (DCN) within the local edge to conduct efficient eigenface generation. In such a setting, DCNs will communicate with a central computer (in the local edge) to generate the global mean face ($\mathcal{F}_m^{glob}$) and the global covariance matrix ($\mathcal{C}^{glob}$). In this way, an agency can deal with a large number of input faces by maintaining a feasible number of DCNs.
\begin{center}
\scalebox{0.9}{
\begin{minipage}{1.1\linewidth}
\let\@latex@error\@gobble
\begin{algorithm}[H]
\caption{Incremental calculation of Eigenfaces using data partitions}
\label{eigen_algo_dist}
\KwIn{
\begin{tabular}{l c l}
$\{x^{pk}_1,\dots, x^{pk}_n\} $ & $\gets $ & normalized and centered example partition, $pk$\\
$nc$ & $\gets $ & expected number of PCA components\\
\end{tabular}
}
\KwOut{
\begin{tabular}{ l c l }
$\mathcal{EIMAT}$ & $ \gets $ & matrix of eigenfaces \\
\end{tabular}
}
\For{each $x^{pk}_i$}{
flatten $x^{pk}_i$ to produce vector $t_i$}
compute the mean face vector ($\mathcal{F}_m^i$), $\mathcal{F}_m^i=\frac{1}{n}\Sigma_{i=1}^{n}t_i$\;
collect $\mathcal{F}_m^i$ at a central computer (CC) in the local edge \;
receive global mean face vector, $\mathcal{F}_m^{glob}$ from the CC\;\label{globmean}
\For{each $x^c_i$}{
$s_i=t_i - \mathcal{F}_m^{glob}$\;
}
generate covariance matrix, $\mathcal{C}_i$,\newline
$\mathcal{C}_i=\frac{1}{n}\Sigma_{i=1}^{n}s_i\times s_i^\mathcal{T}=\mathcal{A}_i\mathcal{A}_i^\mathcal{T}$, where, $\mathcal{A}_i=[s_1s_2\dots s_n$]\;
collect $\mathcal{C}_i$ at the CC\;
receive global covariance matrix, $\mathcal{C}^{glob}$ from the CC\; \label{globcov}
calculate the eigenvectors $e_i$ of $\mathcal{AA}^T$, where $\mathcal{C}^{glob}$ = $\mathcal{AA}^T$\newline
since, $\mathcal{AA}^T$ can be extensive, derive $e_i$ from the eigenvectors $u_i$ of $\mathcal{A}^\mathcal{T}\mathcal{A}$, where, $e_i=\mathcal{A}u_i$\;
compute the $n$ best eigenvectors $e_i$ such that, $\left\|e_{i}\right\|=1$\;
return $nc$ eigenvectors which corresponds to the $nc$ largest eigenvalues
\end{algorithm}
\end{minipage}%
}
\end{center}
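As a minimal sketch of the merging logic behind Equations \ref{meancalc} and \ref{matcovupdate}, the following uses the standard pairwise co-moment update of \cite{bennett2009numerically}; the function and variable names are illustrative, and each partition is assumed to report its size, mean vector, and co-moment matrix:
\begin{verbatim}
import numpy as np

def merge_partitions(stats):
    # stats: list of (m_i, mean_i, C_i) per partition, where C_i is the
    # co-moment matrix sum_j (t_j - mean_i)(t_j - mean_i)^T of partition i.
    m_g, mu_g, C_g = stats[0]
    for m_i, mu_i, C_i in stats[1:]:
        m_new = m_g + m_i
        delta = mu_g - mu_i
        C_g = C_g + C_i + np.outer(delta, delta) * (m_g * m_i / m_new)
        mu_g = (m_g * mu_g + m_i * mu_i) / m_new   # global mean face
        m_g = m_new
    return mu_g, C_g / (m_g - 1)   # global mean and covariance matrix
\end{verbatim}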
\subsection{Generation of the principal components}
After accepting the image inputs, PEEP normalizes the images to match a predefined resolution (which is accepted by PEEP as an input). We consider a default resolution normalization of $47\times62$. However, based on the input image sizes and the computational power of the edge devices, users can increase or decrease the values of $irw$ and $irh$ suitably. Following the steps of Algorithm \ref{eigen_algo}, PEEP calculates the principal components by considering the eigenvectors of the corresponding covariance matrix. The largest $nc$ (the number of principal components, taken as input) eigenvectors are used to create a particular eigenface representation. The higher the $nc$, the richer the representation of the input features, but the lower the efficiency. It is important to select a suitable $nc$ that provides high accuracy and high efficiency simultaneously. A reliable number for $nc$ can be determined by investigating the change in the trained model's accuracy.
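A complementary heuristic (shown below as an illustrative sketch, not part of PEEP itself) is to inspect the cumulative explained variance of the PCA fit and pick the smallest $nc$ reaching a target fraction:
\begin{verbatim}
from sklearn.decomposition import PCA

def smallest_nc(flattened_images, variance_target=0.95):
    # Returns the smallest number of components whose cumulative
    # explained-variance ratio reaches the target.
    pca = PCA().fit(flattened_images)
    cumulative = pca.explained_variance_ratio_.cumsum()
    return int((cumulative >= variance_target).argmax()) + 1
\end{verbatim}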
\subsection{Declaring the sensitivity before noise addition}
PEEP scales the indices of the identified PCA vectors within the interval [0,1] as the next step after generating the eigenfaces. In LDP, the sensitivity is the maximum difference between two adjacent records. In PEEP, the inputs are images, and each image is dimensionality-reduced into a vector using PCA (PCA\_vectors). As PEEP adds noise to these PCA\_vectors, the sensitivity of PEEP is the maximum difference between two such vectors, as denoted by Equation \ref{seneq_new}, where $\mathcal{FSV}^j$ represents a flattened image vector scaled within the interval [0,1] and $\mathcal{FSV}^{j+1}$ is adjacent to $\mathcal{FSV}^j$. Since PEEP operates in the Cartesian coordinate system, the maximum Euclidean distance between two such vectors can be considered for the sensitivity, which is at most $\sqrt{nc}$, where $nc$ is the number of principal components. However, as the normalized PCA\_vectors are bounded by 0 and 1, a sensitivity much greater than 1 would entail a substantial level of noise, which would reduce utility drastically, since we use LDP as the noise application mechanism. Hence, we select the sensitivity to be the maximum difference between two indices, which is equal to 1, so that the scale of the Laplacian noise equals $1/\varepsilon$. As future work, we are conducting further algebraic analysis of the sensitivity to improve the precision and flexibility of the Laplace mechanism in the proposed face recognition approach.
\begin{equation}
\Delta f=max\{\lVert \mathcal{FSV}^j-\mathcal{FSV}^{(j+1)} \rVert_1\}
\label{seneq_new}
\end{equation}
\subsection{Introducing Laplacian noise}
After defining the position and scale parameters, PEEP adds Laplacian noise to each index of the PCA\_vectors. We take the position of the noise to be the index value and the scale of the noise to be $1/\varepsilon$. To generate the private versions of the images ($\mathcal{PI}$), we perturb each index by sampling from the Laplace density in Equation \ref{diffeq2_new}, where $\mathcal{FSV}_i$ represents an index of the flattened image vectors scaled within the interval [0,1]. The user can provide a suitable $\varepsilon$ value depending on the amount of privacy required, considering the following guideline: the higher the $\varepsilon$ value, the lower the privacy. As a norm, $0<\varepsilon\leq 9$ is considered an acceptable level of privacy~\cite{abadi2016deep}; we follow the same standard and use an upper limit of 9 for $\varepsilon$.
\begin{equation}
p_{\mathcal{PI}}(x)= \frac{\varepsilon}{2\Delta f}~e^{-\frac{|x-\mathcal{FSV}_i|\varepsilon}{\Delta f}}
\label{diffeq2_new}
\end{equation}
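A minimal sketch of this randomization step (scaling a PCA vector into $[0,1]$ and adding Laplace noise with sensitivity 1; the min-max scaling and the names are illustrative assumptions):
\begin{verbatim}
import numpy as np

def perturb_pca_vector(pca_vec, epsilon, rng=None):
    # Scale the indices of the PCA vector into [0, 1], then add Laplace
    # noise with scale 1 / epsilon (sensitivity taken as 1).
    rng = rng or np.random.default_rng()
    lo, hi = pca_vec.min(), pca_vec.max()
    sv = (pca_vec - lo) / (hi - lo)
    return sv + rng.laplace(scale=1.0 / epsilon, size=sv.shape)
\end{verbatim}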
\begin{center}
\scalebox{0.8}{
\begin{minipage}{1.1\linewidth}
\let\@latex@error\@gobble
\begin{algorithm}[H]
\caption{Differentially private facial recognition: PEEP}
\label{ranalgo}
\KwIn{
\begin{tabular}{l c l}
$\{x_1,\dots, x_n\} $ & $\gets $ & examples\\
$imthresh$ & $\gets$ & number of images per face\\
$\varepsilon $ & $\gets $ & privacy budget\\
$irw$ & $\gets $ & pixel width (default = 47)\\
$irh$ & $\gets $ & pixel height (default = 62)\\
$nc$ & $\gets $ & number of PCA components\\
\end{tabular}
}
\KwOut{
\begin{tabular}{ l c l }
$\mathcal{DPFRS}$ & $ \gets $ & privacy preserving \\
& & facial recognition model
\end{tabular}
}
Find the minimum width of all images ($w_{min}$)\;
Find the minimum height of all images ($h_{min}$)\;
\If{$irw<w_{min}\lor irh<h_{min}$ }{
$irw=w_{min}$\\
$irh=h_{min}$
}
normalize the example resolution to $irw\times irh$ \label{step1}\;
\If{$nc>irw\lor nc>irh$ }{ \label{stepnc}
$nc=min(irw,irh)$
}
generate the flattened vectors ($v_i$) for each $x_i$\;
generate the first $nc$ PCA components ($\mathcal{PCA}_i$) for each input, $v_i$, according to Algorithm \ref{eigen_algo}\;
scale all the indices of $v_i$ between $0$ and $1$ to generate $sv_i$\label{genind}\;
sample each index of $sv_i$ from $\frac{\varepsilon}{2\Delta f}~e^{-\frac{|x-\mathcal{FSV}_i|\varepsilon}{\Delta f}}$ with sensitivity $\Delta f=1$ \label{step7}\label{applyrand}\;
feed $\{sv_1,\dots, sv_n\}$ and corresponding targets to the classification model\;
train the classification model using the randomized data to produce a differentially private classification model ($\mathcal{DPFRS}$)\;
release the $\mathcal{DPFRS}$\;
\end{algorithm}
\end{minipage}%
}
\end{center}
\subsection{Algorithm for generating a differentially private face recognition model}
Algorithm \ref{ranalgo} shows the steps of PEEP in training a privacy-preserving face recognition model. As shown in the algorithm, the $irw$ and $irh$ parameters are used to normalize the resolution of the input images. We use the input parameter $imthresh$ to accept the number of images considered per single face (person). Since the main task of face recognition is image classification, each face represents a class, and to produce good accuracy, a classification model should have a good representation of each class. Consequently, $imthresh$ is a valuable parameter that directly influences accuracy: a higher value of $imthresh$ allows the algorithm to extract eigenfaces that better represent the input images of each class (face), resulting in better accuracy. Step \ref{stepnc} makes sure that the number of PCA components selected does not go beyond the allowed threshold.
\subsection{Privacy preserving face recognition using PEEP}
As shown in Figure \ref{peepprivacymodel}, each image input is subjected to PEEP randomization before training or testing. The eigenface generation and randomization take place within the local edge bounds. We assume that all input devices communicate with the third-party servers only through PEEP, and the face recognition database stores only the perturbed images. Since the face recognition model (e.g. MLPClassifier) is trained only using perturbed images (perturbed eigenfaces), the trained model will not leak private information, and any untrusted access to the server will not result in the loss of valuable biometric data to malicious third parties. Since PEEP also perturbs testing data, there is minimal privacy leakage from the testing image inputs as well.
\subsection{Theoretical privacy guarantee of PEEP on trained classifier}
Although additional computations may be carried out on the outcome of a differentially private algorithm, they do not weaken the privacy guarantee: the results of additional computations on an $\varepsilon$-DP outcome are still $\varepsilon$-DP. This property of DP is called post-processing invariance/robustness~\cite{bun2016concentrated}. Since PEEP utilizes DP, PEEP also inherits post-processing invariance. This property guarantees that the model trained on perturbed data satisfies the same privacy imposed by PEEP. Therefore, the proposed method ensures that there is a minimal level of privacy leakage from the untrusted third-party servers. We further investigate the privacy strength of PEEP using empirical evidence in Section \ref{resdis}.
\subsection{Datasets}
\label{datadetails}
We used an open face image dataset and the large-scale CelebFaces Attributes (CelebA) dataset (see Figure \ref{samplefigures} for sample images) to test the performance of PEEP. The open face image dataset, named lfw-funneled, is available on the University of Massachusetts website ``Labeled Faces in the Wild''\footnote{http://vis-www.cs.umass.edu/lfw/}. The lfw-funneled dataset has 13,233 gray images. We limit the minimum number of images per person to 100, which limits the number of images to 1,140 with five classes: ``Colin Powell'', ``Donald Rumsfeld'', ``George W Bush'', ``Gerhard Schroeder'', and ``Tony Blair''\footnote{The class distribution of the dataset is as follows: ``Colin Powell'': 236, ``Donald Rumsfeld'': 121, ``George W Bush'': 530, ``Gerhard Schroeder'': 109, and ``Tony Blair'': 144.}. Figure \ref{samplefigures} shows 8 sample images from the datasets used. We used 70\% of the input dataset for training and 30\% for testing. The CelebA\footnote{http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html} dataset has more than 200K celebrity images, each with 40 attribute annotations; it contains 10,177 identities, 202,599 face images, 5 landmark locations, and 40 binary attribute annotations per image.
\begin{figure}[H]
\centering
\scalebox{0.48}{
\includegraphics[width=1\textwidth, trim=0cm 0cm 0cm 0cm]{sample_images.pdf}
}
\caption{Sample images of the two databases. The lfw-funneled dataset is composed of gray images whereas the CelebA dataset is composed of colored images. }
\label{samplefigures}
\end{figure}
\subsection{Eigenfaces and Eigenface perturbation}
Figure \ref{eigenfaces} shows 8 sample eigenfaces before perturbation. As the figure shows, eigenfaces already hide some features of the original images due to the dimensionality reduction~\cite{aggarwal2004condensation}. However, eigenfaces alone would not provide enough privacy, as they display the most important biometric features, and there are effective face reconstruction techniques for eigenfaces~\cite{turk1991eigenfaces, pissarenko2002eigenface}. Figure \ref{perteigen} shows the same set of eigenfaces (from Figure \ref{eigenfaces}) after noise addition by PEEP with $\varepsilon=4$. As the figure shows, the naked eye cannot detect any biometric features in the perturbed eigenfaces. Even in the extreme case of a privacy budget of $\varepsilon=100$, the perturbed eigenfaces reveal only mild facial features to the naked eye, as shown in Figure \ref{perteigen100}.
\begin{figure}[H]
\centering
\scalebox{0.48}{
\includegraphics[width=1\textwidth, trim=0cm 0cm 0cm 0cm]{eigenfaces_crop.pdf}
}
\caption{Eigenfaces. The figure shows a collection of sample eigenfaces generated from the input face images. The eigenfaces show only the most essential features of the input images. }
\label{eigenfaces}
\end{figure}
\begin{figure}[H]
\centering
\scalebox{0.48}{
\includegraphics[width=1\textwidth, trim=0cm 0cm 0cm 0cm]{pert_eigen_crop.pdf}
}
\caption{Perturbed eigenfaces at $\varepsilon=4$. The randomized images appear to show no biometric features to the naked eye at $\varepsilon=4$.}
\label{perteigen}
\end{figure}
\begin{figure}[H]
\centering
\scalebox{0.48}{
\includegraphics[width=1\textwidth, trim=0cm 0cm 0cm 0cm]{eigen_pert_100_crop.pdf}
}
\caption{Perturbed eigenfaces at $\varepsilon=100$. Even at this extreme value of the privacy budget (well outside the acceptable range $0<\varepsilon\leq 9$~\cite{abadi2016deep}), PEEP is capable of hiding most of the biometric features of the eigenfaces.}
\label{perteigen100}
\end{figure}
\section{Results and Discussion}
\label{resdis}
In this section, we discuss the experiments, experimental configurations, and their results. We used MLPClassifier to test the accuracy of face recognition with PEEP. MLPClassifier is a multi-layer perceptron classifier available in the scikit-learn\footnote{https://scikit-learn.org/stable/index.html} Python library. We conducted all the experiments on a Windows 10 (Home 64-bit, Build 17134) computer with an Intel (R) i5-6200U ($6^{th}$ generation) CPU (2 cores with 4 logical threads, 2.3 GHz with turbo up to 2.8 GHz) and 8192 MB RAM. We also provide an efficiency comparison and a privacy comparison of PEEP against two other privacy-preserving face recognition approaches, developed by Zekeriya Erkin et al. (abbreviated as ZEYN for simplicity)~\cite{erkin2009privacy} and Ahmad-Reza Sadeghi et al. (abbreviated as ANRA for simplicity)~\cite{sadeghi2009efficient}. Both ZEYN and ANRA are cryptographic methods that use homomorphic encryption.
\subsection{Training the MLPClassifier for perturbed eigenface recognition}
\label{mlpclassifier}
We trained the MLPClassifier\footnote{Settings used for the MLP classifier; activation=`relu', batch\_size=100, early\_stopping =False, hidden\_layer\_sizes=(512, 1024, 2014, 1024, 512), max\_iter =200, shuffle=True, and solver=`adam', alpha=0.0001, beta\_1=0.9, beta\_2=0.999, epsilon=1e-08, learning\_rate=`constant', learning\_rate\_init=0.001, momentum=0.9, nesterovs\_momentum=True, power\_t=0.5, random\_state=None, tol=0.0001, validation\_fraction=0.1, verbose=True, warm\_start=False.} under different levels of $\varepsilon$ ranging from 0.5 to 8, as plotted in Figure \ref{performancepeep}. Due to the heavy noise, the datasets with lower privacy budgets made training the MLPClassifier difficult. However, we did not conduct any hyperparameter tuning to increase the performance of the MLPClassifier, in order to ensure that we investigate the absolute impact of perturbation on the model. Figure \ref{modelloss4} shows the model loss of the training process of the MLPClassifier when $\varepsilon=4$. As the figure shows, the model converges after around 14 epochs.
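For reproducibility, the settings listed in the footnote above translate to the following scikit-learn construction (the training-data variable names are illustrative; parameters left at their defaults are omitted):
\begin{verbatim}
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(hidden_layer_sizes=(512, 1024, 2014, 1024, 512),
                    activation='relu', solver='adam', batch_size=100,
                    max_iter=200, shuffle=True, verbose=True)
# clf.fit(perturbed_eigenfaces_train, labels_train)
\end{verbatim}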
\begin{figure}[H]
\centering
\scalebox{0.4}{
\includegraphics[width=1\textwidth, trim=0cm 0cm 0cm 0cm]{loss_4_crop.pdf}
}
\caption{Model loss when PEEP is applied with $\varepsilon=4$. As shown in the figure, the MLPClassifier converges after around 14 epochs.}
\label{modelloss4}
\end{figure}
\subsection{Classification accuracy vs. privacy budget}
We recorded the accuracy of the trained MLPClassifier in terms of the weighted averages of precision, recall, and f$_1$-score against varying levels of the privacy budget, and plotted the corresponding data in Figure \ref{performancepeep}. As discussed in Section \ref{datadetails}, the class ``George W Bush" showed a higher performance, as there was a higher proportion of input image instances related to that class. As shown in Figure \ref{performancepeep}, increasing the privacy budget increases accuracy, as higher privacy budgets impose a lower amount of randomization on the eigenfaces. We can see that PEEP produces reasonable accuracy for privacy budgets greater than 4 and less than or equal to 8, where $0<\varepsilon\leq 9$ is considered an acceptable level of privacy~\cite{abadi2016deep}.
\begin{figure}[H]
\centering
\scalebox{0.5}{
\includegraphics[width=1\textwidth, trim=0cm 0cm 0cm 0cm]{accuracy_vs_epsilon.pdf}
}
\caption{Performance of face recognition with privacy introduced by PEEP. WP refers to the instance of the classification model without privacy, where no randomization is applied to the input images.}
\label{performancepeep}
\end{figure}
Figure \ref{instface4} shows the classification results of 8 random input images in the testing sample at $\varepsilon=4$. According to the figure, only one case out of eight has been misclassified. Parameters such as the minimum number of faces per class, the size of the input dataset, and the hyperparameters of the MLPClassifier have a direct impact on accuracy. We can improve the accuracy of the MLPClassifier by changing the input parameters and conducting hyperparameter tuning. Moreover, the dataset has a higher number of instances for the class ``George W Bush" compared to the other classes; a more balanced dataset would also provide better accuracy. However, in this paper, we investigate only the absolute effect of the privacy parameters on the performance of the MLPClassifier.
\begin{figure}[H]
\centering
\scalebox{0.48}{
\includegraphics[width=1\textwidth, trim=0cm 0cm 0cm 0cm]{faces_4_crop.pdf}
}
\caption{Instance of the face recognition when the images are randomized using PEEP at $\varepsilon=4$ (the randomized images at $\varepsilon=4$ are shown in Figure \ref{perteigen}). The figure shows the predicted labels of the images against the original true labels. }
\label{instface4}
\end{figure}
\subsection{Effect of $imthresh$ on the performance of face recognition}
In this section, we test the effect of $imthresh$ (the number of images per single face) on the performance of face recognition (refer to Figure \ref{performthresh}). During the experiment, we maintained an $\varepsilon$ value of 8 and kept the number of PCA components at 128. As shown in the plots, the performance of classification improves with $imthresh$. This is an expected observation, as face recognition is a classification problem: a higher value of $imthresh$ provides a higher representation for the corresponding face (class), generating higher accuracy. Hence, the proposed concept favors the highest feasible value of $imthresh$ in order to generate the highest possible accuracy.
\begin{figure}[H]
\centering
\scalebox{0.5}{
\includegraphics[width=1\textwidth, trim=0cm 0cm 0cm 0cm]{accuracy_vs_threshold.pdf}
}
\caption{Performance of face recognition vs. $imthresh$.}
\label{performthresh}
\end{figure}
\subsection{Effect of the number of PCA components on the performance of face recognition}
In this section, we investigate the effect of the number of PCA components on the performance of face recognition. During the experiment, we maintained an $\varepsilon$ value of 8, and $imthresh$ was maintained at 100. As shown by the plot (refer to Figure \ref{performancenumcomponents}), there is an immediate increment in performance when the number of PCA components is increased from 10 to 20, followed by a gradual increase beyond 20 components. This is because the first 20 to 40 PCA components represent the most significant features of the input images. Although the effect of the number of PCA components beyond 40 is low, the improved performance suggests that it is better to have a higher number of PCA components.
\begin{figure}[H]
\centering
\scalebox{0.5}{
\includegraphics[width=1\textwidth, trim=0cm 0cm 0cm 0cm]{accuracy_vs_numcomponents.pdf}
}
\caption{Performance of face recognition vs. the number of PCA components.}
\label{performancenumcomponents}
\end{figure}
\subsection{Face reconstruction attack setup}
\label{facereconsec}
It is essential that the randomized images cannot be used to reconstruct the original images that reveal the owners' identities. We prepared an experimental setup to investigate the robustness of PEEP against face reconstruction~\cite{turk1991eigenfaces,pissarenko2002eigenface} applied by adversaries on the randomized images.
\begin{figure}[H]
\centering
\scalebox{0.34}{
\includegraphics[width=1\textwidth, trim=0cm 0cm 0cm 0cm]{eigenfacegeneration.pdf}
}
\caption{Face reconstruction from perturbed eigenfaces. The figure shows the experimental setup used for the reconstruction of the original input face images using the perturbed eigenfaces.}
\label{facereconstructattack}
\end{figure}
As shown in Figure \ref{facereconstructattack}, first, we create a PCAmodel (PCA: Principal Component Analysis) using 2,000 training images (the first 1,000 images of the CelebA database and their vertically flipped versions). The resolution of each image is $89\times 109$. The trained PCAmodel has the 2,000 eigenvectors of length 29,103 ($89\times 109\times 3$), and the mean vector of length 29,103. Next, the testing image (of size $89\times 109\times 3$) is read and flattened to form a vectorized version of the original image. The mean vector is then subtracted from it, and the resulting vector is randomized using PEEP to generate the privacy-preserving representation of the testing vector ($\mathcal{PV}$). Finally, we generate the eigenfaces ($\mathcal{F}_i$) and the average face by reshaping the eigenvectors ($\mathcal{FV}_i$) and the mean vector available in the PCAmodel. Now we can reconstruct the original testing image from $\mathcal{PV}$ using Equation \ref{reconst1}, where $n$ is the number of training images used for the PCAmodel, and $\mathcal{RI}$ is the recovered image.
\begin{equation}
\mathcal{RI}=\sum_{i=1}^{n} \mathcal{F}_i\times(\mathcal{PV}\bullet \mathcal{FV}_i)
\label{reconst1}
\end{equation}
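The attack translates into a few lines of NumPy, as in the minimal sketch below. Here \texttt{eigvecs} (the matrix of eigenvectors $\mathcal{FV}_i$), \texttt{pv} ($\mathcal{PV}$) and \texttt{mean\_vec} are assumed variable names; adding the average face back at the end reflects that the mean vector was subtracted before perturbation.
\begin{verbatim}
# Minimal sketch of the reconstruction in Equation (reconst1);
# eigvecs: (n, 29103) array of eigenvectors FV_i,
# pv: perturbed test vector PV, mean_vec: average-face vector.
import numpy as np

def reconstruct(pv, eigvecs, mean_vec):
    weights = eigvecs @ pv    # PV . FV_i for every eigenvector
    ri = eigvecs.T @ weights  # sum_i F_i * (PV . FV_i), flattened
    return ri + mean_vec      # add the average face back
\end{verbatim}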
\subsection{Empirical privacy of PEEP}
\label{empriv}
Figure \ref{facereconstruct} shows the effectiveness of the eigenface reconstruction attack (explained in Section \ref{facereconsec}) on a face image. The figure includes the results of the attack on two testing images, providing empirical evidence of the level of privacy rendered by PEEP, in which the lower the $\varepsilon$, the higher the privacy. At $\varepsilon=0.5$, the attack is not successful in recovering any underlying features of an image. At $\varepsilon=4$ and above, we can see that the reconstructed images have some features, but they are not detailed enough to identify the person shown in the image.
\begin{figure}[H]
\centering
\scalebox{0.45}{
\includegraphics[width=1\textwidth, trim=0cm 0cm 0cm 0cm]{faceimage_reconstruction.pdf}
}
\caption{Reconstructing images using the setup depicted in Figure \ref{facereconstructattack}. The first row shows original images. The second row shows the reconstructed images using the eigenfaces of the first row images without privacy. The three remaining rows show the face reconstruction at the privacy levels of $\varepsilon$ equals to $8, 4$, and $0.5$, respectively.}
\label{facereconstruct}
\end{figure}
\subsection{Performance of PEEP against other approaches}
In this section, we discuss the privacy guarantee of PEEP and the comparable methods with regard to five privacy issues (TYIS 1, 2, 3, 4, and 5) in face recognition systems, as identified in Section \ref{intro}. The first six rows of Table \ref{privacyprotect} provide a summary of the evaluation, where a tick mark indicates effective addressing of a particular issue, while a cross mark shows failure. Partially addressed issues are denoted by a ``$\partial$" symbol. PEEP satisfies TYIS 1 and TYIS 4 by randomizing the input images (both training and testing) so that the randomized images do not provide any linkability to other sensitive data. Both ZEYN and ANRA are semi-honest mechanisms and need database owners to maintain the facial image databases. ZEYN and ANRA satisfy TYIS 1 if and only if the database owners are fully trusted, which can be challenging in a cloud setting, as untrusted third parties with malicious intent can access the cloud servers. As shown in Section \ref{empriv}, the randomized eigenfaces cannot be used to reconstruct the original images. As PEEP stores only randomized data on the servers, the security of the cloud server is not a critical concern, and any data leak from the cloud server will not have an adverse effect on user privacy. The scalability results of the three methods, given in the last row of Table \ref{privacyprotect}, show that PEEP satisfies TYIS 2 by providing better scalability than ZEYN and ANRA. PEEP satisfies TYIS 3 because it uses no trusted party, whereas ZEYN and ANRA must have trusted database owners. PEEP provides some level of guarantee towards TYIS 5 by randomizing all the subsequent face image inputs related to the same person, which can come from the same device or different devices. Consequently, two input images related to the same person will have two different randomizations, leaving a low probability of linkability.
\begin{table}[H]
\centering
\caption{Performance of PEEP against other approaches}
\resizebox{0.65\columnwidth}{!}{%
\begin{tabular}{|l|l|c|c|c|}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Qualitative\\ comparison\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Type of issue\\ (TYIS)\end{tabular} & ZEYN & ANRA & PEEP \\
\cline{2-5}
& \begin{tabular}[c]{@{}l@{}}1. biometric should \\not be linkable to \\ other sensitive data \end{tabular} & \textcolor[rgb]{ .329, .51, .208}{ \large $\partial$} & \textcolor[rgb]{.329, .51, .208}{\large $\partial$} & \textcolor[rgb]{ .329, .51, .208}{\checkmark} \\
\cline{2-5}
& \begin{tabular}[c]{@{}l@{}}2. scalable and \\ resource friendly\end{tabular} & \textcolor[rgb]{ 1, 0, 0}{\Large $\times$} & \textcolor[rgb]{ 1, 0, 0}{\Large $\times$} & \textcolor[rgb]{ .329, .51, .208}{\checkmark} \\
\cline{2-5}
& \begin{tabular}[c]{@{}l@{}}3. biometrics should \\not be accessible\\ by a third-party\end{tabular} & \textcolor[rgb]{ 1, 0, 0}{\Large $\times$} & \textcolor[rgb]{ 1, 0, 0}{\Large $\times$} & \textcolor[rgb]{ .329, .51, .208}{\checkmark } \\
\cline{2-5}
& \begin{tabular}[c]{@{}l@{}}4. biometrics of the\\ same person from\\ two applications\\ should not be\\linkable\end{tabular} & \textcolor[rgb]{ .329, .51, .208}{ \large $\partial$} & \textcolor[rgb]{ .329, .51, .208}{ \large $\partial$} & \textcolor[rgb]{ .329, .51, .208}{\checkmark } \\
\cline{2-5}
& \begin{tabular}[c]{@{}l@{}}5. biometrics should\\ be revocable\end{tabular} & \textcolor[rgb]{ 1, 0, 0}{\Large $\times$} & \textcolor[rgb]{ 1, 0, 0}{\Large $\times$} & \textcolor[rgb]{ .329, .51, .208}{ \large $\partial$} \\
\hline
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Quantitative\\ comparison\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Average time to \\ recognize one\\ image in seconds\\ when the database\\ has 798 images\end{tabular} & \begin{tabular}[c]{@{}l@{}} $\sim$ 24 to 43\end{tabular} & \begin{tabular}[c]{@{}l@{}}$\sim$ 10 \\ \end{tabular} & 0.006 \\
\hline
\end{tabular}%
}
\newline\newline
\textcolor[rgb]{ .329, .51, .208}{\checkmark } = fully satisfied, \textcolor[rgb]{ .329, .51, .208}{ \large $\partial$} = partially satisfied, \textcolor[rgb]{ 1, 0, 0}{\Large $\times$} = not satisfied
\label{privacyprotect}%
\end{table}%
\subsection{Computational complexity}
PEEP involves two independent segments (components) in recognizing a particular face image. Component 1 is the randomization process, and component 2 is the recognition process. The two components conduct independent operations; hence, they need independent evaluations of computational complexity. Moreover, as PEEP does not need a secure communication channel, the complexity of maintaining a secure channel does not affect the performance of PEEP. For a particular instance of PEEP (refer to Algorithm \ref{ranalgo}), step \ref{genind} to step \ref{applyrand} display a linear complexity of $O(nc)$, where $nc$ is the number of principal components, and the image resolution (width in pixels, height in pixels) remains constant during a particular instance of perturbation and recognition. When width in pixels=47, height in pixels=62, and the number of PCA components=128, PEEP takes around 0.004 seconds to randomize a single input image. Component 2 can be composed of any suitable classification model; in our case, we use the MLPClassifier (refer to Section \ref{mlpclassifier}) as the facial recognition module, which was trained using 798 images. Under the same input settings (width in pixels=47, height in pixels=62, and the number of PCA components=128), the trained model takes 0.002 seconds to recognize a facial image input. Since the prediction is always done on a converged model, the time taken for prediction is constant and follows a complexity of $O(1)$. For randomization and prediction combined, PEEP consumes roughly 0.006 seconds under the given experimental settings. The runtime plots shown in Figure \ref{comcomplex} further validate the computational complexities evaluated above. According to the last row of Table \ref{privacyprotect}, PEEP is considerably faster than the comparable methods; PEEP provides a more effective and efficient approach towards the recognition of images against millions of faces in a privacy-preserving manner. In further examining the efficiency of PEEP, we investigated PE-MIU~\cite{terhorst2020pe} and POR~\cite{ma2019lightweight} (refer to Section \ref{relwork}), which are two recently developed approaches. PE-MIU consumes a complete MIU-verification time of 0.0072 seconds for a block size of 4 on a computer with an Intel(R) Core(TM) i7-7700 processor. POR consumes a testing time of around 0.011 seconds per image on an Intel(R) Core(TM) i5-7200 CPU @2.50GHz with 8.00GB of RAM. Hence, under the proposed experimental settings, a prediction time of 0.006 seconds consumed by PEEP can be considered efficient and reliable.
\begin{figure}[H]
\centering
\scalebox{0.5}{
\includegraphics[width=1\textwidth, trim=0cm 0cm 0cm 0cm]{efficiency.pdf}
}
\caption{The time consumption of PEEP to randomize and recognize one input image against the increasing number of principal components used for the eigenface generation. }
\label{comcomplex}
\end{figure}
\section{Conclusions}
\label{concls}
We proposed a novel mechanism named PEEP for privacy-preserving face recognition using data perturbation. PEEP utilizes the properties of differential privacy, which can provide a strong level of privacy for facial recognition technologies. PEEP does not need a trusted party, and employs a local approach where randomization is applied before the images reach an untrusted server. PEEP forwards only randomized data, which requires no secure channel. PEEP is an efficient and lightweight approach that can be easily integrated into any resource-constrained device. As the training and testing/recognition of facial images are done solely on the randomized data, PEEP does not incur any efficiency loss during the recognition of a face. The differentially private notions allow users to tweak the privacy parameters according to domain requirements. All things considered, PEEP is a state-of-the-art approach for privacy-preserving face recognition.
In future work, we will investigate the use of the proposed approach with other biometric algorithms and modalities, such as fingerprint and iris recognition, in particular with regard to effectiveness and sensitivity in different domains of inputs.
\input{main.bbl}
\end{document}
The accelerated expansion of the universe and the related concept of dark energy are among the most important concepts of modern cosmology. Cosmological studies before the discovery of the accelerated expansion of our universe were mainly focused on constraining the expansion rate of our universe via the Hubble constant and the deceleration parameter (q) \cite{Weinberg 2008}, since the matter-dominated universe was then assumed to be expanding at a decelerating rate. In this study we constrain the deceleration parameter to see how it evolves over the expansion history of the universe. In an accelerating universe, the deceleration parameter takes negative values, as confirmed by various observational results. Going back in time, we can observe the transition of our universe from the current accelerating phase to an earlier decelerating phase at a redshift known as the transition redshift. We also measure the transition redshift by deriving it from the matter and dark energy density parameters in the $\Lambda$-CDM model, and then compare it with the transition redshift obtained via the deceleration parameter under $\Lambda$-CDM, WCDM and CPL model assumptions.
In an isotropic and homogeneous universe, we can start by defining the
distance element as \cite{Weinberg 2008} \cite{Liddle 2003} \cite{Norton 2016}:
\begin{equation}
ds^{2} = - dt^{2} + a(t)^{2}\left(\frac{dr^{2}}{1 - kr^{2}} + r^{2}d\Omega^{2}\right)
\end{equation}
Here a(t) is the scale factor and k represents the spatial curvature.
The ratio of the rate of change of the scale factor to the scale factor
itself defines the Hubble parameter:
\begin{equation}
H(t) = \frac{\dot{a}}{a}
\end{equation}
The deceleration parameter in relation with the Hubble parameter can be
written as \cite{Rapetti et al. 2007} \cite{Weinberg 2008} \cite{Capozziello et al. 2019}:
\begin{equation}
q(t) = - 1 - \frac{H'}{H^{2}}
\end{equation}
Here H' is the time derivative of the Hubble parameter. We can write the
deceleration parameter as a function of redshift in a flat universe as:
\begin{equation}
\label{qz}
q(z) = \Omega_{r}(z) + \frac{\Omega_{m}(z)}{2} + \Omega_{\Lambda}(z)\,\frac{1 + 3w(z)}{2}
\end{equation}
Here \(\Omega_{r}\) is the radiation density parameter, \(\Omega_{m}\) the
matter density parameter, \(\Omega_{\Lambda}\) the dark energy density
parameter, and w(z) the dark energy equation of state parameter, which in the
\(\Lambda\)-CDM model is taken as a constant value of w=-1.
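As a concrete illustration, the following minimal Python sketch evaluates equation \ref{qz} for a flat \(\Lambda\)-CDM cosmology (w=-1 and negligible radiation); the fiducial density values are placeholders.
\begin{verbatim}
# Minimal sketch of q(z) for flat Lambda-CDM (w = -1, Omega_r ~ 0).
def q_lcdm(z, omega_m=0.3, omega_lambda=0.7):
    e2 = omega_m * (1 + z)**3 + omega_lambda  # E^2(z) = (H/H0)^2
    omega_m_z = omega_m * (1 + z)**3 / e2     # matter density at z
    omega_lambda_z = omega_lambda / e2        # dark energy density at z
    return 0.5 * omega_m_z - omega_lambda_z   # q(z) for w = -1

print(q_lcdm(0.0))  # -0.55 for these fiducial densities
\end{verbatim}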
\section{Measuring the deceleration parameter using H(z), ISW effect and type Ia supernovae data}\label{Deriving the deceleration parameter using H(z), ISW effect and type Ia supernovae data}
\subsection{Cosmic Chronometers}
In order to derive results using equation \ref{qz}, we first fit the energy densities and the dark energy equation of state (EoS) parameters using the available datasets. For this purpose, we use direct H(z) measurement data, which employ the cosmic chronometers approach to study the expansion rate of the Universe as a function of redshift. As discussed in Moresco et al. 2018 \cite{Moresco et al. 2018} \cite{Moresco2016}, the expansion history of the Universe via the Hubble parameter as a function of redshift (z) can be constrained by measuring the differential age evolution of cosmic chronometers, without assuming any particular cosmology. This cosmology-independent measurement of the H(z) evolution provides an interesting opportunity to test various cosmological models quickly and more directly than some other methods.
Theoretically, for flat \(\Lambda\)-CDM model and extensions H(z) can be written as :
\begin{equation}
H(z) = H_{0}\sqrt{\Omega_{\Lambda}I(z) + \Omega_{r}(1 + z)^{4} + \Omega_{m}(1 + z)^{3}}
\end{equation}
Where, I(z) is defined as :
\begin{equation}
I(z) = \exp\left(3\int_{0}^{z}\frac{1 + w_{\text{de}}\left( z^{'} \right)}{1 + z^{'}}\,dz^{'}\right)
\end{equation}
I(z) depends on the parametrization of the dark energy equation of state
(EoS); for the standard \(\Lambda\)-CDM model with a constant EoS,
\(w_{\text{de}}(z)=-1\), the multiplier I(z) becomes unity.
Using the cosmic chronometer data, we can also measure the current value of
the deceleration parameter at z=0 (q\textsubscript{0}) by using a Taylor
expansion of H(y)=H\textsubscript{0}E(y).
For this we define the variable y as \cite{Capozziello 2011} \cite{Rezaei 2020}:
\[y = \frac{z}{z + 1}\]
Now, we can define E(y) as:
\begin{equation}
E(y) = 1 + k_{1}y + \frac{k_{2}y^{2}}{2} + \frac{k_{3}y^{3}}{6} + \frac{k_{4}y^{4}}{24}
\end{equation}
with
\begin{align}
k_{1} &= 1 + q_{0} \nonumber \\
k_{2} &= 2 - q_{0}^{2} + 2q_{0} + j_{0} \nonumber \\
k_{3} &= 6 + 3q_{0}^{3} - 3q_{0}^{2} + 6q_{0} - 4q_{0}j_{0} + 3j_{0} - s_{0} \nonumber \\
k_{4} &= -15q_{0}^{4} + 12q_{0}^{3} + 25q_{0}^{2}j_{0} + 7q_{0}s_{0} - 4j_{0}^{2} - 16q_{0}j_{0} - 12q_{0}^{2} + l_{0} - 4s_{0} + 12j_{0} + 24q_{0} + 24 \nonumber
\end{align}
For the \(\Lambda\)-CDM case, we use
j\textsubscript{0}=1, s\textsubscript{0}=0 and l\textsubscript{0}=0
\cite{MuthukrishnaParkinson 2016}. For non-$\Lambda$-CDM cases, we use three configurations. In the first one, we fix s\textsubscript{0}=0 and l\textsubscript{0}=0 and keep H\textsubscript{0}, q\textsubscript{0} and j\textsubscript{0} free. In the second one, we fix H\textsubscript{0} to the $\Lambda$-CDM result, together with s\textsubscript{0}=0 and l\textsubscript{0}=0. In the third case, we fix only H\textsubscript{0} and leave the other parameters free. Table \ref{tab:table_1} presents the results for the different models. For this part of the study we use the boundary conditions \(65 \leq H_{0} \leq 75\), \(-5 \leq q_{0} \leq 5\), \(-10 \leq j_{0} \leq 10\), \(-500 \leq s_{0} \leq 500\) and \(-2000 \leq l_{0} \leq 2000\).
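For illustration, the Taylor expression above can be coded directly, as in the following minimal sketch; \texttt{hubble\_taylor} is an assumed helper name.
\begin{verbatim}
# Minimal sketch of the cosmographic expansion H(y) = H0 * E(y),
# with y = z / (z + 1) and the k coefficients defined above.
def hubble_taylor(z, h0, q0, j0=1.0, s0=0.0, l0=0.0):
    y = z / (1.0 + z)
    k1 = 1 + q0
    k2 = 2 - q0**2 + 2*q0 + j0
    k3 = 6 + 3*q0**3 - 3*q0**2 + 6*q0 - 4*q0*j0 + 3*j0 - s0
    k4 = (-15*q0**4 + 12*q0**3 + 25*q0**2*j0 + 7*q0*s0 - 4*j0**2
          - 16*q0*j0 - 12*q0**2 + l0 - 4*s0 + 12*j0 + 24*q0 + 24)
    e = 1 + k1*y + k2*y**2/2 + k3*y**3/6 + k4*y**4/24
    return h0 * e

# e.g. the Lambda-CDM best fit of Table 1:
# hubble_taylor(0.5, h0=68.4, q0=-0.34)
\end{verbatim}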
\begin{table}[htbp]
\centering
\caption{Measurement of q\textsubscript{0}, j\textsubscript{0}, l\textsubscript{0} and s\textsubscript{0} from the Taylor series expression, using the Moresco et al. 2016 data \cite{Moresco2016} with H(y), \(y=\frac{z}{z + 1}\), from Capozziello et al. 2011 \cite{Capozziello 2011} and Rezaei et al. 2020 \cite{Rezaei 2020}.}
\begin{tabular}{lrlllll}
\toprule
\textbf{Model\textbackslash{}Parameters} & & \textbf{H\textsubscript{0}} & \textbf{q\textsubscript{0}} & \textbf{j\textsubscript{0}} & \textbf{s\textsubscript{0}} & \textbf{l\textsubscript{0}} \\
\midrule
$\Lambda$-CDM & & 68.4±3.1 & -0.34±0.24 & 1(fixed) & 0(fixed) & 0(fixed) \\
Non-Lambda-CDM-I & & 70.1±3.2 & -0.82±0.26 & 2.46±0.63 & 0(fixed) & 0(fixed) \\
Non-Lambda-CDM-II & & 68.4(fixed) & -0.72±0.19 & 2.32±0.64 & 0(fixed) & 0(fixed) \\
Non-Lambda-CDM-III & & 68.4(fixed) & -0.45±0.48 & -0.8±6.6 & -9±90 & 210±740 \\
\end{tabular}
\label{tab:table_1}
\end{table}
We use the H(z) dataset provided by Moresco et al. 2016 \cite{Moresco2016} \cite{Brinckmann et al. 2019} for our cosmic chronometer
related analysis. We can see in Table \ref{tab:table_1} that the Taylor series expansion results show extremely large uncertainties if we use the snap and lerk as free parameters, and they do not tell us much about the underlying cosmology. Therefore, we use the derived deceleration and jerk parameters for our further analysis. Table \ref{tab:table_2} presents the results for the derived deceleration and jerk parameters using the cosmic chronometers or H(z) data.
\subsection{Type Ia supernovae}
Type Ia supernovae have been a useful tool for constraining cosmological
parameters, especially in the context of the accelerated expansion of our
universe. We can write the distance modulus as the difference between the
apparent magnitude (m) and the absolute magnitude (M) of a type Ia
supernova \cite{Davis 2012} \cite{Suzuki et al. 2012} \cite{Amanullah et al. 2010}:
\[\mu = m - M\]
Also, luminosity distance and distance modulus are related as:
\begin{equation}
\label{eq_muz}
\mu(z) = 5\log_{10}\lbrack D_{L}(z)\rbrack + 25
\end{equation}
We can write the apparent magnitude in the form:
\begin{equation}
\label{eq_mz}
m(z) = 5\log_{10}\lbrack D_{L}(z)\rbrack + M + 25
\end{equation}
With the inclusion of observational factors like color (k), shape (s) and
the probability that the supernova belongs in the low-host-mass category
(P), equation \ref{eq_mz} becomes:
\begin{equation}
\label{eq_mz_2}
m(z) = 5\log_{10}\lbrack D_{L}(z)\rbrack + M - \alpha s + \beta k - \delta P + 25
\end{equation}
We use the Union 2.1 distance modulus dataset, which is publicly shared by
the Supernova Cosmology Project (SCP). The dataset comprises 580 type Ia
supernovae that passed the usability cuts, covering the redshift range
\(0.015 \leq z \leq 1.414\) with a median redshift of \(z \approx 0.294\).
Due to the degeneracy between the Hubble Constant (H\textsubscript{0}) and
the absolute magnitude of type Ia supernovae (M) (as explored in our
previous chapter), we can separate the contribution of H\textsubscript{0}
and the uncertainty in the absolute magnitude M from equations \ref{eq_muz} and \ref{eq_mz} as \cite{Davis 2012}:
\begin{equation}
M' = 25 + 5\log_{10}\!\left(\frac{c}{H_{0}}\right) + \sigma_{M}
\end{equation}
We can then marginalize over this term to constrain the densities and EoS
parameters. We use the Union 2.1 dataset for our type Ia supernovae related analysis \cite{Suzuki et al. 2012}.
\subsection{ISW Effect}
The late time integrated Sachs-Wolfe (ISW) effect deals with the blue-shifting and red-shifting of the CMB photons due to the presence of large scale structures and super-voids, respectively (see: \cite{SachsWolfe 1967} \cite{Afshordi 2004} \cite{RahmanIqbal 2019} \cite{Hojjate et al. 2011} \cite{Laureijis et al. 2011} \cite{Loverde 2008} \cite{Nolta et al. 2004} \cite{Vagnozzi2020}). The cross-correlation angular power spectrum
coefficient `Cl' can be calculated as:
\begin{equation}
Cl_{\text{gt}} = 4\pi\int\frac{dk}{k}\,\Delta^{2}(k)\,Wl_{g}(k)\,Wl_{t}(k)
\end{equation}
Here, \(Wl_{g}(k)\) and \(Wl_{t}(k)\) are the galaxy and
temperature window functions, respectively, and $\Delta^{2}(k)$ is the
logarithmic matter power spectrum. The galaxy window function depends on
the redshift distribution (dN/dz) and the galaxy bias (b). We use the redshift
distribution from the NVSS-ISW dataset provided by Stölzner et al. 2018 \cite{Stolzner et al. 2018}, Brinckmann \& Lesgourgues 2019 \cite{Brinckmann et al. 2019} and Audren et al. 2013 \cite{Audren et al. 2013}, which provides the CMB-galaxy cross-correlation angular power spectrum for Planck 2015 \cite{Planck 2016} \cite{Planck 2016b} and NVSS \cite{Condon1998} \cite{Blake2004}. We also use a redshift dependent galaxy bias for NVSS \cite{Planck 2016b} \cite{Planck 2016}:
\begin{equation}
b(z) = 0.90\left[1 + 0.54(1 + z)^{2}\right]
\end{equation}
Another factor which affects the galaxy window function, and hence the ISW effect, is the magnification bias due to gravitational lensing. The magnification bias depends on the slope $\alpha$ of the integral count, $N(>S)=CS^{-\alpha}$. However, as discussed in \cite{Bianchini2015} \cite{Loverde 2007}, $\alpha$ only plays a part when it is greater or less than 1. In the case of the NVSS integral counts, $\alpha$ is almost equal to 1 and so does not significantly affect our theoretical calculations.
For the \(\chi^{2}\) and mean likelihood calculations, we use the
diagonal of the covariance matrix provided by Stölzner et al. 2018 \cite{Stolzner et al. 2018}, Brinckmann \& Lesgourgues 2019 \cite{Brinckmann et al. 2019} and Audren et al. 2013 \cite{Audren et al. 2013}. We use the
cross-correlation Cl values from multipole (l)=10 to 100, since for higher
multipole ranges CMB lensing and the Sunyaev--Zeldovich effect play a
significant part in the overall CMB anisotropy power spectrum \cite{SunyaevZeldovich 1972} \cite{ShajibWright 2016}. Apart from the cosmology parameters, we also fit a parameter AISW, which quantifies the ISW amplitude
as in Stölzner et al. 2018 \cite{Stolzner et al. 2018} and also absorbs possible effects on the
Cl values due to errors in modeling the galaxy bias, b(z). An AISW value
different from 1 can either mean disagreement with \(\Lambda\)-CDM or
modeling issues with the galaxy bias.
\section{Likelihood}
We calculated likelihood (L) as:
\begin{equation}
L = \exp\!\left(\frac{-\Delta\chi^{2}}{2}\right)
\end{equation}
Here, \(\Delta\chi^{2}\) is the chi-square minus the minimum chi-square value over the parameter sets being tested using the theoretical models and data\footnote{Sample code for likelihood calculations using H(z) data: \url{https://github.com/faisalrahman36/Workshop_labs/tree/master/Workshop_2_Cosmology_examples/Hz_Data}}. For parameter fitting, we use the mean likelihood instead of the maximum
likelihood, in order to minimize the effect of parameter boundary cuts \cite{Davis 2012}.
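A minimal sketch of this calculation for the H(z) data is given below (a fuller example is available at the repository linked in the footnote); \texttt{hz\_model} is an assumed callable returning the theoretical H(z) for a given parameter set.
\begin{verbatim}
# Minimal sketch of the Delta chi-square and mean-likelihood calculation;
# z, hz_obs and hz_err are the columns of the H(z) dataset.
import numpy as np

def chi2(params, z, hz_obs, hz_err, hz_model):
    resid = (hz_obs - hz_model(z, *params)) / hz_err
    return np.sum(resid**2)

def likelihoods(param_grid, z, hz_obs, hz_err, hz_model):
    chi2_vals = np.array([chi2(p, z, hz_obs, hz_err, hz_model)
                          for p in param_grid])
    delta = chi2_vals - chi2_vals.min()  # Delta chi-square
    return np.exp(-delta / 2.0)          # likelihood L
\end{verbatim}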
For this part of the study we use the boundary conditions \(65 \leq H_{0} \leq 75\),
\(0.6 \leq \Omega_{\Lambda} \leq 0.8\), \(-1.5 \leq w_{0} \leq -1/3\) and \(-2 \leq w_{a} \leq 2\).
We assume flatness for all the models in our analysis.
We use mean likelihood analysis to constrain the parameters \cite{Davis 2012}
with the \(\Lambda\)-CDM, WCDM and CPL models \cite{ChevallierPolarski 2001} \cite{Linder 2003}. Figures \ref{fig:qzhz}, \ref{fig:qzunion2p1} and \ref{fig:qziswnvss} show the deceleration
parameter evolution over redshift from H(z), NVSS-ISW, and
Union 2.1 for the \(\Lambda\)-CDM, WCDM and CPL models, and their best fit
values can be seen in Table \ref{tab:table_2}.
The results are given in Table \ref{tab:table_2}. We can see that for the \(\Lambda\)-CDM
model, the deceleration parameter (q\textsubscript{0})
derived from the densities and EoS parameters using H(z), ISW and Union 2.1
type Ia supernovae is not only consistent across the datasets but also
agrees with Table \ref{tab:table_1}, within one standard deviation.
However, the WCDM and CPL results derived from H(z), ISW and
Union 2.1 type Ia supernovae are quite different from the
non-\(\Lambda\)-CDM q\textsubscript{0} results of Table \ref{tab:table_1}. This is
mainly because of the limitations of the Taylor series luminosity distance
form used for the measurements in Table \ref{tab:table_1}. It is also worth noting that
all $w_{0}$ results from WCDM and CPL using H(z), ISW and Union 2.1 type Ia
supernovae are not too different from our \(\Lambda\)-CDM assumption of a
constant EoS parameter, w=-1. They all agree with w=-1
within one standard deviation, despite the datasets being different in nature and
redshift range. Even though we kept the phantom energy region
(w\textless-1) within our boundary conditions, our results still agree
with a likely constant w=-1. Phantom energy is a theoretical concept
associated with the acceleration of our universe, where w\textless-1 can
lead to the possibility of a ``Big Rip'' fate of our universe, in
which all matter, from large scale structures to subatomic particles,
gets torn apart \cite{Farnes2018} \cite{Vikman2005}. Vagnozzi et al. 2018 \cite{Vagnozzi2018} suggested that quintessence dark energy models can be ruled out in the coming years, independently of cosmological observations, if long-baseline neutrino experiments measure the neutrino mass ordering to be inverted. However, this will require confirmation from cosmological studies using other signatures as well.
Another interesting result is that the values of the Hubble constant
(H\textsubscript{0}) from H(z) and from the NVSS ISW are in agreement with
each other. The mean value is closer to 70, as observed via
gravitational waves and via H\textsubscript{0} measurements based on a calibration of the
Tip of the Red Giant Branch (TRGB) applied to type Ia supernovae (SNeIa)
\cite{Freedman et al. 2019} \cite{LIGO 2017} \cite{Abbott et al. 2016} \cite{Abbott et al. 2017}, and the
standard deviation margins bring them into the range of the extreme values
observed by Planck's CMB measurements, the Dark Energy Survey (DES), and
H0LiCOW's combined results \cite{Macaulay et al. 2019} \cite{Birrer et al. 2018} \cite{Riess et al. 2019}. There are still other challenges associated with the observational signatures from which the datasets are taken,
apart from the disagreements over the expansion rate of our universe. There
are issues like the cosmic cold spot or the CMB cold spot \cite{Rudnick et al. 2007} \cite{Rahman 2020}, which does not agree with the ISW effect results based on current observations. This can be due to issues related
to the galaxy bias, magnification bias, galaxy classification, redshift
distribution and other factors involved in understanding the galaxy
window function and galaxy-CMB map cross-correlations \cite{Nolta et al. 2004} \cite{Raccanelli2008} \cite{Raccanelli2012}. There are also challenges associated with the type Ia supernovae observations, which are
attracting some attention mainly due to the expansion rate debate.
There are also possibilities of such issues arising due to our lack of
understanding of things like curvature \cite{Valentino et al. 2020} \cite{Vagnozzi2021a} \cite{Vagnozzi2021b}, the
nature of inflation in different areas \cite{Wang et. al. 2016}, or some other
exotic phenomenon \cite{Rahman 2020}.
\section{Transition redshift}\label{transition-redshift}
A useful quantity for studying the accelerated expansion of our universe is
the transition redshift (zt), defined as the redshift at which the universe
enters the accelerating phase from an earlier decelerating phase. In flat
\(\Lambda\)-CDM models, we can write zt as \cite{Moresco2016}:
\begin{equation}
zt = \left(\frac{2\Omega_{\Lambda}}{\Omega_{m}}\right)^{1/3} - 1
\end{equation}
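The expression reduces to a one-line function; as a check, the Union 2.1 best fit densities of Table \ref{tab:table_2} reproduce the corresponding entry of Table \ref{tab:table_3}.
\begin{verbatim}
# Minimal sketch of the flat Lambda-CDM transition redshift.
def transition_redshift(omega_lambda, omega_m):
    return (2.0 * omega_lambda / omega_m) ** (1.0 / 3.0) - 1.0

print(transition_redshift(0.721, 1.0 - 0.721))  # ~0.729, cf. Table 3
\end{verbatim}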
Table \ref{tab:table_3} gives the results we get from H(z), NVSS-ISW, and Union 2.1 for the
flat \(\Lambda\)-CDM model. The results are consistent, within one standard
deviation, with the results obtained from previous studies using CMB
anisotropies, BAO, type Ia supernovae and cosmic chronometers for the
\(\Lambda\)-CDM model \cite{Moresco2016} \cite{Busca et al. 2013} \cite{Lima et al. 2012} \cite{Hinshaw et al. 2013} \cite{Planck 2018}. However, our best fit transition redshift
results are slightly larger than the measurements from the cosmological
model independent approach discussed in Moresco et al. 2016 \cite{Moresco2016}. Figures
\ref{fig:qzhz}, \ref{fig:qzunion2p1} and \ref{fig:qziswnvss} show the transition redshifts from H(z), NVSS-ISW, and
Union 2.1 for the flat \(\Lambda\)-CDM model, and their best fit values can
be seen in Table \ref{tab:table_3}.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{qz_Hz}
\caption[Deceleration parameter and transition redshift measurements from H(z) data.]{Deceleration parameter and transition redshift measurements from H(z) data.}
\label{fig:qzhz}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{qz_union2p1}
\caption[Deceleration parameter and transition redshift from Union 2.1 type Ia supernovae data.]{Deceleration parameter and transition redshift from Union 2.1 type Ia supernovae data.}
\label{fig:qzunion2p1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{qz_isw_nvss}
\caption[Deceleration parameter and transition redshift from NVSS-ISW data.]{Deceleration parameter and transition redshift from NVSS-ISW data.}
\label{fig:qziswnvss}
\end{figure}
\section{Jerk parameter}
Another quantity which is helpful in understanding the acceleration of the universe, and especially its deviations from $\Lambda$-CDM, is the jerk parameter, which can be useful for understanding the transitions between phases of different cosmic acceleration.
The jerk parameter, j \cite{Weinberg 2008} \cite{Blandford2004} \cite{MamonDas2017}, is a dimensionless quantity obtained by taking the third derivative of the scale factor a(t) with respect to cosmic time \cite{Blandford2004}. We can calculate the jerk parameter as a function of redshift, using the deceleration parameter q, as \cite{MamonDas2017}:
\begin{equation}
j(z) = q(z)\left(2q(z) + 1\right) + (1 + z)\,\frac{dq}{dz}
\end{equation}
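Numerically, this relation can be evaluated from any q(z) curve with a finite-difference derivative, as in the minimal sketch below, which reuses the \texttt{q\_lcdm} function sketched earlier.
\begin{verbatim}
# Minimal sketch of the jerk parameter from q(z) via a central difference.
def jerk(z, q_func, dz=1e-4):
    q = q_func(z)
    dq_dz = (q_func(z + dz) - q_func(z - dz)) / (2 * dz)
    return q * (2 * q + 1) + (1 + z) * dq_dz

print(jerk(0.0, q_lcdm))  # ~1 for Lambda-CDM, as expected
\end{verbatim}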
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{jz_Hz}
\caption{Jerk parameter and transition redshift measurements from H(z) data.}
\label{fig:jzhz}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{jz_union2p1}
\caption{Jerk parameter and transition redshift from Union 2.1 type Ia supernovae data.}
\label{fig:jzunion2p1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{jz_isw_nvss}
\caption{Jerk parameter and transition redshift from NVSS-ISW data.}
\label{fig:jziswnvss}
\end{figure}
Figures \ref{fig:jzhz}, \ref{fig:jzunion2p1} and \ref{fig:jziswnvss} show the evolution of the jerk parameter as a function of redshift. We can see that they agree with the $\Lambda$-CDM assumption of j\textsubscript{0}=1 for all models and data. In the figures, we plot both the j(z)=1 assumption and the j(z) evolution from the data for the $\Lambda$-CDM, WCDM and CPL models. Similar agreement can be seen in Tables \ref{tab:table_2} and \ref{tab:table_4} for the current j\textsubscript{0} values obtained using the H(z), Union 2.1 and ISW-NVSS datasets with the $\Lambda$-CDM, WCDM and CPL models.
\section{Combined constraints}\label{combined-constraints}
In order to obtain combined constraints on the deceleration parameter
(q\textsubscript{0}), the jerk parameter (j\textsubscript{0}) and the transition redshift (zt), we use the
inverse-variance weighted mean. We first combine the lower redshift
signatures from Union 2.1 and H(z), and then combine the results from all
the datasets in our study, including the higher redshift ISW signature.
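The inverse-variance weighted mean is sketched below; as a consistency check, combining the three \(\Lambda\)-CDM q\textsubscript{0} entries of Table \ref{tab:table_2} reproduces the combined value in Table \ref{tab:table_4}.
\begin{verbatim}
# Minimal sketch of the inverse-variance weighted combination of
# parameter estimates (values x_i with uncertainties sigma_i).
import numpy as np

def combine(values, sigmas):
    w = 1.0 / np.asarray(sigmas)**2
    mean = np.sum(w * np.asarray(values)) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    return mean, sigma

# Lambda-CDM q0 from H(z), NVSS-ISW and Union 2.1 (Table 2):
print(combine([-0.532, -0.613, -0.5815], [0.069, 0.0585, 0.03]))
# -> (-0.581, 0.025), matching Table 4
\end{verbatim}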
We can see in Table \ref{tab:table_4} that the results agree with each other within one
standard deviation. However, in the case of both q\textsubscript{0} and j\textsubscript{0}, the deviations for WCDM and CPL are
relatively higher, especially for the high redshift ISW studies. The
standard deviations improve when all datasets are combined together.
For the \(\Lambda\)-CDM case, the results from Tables \ref{tab:table_1}, \ref{tab:table_2} and \ref{tab:table_4} agree with
each other within one standard deviation, but for the non-\(\Lambda\)-CDM cases we can see some
disagreement in the jerk parameter, which is likely due to the greater uncertainties involved in the
H(z) Taylor series approximations when we free the jerk, snap and lerk parameters.
We can also see that the results for the transition redshift in Table \ref{tab:table_5} give a value of $\approx$0.724, with slightly smaller standard deviations than Table \ref{tab:table_3}. The combined transition redshift is close to previous \(\Lambda\)-CDM based estimates \cite{Moresco2016} \cite{Busca et al. 2013} \cite{Lima et al. 2012} \cite{Hinshaw et al. 2013} \cite{Planck 2018} but higher than the cosmological model independent results presented in Moresco et al. 2016 \cite{Moresco2016}.
\begin{sidewaystable}
\centering
\caption{Cosmological parameters and derived deceleration parameters from H(z),NVSS-ISW and Union 2.1.}
\begin{tabular}{llllllll}
\toprule
\multicolumn{1}{|l}{\textbf{Dataset/Model}} & \textbf{AISW} & \textbf{H0} & \textbf{Omega\_Lambda} & \textbf{w0} & \textbf{wa} & \textbf{q0 (derived)} & \textbf{j0 (derived)} \\
\midrule
\textbf{H(z)} & & & & \textcolor[rgb]{ .133, .133, .133}{} & \textcolor[rgb]{ .133, .133, .133}{} & & \\
Lambda-CDM & N/A & 68.8±2.4 & 0.688±0.046 & & & -0.532±0.069 & 1± 3.4786e-06 \\
WCDM & N/A & 69.5±3 & 0.688±0.048 & -1.1±0.26 & & -0.6352±0.28 & 1.1±0.2684 \\
CPL & N/A & 69.7±3 & 0.689±0.053 & -1.11±0.26 & -0.2±1.2 & -0.647±0.282 & 1.1137±0.269 \\
& & & & & & & \\
\textbf{NVSS-ISW} & & & & & & & \\
Lambda-CDM & 2.92±0.79 & 70.2±3.4 & 0.742±0.039 & & & -0.613±0.0585 & 1±4.633e-06 \\
WCDM & 2.93±0.78 & 70.2 (fixed from LCDM) & 0.743±0.039 & -0.86±0.37 & & -0.458±0.415 & 0.844±0.41245 \\
CPL & 2.93 (fixed from WCDM) & 70.2 (fixed from LCDM) & 0.746±0.036 & -0.9±0.39 & -0.4±1.1 & -0.507±0.439 & 0.888±0.4364 \\
& & & & & & & \\
\textbf{Union 2.1} & & & & & & & \\
Lambda-CDM & N/A & Marginalized & 0.721±0.02 & & & -0.5815±0.03 & 1±4.012e-07 \\
WCDM & N/A & Marginalized & 0.705±0.055 & -1.07±0.17 & & -0.6315±0.2 & 1.074±0.18 \\
CPL & N/A & Marginalized & 0.692±0.059 & -1.07±0.16 & -0.4±1 & -0.61066±0.19118 & 1.07266±0.1662 \\
\end{tabular}
\label{tab:table_2}
\end{sidewaystable}
\begin{table}[htbp]
\centering
\caption{Transition redshift (zt) from H(z), NVSS-ISW and Union 2.1 for the flat $\Lambda$-CDM model.}
\begin{tabular}{ll}
\toprule
\textbf{Dataset} & \textbf{Transition Redshift (zt)} \\
\midrule
& \\
H(z) & 0.639915± 0.11714 \\
NVSS-ISW & 0.792 ± 0.122 \\
Union 2.1 & 0.728968 ± 0.0573 \\
\end{tabular}
\label{tab:table_3}
\end{table}
\begin{table}[htbp]
\centering
\caption{Combined deceleration and jerk parameters from H(z),NVSS-ISW and Union 2.1}
\begin{tabular}{rlll}
\toprule
\multicolumn{1}{l}{\textbf{Model}} & \textbf{Datasets} & \textbf{q0} & \textbf{j0} \\
\midrule
& & & \\
\multicolumn{1}{l}{Lambda-CDM} & & & \\
& H(z)+Union 2.1 & -0.574±0.0275 & 1±3.986e-07 \\
& H(z)+Union 2.1+NVSS-ISW & -0.5808±0.025 & 1±3.971e-07 \\
& & & \\
\multicolumn{1}{l}{WCDM} & & & \\
& H(z)+Union 2.1 & -0.633±0.163 & 1.082±0.1495 \\
& H(z)+Union 2.1+NVSS-ISW & -0.61±0.15 & 1.054±0.141 \\
& & & \\
\multicolumn{1}{l}{CPL} & & & \\
& H(z)+Union 2.1 & -0.622±0.158 & 1.084±0.1414 \\
& H(z)+Union 2.1+NVSS-ISW & -0.61±0.15 & 1.0654±0.1345 \\
\end{tabular}
\label{tab:table_4}
\end{table}
\begin{table}[htbp]
\centering
\caption{Combined transition redshift (zt) from H(z), NVSS-ISW and Union 2.1 for flat Lambda-CDM model}
\begin{tabular}{ll}
\toprule
\textbf{Datasets} & \textbf{Transition Redshift (zt)} \\
\midrule
H(z)+Union 2.1 & 0.712±0.051 \\
H(z)+Union 2.1+NVSS-ISW & 0.724±0.047 \\
\end{tabular}
\label{tab:table_5}
\end{table}
\section{Conclusion}\label{conclusion}
In this study, we started with an introduction to the deceleration parameter (q\textsubscript{0}) and estimated it directly using the cosmic chronometer or H(z) data; in the later parts of the chapter, we estimated q\textsubscript{0} by deriving it from the cosmological parameter results obtained using the cosmic chronometers data, the type Ia supernovae data, and the ISW data. We obtained a combined q\textsubscript{0}=-0.5808±0.025 for the Lambda-CDM model, and q\textsubscript{0}=-0.61±0.15 for both the WCDM and CPL models. We also measured the jerk parameter (j\textsubscript{0}). The best fit measurement we obtained for the combined jerk parameter is 1±3.971e-07 for the Lambda-CDM model, 1.054±0.141 for the WCDM model, and 1.0654±0.1345 for the CPL model. Finally, we estimated the transition redshift, measuring a combined zt=0.724±0.047 from the Lambda-CDM model parameters.
Stellar spectra contain a wealth of information on the photospheres of
stars, including their chemical makeup. In spite of decades of measuring and studying stellar spectra,
we have a very limited knowledge of how many stars with a given chemical composition exist
in the Galaxy. Large spectroscopic surveys such as RAVE \citep[][]{2006AJ....132.1645S} or
SDSS \citep[e.g.,][]{2002AJ....123..485S}
have increased by several orders of magnitude the number of stars with measured spectra,
yet these surveys are limited in scope, reaching only a particular cross-section of the stellar population
of the Milky Way, and biased, in that they target stars that have been preselected based on their
color, distance or brightness. This situation will soon change as new large projects such as
Gaia \citep[e.g.,][]{2005ESASP.576.....T}, or HETDEX \citep{2008ASPC..399..115H}, with
well controlled samples, appear in the scene.
The advent of these new data sets poses an obvious problem.
As the data flow continuously increases, performing systematic studies becomes
more of an issue, and new efficient
means of analyzing stellar spectra are needed.
The study of stellar spectra to quantify the physical properties
of stars requires model atmospheres, radiative transfer calculations, atomic and molecular data,
and involves a large number of approximations.
The model ingredients and our recipes to apply them to interpret observations are in
constant evolution, and so are the values for the inferred quantities. Avoiding such changes is
one of the motivations behind spectroscopic classification in general, and in particular of the MK
system of \citet{1943QB881.M6.......} and \citet{1973ARA&A..11...29M}
which is still in use today enhanced with extensions.
It assigns spectral classes in a way that is purely empirical and repeatable,
providing basic information as a preliminary step for further,
more detailed, analysis.
This kind of typing system based on a series of predefined criteria
is feasible as long as the criteria are set and applied by humans. Such systems
resemble the taxonomical classification of animal species \citep[see][]{2006A&A...455..845F}
and, by definition,
are not optimal since they rely on subjective judgments.
If such systems have to be updated for application
to very large number of spectra, the classification has to be
easily implemented and performed with computers, in a fast
and homogeneous fashion.
Moreover, \citet{san05} maintained
that physics must not be used to drive a classification;
otherwise the arguments become circular when using the
classification to support physics. Thus, one is inclined
to consider unsupervised classification
systems, which are themselves defined by algorithms and data.
The potential of one such method, k-means,
is explored in this paper. However, it must be stressed
that {\em unsupervised} does not mean {\em absolute} or
model {\em independent}. The classification criteria are
implicitly set by the algorithm, and the resulting classes
depend on the specific dataset under analysis.
In data mining parlance, the spectrum of a star
is a point in the high-dimensional space where each
coordinate corresponds to the intensity at a particular
wavelength.
Given a comprehensive set of stellar spectra,
classifying consists of identifying clusters
in this high-dimensional space.
The problem of finding structures in a multidimensional data set goes also by the
name of cluster analysis \citep[see e.g.,][]{eve95,bis06}. One of the
most widely used algorithms is k-means clustering \citep{mac67},
and it fulfills the requirements put forward above.
Moreover, k-means is simple to code and robust, even when exploring clustering
in a high-dimensional space.
Previous works have shown successful applications
of the method to the classification of spectra in
various astrophysical contexts, e.g.,
stars \citep[][]{1996A&A...311..145B,2012arXiv1209.0495S},
solar polarization spectra \citep{2000ApJ...532.1215S,2011A&A...530A..14V},
X-ray spectra \citep{2007ApJ...659..585H},
spectra from asteroids \citep{2008AIPC.1082..165G},
and galaxy spectra \citep{2009ApJ...698.1497S,2010ApJ...714..487S,2011ApJ...743...77M}.
We now explore its application to medium-resolution stellar
spectra from the Sloan Digital Sky Survey.
SDSS currently provides the largest available homogenous database of stellar spectra.
The
original SDSS survey \citep{2002AJ....123..485S,2009ApJS..182..543A}
together with
SEGUE and SEGUE-2 contain
somewhere over half a million stellar spectra
\citep{2009AJ....137.4377Y}.
The Baryon Oscillations Spectroscopic Survey
\citep[BOSS;][]{2011AJ....142...72E},
part of SDSS-III, uses similar but upgraded spectrographs, and
in the first two years of operation has already obtained over 100,000 additional spectra of stars.
Even though these stars do not provide a fair sample of the Milky Way stellar population,
this rich data set is a good place to explore the application of clustering algorithms
for classifying stars.
Actually, the set has already been used for this
purpose. \citet{2010AJ....139.1261M} apply principal component
analysis (PCA) to spectra having narrow color bins, so as to separate stars of the same effective
temperature according to their gravity and metallicity. In this case the spectra are not
classified directly by the automated procedure, but the color cuts
introduce human supervision into the classification.
\citet{2011AJ....142..203D} apply local linear embedding (LLE), which
is a type of PCA decomposition that preserves the
nonlinear structure within high-dimensional data sets. The stellar spectra
are found to form a 1D family when projected into the first three eigenvectors.
Finally, \citet{2010ApJ...719.1759A} also use SDSS data to show that stellar spectra
are highly compressible, so that a small number of parameters suffice
to reproduce the bulk of the observed spectra.
In this paper we focus on the k-means classification of stars observed as part of the SEGUE
and SEGUE-2 surveys \citep{2009AJ....137.4377Y}.
Section~\ref{data} presents the selection of spectra used in the analysis,
and Sect.~\ref{algorithm} describes the basics behind k-means.
Section~\ref{classification} applies the algorithm to SDSS
spectra, first considering continuum (Sect.~\ref{class_with}), and then
without continuum (Sect.~\ref{class_out}).
The classification allows us to identify outliers, often
rare objects that do not show up unless the catalogs are large
enough and which turn out to be extremely revealing
\citep[e.g.,][]{2012ApJS..200...14M}.
The properties of these outliers are analyzed in Sect.~\ref{outliers}.
Section~\ref{conclusions} explains the main results and outlines
additional uses of the classification.
\section{Data set}\label{data}
The spectra come from SEGUE and SEGUE-2 \citep{2009AJ....137.4377Y}. These programs obtained
stellar spectra using the SDSS 2.5-m telescope and
the SDSS double spectrograph \citep{2006AJ....131.2332G,2012arXiv1208.2233S} between
2004 and 2009.
The spectra contain 3849 wavelengths
covering the range 3800-9200\,\AA\ at a resolving power $R\simeq 1800$.
They were downloaded from the SDSS Data Release~8
\citep[DR8; ][]{2011ApJS..193...29A}.
SEGUE observed numerous types of targets, from very hot WD to very cool M and L~types,
each chosen based on color criteria, and in some cases additional information such as proper motion.
The survey observations sample the Galaxy at mid and high galactic latitudes,
covering very sparsely 3/4 of the sky, with only 3 plates
(less than 0.5\% of the spectra) at $|b|<10^\circ$.
The sample of DR8 spectra associated with SEGUE and SEGUE-2 programs
include 355,840 spectra in 525 plug-plates. Each plate, as for SDSS, admits 640 fibers.
We selected stars with a median signal-to-noise ratio ${\rm S/N} >10$, a radial velocity modulus smaller
than 600~km\,s$^{-1}$ (redshift $<0.002$), and which
were not labeled as one of the
following classes of objects: GALAXY, NA, QA, ROSAT\_D, QSO, SER, and CATY\_VAR.
We processed the spectra to eliminate by interpolation the [OI] nightglow lines
at 5577 and 6300\,\AA , and corrected the Doppler shifts associated with the radial velocities.
The original spectra have units of flux per unit wavelength
(i.e., erg\,cm$^{-2}$\,s$^{-1}$\,\AA$^{-1}$).
We normalized them by dividing
by the median value of their fluxes in the
spectral band between 5000 and 6000~\AA . This step preserves the shape
of the spectral energy distribution, while placing all the spectra on the same
scale regardless of the intrinsic luminosity of the stars, their distance, and the
amount of interstellar absorption. Missing sections of
spectra were patched by interpolation, since any regions with extreme (wrong) values can
damage the classification algorithm.
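A minimal sketch of this preprocessing step, assuming each spectrum is held as flux and wavelength arrays with missing pixels flagged as NaN:
\begin{verbatim}
# Minimal sketch of the preprocessing: normalize by the median flux
# in the 5000-6000 A band and patch missing sections by interpolation.
import numpy as np

def preprocess(wavelength, flux):
    band = (wavelength >= 5000.0) & (wavelength <= 6000.0)
    flux = flux / np.nanmedian(flux[band])
    bad = ~np.isfinite(flux)
    if bad.any():
        flux[bad] = np.interp(wavelength[bad],
                              wavelength[~bad], flux[~bad])
    return flux
\end{verbatim}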
A sample of 173,390 stellar spectra passed all the selection criteria, and
we refer to them
as {\em the reference set}.
Interstellar extinction is not corrected for; however, we also analyze
a version of the same data with a running mean subtracted from the spectral
energy distribution, leaving only absorption features. This procedure is intended to
partly remove the reddening produced by interstellar extinction, thus minimizing its
potential impact on the results.
Two additional sets of spectra are mentioned in the paper. They were
used only for testing in early stages of the
work, but they are explicitly mentioned here because
the sanity checks performed with them provide confidence in the
technique.
As far as we can tell, they are equivalent to the reference set
in a statistical sense.
For lack of imagination, we refer to them as
{\em 1st auxiliary set} and {\em 2nd auxiliary set}.
Auxiliary set~1 comprises 63,611 stellar spectra from 3800 to 8000\,\AA , drawn
from SDSS/DR6 and then purged to retain only those with best S/N. They are uniformly
resampled
in log-wavelength so as to have 1617 wavelengths. This set is particularly
rich in main sequence F-type stars, since they were used as spectrophotometric
calibrators in the original SDSS survey (only 162 distinct SEGUE plates were included in DR6).
Auxiliary set~2 comes from DR8, but the noise thresholding and other
selection criteria differ from the reference set, and leaves only 121,272 targets.
\begin{figure*}
\includegraphics[width=0.9\textwidth]{diagnostics7h.eps}
\caption{
Distribution of MK spectral types in the reference dataset.
We use ELODIE-based types as provided by the SSPP.
}
\label{original_classes2}
\end{figure*}
Effective temperatures $T_{\rm eff}$, surface gravities ($\log {\rm g}$) and metallicities ([Fe/H]) computed
with the SEGUE Stellar Parameter Pipeline
\citep[hereafter SSPP;][]{
2008AJ....136.2022L,
2008AJ....136.2050L,
2008AJ....136.2070A,
2011AJ....141...90L,
2011AJ....141...89S}
are used here to characterize the physical properties of the stars.
Each one of these physical parameters results from the robust
average of various independent estimates, discarding those which do not seem to
be consistent \citep[for details, see][]{2008AJ....136.2022L}.
The MK types of the targets mentioned in the paper
are also from the SSPP. Most of our discussion
is based on the so-called ELODIE MK types, derived
by best-fitting templates from the ELODIE library
of synthetic spectra \citep{2001A&A...369.1048P}.
The distribution of MK types of the reference
set is shown in Fig.~\ref{original_classes2}.
These types encompass a broad range of stellar
properties from O to L and include WDs, however, they
face some difficulties when interpreting intermediate spectral types
\citep[e.g., the distribution of MK types in Fig.~\ref{original_classes2} peaks at
type F whereas most of the SEGUE stars have been selected to be of G type; see][]{2009AJ....137.4377Y}.
The SSPP also provides a second set of MK types only for cool stars, the Hammer MK types,
which were inferred using the spectral typing software developed and described by
\citet{2007AJ....134.2398C}. They cope much better with types G and F, and
we apply them when appropriate.
\section{Classification algorithm}\label{algorithm}
We use {\em k-means} to carry out the classification, which is a robust tool
commonly used in data mining and artificial intelligence \citep[e.g.,][]{eve95,bra98}.
The actual realization of the algorithm employed in our analysis is described
in detail by \citet[][\S~2]{2010ApJ...714..487S}, and we refer to that work for details.
However, for comprehensiveness, this section sketches the operation of
the algorithm, with its pros and cons.
As for most classification algorithms, the stellar spectra are vectors in
a high dimensional linear space, with as many dimensions as
the number of wavelengths. Therefore the spectra to be classified are a set
of points in this space. The points (i.e., the spectra) are assumed to be
clustered around a number of centers. Classifying consists of
(a) finding the number of clusters and their centers, (b)
assigning each spectrum to one of these centers, and
(c) estimating the probability that the choice is correct.
The third step should be regarded as a sanity check that allows us to
quantify the goodness of the classification for each particular spectrum.
In the standard formulation, k-means starts by selecting at
random from the full data set a number $k$ of spectra. These spectra are
assumed to be the center of a cluster. Then each spectrum of the data set is assigned
to the cluster center
that is closest in a least squares sense\footnote{
This means using the Euclidean metric to assign distances between
points in the high dimensional classification space.
Actually, we use the plain Euclidean distance,
where all the wavelengths are equally weighted
\citep[][Eq.~(2)]{2010ApJ...714..487S}.
Observational errors are not included in the
metric for simplicity.
}.
Once all spectra have been assigned to one of the classes, the cluster centers are recomputed
as the average of the spectra in the cluster. The procedure is iterated with the new
cluster centers, stopping when most spectra are not re-assigned between two
successive steps (99\,\% of them in our realization).
The number of clusters is arbitrarily chosen but the results are insensitive to
this selection, since only a few clusters possess a significant number of
members. Thus the algorithm provides the number of
clusters, their corresponding cluster centers, as well as the
classification of all the original spectra now
assigned to one of the clusters. This information completes steps (a) and (b) of
the classification procedure.
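For concreteness, the following minimal Python sketch reproduces steps (a) and
(b) as just described. It is ours and purely illustrative -- the names, the
random seeding and the exact convergence test are assumptions, not the
implementation of \citet{2010ApJ...714..487S}:
\begin{verbatim}
import numpy as np

def kmeans(spectra, k, seed=0, frac_stable=0.99):
    # spectra: (n_spectra, n_wavelengths) float array; plain
    # Euclidean metric, all wavelengths equally weighted
    rng = np.random.default_rng(seed)
    centers = spectra[rng.choice(len(spectra), k, replace=False)].copy()
    labels = np.full(len(spectra), -1)
    while True:
        # squared distance of every spectrum to every center
        d2 = np.stack([((spectra - c)**2).sum(axis=1)
                       for c in centers], axis=1)
        new = d2.argmin(axis=1)
        # stop when >= 99% of the spectra keep their class
        if (new == labels).mean() >= frac_stable:
            return new, centers
        labels = new
        for j in range(k):
            members = spectra[labels == j]
            if len(members):   # recompute center as the average
                centers[j] = members.mean(axis=0)
\end{verbatim}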
In order to estimate the probability that the assignation is correct (step c),
we compute for each cluster the distribution of the distances to the cluster center
considering all spectra assigned to the cluster.
We then assume that this distribution
describes the probability that any star with a given distance
from the cluster center belongs to the class. Specifically,
the probability that a given star belongs to a cluster is estimated as the fraction
of stars in the cluster with distances equal to or larger than the distance
of the star.
This is a sensible assumption: it
assigns high probability to spectra close to the cluster center, and the
probability then drops smoothly toward the outskirts of the cluster. The scale of this smooth
decrease is set by the measured distribution of distances in the class.
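In practice this estimate is an empirical survival function of the in-cluster
distances; a possible rendering (variable names are ours) is:
\begin{verbatim}
import numpy as np

def membership_probability(members, center, spectrum):
    # fraction of class members lying as far from the cluster
    # center as the given spectrum, or farther
    d_members = np.linalg.norm(members - center, axis=1)
    d_star = np.linalg.norm(spectrum - center)
    return np.mean(d_members >= d_star)
\end{verbatim}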
The algorithm is simple, robust and fast, which makes it ideal to
treat large data sets. It guarantees that similar spectra end up in
the same cluster. Moreover, it is unsupervised since no prior
knowledge of the stellar properties is used,
and the spectra to be classified are the only
information passed on to the algorithm\footnote{
For the sake of comparison, artificial neural network
classifications use a training set that informs the algorithm on the
existing spectra and spectral types -- see, e.g.,
\citet[][]{2012A&A...538A..76N} and references therein.
}.
These two properties ensure
that the resulting classification is not biased by our (physical)
prejudices, which follows the spirit of a good classification as
advocated by \citet{san05}. Unfortunately, it also has three major
drawbacks. One of them is technical, whereas the other two deal with
the physical interpretation of the classes. The algorithm yields
different classifications depending on the random initialization. This
difficulty is overcome by repeating the classification multiple times,
thus studying the dependence of the final classes on the random seeds.
In addition, our implementation refines the
initialization so that the random seeds are not chosen uniformly but
according to the distribution of points in the classification space
\citep[for details, see ][]{2010ApJ...714..487S}. The second
difficulty has to do with interpreting the classes as actual clusters
in the classification space, or as parts of larger structures. The
algorithm does not guarantee that the derived classes correspond to
actual clusters. However, one can figure out whether each particular
class is isolated or belongs to a larger structure by studying the distances of the
spectra in the class to the other classes. Well defined classes contain
stars that are distant from the other classes. The third difficulty refers
to the physical interpretation of the resulting classes, which is
not provided by the algorithm. The physical sense of a particular
class and its cluster center (dubbed in the paper as template spectrum)
has to be figured out later on.
Actually, most of this paper is devoted to this task, i.e., to
interpreting in terms of known stellar physics the classes resulting from the
k-means classification.
\subsection{Repeatability of the classification}\label{repeatability}
As is customary, the dependence of the classification on the random
initialization was studied by
repeating the classification 100 times, and then comparing the
results. This internal comparison was carried out using three parameters
that we name:
(1) {\em coincidence}, for the percentage of spectra in equivalent
classes,
(2) {\em dispersion}, for the rms fluctuations of the spectra in a
class with respect to its cluster center,
and (3) {\em number of classes}, for the number of classes
that contain 99\,\% of the spectra (major classes).
In order to decide which classes in two different classifications
are {\em equivalent}, we compute the number of stars in common
between each pair of classes formed by taking one class from each
of the two classifications.
The two classes sharing the largest
number of stars are assumed to be {\em equivalent}.
The same criterion is repeated until all the classes
of one of the classifications have been paired.
This criterion maximizes the number of stars sharing
the same class in the two classifications.
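A sketch of this greedy pairing, assuming both classifications label the same
spectra with integer classes starting at zero (names ours), could read:
\begin{verbatim}
import numpy as np

def pair_classes(labels_a, labels_b):
    # greedy pairing of the classes of two classifications
    # of the same spectra
    ka, kb = labels_a.max() + 1, labels_b.max() + 1
    shared = np.zeros((ka, kb), dtype=int)
    for a, b in zip(labels_a, labels_b):   # contingency table
        shared[a, b] += 1
    pairs = []
    for _ in range(min(ka, kb)):
        # pair sharing the largest number of stars goes first
        a, b = np.unravel_index(shared.argmax(), shared.shape)
        pairs.append((a, b, shared[a, b]))
        shared[a, :] = -1   # each class can be paired only once
        shared[:, b] = -1
    return pairs
\end{verbatim}
The coincidence then follows as the sum of the shared counts over all
pairs, divided by the total number of spectra.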
Figure~\ref{the_class4} shows scatter plots of the three
diagnostic parameters corresponding to repeated classifications of the
reference dataset (\S~\ref{data}) including continuum. One finds
classifications having between 15 and 20 major classes,
a dispersion in the range 0.07 to 0.08, and a
mean
coincidence between
62\,\% and 68\,\%. The ranges in these values are fairly narrow.
The fact that the coincidence is about 65\,\% means that one can
pair the classes of any two classifications, and they will share
about 65\,\% of the spectra.
(These apparently low values are discussed below.)
The fact that the dispersion is
of the order of 0.075 implies that the differences between
the class template spectra and the spectra in the class are of the
order of 7.5\,\% rms. These numbers refer to the reference
dataset including continua (\S~\ref{class_with}) but are
similar to those obtained when spectra without continua
are used (\S~\ref{class_out}), or when using the auxiliary sets.
\begin{figure}
\includegraphics[width=0.45\textwidth]{the_classification4.eps}
\caption{
Scatter plots with the three parameters characterizing the
100 independent classifications of the reference dataset with
continuum.
(a) Percentage of stars common to all other
classifications (coincidence) versus typical scatter
of the spectra with respect to the class template
(dispersion).
(b) Number of major classes (i.e., having 99\,\% of the spectra)
versus dispersion. (c) Coincidence versus number of major
classes.
}
\label{the_class4}
\end{figure}
In addition to the above tests, we made a numerical experiment
splitting the 1st auxiliary set into two randomly chosen
disjoint subsets, which were classified independently.
The differences give an idea of the dependence of the classes
on the particular dataset that is employed.
The results are summarized in Fig.~\ref{split1}, which shows the
template spectra of equivalent classes in the two classifications.
(Only nine classes are included, but they are representative of the
general behavior.) The differences between spectra are also plotted,
and turn out to be of a few percent, i.e., smaller than the
scatter among the spectra included in each class (the dispersion
of these classifications was of the order of 5\,\%). We have also
computed the colors of the templates and of the individual stars, and
the differences between the colors of the templates
($\sim$0.025 mag) are smaller than the scatter among
individual stars in a class ($\sim$0.05 mag).
\begin{figure*}
\includegraphics[width=1.\textwidth,angle=0]{diagnostics1a.eps}
\caption{
Spectra of several equivalent classes resulting from the
k-means classification of two disjoint subsets drawn from the 1st auxiliary
set. We show the two template spectra and their difference (offset by 0.5
so as to fit in the plot); see the inset in the top left panel.
Wavelengths are given in $\mu$m.
}
\label{split1}
\end{figure*}
As mentioned above, repeating the classification
several times leaves only 60--70\% of the spectra in equivalent
classes. This issue with k-means is not directly due to the
random initialization -- the cluster centers in equivalent classes
are very similar, as shown in
Fig.~\ref{split1} and in \citet[][Sect.~2.1]{2010ApJ...714..487S}.
It is due to the fact that k-means slices a rather
continuous distribution. Then small changes in the borders
between clusters produce a significant relocation
of the in-between stars.
The effect is boosted because the clusters are in a
high dimensional space
\citep[see the analytic justification in the appendix of][]{2010ApJ...714..487S}.
We show in Sect.~\ref{class_with} how some of the resulting
clusters really have many stars near their borders.
The actual classifications were carried out in parallel using
several workstations.
The reference data set has a volume of 6.1\,GB,
and the process requires some two CPU hours per single
classification, or two hundred hours for
studying the effects of the random initialization.
These figures are mentioned to
stress that the k-means classification of a sizable data set,
including the study of initial conditions, is easily doable
using standard hardware facilities.
\section{The classification: classes and their main properties}\label{classification}
The description given in this section refers to the reference set
defined in \S~\ref{data}; however, the same procedures and analysis
have been repeated with the two auxiliary sets, always giving
consistent results.
The classifications use all the 3849 wavelengths equally weighted.
We consider two cases for the analysis. In the first case,
the full spectra are used (\S~\ref{class_with}). In the second,
a running average 193 pixels wide ($\equiv$17,400\,km\,s$^{-1}$) is subtracted from each
spectrum (\S~\ref{class_out}). This high-pass filtering removes the
continuum but leaves the spectral lines almost untouched.
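A minimal version of this filtering is sketched below. The edge treatment
({\tt mode='nearest'}) is an assumption of the sketch; our actual
implementation blanks the extreme wavelengths (see \S~\ref{class_out}):
\begin{verbatim}
import numpy as np
from scipy.ndimage import uniform_filter1d

def remove_continuum(spectrum, width=193):
    # the running mean tracks the continuum; subtracting it
    # leaves the narrower spectral lines almost untouched
    continuum = uniform_filter1d(np.asarray(spectrum, float),
                                 size=width, mode='nearest')
    return spectrum - continuum
\end{verbatim}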
\subsection{Classes with continuum}\label{class_with}
In order to study the dependence of the classes on the random seeds
inherent to k-means, we carried out 100 independent classifications
(\S~\ref{repeatability}).
They are equally valid classifications, but we have to
choose among them one to be used as {\em the classification}.
Taking advantage of having all these possibilities, we try to select
one that is as representative as possible of all of them,
whose classes contain spectra as similar as possible to their templates,
and which has the smallest
number of classes. In the parlance used in \S~\ref{repeatability}, we
try to select a classification having large coincidence,
small dispersion and few classes.
The scatter plots for these three parameters among the 100 independent
classifications are shown in Fig.~\ref{the_class4}. Requiring the coincidence
to be larger than 66\,\%, the dispersion to be smaller than 0.074,
and the number of major classes to be fewer than 17, one finds only two
classifications. Those are represented as asterisks in Fig.~\ref{the_class4}.
Lacking a better criterion, we choose one of them at random.
Its coincidence is 67\,\%, its dispersion 0.073, and it has
16 major classes (26 in total, but some seem to correspond
to failures of the SDSS pipeline, as we explain later on).
The averages of all the spectra in the classes (i.e., the cluster centers
or the cluster templates) are shown
as solid lines
in Fig.~\ref{the_class_temp4}.
The figure also includes the standard deviation among all the spectra
in each class (the dotted line), which quantifies
the intra-class dispersion.
The number of stars in each class is represented as a histogram
in Fig.~\ref{the_class4_his}. It shows that the classes have
been numbered according to the number of stars they contain, with
class~0 the most numerous, class~1 the second most numerous,
and so on.
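This size-ordered renumbering is straightforward; a minimal sketch (ours)
follows:
\begin{verbatim}
import numpy as np

def renumber_by_size(labels):
    # relabel clusters so that class 0 is the most numerous,
    # class 1 the second most numerous, and so on
    counts = np.bincount(labels)
    order = np.argsort(counts)[::-1]   # old labels, largest first
    remap = np.empty_like(order)
    remap[order] = np.arange(len(order))
    return remap[labels]
\end{verbatim}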
Since the template spectra come from averaging
thousands of individual spectra, they have extremely high
signal-to-noise ratios -- from 200 to 2000 depending on the
number of spectra in the class.
The spectra of classes~22 and 24 are not included in
Fig.~\ref{the_class_temp4}.
They collect faulty spectra that are similar to class~17
(see the template spectrum in Fig.~\ref{the_class_temp4},
which has a large unphysical spike at the bluest wavelengths).
\begin{figure*}
\includegraphics[width=0.89\textwidth,angle=0]{diagnostics7b.eps}
\includegraphics[width=0.90\textwidth,angle=0]{diagnostics7b1.eps}
\caption{
Template spectra of all classes in the classification that includes
continuum (the solid lines).
The spectra are normalized to the intensity at some 5500\,\AA,
and the individual plots are scaled from minimum to maximum.
The classes are identified in the insets, and they have been ordered
from red to blue (from left to right and from top to bottom)
following Fig.~\ref{the_class_temp4otro}.
This order breaks down with the abnormal classes 19, 20, 21 and 17,
that are shown at the end of the sequence.
Classes 22 and 24 are not included,
since they correspond to failures in the reduction pipeline and
are similar to class~17.
The panels also include the intra-class
standard deviation, that quantifies the dispersion among the
spectra included in the class (the dotted lines).
Wavelengths are given in $\mu$m.
}
\label{the_class_temp4}
\end{figure*}
\begin{figure}
\includegraphics[width=0.5\textwidth,angle=0]{diagnostics7a.eps}
\caption{
Histogram with the number of spectra in each class as derived
from the k-means classification of stellar spectra that includes
continuum.
The class number has been assigned
according to the number of stars in the
class, with class~0 being the class with the largest number of
elements.
}
\label{the_class4_his}
\end{figure}
The templates are also represented in Fig.~\ref{the_class_temp4otro} as a stack-plot
ordered so that the image looks as smooth as possible. This image excludes
those classes that are failures of the pipeline (classes 17, 22 and 24) and
binary systems (classes 20 and 21; see below).
\begin{figure}
\includegraphics[angle=0,width=0.5\textwidth]{diagnostics8.eps}
\caption{
Composite image with the template spectra of all the classes in the k-means
classification of spectra with continuum. They have been ordered so that the
image looks smooth. All spectra are normalized to their maximum intensities.
The color palette tries to mimic the human eye sensitivity. The classes that correspond to
failures of the pipeline (classes 17, 22 and 24) and binary systems (classes 20 and 21)
are excluded.
}
\label{the_class_temp4otro}
\end{figure}
As we discuss in Sect.~\ref{algorithm}, k-means does not guarantee
the inferred classes to be real clusters in the classification space.
They may be parts of larger structures that have been sliced by the algorithm.
A way to study whether the classes are isolated was explored in
\citet{2010ApJ...714..487S}, and it is used here too.
One can estimate the probability that each particular star belongs to the
class it was assigned to. It depends on how far from the
cluster center the star is as compared to the other members in its class.
Similarly, one can
estimate the probability of the star belonging to any other class.
Well defined clusters will have most of
their elements with a probability of belonging to any other
cluster significantly smaller than the probability of belonging to the cluster.
Figure~\ref{groups_stars_pub} shows histograms of the ratios
between probabilities of belonging to the 2nd nearest cluster and to the
assigned cluster for a few representative classes.
There are classes where the histogram peaks at low ratios, thus
indicating a well defined structure in the classification space
(e.g., class~0 in Fig.~\ref{groups_stars_pub}).
Conversely, other classes present a rather flat histogram indicating
a dispersed structure (e.g., class~3 in Fig.~\ref{groups_stars_pub}).
Classes 3, 5, 11 and 19 represent spread-out classes, whereas
the rest are clustered classes.
Note, however, that even the histograms of
well defined clusters have a significant tail towards large ratios, indicating the
presence of many stars in the boundaries between clusters. Those stars are partly
responsible for the non-uniqueness of the classification studied in Sect.~\ref{repeatability}.
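The abscissa of the histograms just described can be computed by reusing the
empirical membership probabilities of Sect.~\ref{algorithm}. Schematically
(names ours; the star is assumed to be a member of its nearest cluster, so
that the denominator never vanishes):
\begin{verbatim}
import numpy as np

def probability_ratio(spectrum, centers, class_distances):
    # class_distances[i]: member-to-center distances of cluster i
    d = np.linalg.norm(centers - spectrum, axis=1)
    first, second = np.argsort(d)[:2]   # assigned and 2nd nearest
    p1 = np.mean(class_distances[first] >= d[first])
    p2 = np.mean(class_distances[second] >= d[second])
    return p2 / p1
\end{verbatim}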
\begin{figure}
\includegraphics[angle=0,width=0.5\textwidth]{groups_stars_pub.eps}
\caption{
Histograms of the ratio between probabilities of belonging to the
2nd nearest cluster and to the assigned cluster for
stars included in classes 0 to 3 (as the inset indicates).
The abscissae are by definition bound between 0 and 1. Histograms
peaking at low ratio characterize well defined classes (e.g., class~0)
whereas histograms with large counts at large ratios indicate
a fuzzy class (e.g., class~3).
}
\label{groups_stars_pub}
\end{figure}
The physical interpretation of some of the classes is relatively
straightforward.
Classes 20 and 21, with upturns in both the blue and the red, are
most likely composite
spectra of systems with two (or more) stars with very different effective
temperatures ($T_{\rm eff}$).
They can be gravitationally bound stellar systems, or stars that
happen to be along the line of sight.
The luminosities of the stars that contribute
to the combined spectrum have to be similar; therefore,
in the case of binary systems both stars cannot be on the main sequence
because the hot star would outshine any cold companion.
One common possibility is a hot white dwarf (WD) and
a cold dwarf or giant,
and this is indeed the conclusion reached when trying to reproduce the
templates of classes~20 and 21 as a linear superposition of templates of
two other classes. The best fit is obtained combining classes 16 and 18, as
shown in Fig.~\ref{composite1}.
As we discuss later on, class~16 contains WDs, and class~18 corresponds
to K-type giants.
We note that the templates of classes~20 and 21 resemble spectra
of post-common envelope binaries, as identified and studied
using SDSS data
\citep[e.g.,][]{2007MNRAS.382.1377R,2008A&A...484..441S,2008MNRAS.390.1635R}.
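The superposition itself is an ordinary linear least-squares problem. A sketch
(names ours; non-negativity of the coefficients is not enforced here):
\begin{verbatim}
import numpy as np

def fit_two_templates(target, t1, t2):
    # best a, b such that target ~ a*t1 + b*t2 (least squares)
    A = np.column_stack([t1, t2])
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coeffs, A @ coeffs
\end{verbatim}
Looping this fit over all pairs of class templates and keeping the
combination with the least residual is one way to arrive at the pairing of
classes 16 and 18 quoted above.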
\begin{figure}
\includegraphics[width=0.5\textwidth,angle=0]{composite1.eps}
\caption{
The class 21 template spectrum (the solid line), with upturns in the blue and the red,
seems to be the composite spectrum of a stellar system.
Assuming it to be binary, the
best fit (the dotted line) is obtained as a linear superposition of a
WD (class~16) and a late type giant (class~18). The fit is not perfect;
the difference between class~21 and the best fit is shown as the dashed line.
}
\label{composite1}
\end{figure}
Some other classes are really awkward, and difficult to interpret unless they
are associated with failures in the reduction pipeline (e.g., classes~17 and 19).
Figure~\ref{the_class_col4} shows an image with the distribution
of $u-g$ vs $g-r$ colors of the full set, with the individual classes
overlaid as contours containing 68\% of the stars.
The colors have been derived
from the spectra using the transmission bandpasses of the
broad-band SDSS filters. Note that the classification is basically a color classification.
Disregarding classes gathering failures of the pipeline
(classes 17, 19, 22, and 24) and multiple systems (classes~20 and 21),
the k-means classification of stellar spectra with continuum seems to
separate stars according to
their position on the color-color plot. The classes form a one-dimensional
set in the diagram, with a bifurcation at $g-r\simeq 0.5$.
The bifurcation separates dwarf stars (on top)
from giant stars, a result further discussed below.
We note, in passing, that multiple systems are
well separated in this color-color plot and, therefore, it
can be used to select them.
Main sequence stars
have $\log({\rm g})$ larger than 3.8 (with the
surface gravity g in cm\,s$^{-2}$), which is even larger ($\log({\rm g})> 4$)
for stars with $T_{\rm eff} < 9000\,$K \citep[e.g.][]{2000asqu.book..381D}.
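For reference, synthetic colors of this kind follow from convolving each
spectrum with the filter transmission curves. The sketch below computes an AB
magnitude; the photon-counting weighting, the integration scheme, and the
bandpass arrays ({\tt filt\_wl}, {\tt filt\_t}) are assumptions of the
example, not a description of our actual pipeline:
\begin{verbatim}
import numpy as np

def synthetic_ab_mag(wl, flam, filt_wl, filt_t):
    # wl in Angstrom, flam = F_lambda; photon-weighted mean F_nu
    T = np.interp(wl, filt_wl, filt_t, left=0.0, right=0.0)
    fnu = flam * wl**2 / 2.998e18   # F_lambda -> F_nu (c in AA/s)
    w = T / wl                      # photon-counting weight
    dl = np.gradient(wl)            # approximate integration weights
    mean_fnu = np.sum(fnu * w * dl) / np.sum(w * dl)
    return -2.5 * np.log10(mean_fnu) - 48.60
\end{verbatim}
A color such as $u-g$ is then the difference of the magnitudes computed with
the two bandpasses.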
\begin{figure}
\includegraphics[width=0.5\textwidth,angle=0]{diagnostics7c.eps}
\caption{
$u-g$ vs $g-r$ plot for the full set (the image in the background)
and for the different classes separately. The contours show the region
with 68\% of the spectra in the class, and have been
labeled with the corresponding class number.
This plot corresponds to the full spectrum classification.
Classes 17, 19, 22, and 24 seem to be failures of the
reduction pipeline. Classes~20 and 21 are binary or multiple systems with
spectra blue in the blue filters (i.e., $u-g$) and red in the red
filters (i.e., $g-r$).
The rest of classes form a 1D sequence that bifurcates at $g-r\simeq 0.5$
(at classes 0 and 3).
}
\label{the_class_col4}
\end{figure}
Figure~\ref{density4} shows the two-dimensional distribution
of $\log({\rm g})$~vs~$T_{\rm eff}$, together with contours
marking the region containing 68\,\% of the stars in
the class. The gravities and effective temperatures of individual
stars have been taken from the SSPP (parameters labeled ADOP), as we
explain in Sect.~\ref{data}. The plot does not include all the classes
since many of them overlap and would clutter the figure. Only those classes
relevant for our argumentation are included, in particular
classes~0 and 3 have similar $T_{\rm eff}$ but are parts of the two
different branches of the color sequence (see Fig.~\ref{the_class_col4}).
Note that class~0 gathers only main sequence stars ($\log({\rm g}) > 4$)
whereas most class~3 targets are giants.
Something similar occurs with the pairs of classes 1 and 5, and
8 and 11. Classes 0, 1, 4 and 8 contain only main sequence stars
(see classes~0 and 8 in Fig.~\ref{density4}).
Classes 14 and 18 contain only giant
stars (see class~18 in Fig.~\ref{density4}). Several classes do not
have enough valid $T_{\rm eff}$ and
$\log({\rm g})$ to know their location in the
$\log({\rm g})$~vs~$T_{\rm eff}$ plot, including the classes
with faults plus classes 16 and 25. Class 25 is a minor class
with few elements, but class~16 is not. The lack of
effective temperatures and gravities for class~16 seems
to be associated with the fact that it collects WDs, for
which no proper physical data are provided by
the SSPP
\citep[but see, e.g.,][]{2006ApJS..167...40E,2010AIPC.1273..156K,2011ApJ...730..128T}.
\begin{figure}
\includegraphics[width=0.5\textwidth]{diagnostics7d.eps}
\caption{$\log({\rm g})$~vs~$T_{\rm eff}$ for
different classes resulting from the classification
of stellar spectra with continuum. The
contours contain 68\,\%\ of the stars in the class.
Only classes~0, 3, 7, 8, 9, 15 and 18 are included
to avoid cluttering the figure. The class numbers
have been placed at the location of the mean of the
corresponding distribution.
}
\label{density4}
\end{figure}
We note that classes 20 and 21, multiple systems whose spectra combine
hot and cold components (see Figs.~\ref{the_class_temp4} and \ref{composite1}),
appear in Fig.~\ref{density4} as main sequence stars with
$T_{\rm eff}$ between 5000\,K and 6000\,K. They are also metal rich systems according to the plot
discussed in the next paragraph.
Figure~\ref{fe_vs_teff4} shows the two-dimensional distribution of [Fe/H]~vs~$T_{\rm eff}$,
together with contours marking the region containing 68\,\% of the stars in
the class. Like the rest of the stellar physical parameters,
the metallicity [Fe/H] comes from the SSPP.
The figure shows how the classes often contain both high and low metallicity
stars. If the threshold between low and high metallicities is set at one tenth of
the solar value (i.e., [Fe/H]$= -1$), the classes that contain
only high metallicity stars are 0, 1, 4, 8, 14, and 18. Similarly,
classes 12 and 15 include only low metallicity stars. Some of these
classes are included in Fig.~\ref{fe_vs_teff4}.
Class 18 contains low-gravity, low-temperature, high-metallicity stars -- probably
K giants.
Class 15 contains low-gravity, high-temperature, low-metallicity stars.
\begin{figure}
\includegraphics[width=0.5\textwidth]{diagnostics7e.eps}
\caption{[Fe/H]~vs~$T_{\rm eff}$ for some representative
classes resulting from the classification of spectra including continuum.
The represented classes are the same as those in Fig.~\ref{density4}.
The contours embrace 68\,\%\ of the stars in the class, and the class
numbers have been placed at the mean point of the corresponding
distributions.
}
\label{fe_vs_teff4}
\end{figure}
The k-means classes do not exactly coincide with
the classical MK types assigned to the stars by
\citet[][]{2008AJ....136.2050L}
(see Sect.~\ref{data}).
Figure~\ref{mkclasses4} presents the histogram of
MK types corresponding to the k-means classes in
Figs.~\ref{density4} and \ref{fe_vs_teff4}. As the histograms show,
most classes can be ascribed to a single MK type or to a narrow range
of them (e.g., class~0, 8 and 9). However, the spread in MK
types is sometimes large (e.g., class~18), becoming
extreme in the bluest classes (e.g., class~15), which often group
A-type stars (mainly on the horizontal branch)
with WDs.
\begin{figure*}
\includegraphics[width=0.8\textwidth]{diagnostics7f.eps}
\caption{
Histograms of the distribution of MK types of selected k-means classes
corresponding to spectra with continuum. (The classes
are those chosen for Figs.~\ref{density4}
and \ref{fe_vs_teff4}.) Splitting the histograms
into two panels avoids overcrowding the figure.
The MK types shown here and in Fig.~\ref{original_classes2}
are directly comparable. The histograms have been
normalized to one, including objects for which the
MK type is not available.
}
\label{mkclasses4}
\end{figure*}
To recapitulate, the k-means classification of spectra with continuum seems to be
basically
a color classification. The stellar colors are driven mostly by the overall continuum
shape; therefore, the k-means classes separate stars according to
their continua.
\subsection{Classes with continuum removed}\label{class_out}
As explained in the previous section, the k-means classification of stellar
spectra with continuum is essentially a color classification. Since the
colors are dictated mostly by the continuum, the classification is
driven by the shape of the continuum.
Dust extinction and errors in the spectro-photometric calibrations
corrupt continua but not so much spectral lines, which retain the
information on the stellar properties.
In order to study the potential of k-means to identify
and separate spectra according to subtle spectral-line differences, we repeated
the classification using spectra without continuum. Explicitly,
the spectra to be classified are the original spectra after subtraction
of a 193-pixel-wide running mean, which corresponds to
170\,\AA\ in the blue and 400\,\AA\ in the red.
The width of the numerical filter was determined as a trade-off: it must be
broader than most spectral features, yet narrow enough to be representative
of the local continua. Because SDSS spectra are sampled in log wavelength, our
constant width in pixels corresponds to a constant Doppler width
of the order of 17,400 km\,s$^{-1}$.
The procedure leading to the classification is similar to that used
for the full spectra described in the previous Sect.~\ref{class_with}.
The {\em dispersion}, {\em coincidence}, and
{\em number of major classes} were used to select one among 100
independent initializations. The selection criteria try to make this classification
as representative of the rest as possible. The selected classification has
${\rm coincidence}=75.7\,$\%, ${\rm dispersion}=0.051$,
13~major classes, and 1~minor class. Note that the coincidence is
larger than that for the classification with continuum, and the dispersion
and number of classes smaller.
The number of stars in each class is shown in Fig.~\ref{the_class6_his}.
As we did for the classes resulting from classifying the spectra
with continuum (Sect.~\ref{class_with}), the new classes are also named
class~0, class~1, and so on, with the number increasing as the
number of elements in the class decreases.
\begin{figure}
\includegraphics[width=0.5\textwidth,angle=0]{diagnostics9a.eps}
\caption{
Histogram with the number of spectra in each class for the
classification of spectra without continuum.
The class number has been assigned
according to the number of stars in the
class, with class~0 being the class with the largest number of members.
}
\label{the_class6_his}
\end{figure}
The average spectra of all the stars in each class are shown in
Fig.~\ref{the_class_temp6}. We just show a small portion of the blue
spectrum where individual spectral lines can be appreciated.
(Two spectra spanning the full spectral range are shown for illustration,
but then it is impossible to appreciate details of the lines.)
Note that each template spectrum is
the average of thousands of individual spectra, so all the
small noise-looking wiggles are real spectral features.
\begin{figure*}
\includegraphics[width=1\textwidth,angle=0]{diagnostics9b.eps}
\caption{
Template spectra of all the classes in the classification
with the continuum removed. We just show a small portion in the
blue range, otherwise it is impossible to appreciate individual
spectral lines. The exception is given by the two panels
at the lower right corner, where classes 2 and 6 are repeated
in their full spectral range. The class numbers are given as
insets, an some of the main spectral features in the region are
also labelled.
The classes have been ordered following
Fig.~\ref{the_class_temp6otro}
(from left to right and from top to bottom).
}
\label{the_class_temp6}
\end{figure*}
The full templates are shown as a stack-plot image in Fig.~\ref{the_class_temp6otro}
(cf.\ Fig.~\ref{the_class_temp4otro}).
They have been ordered so that the sequence looks as smooth as possible.
The Balmer lines, which are the only ones present in class~12,
decrease in strength as one moves up in the image. The
conspicuous molecular bands of TiO are present only
in class~7. Figure~\ref{the_class_temp6mas} also shows the
average spectra of the classes, but this average was computed using
the original spectra with their continua intact. The comparison of these spectra
with those corresponding to the classification including
continuum (Fig.~\ref{the_class_temp4}) renders a few
differences. First, the faulty classes with a fake emission peak
in the blue (e.g., class~17 in Fig.~\ref{the_class_temp4})
have disappeared. This is a side-effect of removing the continuum from
the spectra, which in our implementation blacks out the 193 pixels
in the extreme wavelengths, thus removing the problem.
Second, there is a new class that shows spectra with
emission lines (class~9), which most probably are not
real but poorly corrected sky lines at the wavelengths of the
Ca II IR triplet \citep[see., e.g.,][]{2008AJ....136.2050L}.
Finally, the spectra corresponding to binary systems do not form
a separate class, so that they have to show up as outliers of the
classification (Sect.~\ref{outliers}).
\begin{figure}
\includegraphics[angle=0,width=0.5\textwidth]{diagnostics10.eps}
\caption{
Template spectra of all the classes corresponding to the
classification without continuum. They have been ordered
so that the image looks smooth.
Class~13 on top seems to collect spectra with instrumental problems.
}
\label{the_class_temp6otro}
\end{figure}
\begin{figure*}
\includegraphics[angle=0,width=1\textwidth]{diagnostics11.eps}
\caption{
Average spectra of the classes resulting from classifying
the spectra without their continua. Even though the continuum was removed
for classification, it has been included in this average.
Note the suspicious shape of class~13,
and the emission line features of class~9 at some 0.9~$\mu$m.
The classes have been ordered following
Fig.~\ref{the_class_temp6otro}, as in Fig.~\ref{the_class_temp6}.
}
\label{the_class_temp6mas}
\end{figure*}
The classes are shown in the $u-g$ vs $g-r$ color plot in Fig.~\ref{the_class_col6}.
They overlap more than the classes inferred when the continuum is included; see
Fig.~\ref{the_class_col4}.
There are several conclusions to be drawn from the comparison of these two figures.
The continua influence the classification or, in other words, the classifications with and
without continua do not fully agree. However, most classes can be
viewed as mergers of classes with continuum. Even if the continuum is not
included for classifying, the different classes have different colors --
classes 0 and 9 represent an exception since they overlap in the
color-color plot (see Fig.~\ref{the_class_col6}).
\begin{figure}
\includegraphics[width=0.5\textwidth,angle=0]{diagnostics9c.eps}
\caption{
$u-g$ vs $g-r$ plot for the reference set of stellar spectra
(background image) and for the different classes separately. The contours
show the region with 68\% of the spectra in the class, and the
centroid of each distribution has been labeled with the class number
it belongs to. This plot corresponds to the classification where
the continuum was removed.
}
\label{the_class_col6}
\end{figure}
The color-color plot also shows two parallel sequences that
split at $g-r\sim 0.2$ (or at class~6). As it happens with the
classification including continuum, the upper branch corresponds
to main sequence stars whereas the lower branch includes giants.
This separation by stellar size is clearer in Fig.~\ref{density6},
which shows the 2D distribution
of $\log({\rm g})$~vs~$T_{\rm eff}$ for
the stars in a number
of selected classes -- the classes in the
lower branch of Fig.~\ref{the_class_col6} include low gravity
stars in Fig.~\ref{density6} (see classes~6 and 2).
\begin{figure}
\includegraphics[angle=0,width=0.5\textwidth,origin=c]{diagnostics9d.eps}
\caption{$\log({\rm g})$~vs~$T_{\rm eff}$ for a number of selected classes resulting
from the classification of the stellar spectra with the continuum removed.
The image shows the full set, and the contours indicate the regions
containing 68\,\%\
of the stars in the class. The class numbers appear at the centroid of the
distributions.
The distribution of class~11 has two separate peaks
at $\log({\rm g})\sim3.2$ and 4.2, so that the corresponding label appears
between them.
}
\label{density6}
\end{figure}
\begin{figure}
\includegraphics[angle=0,width=0.5\textwidth]{diagnostics9e.eps}
\caption{[Fe/H]~vs~$T_{\rm eff}$ for the
different classes resulting from the classification
of the SEGUE stars
with their continua removed. The image displays the histogram of the full
set, whereas the contours mark the regions containing 68\,\%\ of the stars in the
class.
Class numbers are located at the center of the corresponding distribution.
}
\label{fe_vs_teff6}
\end{figure}
Figure~\ref{fe_vs_teff6} shows the 2D distribution
of [Fe/H]~vs~$T_{\rm eff}$ for the full set of stars, together with
contours marking the region containing 68\,\% of the stars in
selected classes. Note how well separated the classes are in this
plot, in contrast with the overlap present in the color-color
plot (Fig.~\ref{the_class_col6}). The behavior is opposite to that
of the classes resulting from classifying spectra with continua,
which are well separated in the color-color plot (Fig.~\ref{the_class_col4}),
but overlap in the [Fe/H]-$T_{\rm eff}$ diagram (Fig.~\ref{fe_vs_teff4}).
Note also that classes 5 and 6 occupy the same
region of the color-color plot and overlap in the
$\log({\rm g})$-$T_{\rm eff}$ plot, but they have different metallicities.
These are F stars that approximately split according
to their membership in the thick disk and the halo \citep[][]{2006ApJ...636..804A},
therefore, the classification
provides a quick-look tool to separate disk and halo stars.
Class~12 is dominated by DA WDs (see below and
the class template in Fig.~\ref{the_class_temp6mas});
however, it shows up as extremely low metallicity in Fig.~\ref{fe_vs_teff6}.
Since the SSPP does not deal with WDs, these must have been confused
with A-type stars and analyzed as such, finding that they are best matched
by models with no metals.
Figure~\ref{mkclasses6} shows the distribution
of MK spectral types corresponding to the k-means classes.
Note how each class tends to belong to a single spectral
type, but not always. Moreover, the correspondence seems to be
better than that for the classification including continuum
(cf., Fig.\ref{mkclasses4}). Note how class~12 is basically formed
by WDs, whereas classes 8, 10 and 11 are made of type A stars.
The most numerous class~0 contains almost
exclusively F9 stars,
as it also happens with the classification with continuum
(Fig.~\ref{mkclasses4}).
We think that the concentration of class~0 around a particular type is
real, but the particular type is not, since most stars selected by SEGUE are
G-type rather than F-type (see Sect.~\ref{data}). There seems to be a problem
with the MK typing based on ELODIE templates because, as expected,
the Hammer MK types associated with class~0 are late G types.
Figure~\ref{mkclasses6_otro} is equivalent
to Fig.~\ref{mkclasses6} but showing Hammer types
(Sect.~\ref{data}). Class~0 corresponds to types between
G6 and K2. Note, in passing, that the classes of hot stars
have disappeared from the histograms since Hammer typing
does not allocate classes to them.
\begin{figure*}
\includegraphics[width=0.8\textwidth]{diagnostics9f.eps}
\caption{
Histograms of the distribution of ELODIE MK types of selected k-means classes
corresponding to spectra without continuum (see Fig~\ref{the_class_temp6otro}).
The histograms have been split in two panels to avoid overcrowding.
Each one shows a set of k-means classes as described in the insets.
The histograms have been
normalized to one, including objects where the
MK type is not available.
}
\label{mkclasses6}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.81\textwidth]{diagnostics9g.eps}
\caption{
Similar to Fig.~\ref{mkclasses6} but showing Hammer MK types rather than
ELODIE types.
The histograms have been
normalized to one, including objects where the
type is not available. Classes 8, 10, 11 and 12 do not appear
because they contain hot stars with no associated
Hammer type. Note that class~0 collects types G and early K, whereas it appears
as formed by F9 stars in Fig.~\ref{mkclasses6}.
}
\label{mkclasses6_otro}
\end{figure*}
Table~\ref{summary} gives a summary of the physical properties of the classes, namely,
it contains mean values and standard deviations for colors, temperatures, gravities
and metallicities. (Class~13 has been excluded since it seems to be collecting faulty
spectra.) Note that some classes present a fairly small range of physical parameters.
For instance, if a star is assigned to class~0 then we know its temperature, gravity and
metallicity with standard deviations of 190~K, 0.25~dex, and 0.36~dex, respectively.
These uncertainties are comparable to those associated
with other approaches currently used to estimate effective
temperature and gravity.
This fact opens up the possibility of using k-means for a quick-look estimate
of the main physical parameters of the stars at a minimum computational
cost. Once the classes are known, assigning a new spectrum to
one of them is virtually instantaneous -- it is just a matter of computing the
difference between the new spectrum and the class templates,
and then selecting the class of least rms deviation.
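Operationally, the quick-look estimate can be as simple as the following
sketch (names ours; {\tt class\_params} would hold the entries of
Table~\ref{summary}):
\begin{verbatim}
import numpy as np

def quick_look(spectrum, templates, class_params):
    # templates: (k, n_wavelengths) array of class templates;
    # class_params[i]: mean (Teff, logg, [Fe/H]) of class i
    rms = np.sqrt(np.mean((templates - spectrum)**2, axis=1))
    best = int(np.argmin(rms))
    return best, class_params[best]
\end{verbatim}
A spectrum landing in class~0 would thus inherit $T_{\rm eff}=5370$\,K,
$\log({\rm g})=4.51$ and [Fe/H]$\,=-0.53$, within the dispersions quoted
in the table.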
\begin{deluxetable}{cccccc}
\tablecolumns{6}
\tablewidth{0pc}
\tablecaption{Properties of the stellar classes from spectra without continuum}
\tablehead{
\colhead{Class}&\colhead{$g-r$}&\colhead{$u-g$}&\colhead{$T_{\rm eff}\tablenotemark{a}$}&$\log({\rm g})\tablenotemark{b}$
& [Fe/H]\tablenotemark{c}
}
\startdata
0& 0.64$\pm$0.08&1.39$\pm$0.20&5370$\pm$190& 4.51$\pm$0.25&-0.53$\pm$0.36\\
1& 0.53$\pm$0.10&1.03$\pm$0.17&5660$\pm$290& 4.13$\pm$0.47&-0.79$\pm$0.40\\
2& 0.60$\pm$0.13&0.96$\pm$0.21&5450$\pm$340& 3.33$\pm$0.75&-1.71$\pm$0.53\\
3& 0.87$\pm$0.13&1.86$\pm$0.29&4830$\pm$230& 4.61$\pm$0.22&-0.52$\pm$0.40\\
4& 1.02$\pm$0.25&1.70$\pm$0.44&5260$\pm$440& 3.74$\pm$0.74&-0.55$\pm$0.50\\
5& 0.36$\pm$0.12&0.74$\pm$0.13&6400$\pm$300& 3.98$\pm$0.35&-0.79$\pm$0.45\\
6& 0.30$\pm$0.20&0.64$\pm$0.23&6500$\pm$330& 3.74$\pm$0.47&-1.86$\pm$0.61\\
7& 1.31$\pm$0.17&2.24$\pm$0.33&4290$\pm$150& 4.39$\pm$0.34&-0.55$\pm$0.42\\
8& 0.18$\pm$0.14&0.64$\pm$0.17&7250$\pm$340& 4.05$\pm$0.37&-0.90$\pm$0.62\\
9& 0.69$\pm$0.16&1.31$\pm$0.35&5340$\pm$330& 4.24$\pm$0.47&-0.89$\pm$0.50\\
10& 0.02$\pm$0.14&0.59$\pm$0.17&8020$\pm$340& 4.12$\pm$0.40&-0.98$\pm$0.70\\
11&-0.14$\pm$0.08&0.49$\pm$0.12&8700$\pm$350& 3.80$\pm$0.47&-1.36$\pm$0.66\\
12&-0.27$\pm$0.13&0.35$\pm$0.18&8890$\pm$410& 4.67$\pm$0.17&-3.03$\pm$0.6
\enddata
\tablenotetext{a}{Effective temperature in K.}
\tablenotetext{b}{Gravity g in cm\,s$^{-2}$.}
\tablenotetext{c}{Metallicity in logarithmic scale referred to the Sun.}
\label{summary}
\end{deluxetable}
\section{Outliers}\label{outliers}
Having a classification automatically provides outliers, i.e.,
uncommon objects which therefore do not belong to
any of the classes.
We can easily identify them since, in addition to assigning class
memberships, our algorithm estimates the probability of
belonging to the class (see Sect.~\ref{algorithm}).
Outliers are therefore those objects whose
probability of belonging to their class is below a threshold.
We set the threshold to 0.01, which implies selecting as outliers
the 1\,\% of spectra furthest from their cluster centers.
The actual threshold is both arbitrary and unimportant, since
our purpose was figuring out the type of spectra that do not
fit in the classification. The adopted threshold renders some
2200 spectra, which were inspected individually.
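Given the membership probabilities of Sect.~\ref{algorithm}, the selection
itself is trivial; schematically:
\begin{verbatim}
import numpy as np

def find_outliers(prob, threshold=0.01):
    # spectra whose probability of belonging to the class they
    # were assigned to falls below the threshold
    return np.where(np.asarray(prob) < threshold)[0]
\end{verbatim}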
We carried out
the inspection for both classifications, the one
including continuum and the one without continuum.
In both cases the vast majority of the outliers are
noisy spectra or failures of the SDSS pipeline (e.g.,
gaps, unsuccessful removal of telluric lines,
mismatches between the red and the blue
spectrograph arms, and so on). These problematic
spectra represent three-quarters of the outliers
for the classification using continuum, and a bit less when
continuum is removed. (This difference is to be
expected since subtracting the continuum automatically
cancels many of the calibration problems.)
Here we focus on the outliers of the classification with
continuum, although they are qualitatively similar to
those of the classification with the continuum subtracted.
In this case we count 548 genuine outliers. They are
illustrated in Fig.~\ref{outlier1b} and described
in the following list ordered from more to less
common:
\begin{itemize}
\item [-] Quasars (QSOs) at redshift between 2 and 4, so that Ly$\alpha$ appears
in the visible spectral range
\citep[e.g., Fig.~\ref{outlier1b}a -- we
identified the observed lines in the QSO spectrum
template by][]{1991ApJ...373..465F}.
Most outliers are of this kind (some 320 or 60\% of the sample).
They represent only 0.3\% of QSOs in the
latest SDSS QSO catalog \citep{2010AJ....139.2360S},
but they tend to appear at the redshifts where the SDSS
identification algorithm has known problems (redshifts 2.9 and 3.2).
A bad redshift assignment at these particular redshifts
would explain the presence of a large number of QSOs
contaminating our stellar sample.
\item [-] Broad Absorption Line (BAL) QSOs at the same high redshifts
(Fig.~\ref{outlier1b}b).
These are believed to be AGNs with very rapid outflows
(of a few times 10$^4$\,km\,s$^{-1}$)
along the line of sight \citep{2009ApJ...692..758G}.
We get 33 of these objects, which represent 6.0\,\% of the outliers
and 9.3\,\% of the QSOs -- a percentage that seems to be normal
for BAL QSOs \citep[e.g.,][]{2009ApJ...692..758G}.
\item [-] Composite spectra of blue and red stars combined
(e.g., Fig.~\ref{outlier1b}c,
where H$\alpha$ shows up in emission).
They may be genuine
binary systems with the two stars gravitationally bound, or just
two or more stars that happen to be along our line of sight.
We do not know why these stars appear as outliers rather than elements of
existing classes (classes~20 or 21 in Fig.~\ref{the_class_temp4}),
but it may be due to their having an excess of blue upturn
as compared to the template spectra.
\item [-] Flat spectra, showing absorption line features characteristic
of hot stars in the blue,
and of cold stars in the red (e.g., Fig.~\ref{outlier1b}d).
They may be composite spectra like in Fig.~\ref{outlier1b}c,
but with the luminosities of the
stars fine-tuned so that the combination looks spectrally flat.
\item [-] Extreme spectra. They look like the corresponding templates,
but seem to be extreme cases (e.g., extreme colors or
particularly deep absorption lines).
Figure~\ref{outlier1b}e shows a particularly cold star or sub-stellar
object
-- the spectrum and
the corresponding template are shown as solid and dotted lines,
respectively. Figure~\ref{outlier1b}f shows the spectrum of a
star (the solid line) hotter than its template (the dotted line).
\item [-] Star-forming galaxies at intermediate redshifts (e.g., Fig.~\ref{outlier1b}g).
\item [-] QSOs at redshifts around one (e.g., Fig.~\ref{outlier1b}h).
\item [-] Strongly dust-reddened blue stars (e.g., Fig.~\ref{outlier1b}p).
\item[-]
Carbon rich WDs with strong C$_2$ bands (e.g., Fig.~\ref{outlier1b}i
and \ref{outlier1b}j) -- see the observed and
synthetic spectra in \citet{1984ApJ...284..257W}.
\item[-]
Carbon stars (e.g., Fig.~\ref{outlier1b}o), where the photospheric opacity is dominated by
C-bearing molecules. The carbon, dredged up to the
photosphere, comes from the He burning shell
characteristic of low mass stars during their late stages of evolution
\citep[e.g.,][]{2009A&A...503..913A}.
Figure~\ref{outlier1b}o should be compared with C-star spectra in,
e.g., \citet[][Fig.~4]{2001A&A...371.1065L} and \citet[][Fig.~3]{2009A&A...503..913A}.
\item [-] Strange-looking spectra. They may be failures of the
reduction pipeline, but they may be genuine abnormal
objects as well.
Figures~\ref{outlier1b}k--\ref{outlier1b}n show
a few of them, chosen only when they are not the sole
representative of their class.
They include spectra with strong emission
lines (e.g., Fig.~\ref{outlier1b}m),
or spectra with a single absorption line
(e.g., Fig.~\ref{outlier1b}n).
\end{itemize}
\begin{figure*}
\includegraphics[width=1.\textwidth,angle=0]{outlier_star_plot.eps}
\caption{
Examples of outliers of the classification that includes continuum.
(a) QSO at redshift around 4, so that Ly$\alpha$
shows up at some 6000\,\AA .
(b) BAL~QSO, with broad absorption lines blue-shifted
with respect to their emission line counterparts.
The largest signal corresponds to Ly$\alpha$.
(c) Composite spectrum formed by combining light
from red and blue stars.
(d) Flat spectrum, showing signs of a hot star in the blue and
a red star in the red. They may be composite spectra like in (c) but
with the magnitudes of the two stars tuned to look spectrally flat.
(e) Spectrum of an extremely cold star -- the spectrum and
the corresponding template are shown as solid and dotted lines,
respectively.
(f) Star (the solid line) significantly hotter than its template (the dotted line).
(g) Star-forming galaxy at intermediate redshift.
(h) QSO at redshift around one.
(i--j) Carbon-rich WDs.
(k--n) Strange-looking spectra.
They may be failures of the
reduction pipeline, but they may be genuine abnormal
objects as well.
They include
spectra with strong emission lines (m),
or spectra with a single absorption line (n).
(o) Carbon stars.
(p) Dust-reddened blue stars.
}
\label{outlier1b}
\end{figure*}
\section{Discussion and Conclusions}\label{conclusions}
The traditional approach to classifying stellar spectra has to be
adapted to process the volume of data produced by
large surveys underway (see Sect.~\ref{intro}).
There is a need for new automated techniques of analysis.
In this work we explore the
use of the algorithm k-means for the task, i.e.,
as a tool for the automated unsupervised classification of massive stellar
spectrum catalogs. The algorithm has already proven its
potential for the fast processing of other astronomical spectra (Sect.~\ref{intro}), and
we expected it to be useful in this context as well.
For this exploratory application of k-means, we selected the data set of stellar
spectra associated with the SEGUE and SEGUE 2 programs. Even though
it is not a fair sample of the Milky Way stellar populations, it contains
a rich variety of stellar types and so it is a good test-bench for the
classification algorithm. After discarding faulty cases,
the reference dataset consists of 173,390
stellar spectra from 3800 to 9200\,\AA\ sampled in 3849 wavelengths.
Therefore,
the problem for k-means is to find
clusters among 173,390 vectors
defined in a 3849-dimensional space.
The full data set occupies 6.1\,GB, and is classified using a
standard up-to-date workstation in some two hours (see Sect.~\ref{repeatability}).
We apply the classification to the original spectra and also
to the spectra with the continuum removed. The latter data set
contains only spectral lines, and it is less dependent on
observational and instrumental problems like dust extinction,
spectro-photometric mis-calibrations,
or failures in the reduction pipeline.
The classification of the spectra with continuum renders
16 major classes, with 99\,\% of the objects, and ten minor
classes with the remaining 1\,\% (Sect.~\ref{class_with}). Roughly speaking, the
stars are split according to their colors, with enough
finesse to distinguish dwarf and giant stars
(Fig.~\ref{the_class_col4}). Figure~\ref{the_class_temp4}
shows the template spectra representative of all the classes:
there are classes for
WDs (class~15),
A-type stars (class 9),
F-type stars (class~0),
K-type stars (class~8),
M-type stars (class~23),
dust-reddened intrinsically blue stars (class~18),
binary systems (class 21),
and even classes with faulty spectra (classes 17 and 19).
It must be stressed, however,
that there is not a one-to-one correspondence
between the classes we derived and the MK types.
Often our classes mix up several MK types, and vice versa.
The classification is able to separate stars with similar
temperatures but different surface gravities
(compare classes 0 and 3 in Fig.~\ref{density4}), but
has difficulties to separate stars with different
metallicities (Fig.~\ref{fe_vs_teff4}).
The classification of spectra without continuum renders fewer classes
(Sect.~\ref{class_out}) -- 13 major classes
and only 1 minor class, which probably collects faults of the
reduction pipeline (Figs.~\ref{the_class_temp6} and
\ref{the_class_temp6mas}).
In this case the color separation is not as sharp as it is
for the classification with continuum included
(cf., Figs~\ref{the_class_col4} and \ref{the_class_col6}).
However, it is able to separate stars in classes with
the same effective temperatures but different metallicities (Fig.~\ref{fe_vs_teff6}).
The behavior is opposite to that
of the classes resulting from classifying spectra with continua,
which are well separated in the color-color plot
but overlap in [Fe/H]~vs~$T_{\rm eff}$.
Some classes include stars with a fairly small range of physical
parameters, as assigned by the SSPP.
The mean value and dispersion of the effective temperature,
surface gravity, and metallicity of the classes without continuum
are listed in Table~\ref{summary}. A small dispersion
implies that our classification can be used to estimate
the main physical parameters of the stars at a minimum computational
cost. One only has to assign the problem spectrum to one of the existing
classes, e.g., to the one of minimum residual. Then the properties
of the class can be passed on to the new spectrum, thus providing its
main physical properties. For example,
if the problem spectrum
happens to belong to class~0, then we know its temperature, gravity, and
metallicity within a standard deviation of 190~K, 0.25~dex, and 0.36~dex,
respectively. These uncertainties are probably upper
limits since the estimate of the physical parameters by the SSPP
has non-negligible internal errors
that are included in the dispersions.
Note that the uncertainties are comparable with those associated
with other approaches currently used to estimate effective
temperature and gravity, which are far more time consuming.
Moreover, since we derive physical parameters from spectra
without continuum, the estimates are fairly robust against
dust reddening and other observational issues, which are often
a serious problem when dealing with stars at low galactic latitudes.
The classification also provides a means of
finding rare but scientifically interesting
objects, e.g., unusually low metallicity stars, odd spectral types, etc.
By definition, rare objects must be outliers of any classification,
otherwise they would be common and would have classes associated
with them.
Our rendering of k-means gives the goodness of the assignation, i.e.,
the probability that each star belongs to the class it has been
assigned to. Therefore, the outliers of the classification are easy to pinpoint as those
spectra whose probability of a correct assignation is low
enough. The nature of the outliers thus selected was
examined in Sect.~\ref{outliers} -- see Fig.~\ref{outlier1b}.
Most outliers are faulty data or failures of the SDSS reduction
pipeline.
The remaining 25\,\% consists, first of all, of
high-redshift QSOs. Since they are in the appropriate redshift range, we
speculate that these QSOs may be those lost by a known problem in the SDSS
QSO identification algorithm. There is a large number of outliers
corresponding to composite spectra formed by either real or fake double
or multiple stellar systems. The spectrum is that of a hot star in the blue and
a cold star in the red. There are also reddened stellar spectra, and galaxy spectra.
Finally, there are odd spectral types whose nature we did not manage to
figure out, and which we plan to observe in follow up work.
One obvious use of the present classification is identifying
spectra having instrumental problems or being produced
by flaws of the reduction pipeline. We find classes containing
faulty spectra when the problem is common, and then we find
faulty spectra as outliers of the classification when the problem
is unusual.
Stellar spectra are known to be highly compressible so that they can be
characterized using only a few independent parameters (see Sect.~\ref{intro}).
Then the fact that the classes present a regular behavior was to some extent
expected, and this fact is not the main outcome of our exercise.
Instead, our exploratory work shows k-means to provide a viable tool
for the systematic classification of large data sets of stellar
spectra. Moreover, there
is plenty of room for improving the procedure, i.e., for
upgrades that have not been considered in the paper, but which
may be of interest in future uses.
One can focus the classification on a particular spectral
range (or set of ranges) particularly sensitive to the
physical parameter one wants to select
(say, the metallicity if searching for classes of extremely metal
poor stars).
Then the resulting classes would emphasize this particular aspect of the spectra.
Obviously, using smaller spectral ranges for classification also
speeds up the procedure.
One can also resort to nested
k-means classifications, where the spectra of a given class are
separated into subclasses. This can be used to fine-tune the separation.
Finally, we want to indicate that the template spectra from the classifications
with and without continuum are publicly available\footnote{{\tt
ftp://stars:[email protected]}}.
\begin{acknowledgements}
Thanks are due to J.~Casares, L.~Girardi, and C.~Ramos-Almeida
for help with deciphering
the nature of some of the outliers,
and to I.~Ferreras and B.~Gustafsson
for illuminating discussions.
In addition, we thank the referee, C. Bailer-Jones, for his constructive
criticisms that tempered the statements made in the paper.
This work has been partly funded by the Spanish Ministry for Science,
project AYA~2010-21887-C04-04.
JSA is member of the Consolider-Ingenio 2010 Program, grant
MICINN CSD2006-00070: First Science with GTC.
Funding for SDSS, SDSS-II, and SDSS-III has been provided by the Alfred P. Sloan Foundation,
the Participating Institutions, the National Science Foundation,
and the U.S. Department of Energy Office of Science.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions
of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group,
Brookhaven National Laboratory, University of Cambridge, Carnegie Mellon University, University of Florida,
the French Participation Group, the German Participation Group, Harvard University,
the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group,
Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics,
Max Planck Institute for Extraterrestrial Physics, New Mexico State University,
New York University, Ohio State University, Pennsylvania State University, University of Portsmouth,
Princeton University, the Spanish Participation Group, University of Tokyo,
University of Utah, Vanderbilt University, University of Virginia,
University of Washington, and Yale University.
{\it Facilities:} \facility{Sloan (DR8, spectra)
}
\end{acknowledgements}
\section{Introduction}
A rich sentiment analysis corpus is crucial in creating a good sentiment analyzer. Unfortunately, low-resource languages like Indonesian lack such resources. Some prior studies focused on enriching low-resource languages \cite{nasution2018PHMT,nasution2016pivot,nasution2017pivot,nasution2017plan,nasution2017simcluster,nasution2018collab,nasution2019simcluster,nasution2017PHMT}.
The rapid growth of online textual data creates an urgent need for powerful text mining techniques \cite{aggarwal2012mining}. Sentiment analysis, or opinion mining, is a part of text mining. Sentiment analysis is essentially computational research that analyses the textual expression of the opinions, sentiments, and emotions of social media users \cite{liu2012sentiment}. It extracts attributes and components of the object under discussion. Through sentiment analysis of text, information such as the public's emotional state, views on social phenomena, and preferences for a product can be obtained \cite{Yuan2019}. Hence, the perspective of the users, either positive or negative, can be revealed.
During the 2019 Indonesian presidential election, the competition was quite fierce, with only two candidates in the race. Supporters of both candidates actively campaigned for their candidates on social media, and Twitter was the platform they used the most. Given the huge enthusiasm of the two groups of supporters, fierce debate between them often could not be avoided.
One of the trending topics emerged during the recapitulation of the votes. Twitter users reacted to several findings suggesting that the vote count involved fraud. Foremost, supporters of one party, the Prabowo Subianto volunteers, found much evidence that wrong data had been input into the system. Thus, the real count results were inconsistent with the information displayed on the system. This finding made the situation in Indonesia heat up. Supporters of Prabowo Subianto were upset and demanded that the General Election Commission, as the legally responsible institution, take full responsibility for the matter. To express their disappointment, many Twitter users created the hashtag \textit{\#kpujangancurang}, or ``The General Election Commission should not be unfair''. However, this claim was disputed by the opposing supporters, who argued that the problem was merely caused by human error. The same hashtag was actually used by both parties, so no one knows the exact sentiment of the tweets. Therefore, a sentiment analyzer that can determine the sentiment of these tweets is crucial.
For sentiment analysis, the available corpora in the Indonesian language are scarce. Existing machine learning tools such as RapidMiner include two sentiment analyzers, Aylien and Rosette, neither of which covers the Indonesian language. We ran an experiment using the \textit{\#kpujangancurang} hashtag to obtain a corpus, employing RapidMiner to extract the tweets, and then analysed the sentiment of the users using four machine learning methods: Naive Bayes, K-Nearest Neighbor, Decision Tree, and Multi-Layer Perceptron classification. The objective of this research is to find out which classifier is more suitable for creating sentiment analyzers for low-resource languages with small corpora.
\section{Literature Review}
Several studies have been carried out on sentiment analysis. One study attempted to analyze the online sentiment changes of social media users using both textual and visual content, by analysing the sentiment of Twitter text and images \cite{You2016}. Another related study performed a linguistic analysis of the collected corpus and explained discovered phenomena in order to build a sentiment classifier that is able to determine positive, negative, and neutral sentiments for a document \cite{pak2010twitter}.
Furthermore, several studies have applied machine learning methods to sentiment analysis. For instance, one study applied the Naive Bayes classifier to investigate the sentiment of Twitter users regarding traffic jams in Bandung \cite{rodiyansyah2012klasifikasi}. Another study focused on data classification using k-NN (k-Nearest Neighbors) and Naive Bayes, where the corpus was downloaded from the TREC Legal Track, with a total of more than three thousand text documents and over twenty classification types \cite{Rasjid2017}. A further study utilized maximum entropy part-of-speech tagging and a support vector machine to analyse public sentiment. That study used a dataset in the Indonesian language and implemented a machine learning approach due to its efficiency in integrating large-scale features into a model. This kind of approach has been successfully implemented in various natural language processing tasks \cite{putranti2014analisis}. Another study proposed a semi-automatic, complementary approach in which rule-based classification, supervised learning, and machine learning are combined into a new method to achieve a good level of effectiveness \cite{prabowo2009sentiment}. A study on opinion mining for hotel ratings from reviews using a decision tree classifier showed that an advantage of the algorithm is that the rule set can be easily generated, and that by analyzing each level of the tree, a particular aspect of service quality can be improved \cite{gupta2018opinion}. Deep learning methods have also been widely used in sentiment analysis tasks \cite{mukherjee2019malignant,zhang2018deep}. However, these studies report different accuracies for each machine learning method used, depending on the size of the corpus.
RapidMiner is open-source software\footnote{https://rapidminer.com}. RapidMiner is one of the solutions for performing data mining, text mining, and predictive analysis. It uses various descriptive and predictive techniques to give users new insights that allow and help them to make better decisions. RapidMiner is standalone software that can also be integrated with its own products. RapidMiner provides a GUI (Graphic User Interface) for designing an analytical pipeline. The GUI generates XML (Extensible Markup Language) files that define the analytical processes the user wants to apply to the data; these files are then read by RapidMiner to run the analysis automatically. We use RapidMiner for several reasons: it makes obtaining the dataset easy; it can be connected to Twitter; it allows searching for a topic as a query, so that tweets on the intended topic can be retrieved and saved to an Excel file; and it allows extracting plentiful data. A previous study examined an anomaly detection extension for RapidMiner intended to assist non-experts with applying eight different k-nearest-neighbor and clustering-based algorithms to their data \cite{Amer2012}. However, in this study, we only use RapidMiner to extract data from Twitter.
\section{Research Methodology}
In this study, we use a dataset obtained from tweets. We utilized RapidMiner to obtain the tweets associated with the hashtag \textit{\#kpujangancurang}. To investigate the hashtag further, we compare the Naive Bayes, K-Nearest Neighbor, Decision Tree, and Multi-Layer Perceptron classification methods in classifying the sentiment of the Twitter data. There are two steps in the document classification: the first is training on documents that have already been categorized, and the second is classifying the uncategorized documents. The four methods classify the distribution of positive and negative sentiments. There are 200 labeled data points in this experiment overall. To evaluate the performance of the sentiment analyzer, we use accuracy as the evaluation measure.
\subsection{System Workflow}
\subsubsection{Overview}
The sentiment analysis workflow is described in detail below and depicted in Fig. 1.
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.33]{figures/figure1.jpeg}}
\caption{Example of the sentiment analysis workflow using the Naive Bayes method.}
\label{fig1}
\end{figure}
\begin{itemize}
\item Data Crawling: the process of aggregating data from Twitter using RapidMiner as a tool. The data aggregated from the hashtag \textit{\#kpujangancurang} are used as the training dataset and the testing dataset.
\item Preprocessing: the process of cleaning the data by deleting common words, by reference to a stopword list.
\item Classification: the Naive Bayes method is applied to classify the sentiment into positive and negative classes. The remaining methods are used in the same manner.
\item Evaluation: the classification results from the classifiers are evaluated against the manually labeled classifications. The accuracy of the classification determines whether new training data need to be added in order to reach the accuracy threshold of 80\% (a minimal sketch of this loop is given after this list).
\end{itemize}
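A minimal sketch of the evaluation loop (with hypothetical helper names, and dense feature arrays assumed for brevity; this is not our production code) is:
\begin{verbatim}
import numpy as np
from sklearn.metrics import accuracy_score

ACC_THRESHOLD = 0.80   # accuracy target from the workflow above

def evaluate(clf, X_train, y_train, X_test, y_test, extra_batches):
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    for X_new, y_new in extra_batches:   # further manually labeled data
        if acc >= ACC_THRESHOLD:
            break
        X_train = np.vstack([X_train, X_new])
        y_train = np.concatenate([y_train, y_new])
        clf.fit(X_train, y_train)
        acc = accuracy_score(y_test, clf.predict(X_test))
    return acc
\end{verbatim}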
\subsubsection{Dataset}
How we obtain the dataset is depicted in Fig. 2 below. The dataset that we analyse is in the Indonesian language. First, the tweets were queried using the hashtag \textit{\#kpujangancurang}.
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.43]{figures/figure2.jpeg}}
\caption{Dataset Flow.}
\label{fig2}
\end{figure}
Then, the queried data are crawled using RapidMiner. The query results are divided into two parts: training data and testing data. The testing data are classified using the classifiers, and the results are marked with negative and positive sentiment labels, whereas the training data are classified manually and marked in the same way. The training data are used during the evaluation to determine the accuracy of the results. Table 1 shows an example of the evaluation of the sentiment predicted by the classifiers.
\begin{table}[htbp]
\caption{Example of Evaluation of The Predicted Sentiment}
\begin{center}
\begin{tabular}{|p{5cm}|p{2cm}|p{2cm}|p{2cm}|}
\hline
\textbf{Testing Data}&\textbf{Predicted Sentiment}&\textbf{Manually Labeled Sentiment}&\textbf{Accuracy} \\
\hline
kalau terus melanggar, hukumannya segera diterapkan&Positive&Positive&Accurate\\
kalau bersih kenapa takut audit forensic&Negative&Negative&Accurate\\
harus banyak belajar ke @BKNgoid dalam hal penyelenggaraan akbar&Positive&Positive&Accurate\\
Kebenaran meninggikan derajat bangsa tetapi dosa adalah noda bangsa&Negative&Positive&Inaccurate\\
\hline
\end{tabular}
\label{tab1}
\end{center}
\end{table}
\subsubsection{Preprocessing}
Preprocessing is an important preliminary step that discards non-useful attributes, which would otherwise be noise for the classification process. The data imported into this step are raw data; the result is a set of high-quality documents that are expected to ease the classification process. The preprocessing flow is depicted in Fig. 3.
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.33]{figures/figure3.jpeg}}
\caption{Preprocessing Flow.}
\label{fig3}
\end{figure}
\begin{table}[htbp]
\caption{Preprocessing Process Example}
\begin{center}
\begin{tabular}{|p{1cm}|p{5cm}|p{5cm}|}
\hline
\textbf{Stage}&\textbf{Before}&\textbf{After} \\
\hline
1&Benar juga, kpu yang membuat rakyat resah. Aduh kejamnya kecurangan.&Benar juga kpu yang membuat rakyat resah Aduh kejamnya kecurangan\\
2&Benar juga kpu yang membuat rakyat resah Aduh kejamnya kecurangan&benar juga kpu yang membuat rakyat resah aduh kejamnya kecurangan\\
3&benar juga kpu yang membuat rakyat resah aduh kejamnya kecurangan&-benar- -juga- -kpu- -yang- -membuat- -rakyat- -resah- -aduh- -kejamnya- -kecurangan-\\
4&-benar- -juga- -kpu- -yang- -membuat- -rakyat- -resah- -aduh- -kejamnya- -kecurangan-&-benar- -kpu- -membuat- -rakyat- -resah- -kejamnya- -kecurangan- \\
\hline
\end{tabular}
\label{tab1}
\end{center}
\end{table}
This step starts with punctuation removal and continues with case folding, tokenizing, and finally stopword removal, which is intended to remove words that are not relevant to the topic: if irrelevant words exist in the tweet document, they are removed. An example of each stage of the preprocessing process is listed in Table 2. The detailed preprocessing stages are as follows (a minimal code sketch is given after the list):
\begin{itemize}
\item Removing punctuation. This initial stage yields pure text containing only words, which makes further processing easier.
\item Case Folding. This stage is the process of changing uppercase letters to lowercase letters.
\item Tokenizing. In this stage, the text is split into words based on whitespace.
\item Filtering. This stage is the removal of unimportant words based on Indonesian stopwords.
\end{itemize}
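The following sketch implements the four stages (the stopword set shown is a tiny illustrative subset of an Indonesian stopword list, not the full list used in our pipeline):
\begin{verbatim}
import re

STOPWORDS = {"juga", "yang", "aduh"}   # illustrative subset only

def preprocess(tweet):
    text = re.sub(r"[^\w\s]", " ", tweet)   # 1. remove punctuation
    text = text.lower()                     # 2. case folding
    tokens = text.split()                   # 3. tokenize on whitespace
    return [t for t in tokens if t not in STOPWORDS]  # 4. filtering

# Example (cf. Table 2):
# preprocess("Benar juga, kpu yang membuat rakyat resah.")
# -> ['benar', 'kpu', 'membuat', 'rakyat', 'resah']
\end{verbatim}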
\subsubsection{Term Frequency - Inverse Document Frequency (TF-IDF)}
After preprocessing, the next step is to weight the words using the TF-IDF calculation. TF-IDF is a way of assigning a weight to each word (term) in a document. For single words, each sentence is considered a document. The following is an example of the TF-IDF calculation: the example documents to be weighted are shown in Table 3, and the TF-IDF results for Document A are shown in Table 4.
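For reference, a standard formulation of the weighting (given here in its textbook form; the normalization used to produce Table 4 may differ in detail) is
\begin{equation}
w_{t,d} = \mathrm{tf}_{t,d} \times \mathrm{idf}_{t}, \qquad \mathrm{idf}_{t} = \log_{10}\frac{N}{\mathrm{df}_{t}},
\end{equation}
where $\mathrm{tf}_{t,d}$ is the frequency of term $t$ in document $d$, $N$ is the total number of documents, and $\mathrm{df}_{t}$ is the number of documents containing $t$. For example, ``ancam'' appears in one of the three documents, so $\mathrm{idf} = \log_{10}(3/1) \approx 0.477$, matching Table 4.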
\begin{table}[htbp]
\caption{Example of Documents}
\begin{center}
\begin{tabular}{|l|p{7cm}|}
\hline
\textbf{Tweet Document}&\textbf{Text}\\
\hline
Document A&Jangan ancam rakyat, rakyat indonesia pintar\\
Document B&Rakyat tidak pernah gagal bernegara, pemerintah yang gagal bernegara\\
Document C&Suara rakyat dicuri, bagaimana uang rakyat\\
\hline
\end{tabular}
\label{tab1}
\end{center}
\end{table}
\begin{table}[htbp]
\caption{TF-IDF Score of Document A}
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{Word}&\textbf{TF}&\textbf{IDF}&\textbf{Weight}\\
\hline
ancam &1&0.477&0.477\\
bernegara&0&0.176&0\\
gagal&0&0.176&0\\
jangan&1&0.477&0.477\\
rakyat &0.4&-0.2218&-0.0887\\
indonesia&1&0.477&0.477\\
pintar&1&0.477&0.477\\
tidak&0&0.477&0\\
pernah&0&0.477&0\\
pemerintah&0&0.477&0\\
dicuri&0&0.477&0\\
bagaimana&0&0.477&0\\
uang&0&0.477&0\\
\hline
\end{tabular}
\label{tab1}
\end{center}
\end{table}
\subsubsection{Classifier}
The last step is classifying the weighted data with the Naive Bayes, K-Nearest Neighbor, Decision Tree, and Multi-Layer Perceptron classification methods. To evaluate which classifiers are best suited to a scarce corpus, we experimented by changing the size of the training-testing data split from 0.25--0.75 to 0.75--0.25. The evaluation is done by measuring the accuracy of the classifiers for each scenario, as shown in Fig. 4.
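A minimal sketch of this comparison (assuming scikit-learn, with TF-IDF features as input; the hyperparameters shown are library defaults rather than tuned values) is:
\begin{verbatim}
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

CLASSIFIERS = {
    "Naive Bayes": MultinomialNB(),
    "K-Nearest Neighbor": KNeighborsClassifier(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Multi-Layer Perceptron": MLPClassifier(max_iter=1000),
}

def run_experiments(X_tfidf, labels, train_sizes):
    results = {}
    for size in train_sizes:   # e.g. 0.25, 0.30, ..., 0.75
        X_tr, X_te, y_tr, y_te = train_test_split(
            X_tfidf, labels, train_size=size, random_state=0)
        for name, clf in CLASSIFIERS.items():
            clf.fit(X_tr, y_tr)
            results[(name, size)] = accuracy_score(
                y_te, clf.predict(X_te))
    return results
\end{verbatim}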
\section{Results}
We obtained 200 tweets using RapidMiner. From these 200 tweets, we conducted 11 experiments with different sizes of the training-testing data split. Every classifier shows a trend of increasing accuracy with larger training data sizes. However, the Naive Bayes and Multi-Layer Perceptron classifiers outperformed the other two methods across the experiments, as shown in Fig. 4. The Decision Tree classifier shows very low performance on small training sets, while the K-Nearest Neighbor classifier shows accuracy below 0.76 for every training-testing split. Both the Naive Bayes and Multi-Layer Perceptron classifiers achieve the highest accuracy for every training-testing split and show a consistent increase in accuracy as the training data size is increased.
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.43]{figures/figure5.pdf}}
\caption{Accuracy Comparison of Classifiers.}
\label{fig5}
\end{figure}
\section{Conclusion}
We have built a sentiment analyzer to identify users' sentiment toward the General Election Commission from the Twitter hashtag \textit{\#kpujangancurang}. We used the hashtag to obtain a set of data from Twitter in order to analyse and investigate further the positive and negative sentiments of the users from their tweets. This research utilizes the RapidMiner tool to gather the Twitter data and compares the Naive Bayes, K-Nearest Neighbor, Decision Tree, and Multi-Layer Perceptron classification methods in classifying the sentiment of the Twitter data. There are 200 labeled data points in this experiment overall. The Naive Bayes and Multi-Layer Perceptron classifiers outperformed the other two methods in 11 experiments with different sizes of the training-testing data split. These two classifiers are promising candidates for creating sentiment analyzers for low-resource languages with small corpora. In our future work, we will compare the accuracy of the Naive Bayes and Multi-Layer Perceptron classifiers on a larger corpus.
\section*{Acknowledgment}
This research is funded by Universitas Islam Riau.
\bibliographystyle{splncs04}
\section{}
\vspace{-2em}
\noindent
The magnetic field is a key ingredient in the recipe of star formation. However, the importance of the magnetic field in the early stages of the formation of low- and high-mass stars is still far from certain. Over the past two decades, the millimeter and submillimeter interferometers BIMA, OVRO, CARMA, SMA, and most recently ALMA have made major strides in unveiling the role of the magnetic field in star formation at progressively smaller spatial scales; ALMA observations have recently achieved spatial resolutions of up to $\sim$\,100\,au and $\sim$\,1,000\,au in nearby low- and high-mass star-forming regions, respectively. From the kiloparsec scale of molecular clouds down to the inner few hundred au immediately surrounding forming stars, the polarization at millimeter and submillimeter wavelengths is dominated by polarized thermal dust emission, where the dust grains are aligned relative to the magnetic field. Interferometric studies have focused on this dust polarization and occasionally on the polarization of spectral-line emission. We review the current state of the field of magnetized star formation, from the first BIMA results through the latest ALMA observations, in the context of several questions that continue to motivate the studies of high- and low-mass star formation. By aggregating and analyzing the results from individual studies, we come to several conclusions: (1) Magnetic fields and outflows from low-mass protostellar cores are randomly aligned, suggesting that the magnetic field at $\sim$\,1000\,au scales is not the dominant factor in setting the angular momentum of embedded disks and outflows. (2) Recent measurements of the thermal and dynamic properties in high-mass star-forming regions reveal small virial parameters, challenging the assumption of equilibrium star formation. However, we estimate that a magnetic field strength of a fraction of a mG to several mG in these objects could bring the dense gas close to a state of equilibrium. Finally, (3) We find that the small number of sources with hourglass-shaped magnetic field morphologies at 0.01--0.1\,pc scales cannot be explained purely by projection effects, suggesting that while it does occur occasionally, magnetically dominated core collapse is not the predominant mode of low- or high-mass star formation.
\fontsize{8}{11}\helveticabold {\small \section{Keywords:} astronomy, low-mass star formation, high-mass star formation, polarization, magnetic fields, dust, interferometry, millimeter-wave observations}
\end{abstract}
\section{Introduction}
Magnetic fields are known to play a critical role in many aspects of both low- and high-mass star formation. Even weakly ionized star-forming material is coupled to the ambient magnetic field, and thus the field can regulate (or prevent) the collapse and fragmentation of star-forming clouds \citep[][in this volume]{HennebelleInutsuka2019, KrumholzFederrath2019, TeyssierCommercon2019}, can influence the formation of protoplanetary disks \citep[][in this volume]{WursterLi2018}, and can launch bipolar outflows and jets from young protostars \citep[][in this volume]{PudritzRay2019}. Mapping the morphology of magnetic fields in low- and high-mass star-forming regions is therefore critical to better understand how magnetic fields affect the star-formation process at early times, and how the role of the field changes relative to other dynamical effects (e.g., turbulence, rotation, thermal and radiation pressure, and gravitational collapse) as a function of spatial scale, source environment, and source mass. Over more than fifty years, studies of magnetic fields in low- and high-mass star-forming regions have been performed across a wide range of spatial scales, from the $>$\,100\,pc scale of molecular clouds, to the 1\,pc scale of clumps, to the 0.1\,pc scale of dense cores, and finally to the 1000--100\,au scale of protostellar envelopes surrounding forming protostellar systems.\footnote{\label{footnote:1} In this review we follow the nomenclature used in \citet{zhang2009} and \citet{Motte2018}, who refer to a \textit{cloud} as a structure 10\,--\,100\,pc in size; a molecular \textit{clump} as a structure 1\,pc in size that forms massive stars along with a population of lower mass stars; and a \textit{dense core} as a structure 0.01\,--\,0.1\,pc in size that forms one or a group of stars. Following the nomenclature of, e.g., \citet{Hull2014}, a protostellar \textit{envelope} is a $\sim$\,1000\,au ($\sim$\,0.005\,pc) structure comprising the densest part of the dense core, inside of which one or a few protostars form.}
In this review we introduce the typical tracers of the magnetic field in star-forming regions, as well as the methods used to analyze the observations. We then discuss the state of the field of magnetized star formation, focusing primarily on the last three decades of high-resolution polarization studies by millimeter and submillimeter (hereafter combined into ``(sub)millimeter'') interferometers including the Berkeley-Illinois-Maryland Association (BIMA) millimeter array, the Combined Array for Research in Millimeter-wave Astronomy (CARMA), the Submillimeter Array (SMA), and the Atacama Large Millimeter/submillimeter Array (ALMA).\footnote{Note that while we aim to provide an exhaustive review of the literature surrounding millimeter-wave interferometric observations of low- and high-mass stars, we mention only a few theoretical and single-dish polarization studies in order to support our narrative. We do not discuss other types of millimeter-wave polarization observations, i.e., toward the galactic center, quasars, or evolved stars.}
\subsection{Magnetic field tracers (plane-of-sky component)}
\label{sec:intro_tracers_POS}
\subsubsection{Polarized dust emission}
Polarized thermal emission from dust grains is the main tracer of the magnetic field in star-forming regions observed at high resolution and at (sub)millimeter wavelengths. Under most circumstances, oblong interstellar medium (ISM) dust grains with sizes $\ll$\,100\,$\mu$m are aligned with their long axes perpendicular to magnetic field lines \citep[e.g.,][]{Hildebrand1988}. The currently accepted way to achieve this alignment is via the ``radiative torque'' (RAT) mechanism, where an anisotropic radiation field (e.g., the external UV field in the ISM, or the radiation from a deeply embedded protostar) causes grains to become aligned relative to the magnetic field \citep{Lazarian2007, Hoang2009, Andersson2015}.\footnote{\label{footnote:RATs} RATs that align grains with respect to the magnetic field $B$ are sometimes known as ``$B$-RATs,'' in contrast to other radiative torque mechanisms such as the ``$k$-RATs'' mentioned in Section \ref{sec:scattering}, where RATs align grains with respect to the radiation direction.} Thus, at the physical scales of star-forming clouds, cores, and envelopes (i.e., scales $\gtrsim$\,100\,au), magnetically aligned dust grains emit thermal radiation that is polarized perpendicular to the magnetic field. Observations of dust polarization, which comprise the vast majority of the single-dish and interferometric polarization observations to date, are discussed at length throughout this article.
\subsubsection{Polarized molecular-line emission}
\label{sec:intro_tracers_POS:GK}
Polarization of molecular-line emission is another tracer of the magnetic field in star-forming regions. Molecular and atomic lines are sensitive to magnetic fields, which cause their spectral levels to split into magnetic sub-levels. For some molecules, linear polarization can arise when an anisotropy in the radiation and/or velocity field yields a population of magnetic sub-levels that are not in local thermodynamic equilibrium (LTE); this is known as the Goldreich-Kylafis (G-K) effect. Polarization from the G-K effect is most easily detected where the spectral line emission has an optical depth $\tau \approx 1$, when the ratio of the collision rate to the radiative transition rate (i.e., the spontaneous emission rate) is $\sim$\,1, and where the gradient in the radiation and/or velocity field is large. The effect is strongest in the lowest rotational transitions of simple molecules such as CO, CS, HCN, SiO, or HCO$^+$. Spectral-line polarization from the G-K effect can be parallel or perpendicular to the magnetic field. Due to the different optical depths of the parallel and perpendicularly polarized components in different locations on the sky, polarization from the G-K effect ultimately traces the plane-of-sky magnetic field orientation with an ambiguity of 90$^{\circ}$ \citep{Goldreich1981, Goldreich1982, Kylafis1983, Deguchi1984, Lis1988}.
The first detections of the G-K effect were by \citet{Glenn1997}, who presented National Radio Astronomy Observatory (NRAO) 12\,m telescope observations of the evolved star IRC +10216, and by \citet{Greaves1999}, who observed the galactic center and the high-mass star-forming clouds S140 and DR 21 using the James Clerk Maxwell Telescope (JCMT). In addition to these first observations, others have detected the G-K effect in molecular outflows from low-mass protostellar sources \citep[e.g.,][]{Girart1999, Lai2002, Cortes2006b, CFLee2014, Ching2016, CFLee2018b} as well as in high-mass protostellar sources \citep[e.g.,][]{Lai2003, Cortes2005, Cortes2006a, Cortes2008}. The G-K effect is a powerful way to trace the magnetic field in regions where the brightness of the thermal dust emission is too low to detect polarization at the few-percent level, either because the source is too distant or because the column density of dust is too low (e.g., in an outflow cavity where the gas and dust have been driven away).
One way to resolve the 90$^{\circ}$ ambiguity in G-K observations is via radiative transfer modeling. This was done in \citet{Cortes2005}, who expanded on the original G-K models (which assumed that gradients in the CO optical depth were responsible for the necessary anisotropy in the radiation field) by including bright, central sources as additional causes of anisotropy. They used these models to successfully reproduce the 90$^{\circ}$ difference in polarization angles seen toward the high-mass star-forming region DR 21(OH) in their 3\,mm BIMA observations of CO\,($J$\,$=$\,1\,$\rightarrow$\,0) versus earlier 1.3\,mm observations of CO\,($J$\,$=$\,2\,$\rightarrow$\,1) by \citet{Lai2003}. While the modeling in \citet{Cortes2005} was successful, in general it is difficult to know the structure of the radiation field, especially in high-mass star-forming regions. Other methods exist to resolve the ambiguity, such as comparing polarization from both dust and spectral lines in the same region. This was done in DR 21(OH) by \citet{Lai2003} and \citet{Ching2016}; however, it is not clear how strong the correlation should be between the polarization angles from the two types of emission, as the dust emission traces much denser material than the CO.
Looking to the future, in low-mass sources with well behaved outflows, or in protoplanetary disks, the radiation and velocity fields may be simple enough to allow us to more confidently break the ambiguity in polarization orientation. Given the high quality of ALMA data toward these types of sources and the fact that ALMA's sensitivity will allow us to probe polarization in multiple transitions of many different molecules, spectral-line polarization studies have a bright future.
Finally, one must exercise caution when interpreting polarized spectral-line observations, as linearly polarized spectral-line emission can be converted into circularly polarized emission via anisotropic resonant scattering. Several studies have detected non-Zeeman circularly polarized emission, including \citet{Houde2013} in Orion KL and \citet{Hezareh2013} in the supernova remnant SNR IC 443. These observations were from the polarimeters at the Caltech Submillimeter Observatory (CSO) and the Institut de Radioastronomie Millim\'etrique (IRAM) 30\,m telescope, respectively (see \citealt{Chamma2018} for more results from the SMA). The model described in those publications shows that resonant scattering can cause a rotation in the polarization position angle in maps of linear spectral-line polarization. \citet{Hezareh2013} used the detected Stokes $V$ signal toward SNR IC 443 to correct the map of linear spectral-line polarization. After doing so, the polarization was well aligned with dust polarization observations using the PolKA polarimeter at the Atacama Pathfinder Experiment (APEX) telescope. The authors note that this effect is proportional to the square of the magnitude of the plane-of-sky magnetic field, which opens up the possibility of detecting the magnetic field via circular polarization observations of Zeeman-insensitive spectral lines.
\subsubsection{The Velocity-gradient technique}
Another method for probing the plane-of-sky magnetic field in star-forming regions is the ``velocity-gradient technique'' (VGT), which estimates the magnetic field orientation using the velocity gradients present in spectral-line observations. In turbulent regions that are not gravitationally bound, \citet{GonzalezCasanovaLazarian2017} and \citet{YuenLazarian2017a} showed that the magnetic field is expected to be oriented perpendicular to velocity gradients. The authors also extended their study to shocked and gravitationally bound regions \citep{YuenLazarian2017b}, and argued that the VGT provides an even better view of the plane-of-sky magnetic field morphology in the interstellar medium (particularly in diffuse regions with a low column density of dust) than both the \textit{Planck} polarization data \citep[e.g.,][]{PlanckXIX} and observations of neutral {\rm H}{\sc i}\xspace ``fibers,'' which also trace the interstellar magnetic field \citep{Clark2014, Clark2015}.
\subsection{Magnetic field tracers (line-of-sight component): the Zeeman effect}
\label{sec:intro_tracers_LOS}
The Zeeman effect is another important tracer of the magnetic field that has been observed primarily with single-dish radio telescopes to infer the line-of-sight magnetic field strength\footnote{Note that under certain conditions one can derive the total magnetic field strength from the Zeeman effect \citep{Heiles1993,Heiles2005b}. This has been seen several times toward galactic OH masers \citep{Hutawarakorn2002, Fish2006, Caswell2014}. However, in typical Zeeman observations of star-forming molecular clouds, the Zeeman signal is only strong enough to yield the strength of the magnetic field along the line of sight.} in molecular clouds \citep{Troland1986, Crutcher1993, Crutcher1999}. When threaded by a magnetic field, atomic hydrogen and molecules with a strong magnetic dipole moment will have the degeneracy in magnetic sub-levels lifted for states with non-zero angular momentum. This will split the radio frequency transitions into a number of linearly and elliptically polarized components separated slightly in frequency. Measuring this Zeeman splitting is the only way to directly measure a component of the magnetic field strength. However, we will not focus more on the thermal Zeeman effect in this review, as there are no reported observations with a (sub)millimeter-wavelength interferometer. For reviews of single-dish observations of magnetic fields in molecular clouds via the Zeeman effect, see \citet{Crutcher2012} and \citet[][in this volume]{CrutcherKemball2019}.
\subsection{Analysis methods}
\subsubsection{Indirect estimates of magnetic field strength}
\label{sec:analysis_field_strength}
The polarization arising from magnetically aligned dust grains and from the G-K effect yields the plane-of-sky magnetic field morphology. However, these observations do not contain information about the magnetic field strength. Since knowing the field strength is critical to our understanding of the importance of magnetic fields, a variety of indirect methods have been developed to estimate the field strength from these types of observations.
The most longstanding of these methods is the Davis-Chandrasekhar-Fermi (DCF) method \citep{Davis1951, Chandrasekhar1953}, which uses the fact that turbulent motions should cause an observable scatter in what would otherwise be a well ordered mean magnetic field. The original, simplest form of the DCF formula yields an estimate of the plane-of-sky magnetic field strength
\begin{equation}
B_\textrm{POS} = \frac{\sqrt{4\pi\rho}}{\delta\phi} \delta V \,\,,
\end{equation}
\noindent
where $\rho$ is the gas density, $\delta V$ is the one-dimensional velocity dispersion, and $\delta\phi$ is the dispersion in polarization position angles. $\delta V$ and $\delta\phi$ are both assumed to be caused by turbulent motions in the region being studied. The derivation of this expression also assumes that there is a mean magnetic field in the region, that the turbulence is incompressible and isotropic, and that the turbulent components of the kinetic and magnetic energies are in equipartition. Note that the DCF method also assumes that the dispersion is ``small,'' i.e., that the turbulent magnetic energy is small compared with the mean-field magnetic energy in the system.
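To give a sense of the numbers involved (the values below are illustrative, not drawn from a particular source): for a dense core with $n(\mathrm{H}_2) = 10^{6}$\,cm$^{-3}$, i.e., $\rho = \mu m_{\mathrm{H}} n \approx 4.7\times10^{-18}$\,g\,cm$^{-3}$ for a mean molecular weight $\mu = 2.8$, a velocity dispersion $\delta V = 0.5$\,km\,s$^{-1}$, and a polarization-angle dispersion $\delta\phi = 10^{\circ} \approx 0.17$\,rad, the formula above yields
\begin{equation}
B_\textrm{POS} \approx \frac{\sqrt{4\pi \times 4.7\times10^{-18}\,\mathrm{g\,cm^{-3}}}}{0.17} \times 5\times10^{4}\,\mathrm{cm\,s^{-1}} \approx 2\,\mathrm{mG}\,,
\end{equation}
in line with the mG-level field strengths inferred for low-mass protostellar cores (see Section \ref{sec:collapse}).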
The DCF method was originally developed to estimate magnetic field strengths in the diffuse ISM, where perturbations in the magnetic field can be assumed to be caused by turbulent motions in the magnetized medium. Comparisons with MHD simulations of giant molecular clouds (GMCs) by \citet{Ostriker2001} found that at these scales, the original DCF method typically overestimates the magnetic field strength by a factor of $\sim$\,2 because line-of-sight field tangling is not taken into account. Further simulations by \citet{Heitsch2001} also found that the true field strength is overestimated unless finite telescope resolution and self-gravity within the GMC are accounted for.
At the smaller scales of the individual star-forming clumps and cores that are probed by interferometers, gravity is almost always the dominant dynamical factor, and thus the structure of the background field must be removed before calculating the dispersion in polarization position angles. In two ideal cases with very clean examples of hourglass-shaped magnetic fields, \citet{Girart2006} and \citet{Qiu2014} removed the background hourglass field by subtracting a set of parabolic field lines from the data, after which they calculated the magnetic field dispersion relative to the background structure. In complicated cases, however, a more general method is necessary to remove arbitrary background field structure. This has been achieved by employing a second-order structure function of the polarization angle that allows the separation of the turbulent and mean magnetic-field components, with the underlying assumption that the mean-field component has an intrinsically larger spatial scale than the turbulent component \citep{FalcetaGoncalves2008, Hildebrand2009, Houde2009, Houde2011, Chitsazzadeh2012}. Further refinements of these studies have also taken into account interferometric filtering effects \citep{Houde2016}.
\citet{FalcetaGoncalves2008} used the structure-function approach to test the validity of the DCF technique using their MHD simulations. In addition to analyzing the effect of different telescope resolutions (their conclusion: lower-resolution observations tend to overestimate the field strength), they also derived a generalized form of the DCF equation, which allows for the separation of the turbulent and mean magnetic field components, and yielded magnetic-field estimates that were accurate to within $\sim$\,20\%. More recently, \citet{Juarez2017} performed SMA observations of a magnetized high-mass star-forming region and used the structure-function method to compare the data with synthetic observations of gravity-dominated MHD simulations. They found the magnetic field strength estimates from both the observations and simulations to be in good agreement.
\citet{Koch2012} proposed another method of indirectly measuring the magnetic field strength that is also based on an ideal MHD framework. They argue that a position-dependent magnetic field strength in a polarization map can be calculated using the angle between the magnetic field and the gradient of the total-intensity emission, and a
second angle between the local gravity direction and the gradient of the total-intensity emission. This is based on the assumption that the intensity gradient is a proxy for the direction of motion in the MHD force equation. For example, in a centrally concentrated, gravitationally bound core, the infalling motion will be along the intensity gradient (across the iso-intensity contours). This method requires observations of the magnetic-field morphology (i.e., via observations of thermal dust polarization or the G-K effect) in order to produce a spatial distribution of the magnetic field strength. However, unlike the DCF method, this method does not require spectral-line observations in the analysis. Estimations of magnetic field strengths using this method have been found to be consistent with previous estimations from the DCF method (see, e.g., Section \ref{sec:high_mass_cores}).
\subsubsection{The mass-to-flux ratio}
\label{sec:m2f}
Merely measuring the magnetic field strength does not allow us to determine immediately the importance of the magnetic field in a given star-forming region. Therefore, in magnetic field studies it is common to compare the magnetic energy density with that of other dynamical quantities such as gravity, turbulence, and outflow feedback (see, e.g., \citealt{Hull2017b} for a comparison of magnetic energy with gravitational and outflow energy densities in the intermediate-mass Class 0 protostellar core Serpens SMM1).
Historically, there has been a strong focus on the comparison of gravity (which causes inward motion) and the magnetic field (whose tension provides resistance against infall across the field lines). The common quantity quoted as a metric for the importance of the magnetic field with respect to gravity is the ``mass-to-flux ratio'' $M/\Phi$, where $M$ is the mass of the object of interest and $\Phi$ is the flux of the magnetic field threading the object. As discussed in \citet[][and references therein]{Crutcher2004}, the maximum mass that can be supported by a given magnetic flux is given by $M_\textrm{crit} = \Phi / (2 \pi \sqrt{G})$. However, it is more useful to discuss the dynamical status of an object by measuring the ratio $\lambda$ of the observed mass-to-flux ratio to the critical mass-to-flux ratio:
\begin{equation}
\lambda = \frac{\left(M/\Phi \right)_\textrm{obs}}{\left(M/\Phi \right)_\textrm{crit}}\,\,.
\end{equation}
\noindent
Clouds that are supported by the magnetic field and are not collapsing are deemed ``subcritical'' ($\lambda$\,$<$\,1), whereas those where gravity has overcome the resistance of the magnetic field are referred to as ``supercritical'' ($\lambda$\,$>$\,1). \citet{Crutcher2012} analyzed data across a wide range of spatial scales that trace more than five orders of magnitude in densities, and found that when the hydrogen column density $N_H > 10^{21}$\,cm$^{-2}$, all star-forming objects are supercritical (i.e., are collapsing). This value is less than the typical column densities of low-mass ($N_H \sim 10^{22} - 10^{23}$\,cm$^{-2}$; e.g., \citealt{Girart2006, Hull2017b}) and high-mass ($N_H \sim 10^{23} - 10^{24}$\,cm$^{-2}$; e.g., \citealt{Girart2009}) protostellar cores, and thus the types of objects we review in this article are all supercritical. This is reasonable, as most of them have already formed stars, as revealed by the presence of bipolar outflows. Furthermore, due to the sensitivity limits of CARMA and the SMA, most of the sources in previous interferometric surveys of polarization \citep{Hull2014, Zhang2014} were chosen based on their strong millimeter flux, which correlates with the presence of embedded star formation.
\subsection{Core-mass estimates from dust emission}
\label{sec:core_mass}
In order to convert the observed millimeter-wave flux density $S_{\nu}$ contained within a given spatial area into a corresponding gas mass $M_{\mathrm{gas}}$, we can use the following relation:
\begin{equation}
M_{\mathrm{gas}} = \frac{S_{\nu}d^{2}}{\kappa_{\nu}B_{\nu}\left(T_d\right)}\,\,,
\end{equation}
\noindent
where $d$ is the distance to the source, $\kappa_{\nu}$ is the opacity of the dust \citep{Ossenkopf1994}, and $B_{\nu}\left(T_d \right)$ is the Planck function at the frequency of the observations. $T_d$ is the temperature of the dust, which is usually $\sim$\,20--50\,K in a low-mass protostellar core \citep{Girart2006}, and as high as (or greater than) 100\,K in a high-mass core \citep{Girart2009}. Once the dust mass is calculated, a gas-to-dust mass ratio of 100 is usually assumed in order to calculate the total (gas\,$+$\,dust) mass of the protostellar core. Note that such an estimate does not include the mass of the central star(s), which must be obtained by other means, e.g., via direct detection of a Keplerian disk around the source \citep[e.g.,][]{Tobin2012, Ohashi2014} or via determination of the source's bolometric luminosity.
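As a brief illustration (with representative values chosen for convenience rather than taken from a specific study): for $S_{\nu} = 1$\,Jy at 230\,GHz, a distance $d = 300$\,pc, and $T_d = 30$\,K, the Planck function is $B_{\nu}(T_d) \approx 4\times10^{-13}$\,erg\,s$^{-1}$\,cm$^{-2}$\,Hz$^{-1}$\,sr$^{-1}$; adopting $\kappa_{\nu} \approx 0.009$\,cm$^{2}$\,g$^{-1}$ (i.e., a dust opacity of $\sim$\,0.9\,cm$^{2}$ per gram of dust divided by the gas-to-dust mass ratio of 100) gives $M_{\mathrm{gas}} \approx 1.2$\,M$_\odot$, a typical mass for a low-mass protostellar core.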
\subsection{Motivating questions in low- and high-mass star formation}
The primary goal of observing the magnetic field at any spatial scale is to determine the importance (or lack thereof) of the magnetic field in the star-formation process. The steady progress toward this goal over the last two decades began with single-dish submillimeter polarization surveys probing $\gtrsim$\,20\arcsec scales using the Viper 2\,m telescope at the South Pole (SPARO polarimeter, e.g., \citealt{Dotson1998, Renbarger2004, HBLi2006}), the JCMT (850\,$\mu$m SCUBA polarimeter, e.g., \citealt{Matthews2009}), and the CSO (350\,$\mu$m SHARP [\citealt{Li2008}] and Hertz [\citealt{Dotson2010}] polarimeters). A resurgence of single-dish studies has been brought about by results from the PolKa polarimeter at the APEX telescope \citep{Siringo2004, Siringo2012, Hezareh2013, Wiesemeyer2014, Alves2014}, the BISTRO survey with the upgraded POL-2 polarimeter at the JCMT \citep[e.g.,][]{WardThompson2017, Pattle2017, JKwon2018, Pattle2018, Soam2018}, results from the polarimeter on the Balloon-borne Large Aperture Submillimeter Telescope \citep[BLAST;][]{Roy2011, Gandilo2016, Fissel2016}, observations from the HAWC+ polarimeter \citep{Vaillancourt2007} on the Stratospheric Observatory for Infrared Astronomy \citep[SOFIA; e.g.,][]{Chuss2018, Gordon2018, LopezRodriguez2018}, and the galactic polarization maps from the \textit{Planck} satellite \citep[e.g.,][]{PlanckXIX,PlanckXX,PlanckXXI,PlanckXXXII,PlanckXXXIII,PlanckXXXV}. These new studies will pave the way for future work with even more sensitive instruments such as the next-generation BLAST instrument (BLAST-TNG; \citealt{Galitzki2014}), the NIKA2 polarimeter at the IRAM 30\,m telescope \citep{Ritacco2017}, and the TolTEC polarimeter at the Large Millimeter Telescope (LMT). For reviews on multi-scale/multi-wavelength studies and single-dish observations of magnetic fields, respectively, see \citet{HBLiLaw2019} and \citet{PattleFissel2019}, both in this volume.
There are a number of questions applicable to both low- and high-mass star formation that have been investigated using (sub)millimeter polarimetric observations. These include the (direct or indirect) measurements of the absolute magnetic field strength in star-forming material at different spatial scales, as well as the estimation of the dynamical importance of the magnetic field with respect to gravity (i.e., the mass-to-flux ratio; see Section \ref{sec:m2f}). Observations of magnetic fields across multiple spatial scales toward both low-mass \citep{HBLi2009, Hull2014} and high-mass \citep{Zhang2014, HBLi2015} protostellar sources have also been used to constrain the dynamical importance of the magnetic field based on the morphological consistency (or lack thereof) of the field as a function of scale. Generally, however, these observations have only compared two or three of the relevant spatial scales (i.e., 100\,pc cloud scales, 1\,pc clump scales, 0.1\,pc dense-core scales, 1000\,au protostellar envelope scales, and 100\,au disk scales). A full characterization of the magnetic field from galactic (\textit{Planck}) scales down to scales approaching the 100\,au size of protoplanetary disks has yet to be accomplished, but will be possible in the near future when upcoming polarization surveys of the full populations of protostars in entire molecular clouds are completed.
In the low-mass regime, single-dish observations probe the magnetic field in star-forming clouds at large scales, revealing the magnetic field from the scale of entire molecular clouds (\textit{Planck}, BLAST) to the canonical, $\sim$\,0.1\,pc dense core, where one or a few protostars will form (JCMT, CSO). One of the main benefits of single-dish studies is their ability to recover a larger range of spatial scales than interferometers, thus enabling an accurate characterization of the magnetic field in ambient cloud material. However, higher resolution is needed in order to probe the environments of individual stars; this is where results from the BIMA, CARMA, SMA, and ALMA interferometers dominate the discussion, allowing us to characterize the magnetic field from scales of several $\times$ 1000\,au down to the scales of a few $\times$ 10\,au accessible to ALMA.\footnote{At the high resolutions achievable by ALMA, several studies have revealed polarization in well resolved maps of protoplanetary disks; however, it appears that in many cases the polarized emission is from dust scattering and not from magnetically aligned dust grains (see Section \ref{sec:scattering}).} The main questions that have been tackled over the last two decades using data from these interferometers include:
\textbf{(1)} What is the importance of the magnetic field in regulating the collapse of star-forming cores? (Section \ref{sec:collapse});
\textbf{(2)} What is the relationship of bipolar outflows with the magnetic field? (Section \ref{sec:Bfield_outflow}); and
\textbf{(3)} What is the role of the magnetic field in the launching and collimation of bipolar outflows in low-mass protostars? (Section \ref{sec:outflows})
Moving to the high-mass regime: high-mass stars ($M_* >$ 8\,M$_\odot$) form predominantly in clustered environments where a population of stars are born with a range of stellar masses \citep{LadaLada2003}. These high-mass stellar populations form in dense cores that are embedded in parsec-scale, massive molecular clumps.
This review will focus on several questions raised in recent studies of these high-mass sources, including
\textbf{(4)} What is the dynamical role of magnetic fields in dense cores? (Section \ref{sec:high_mass_cores});
\textbf{(5)} What is the role of the magnetic field in the formation of disks and the launching of protostellar outflows in high-mass protostars? (Section \ref{sec:Bfield_outflow_high});
\textbf{(6)} Do magnetic fields play a significant role in the fragmentation of molecular clumps and the formation of dense cores? (Section \ref{sec:fragmentation_high}); and
\textbf{(7)} Does high-mass star formation proceed in virial equilibrium? (Section \ref{sec:virial_high})
The dense clustering nature of high-mass star formation implies considerable fragmentation within these massive molecular clumps, which distinguishes high-mass star formation from the more isolated process of low-mass star formation. High-mass regions also tend to have much more intense radiation environments, hosting {\rm H}{\sc ii}\xspace regions whose radiative feedback and ionization can impact the ambient magnetic field. Another clear difference arises from the fact that the best-studied low-mass stars are forming at distances $\sim$\,10\,$\times$ closer than typical high-mass star forming regions, allowing us to study the formation of individual low-mass protostellar systems and their associated outflows, jets, and disks in much greater detail than is possible in high-mass systems. Ultimately, however, studies of both low- and high-mass star formation use the same observing techniques and confront many of the same questions. In this review we focus on those questions that have been of the most interest to both communities in recent years.
\section{Magnetic fields in low-mass star formation}
\label{sec:low_mass}
The revolution of high-resolution, interferometric observations of polarization began with BIMA and the Owens Valley Radio Observatory (OVRO). These two sets of antennas were later combined into CARMA \citep{Bock2006}. Early observations with BIMA and
OVRO\footnote{The two polarization results from OVRO are observations toward NGC~1333-IRAS~4A and IRAS~16293 \citep{Akeson1996, Akeson1997}; however, OVRO was known to have issues with polarization calibration, which is the most likely explanation for the inconsistency of those results with later observations of the same sources \citep{Girart1999, Girart2006, Girart2008, Rao2009}.}
covered a wide range of topics, including polarization observations of dust, SiO, CO, and SiO
masers toward iconic regions in Orion \citep{Rao1998, Plambeck2003, Girart2004, Matthews2005} as well as observations of individual protostars \citep{Girart1999, Kwon2006, Cortes2006b}. These first observations, combined with the extensive follow-up from CARMA, the SMA, and ALMA, form the body of work motivating this review. Below we put this work in the context of a narrative addressing several of the major open questions in the field of magnetized low-mass\footnote{We do not treat the topic of intermediate-mass star formation separately in this review. As many of the characteristics of the early stages of intermediate- and low-mass star formation are thought to be similar \citep{Beltran2015}, we include references to several intermediate-mass objects and regions in this section. Many of these intermediate-mass sources are in Orion \citep[e.g.,][]{Takahashi2006, Takahashi2018, Hull2014}, but there are also objects in other regions, such as Serpens SMM1 in the Serpens Main molecular cloud \citep{vanKempen2016, Hull2017b}.} star formation.
\subsection{The role of the magnetic field in protostellar collapse}
\label{sec:collapse}
\begin{wrapfigure}[19]{r}{3.0in}
\vspace{-2em}
\includegraphics[scale=0.85, clip, trim=0cm 0cm 0cm 0cm]{hourglass_model.pdf}
\caption{
\small
A model from \citet{Allen2003} showing an hourglass-shaped magnetic field configuration in a collapsing, magnetized star-forming core. Reproduced with permission from the American Astronomical Society (AAS).
}
\label{fig:hourglass_model}
\end{wrapfigure}
In models of magnetically regulated protostellar collapse \citep[e.g.,][]{Mouschovias1976a, Mouschovias1976b, Mouschovias1991, Mouschovias1999}, a strong, well ordered magnetic field provides outward pressure support of the infalling material. This is because the field is coupled (or ``frozen'') to the small fraction of charged particles in the weakly ionized gas. However, in non-turbulent models, the non-ideal MHD effect of ambipolar diffusion \citep{Mestel1956} enables the neutral material (which comprises the bulk of the star-forming core) to slip slowly past the magnetic field lines, thus removing magnetic flux and eventually allowing collapse to proceed once the mass-to-flux ratio exceeds the critical value.
One of the predicted signposts of highly magnetized star formation is that at high enough densities ($\gtrsim 10^4$\,cm$^{-3}$), the collapse of strongly magnetized gas should pinch the magnetic field into an ``hourglass'' shape with a symmetry axis perpendicular to the major axis of a flattened, $\sim$\,1000\,au ``pseudodisk'' \citep{Galli1993a, Galli1993b}. The hourglass is expected to persist down to scales $<$\,1000\,au \citep{Fiedler1993, Galli1993b, Allen2003, Goncalves2008, Frau2011, Kataoka2012, Mocz2017}; see Figure \ref{fig:hourglass_model}. And indeed, the predicted hourglass has now been seen in a number of interferometric observations of low-mass protostellar cores \citep[][see Figure \ref{fig:hourglass_obs}]{Girart1999, Girart2006, Girart2008, Rao2009, Stephens2013, Hull2014, Kwon2018, Sadavoy2018a, Maury2018}, suggesting that some protostellar cores do form in strongly magnetized regions. For a discussion of constraining strong-field star formation via observations of hourglass-shaped magnetic fields, see Section \ref{sec:stat}.
A second signpost of strong-field star formation is the consistency of the magnetic field orientation across multiple spatial scales. If the field is strong relative to other dynamical effects, observations at small scales should reveal a magnetic field whose original orientation is preserved from the parsec scale of the cloud in which the source is embedded. To date, multi-scale studies of the magnetic field in low- and high-mass star-forming regions have compared two or three scales: i.e., $\sim$\,kpc galactic scales to $\sim$\,0.1\,pc dense-core scales in \citet{Stephens2011}; 100\,pc cloud scales to 0.1\,pc dense-core scales in \citet{HBLi2009, HBLi2015}; 1\,pc clump scales to 0.1\,pc dense-core scales in \citet{Zhang2014}, continuing down to $\sim$\,0.01\,pc scales in \citet{Girart2013} and \citet{Ching2017}; 0.1\,pc dense core scales to 1000\,au protostellar envelope scales in \citet{Hull2014} and \citet{Davidson2014}; and 0.1\,pc to 1000\,au to 100\,au scales in \citet{Hull2017a, Hull2017b}.
\begin{figure}
\includegraphics[width=1.0\textwidth, clip, trim=0cm 8.5cm 0cm -1cm]{hourglass_obs.pdf}
\caption{
\small
Observations of hourglass-shaped magnetic field configurations (line segments). The rightmost, overlapping panels are NGC 1333-IRAS 4A, observed by the SMA in \citet[][top]{Girart2006}, and again by CARMA in \citet[][bottom/background]{Hull2014}. The SMA and CARMA observations are zoomed to the same spatial scale. The left-hand panel is the isolated Class 0 protostar L1157 (adapted from \citealt{Hull2014}; see also \citealt{Stephens2013}). Figures reproduced with permission from \textit{Science} magazine and the AAS.
}
\label{fig:hourglass_obs}
\end{figure}
In the low-mass regime, \citet{HBLi2009} found striking consistency between the magnetic field orientation in the Orion molecular cloud derived from background starlight polarization at $\sim$\,100\,pc scales versus polarized thermal dust emission at $\sim$\,0.1\,pc scales. \citet{Hull2014} took this one step further, finding consistency in the field orientation in just a few of the low-mass protostars in their sample from scales of $\sim$\,0.1\,pc to $\sim$\,1000\,au. The sample of sources from \citet{Hull2014} that maintained consistency in the magnetic field orientation down to scales of $\sim$\,1000\,au tended to be those sources with a higher polarization fraction, which implies that the magnetic fields in those sources are more ordered, and thus may be more dynamically important.
Among those sources with consistent large-to-small-scale magnetic fields are several with known hourglass morphologies, including OMC3 MMS6 \citep{Hull2014}, NGC~1333-IRAS~4A \citep{Girart2006}, L1157 \citep{Stephens2013}, and L1448 IRS 2 \citep{Kwon2018}. The magnetic field strengths have been estimated toward the latter three objects, and are all relatively high, on the order of $\gtrsim$\,1\,mG, which is similar to values obtained in high-mass regions (see Section \ref{sec:high_mass_cores}). However, while the values are high (5\,mG in IRAS~4A [\citealt{Girart2006}]; 1.3--3.5\,mG in L1157 [\citealt{Stephens2013}]; and 750\,$\mu$G in L1448 IRS 2 [\citealt{Kwon2018}]), the mass-to-flux ratios calculated for the two most magnetized sources (IRAS~4A and L1157) are both slightly greater than the critical value (1.7 and 1.1 for IRAS~4A and L1157, respectively), which is reasonable, considering that the objects have already collapsed to form protostars.
\begin{figure}[hbt!]
\includegraphics[width=1.0\textwidth, clip, trim=5.5cm 0cm 5.0cm -1cm]{ser8_multiscale.pdf}
\caption{
\small
\textit{Top:} Multi-scale observations of the magnetic field (line segments) toward the low-mass Class 0 protostar Ser-emb~8. Grayscale is total-intensity (Stokes $I$) thermal dust emission. Observations are from the JCMT at 850\,$\mu$m (top-left; \citealt{Matthews2009}), CARMA at 1.3\,mm (bottom-left; \citealt{Hull2014}), and ALMA at 870\,$\mu$m (right). The red and blue arrows indicate the red- and blueshifted lobes of the bipolar outflow from Ser-emb~8 traced in CO\,($J$\,$=$\,2\,$\rightarrow$\,1). The data show the inconsistency of the magnetic field across spatial scales, and are consistent with moving-mesh AREPO MHD simulations (\textit{bottom}) whose initial conditions include a magnetic field that is weak compared with turbulence at large (5\,pc) scales. Adapted from \citet{Hull2017a}. Reproduced with permission from the AAS.
}
\label{fig:ser8_multiscale}
\end{figure}
Follow-up studies with ALMA of individual sources from previous surveys (e.g., \citealt{Hull2014}) have suggested that consistency in the magnetic field orientation across spatial scales may be the exception rather than the rule at scales smaller than $\sim$\,0.1\,pc. Indeed, detailed multi-scale follow-up studies by \citet{Hull2017a, Hull2017b} of Ser-emb 8 and Serpens SMM1, two Class 0 protostellar sources in the Serpens Main molecular cloud, found significant deviations in the magnetic field morphology across spatial scales ranging from $\sim$\,0.1\,pc down to $\sim$\,100\,au.
In the case of Serpens SMM1, it appears that the deviations at the $\sim$\,100\,au scales probed by ALMA are due to shaping of the magnetic field by the bipolar outflow. This is in spite of the fact that the magnetic field strength is estimated to be quite high, $\sim$\,5.7\,mG \citep{Hull2017b}. The fact that dynamics are shaping the magnetic field morphology in this source is not surprising, however, as the magnetic energy is comparable to the kinetic energy in the outflow. This is in contrast to sources like NGC 1333-IRAS 4A, where estimates by \citet{Girart2006} show that the magnetic energy is clearly dominant over turbulent motions in the source.
In the case of Ser-emb 8, the outflow does not have a clear effect on the magnetic field, and yet the field morphology is still not consistent across scales. A comparison of the Ser-emb 8 data with moving-mesh AREPO gravoturbulent MHD simulations \citep{Mocz2017} suggests that Ser-emb 8 may have formed in an environment where dynamical effects such as turbulence and infall dominate the magnetic field, in contrast to the strong-field examples described above; see Figure \ref{fig:ser8_multiscale}. These results are qualitatively consistent with other simulations studying star formation in weakly magnetized turbulent environments \citep[e.g.,][]{Seifried2015, Offner2017}, suggesting that the importance of the magnetic field at the smallest scales of star formation lies along a continuum, from the bright, highly polarized examples of strong-field star formation to the more complex examples of star formation in regions dominated by dynamical processes. Large, high-resolution polarization surveys by ALMA will reveal the distribution of low-mass protostars across this continuum of magnetic importance, and will allow us to compare with results from high-mass regions such as those by \citet{Zhang2014}, who found that the magnetic field is dynamically important at the larger spatial scales probed by their observations (see Section \ref{sec:fragmentation_high}).
\subsection{Misalignment of outflows and magnetic fields}
\label{sec:Bfield_outflow}
For more than a decade, one of the primary ways that astronomers have tested the importance of the magnetic field in star-forming regions has been via comparisons of the orientations of bipolar outflows/jets and the ambient magnetic field. This is because if a protostellar core is very strongly magnetized, the magnetic field has the ability to align all of the relevant axes: the axis of the (well ordered and poloidal) magnetic field, the symmetry axis of the pseudodisk, the rotational axis of the protostellar disk, and the axis of the outflow and/or jet emanating from the central source. This happens because of the ``magnetic braking'' phenomenon \citep{Allen2003}, where a strong magnetic field removes angular momentum from the central source and causes the angular momentum (and thus disk/outflow) and magnetic axes to align \citep[e.g.,][]{Machida2006}.
This same magnetic braking phenomenon can potentially lead to what has come to be known as the ``magnetic braking catastrophe,'' where a strong magnetic field aligned with the core rotation axis can suppress the formation of a Keplerian disk in MHD simulations \citep[e.g.,][]{Galli2006, Mellon2008, Li2011}. This may lead to the formation of sources like L1157 \citep{Stephens2013} and B335 \citep{Maury2018}, which have outflows aligned with the magnetic field, and which have as-of-yet undetectably small disks ($\lesssim$\,10\,au). However, since it is known that large Keplerian disks form around many protostellar sources, a variety of methods have been proposed to overcome this problem, including the introduction of an initial misalignment between the rotation axis and the magnetic field, which enhances disk formation \citep[e.g.,][]{Hennebelle2009, Joos2012, Krumholz2013, Li2013}, and the consideration of non-ideal MHD effects such as ambipolar diffusion \citep{Dapp2012, Masson2016, Tsukamoto2018}, Ohmic dissipation \citep{Dapp2012, Tomida2015, Tsukamoto2018}, the Hall effect \citep{Tsukamoto2015a, Tsukamoto2015b, Tsukamoto2017, Wurster2018}, and magnetic reconnection \citep{SantosLima2012, Li2014}.
Observationally, if very strong magnetic fields were the norm, then the rotational axes of protostellar disks, and the jets and outflows that emanate from them, should all be parallel with the ambient magnetic field. A study of seven low-mass protostellar cores by \citet{Chapman2013} found a correlation between outflows and magnetic fields at $\sim$\,0.1\,pc scales. However, the majority of the studies of this type have come to the opposite conclusion. For example, \citet{Menard2004} found that the optical jets from classical T Tauri stars in the Taurus-Auriga molecular cloud are randomly oriented with respect to the parsec-scale magnetic field observed via background-starlight polarization observations. \citet{Targon2011} obtained a similar result for 28 regions spread over the Galaxy, finding no strong correlations between protostellar jets and the ambient magnetic field. On the $\sim$\,0.1\,pc scales of high-mass star-forming cores, \citet{Curran2007} and \citet{Zhang2014} used thermal dust polarization observations by the JCMT and the SMA, respectively, to determine that outflows and inferred magnetic fields are randomly aligned. Finally, \citet{Hull2013, Hull2014} used the 1.3\,mm polarization system at CARMA \citep{Hull2015b} to observe dust polarization toward a sample of low- and high-mass sources, and found that the outflows and $\sim$\,1000\,au-scale magnetic fields in the low-mass sources were randomly aligned. In Figure \ref{fig:misalignment}, we compile all of the outflow-versus-magnetic-field angles derivable to date from interferometric observations of low-mass protostellar cores \citep{Girart1999,Girart2006,Girart2008,Rao2009,Hull2013,Stephens2013,Hull2014,Hull2017a,Hull2017b,Cox2018,Galametz2018,Kwon2018,Maury2018,Sadavoy2018a,Harris2018}, and come to the same conclusion: while a few sources have well aligned outflows and magnetic fields (e.g., those on the very bottom-left of the plot in Figure \ref{fig:misalignment} that are climbing the 0--20$^\circ$ curve, several of which have hourglass-shaped field morphologies; see Section \ref{sec:stat}), overall protostellar outflows and magnetic fields measured at 1000\,au-scales are randomly aligned.
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=0.85\textwidth, clip, trim=0cm 0cm 0cm 0cm]{ks_plot.pdf}
\caption{
\small
The thick, stepped curve shows the cumulative distribution function (CDF) of the (projected) angles between the bipolar outflows and the mean magnetic-field orientations in the full sample of low-mass protostellar cores observed to date in full polarization with BIMA, the SMA, CARMA, and ALMA. The dotted curves are the CDFs from Monte Carlo simulations where the magnetic fields and outflows are oriented within 20$^\circ$, 45$^\circ$, and 70--90$^\circ$ of one another, respectively. The straight line is the CDF for random orientation. The plot shows that outflows appear to be randomly aligned with magnetic fields in the sample of low-mass sources whose magnetic fields have been observed with $\sim$\,500--1000\,au resolution.
}
\label{fig:misalignment}
\end{center}
\end{figure}
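The Monte Carlo curves in Figure \ref{fig:misalignment} follow from simple geometry: draw pairs of 3D axes whose true mutual angle is restricted to a given range, project both onto the plane of the sky, and accumulate the distribution of projected angles. The Python sketch below is a minimal version of that procedure, written here for illustration; the 0--20$^\circ$, 0--45$^\circ$, and 70--90$^\circ$ ranges follow the figure, but all implementation details are our assumptions rather than the code used in the original studies.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def projected_angles(theta_min, theta_max, n=100_000):
    """Plane-of-sky angles between pairs of random 3D axes whose true
    mutual angle lies in [theta_min, theta_max] degrees."""
    # First axis: isotropic on the unit sphere.
    v1 = rng.normal(size=(n, 3))
    v1 /= np.linalg.norm(v1, axis=1, keepdims=True)
    # True misalignment, uniform in cos(theta) over the allowed band.
    cos_t = rng.uniform(np.cos(np.radians(theta_max)),
                        np.cos(np.radians(theta_min)), n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    # Orthonormal basis (e1, e2) perpendicular to v1.
    a = np.where(np.abs(v1[:, :1]) < 0.9, [[1.0, 0, 0]], [[0, 1.0, 0]])
    e1 = np.cross(v1, a)
    e1 /= np.linalg.norm(e1, axis=1, keepdims=True)
    e2 = np.cross(v1, e1)
    v2 = (cos_t[:, None] * v1 +
          sin_t[:, None] * (np.cos(phi)[:, None] * e1 +
                            np.sin(phi)[:, None] * e2))
    # Project onto the (x, y) sky plane; fold angle into [0, 90] deg.
    d = np.degrees(np.arctan2(v1[:, 1], v1[:, 0]) -
                   np.arctan2(v2[:, 1], v2[:, 0])) % 180.0
    return np.minimum(d, 180.0 - d)

# CDF for tightly aligned pairs (the 0-20 deg curve):
angles = np.sort(projected_angles(0.0, 20.0))
cdf = np.arange(angles.size) / angles.size
\end{verbatim}

Comparing the observed stepped CDF against such simulated curves (e.g., with a Kolmogorov-Smirnov test) is what allows the aligned, perpendicular, and random scenarios to be distinguished.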
The finding from several studies that magnetic fields and outflows are randomly oriented suggests that most protostars form out of material with a magnetic field that is too weak to maintain a consistent orientation all the way down to the 0.1--10\,au scales where outflows are launched. Rather, it seems likely that dynamical effects such as turbulence and infall are more important than the magnetic field when it comes to dictating the ultimate angular-momentum direction at the small ($<$\,1000\,au) scales relevant for the formation of protostellar disks and outflows/jets. This claim is corroborated by the analysis by \citet{JLee2017} of synthetic observations (produced using the \texttt{Dustpol} radiative transfer software; \citealt{Padovani2012}) of two MHD simulations with different initial mass-to-flux ratios \citep{Offner2017}. They concluded that while the protostar that formed in the strong-field case exhibited a correlation between the outflow and the magnetic field, the weaker-field case showed a random orientation. This is most likely because asymmetric accretion from the turbulent envelope stochastically reoriented the disk/outflow during the earliest (Class 0) formation stage, ultimately decoupling the source from the natal magnetic field (see, e.g., simulations by \citealt{CYChen2018}).
Two non-magnetic studies focused on the alignment of protostellar outflows with respect to the natal filamentary structure in which the sources formed, and on the relative alignment of the outflows from wide ($\sim$\,1000\,au) and tight ($<$\,250\,au) binary/multiple systems. Both studies used data from the MASSES survey at the SMA \citep{Lee2015, Stephens2018}. Regarding outflows versus filaments, \citet{Stephens2017a} studied the relative orientation of 57 protostellar outflows in the Perseus molecular cloud (derived from the SMA MASSES data) with the local filamentary structure (derived from \textit{Herschel} observations), and found that the orientations are consistent with a random distribution. Their results held regardless of protostellar age, multiplicity, or the opacity of the dense core, suggesting that the angular momentum of the protostellar cores and outflow-launching disks are independent of the large-scale structure out of which the protostars are forming. Regarding the orientation of outflows from multiple systems, \citet{Lee2016} used SMA MASSES observations to determine that the outflows from proto-multiple systems in the Perseus molecular cloud are randomly aligned. \citet{Offner2016} followed up with MHD simulations that are consistent with the SMA observations, arguing that multiple systems with randomly oriented angular momenta are likely to have formed from turbulent fragmentation.
The turbulent fragmentation scenario is consistent with recent ALMA observations of multiple systems with misaligned protoplanetary disks \citep{JensenAkeson2014, JELee2017}. However, observations by \citet{Tobin2016b} have found evidence for an alternative theory of multiple formation: disk fragmentation \citep{Kratter2010}. The VLA Nascent Disk and Multiplicity Survey of Perseus Protostars (VANDAM) by \citet{Tobin2016a} found a bimodality in the multiplicity of systems; they argued that the wide multiples are the result of turbulent fragmentation, whereas the tight multiples are the result of disk fragmentation. This latter conclusion was strengthened in a follow-up study by \citet{Tobin2018}, who observed a sample of tight binaries from the VANDAM survey with ALMA in both continuum and spectral lines.
Recent work by \citet{Galametz2018} focused on the role of the magnetic field in the formation of multiple systems using an SMA survey of magnetic fields in protostellar cores. They found tantalizing evidence that a large misalignment between the outflow and the magnetic field is found preferentially in protostars with higher rotational energies. This $\sim$\,90$^\circ$ misalignment observed in some objects could thus be due to the winding of the magnetic field lines in the equatorial plane by strong rotation. Furthermore, they found hints that many of those same sources (i.e., those with approximately perpendicular outflows and magnetic fields) are wide multiple sources and/or have large disks, whereas the sources with well aligned magnetic fields and outflows tend to be single objects with small (or undetected) disks at the $\sim$\,500\,--\,1500\,au resolution of their observations. The trend of large disks being associated with sources that have misaligned magnetic fields and outflows was also tentatively seen in an analysis of Karl G. Jansky Very Large Array (VLA) observations of Class 0 and I protostars by \citet{SeguraCox2018}. These results suggest that the morphology and dynamical importance of magnetic fields at the scale of the protostellar envelope may significantly impact the outcome of protostellar collapse as well as the formation of disks and multiple systems. Large ALMA surveys in polarization toward binary/multiple systems will shed light on the impact of the magnetic field on multiple formation via both turbulent and disk fragmentation.
\subsection{The importance of the magnetic field in launching jets and outflows}
\label{sec:outflows}
Magnetic fields play a critical role in launching and collimating both bipolar outflows and jets from young forming stars \citep{Frank2014}. Several theories exist to explain how outflows and jets are generated, including the ``disk wind'' theory where an outflow is launched from the magnetized, rotating surface of a disk \citep{Konigl2000}, and the ``X-wind'' theory where jets are launched close to the central protostar itself \citep{Shu2000}. Both of these theories require a magnetic field to function, and that magnetic field is expected to have both poloidal (i.e., along the outflow) and toroidal (perpendicular to the outflow) components due to the combination of infall, outflow, and rotational motions present near a forming star. Characterizing the magnetic field in outflows and jets can thus allow us to investigate the origin of outflows in the context of these different theories.
Historically, observations of dust polarization have been used mainly to study the magnetic field morphology in the optically thin dense cores of dust and gas surrounding embedded protostars. With the sensitivity and resolution of ALMA, it is now possible to detect polarized dust emission along the edges of outflow cavities \citep{Hull2017b, Maury2018}. However, there is not enough dust in the cavity itself (where the outflow has evacuated most of the material) to allow for a detection of the polarized emission. Therefore, in order to probe the magnetic field in the outflowing material, one must turn to observations of spectral-line polarization (see Section \ref{sec:intro_tracers_POS:GK}).
\begin{figure*}[hbt!]
\begin{center}
\includegraphics[width=1.0\textwidth, clip, trim=5cm 0cm 3cm -1cm]{low_mass_SL_pol.pdf}
\caption{
\small
Observations of spectral-line polarization from the G-K effect toward bipolar outflows from low-mass protostellar sources. \textit{Top left:} SMA observations of polarization of the CO\,($J$\,$=$\,3\,$\rightarrow$\,2) line (red and blue line segments) toward NGC 1333-IRAS 4A, adapted from \citet{Ching2016}. The grayscale shows the CO\,($J$\,$=$\,3\,$\rightarrow$\,2) polarized intensity in units of Jy\,beam$^{-1}$\,km\,s$^{-1}$. The authors also compare with CO\,($J$\,$=$\,2\,$\rightarrow$\,1) BIMA polarization results from \citet{Girart1999}, plotted in green. \textit{Top right:} comparison of the SMA CO polarization data with the 870\,$\mu$m dust polarization map (the black line segments have been rotated by 90$^\circ$ to reflect the inferred magnetic field direction).
\textit{Bottom:} SMA observations of polarization in the SiO\,($J$\,$=$\,8\,$\rightarrow$\,7) line, adapted from \citet{CFLee2018b}. The colors of the line segments indicate different levels of significance in the polarized intensity; the lengths of the segments are scaled to the polarization fraction. The top panel was reproduced with permission from the AAS. The bottom panel was reproduced, with modifications, in accordance with the Creative Commons Attribution 4.0 International License (\href{https://creativecommons.org/licenses/by/4.0/}{\texttt{creativecommons.org/licenses/by/4.0}}).
}
\label{fig:low_mass_SL_pol}
\end{center}
\end{figure*}
While several studies have focused on SiO maser polarization \citep[e.g.,][]{Plambeck2003}, the majority of spectral-line polarization studies toward low-mass forming stars have targeted thermal CO and SiO emission polarized due to the G-K effect. Pioneering CO\,($J$\,$=$\,2\,$\rightarrow$\,1) polarization observations were performed with BIMA by \citet{Girart1999, Girart2004, Kwon2006}; and \citet{Cortes2006b}. So far there has been one detection of SiO\,($J$\,$=$\,8\,$\rightarrow$\,7) polarization toward a low-mass protostar: HH 211, tentatively detected by \citet{CFLee2014} using the SMA, and recently confirmed by ALMA observations reported in \citet{CFLee2018b}; see the bottom panel of Figure \ref{fig:low_mass_SL_pol}.
The BIMA observations of CO\,($J$\,$=$\,2\,$\rightarrow$\,1) polarization toward the iconic Class 0 protostar NGC~1333-IRAS~4A by \citet{Girart1999} were the first interferometric detection of the G-K effect. Toward the central core, these observations are consistent with the magnetic field inferred from polarized dust emission. The observations by \citeauthor{Girart1999} were followed up with the SMA by \citet{Ching2016} in the higher-energy transition CO\,($J$\,$=$\,3\,$\rightarrow$\,2). \citeauthor{Ching2016} found good consistency between their observations and those by \citeauthor{Girart1999} in the inner regions of the outflow where the polarization detections overlap. These SMA observations, shown in Figure \ref{fig:low_mass_SL_pol} (top panel), allowed the authors to come to a number of conclusions. First was the fact that the data are consistent with a magnetic field in IRAS~4A that is poloidal at the base of the outflows (there are two outflows, each launched by a member of the embedded binary) and toroidally wrapped up further out in the outflow cavity. This observation, combined with the coexistence of a low-velocity outflow and a high-velocity jet in the source, led the authors to conclude that the outflows in IRAS~4A are most likely driven by MHD winds from the surface of a rotating disk.
\section{Magnetic fields in high-mass star formation}
\label{sec:high_mass}
\subsection{Magnetic field measurements at core scales}
\label{sec:high_mass_cores}
The first pioneering high-resolution observations of linearly polarized continuum and spectral-line emission toward high-mass star-forming regions were made with the BIMA interferometer. \citet{Rao1998} reported the first interferometric polarization observations of a high-mass star-forming region, toward Orion KL. They detected linear polarization at both 3.3\,mm and 1.3\,mm in the BIMA data at a resolution of 1000\,--\,3000\,au, revealing abrupt changes in the magnetic field orientations among the continuum emission peaks. This chaotic distribution is in contrast to the uniform magnetic field topology in the lower resolution polarization maps revealed by single dish telescopes \citep[e.g.,][]{Schleuning1998, Houde2004, Pattle2017, WardThompson2017}.
Shortly after the Orion KL study, \citet{Lai2001} reported polarization observations of W51 e1 and e2 in the 1.3\,mm continuum emission using BIMA. Later, \citet{Tang2009b} and \citet{Koch2018} published results toward the same source using the SMA and ALMA, respectively. W51 is a cloud complex that harbors massive star formation at various evolutionary stages \citep{Ginsburg2015, Saral2017}. W51 east hosts an active star-forming molecular clump with as many as 10 compact radio continuum sources over the 0.2\,pc projected area of sources e1 and e2 \citep{Ginsburg2016}. \citet{Zhang1997} reported an inverse P-Cygni profile in the e2 core in the \NH3\,($J$,$K$)\,=\,(1,1), (2,2) and (3,3) spectral lines, consistent with infall motions of the dense gas. \citet{Lai2001} detected linear polarization in the continuum emission in the e2 and e8 cores at a resolution of 14,000\,au using BIMA and found that the inferred plane-of-sky components of the magnetic fields are mostly uniform, with an average position angle of 113$^\circ$ in the e2 core and 105$^\circ$ in the e8 core. Using the DCF method, the authors estimated magnetic field strengths of 0.8\,mG and 1.3\,mG in the e2 and e8 cores, respectively.
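For context, the Davis--Chandrasekhar--Fermi (DCF) estimates quoted throughout this review take the plane-of-sky field strength to be $B_{\rm pos} \approx Q\sqrt{4\pi\rho}\,\sigma_v/\sigma_\theta$, where $\rho$ is the gas mass density, $\sigma_v$ the line-of-sight velocity dispersion, $\sigma_\theta$ the dispersion of polarization position angles, and $Q \approx 0.5$ a correction factor calibrated against turbulence simulations. The Python sketch below evaluates this expression; the input values are generic placeholders, not the inputs adopted by \citet{Lai2001}.

\begin{verbatim}
import numpy as np

def dcf_Bpos(n_H2, sigma_v_kms, sigma_theta_deg, Q=0.5):
    """Plane-of-sky field strength [G] from the DCF method:
       B_pos = Q * sqrt(4 pi rho) * sigma_v / sigma_theta

    n_H2            : gas number density [cm^-3]
    sigma_v_kms     : LOS velocity dispersion [km/s]
    sigma_theta_deg : polarization-angle dispersion [deg]
    Q               : correction factor from simulations (~0.5)
    """
    mu, m_H = 2.8, 1.6735e-24                  # per-H2 weight; H mass [g]
    rho = mu * m_H * n_H2                      # mass density [g cm^-3]
    sigma_v = sigma_v_kms * 1.0e5              # [cm/s]
    sigma_theta = np.radians(sigma_theta_deg)
    return Q * np.sqrt(4.0 * np.pi * rho) * sigma_v / sigma_theta

# Generic dense-core numbers, for illustration only:
print(dcf_Bpos(n_H2=1e6, sigma_v_kms=1.0, sigma_theta_deg=10.0))  # ~2e-3 G
\end{verbatim}

Note that mG-level field strengths follow naturally from the $\sim$\,$10^6$\,cm$^{-3}$ densities and $\sim$\,1\,km\,s$^{-1}$ velocity dispersions typical of these cores.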
More sensitive observations of continuum emission at 870\,$\mu$m toward W51 using the SMA revealed a non-uniform magnetic field morphology at a higher resolution of 3300\,au \citep{Tang2009b}. \citeauthor{Tang2009b} explored the possible reasons for the different distributions between the BIMA and the SMA images, finding that interferometric spatial filtering is the most likely cause. This spatial filtering by interferometers can be an advantage, as it allows us to probe magnetic fields at different spatial scales, thus revealing the dynamical role of the magnetic field across the many spatial scales relevant to the star-formation process.
Figure \ref{fig:w51} shows the magnetic field maps obtained by the SMA and later by ALMA at various spatial resolutions toward W51 e2 and e8. The figure shows a pinched, hourglass morphology in the e2 core at a resolution of 3300\,au (top-middle panel), and significantly more substructure at a resolution of 1500\,au (top-right panel). \citet{Koch2018} speculate that the additional substructure in the magnetic field is the result of gravitational collapse at high densities that pulls and/or bends the field lines.
\begin{figure}[hbt!]
\includegraphics[width=1.0\textwidth, clip, trim=0cm 0cm 0cm 0.5cm]{Koch_W51e2_B.pdf}
\caption{
\small
Magnetic fields (red line segments) in W51 e2 and e8 obtained from the SMA and ALMA that exhibit more substructure at higher angular resolution. \textit{Panel (a):} SMA observations at 870\,$\mu$m probing large scales at a resolution of $2''$ ($\sim$\,10,000\,au). \textit{Panels (b) and (c):} SMA observations with a resolution of $0.\!\!^{\prime\prime}7$ ($\sim$\,3500\,au) at 870\,$\mu$m. \textit{Panels (d) and (e):} ALMA observations at 1.3\,mm with a resolution of $0.\!\!^{\prime\prime}26$ ($\sim$\,1330\,au). Contours are Stokes $I$ dust continuum intensity at levels of 3,\,6,\,10,\,20,\,35,\,50,\,65,\,80,\,95... times 75\,mJy\,beam$^{-1}$ in panel (a), 60\,mJy\,beam$^{-1}$ in panels (b) and (c), and 6\,mJy\,beam$^{-1}$ in panels (d) and (e). $\times$ symbols in panels (a), (b), and (d) mark the known continuum sources W51 e2-E, e2-W, e2-NW and e2-N. Star symbols indicate known ultra-compact {\rm H}{\sc ii}\xspace (\uchii) regions. Synthesized beams are shown as ellipses at the lower-left corner of panels (a), (c), and (e). Adapted from \citet{Koch2018}. Reproduced with permission from the AAS.}
\label{fig:w51}
\end{figure}
One of the key questions in the fields of magnetized low- and high-mass star formation is whether the magnetic field is dynamically important relative to turbulence and gravity. \citet{Koch2010} applied the two-point correlation function technique \citep{Hildebrand2009} and found a ratio of turbulent to magnetic energy ranging from 0.7 to 1.27 at scales ranging from $\sim$\,15,000\,au down to $\sim$\,500\,au. \citet{Koch2012} proposed a polarization-intensity-gradient technique that can be used to derive a distribution of magnetic field strength in a given source (see Section \ref{sec:analysis_field_strength}). Applying the technique to the polarization map of W51 e2, \citet{Koch2012} found a field strength of 7.7\,mG. This value is in agreement with the estimate by \citet{Tang2009b}, who derived the field strength from the SMA polarization observations. However, it is higher than that found by \citet{Lai2001}, who used lower-resolution BIMA polarization observations probing larger spatial scales.
The different values reported by \citet{Lai2001} and \citet{Koch2012} can be explained by the fact that the magnetic field strength increases toward smaller spatial scales where the gas density is higher.
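The two-point correlation technique referenced above works on the polarization map itself: one bins the squared differences of polarization position angles as a function of segment separation $l$, fits $\langle\Delta\phi^2(l)\rangle \simeq b^2 + m^2 l^2$ at small $l$, and reads the turbulent-to-ordered field energy ratio from the intercept as $\langle B_t^2\rangle/\langle B_0^2\rangle = b^2/(2 - b^2)$ \citep{Hildebrand2009}. Below is a minimal Python sketch of the binning step; the binning choices and input format are our assumptions, not those of \citet{Koch2010}.

\begin{verbatim}
import numpy as np

def angular_dispersion(x, y, pa_deg, bin_edges):
    """Dispersion of polarization-angle differences vs. separation.

    x, y     : positions of polarization segments (e.g., arcsec)
    pa_deg   : position angles [deg]
    bin_edges: separation bin edges (same units as x, y)
    Returns bin centers and sqrt(<dphi^2>) [deg]; empty bins -> nan.
    """
    i, j = np.triu_indices(len(x), k=1)    # all unique pairs
    sep = np.hypot(x[i] - x[j], y[i] - y[j])
    dphi = np.abs(pa_deg[i] - pa_deg[j]) % 180.0
    dphi = np.minimum(dphi, 180.0 - dphi)  # fold into [0, 90] deg
    which = np.digitize(sep, bin_edges)
    disp = np.array([np.sqrt(np.mean(dphi[which == k] ** 2))
                     if np.any(which == k) else np.nan
                     for k in range(1, len(bin_edges))])
    return 0.5 * (bin_edges[:-1] + bin_edges[1:]), disp

# The l -> 0 intercept b (converted to radians) then gives the
# turbulent-to-ordered energy ratio  <B_t^2>/<B_0^2> = b^2 / (2 - b^2).
\end{verbatim}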
Besides W51, another region of massive star formation whose magnetic field has been well studied is DR 21(OH) and its neighboring regions along the DR 21 filament \citep{Lai2003, Cortes2005, Girart2013, Hull2014, Houde2016, Ching2017, Ching2018}. DR 21(OH) was first imaged in linear polarization using BIMA in the CO\,($J$\,$=$\,2\,$\rightarrow$\,1) transition and 1.3\,mm continuum emission \citep{Lai2003}. These results were followed up in the CO\,($J$\,$=$\,1\,$\rightarrow$\,0) transition and 3.4\,mm continuum emission using BIMA \citep{Cortes2005}; in the CO ($J$\,$=$\,3\,$\rightarrow$\,2) transition and 870\,$\mu$m continuum emission using the SMA \citep{Girart2013, Zhang2014}; and in the CO ($J$\,$=$\,2\,$\rightarrow$\,1) transition and 1.3\,mm continuum emission with CARMA \citep{Hull2014, Houde2016}. Strong polarization was detected in both 1.3\,mm and 870\,$\mu$m continuum emission. The field strength derived using the two-point correlation function is 2.1\,mG, yielding a mass-to-flux ratio of 6\,$\times$ the critical value \citep{Girart2013}. The field strength is consistent with the value reported by \citet{Lai2003} using the DCF method. \citet{Ching2017} imaged five additional regions along the DR 21 filament using the SMA. The magnetic fields in all six cores display large dispersions, in contrast to the ordered magnetic field obtained with the JCMT at lower angular resolution \citep{Vallee2006, Matthews2009}. The field strength derived from the DCF method ranges from 0.4 to 1.7\,mG among the five cores, with mass-to-flux ratios from 1 to 4.3\,$\times$ the critical value.
As was the case in many low-mass star formation studies (see Section \ref{sec:collapse}), significant effort has been devoted to the search for pinched, hourglass-shaped magnetic field morphologies in massive dense cores. So far, the best examples of hourglass-shaped magnetic fields in massive dense cores are G31.41+0.31 and G240.31+0.07 \citep{Girart2009, Qiu2014}.
G31.41+0.31 is a 500\,M$_\odot$\ hot molecular core with a luminosity of $3 \times 10^5$\,\lsun. Observed at resolutions as high as 2400\,au, the dense core does not appear to fragment in the dust continuum emission. However, sensitive observations with the VLA at 1.3 and 0.7\,cm wavelength revealed two compact continuum objects with a projected separation of 1300\,au \citep{Cesaroni2010}. The dense core exhibits infall motions as well as rotation over a scale of 14,000\,au. Magnetic fields inferred from the 870\,$\mu$m continuum emission reveal a distribution that is pinched along the major axis of the flattened core \citep{Girart2009}. No molecular outflows have been definitively identified in this region. Observations in the CO\,($J$\,$=$\,2\,$\rightarrow$\,1) transition found a velocity gradient along the major axis of the flattened core \citep{Cesaroni2011}; however, it is not clear if the gradient represents an outflow, or if it is due to core rotation. The strength of the plane-of-sky component of the magnetic field is 9.7\,mG, implying a turbulence-to-magnetic-energy ratio of 0.35. The rotational velocity within the core inferred from spectral-line observations of high-density tracers indicates significant magnetic braking. G31.41+0.31 is a case where the magnetic field dominates the turbulence and the dynamics in the system.
Similar to G31.41+0.31, G240.31+0.07 is a massive star-forming region with an hourglass magnetic field morphology \citep{Qiu2014}. As shown in Figure \ref{fig:G240}, the dust continuum emission reveals a flattened structure extended along the northeast--southwest direction that has fragmented into three cores, each harboring at least one massive young star. A wide-angle, bipolar outflow is seen in the CO emission \citep{Qiu2009}, with the outflow axis parallel to the minor axis of the flattened dense core. Polarization observations at 870\,$\mu$m reveal a magnetic field topology pinched along the major axis of the core. The magnetic field strength estimated from the DCF method is 1.2\,mG, with a mass-to-flux ratio of 1.2\,$\times$ the critical value, and a turbulent-to-magnetic-energy ratio of 0.4. G240.31+0.07 is another clear example of a massive star-forming core in which the magnetic field dominates the turbulence and the dynamics in the system.
\begin{figure}[hbt!]
\includegraphics[width=1.0\textwidth, clip, trim=0cm 0cm 0cm 0cm]{g240.pdf}
\caption{
\small
Magnetic field (red line segments in the right panel) in the high-mass star formation region G240.31+0.07 inferred from dust continuum polarization at 870\,$\mu$m with the SMA \citep{Qiu2014, Zhang2014}. The left panel presents an overview of the region in a three-color composite of \textit{Spitzer} IRAC images in the 3.6\,$\mu$m (green), 4.5\,$\mu$m (blue) and 8.0\,$\mu$m (red) bands; the middle panel shows a wide-angle bipolar molecular outflow in the CO\,($J$\,$=$\,2\,$\rightarrow$\,1) line \citep{Qiu2009}. The star symbols indicate the known continuum peaks. Reproduced with permission from the AAS.}
\label{fig:G240}
\end{figure}
So far, the overwhelming majority of publications on interferometric observations of magnetic fields are studies of individual objects. The improvement in sensitivity with the advent of the CARMA and SMA polarimeters led to surveys of small---but significantly larger---samples of objects. See Section \ref{sec:collapse} for a discussion of the TADPOL survey of predominantly low-mass sources by \citet{Hull2014}. On the high-mass end, \citet{Zhang2014} reported polarization detections in 14 massive star-forming clumps from a survey of 21 star-forming regions. This effort significantly increased the number of clumps with magnetic field detections and consequently enabled statistical analyses of the ensemble behavior of magnetic fields in massive star-forming regions. To date, there are approximately 24 unique high-mass star-forming clumps that have been observed in polarization with interferometers. They are Orion KL, NGC 2071, W3, W3(OH), DR 21(OH), DR 21 filament, G192, G30.79, NGC 6334 I/In/IV/V, W51 e and N, IRAS 18306, IRAS 18089, W43, NGC 7538, G5.89, NGC 2264C1, G34.43+0.24, G35.2N, G31.41+0.31, and G240.31+0.07
\citep{Rao1998,Lai2001,Lai2003,Cortes2006a,Cortes2006b,Cortes2008,Girart2009,Tang2009a,Tang2009b,Beuther2010,Tang2010,HRChen2012,Girart2013,Liu2013,Qiu2013,Frau2014,Hull2014,Qiu2014,Sridharan2014,Wright2014,Zhang2014,HBLi2015,Cortes2016,Houde2016,Ching2017,Juarez2017,Koch2018}.
Magnetic fields toward these sources display diverse topologies. In Sections \ref{sec:Bfield_outflow_high}, \ref{sec:fragmentation_high}, \ref{sec:virial_high}, and \ref{sec:stat} we discuss findings and interesting trends from the statistical analysis of this sample. While the analysis represents a significant advance in the study of the role of the magnetic field in high-mass star formation, the limitations and biases in the sample used in the analysis cannot be overlooked. One of the most significant limitations is that non-detections are not included in the statistics. The advent of ALMA will increase the size of the sample drastically, enabling significantly more robust analysis within the next decade.
\subsection{Outflow--core magnetic field connection}
\label{sec:Bfield_outflow_high}
Linear polarization from CO rotational transitions (i.e., from the G-K effect) probes magnetic fields in molecular gas with densities from 10$^2$\,--\,10$^3$\,cm$^{-3}$. This can be an effective tool for tracing magnetic fields in protostellar outflows. In the context of high-mass star formation, \citet{Lai2003} reported the first detection of linear polarization in the CO\,($J$\,$=$\,2\,$\rightarrow$\,1) transition in DR 21(OH). The inferred magnetic field orientation is in the east--west direction, aligned with the major axis of the CO outflows. Subsequent polarimetric observations in the CO\,($J$\,$=$\,1\,$\rightarrow$\,0) transition by \citet{Cortes2005} found linear polarization perpendicular to that of the CO ($J$\,$=$\,2\,$\rightarrow$\,1) transition. While this disagreement may be reconciled by anisotropy in the CO optical depth and external radiation field (see Section \ref{sec:intro_tracers_POS:GK}), it highlights the difficulty of interpreting spectral-line polarization from the G-K effect. Finally, \citet{Beuther2010} reported detections of spectral-line polarization in the CO\,($J$\,$=$\,3\,$\rightarrow$\,2) transition in IRAS 18089-1732. Using the DCF method, they derived a magnetic field strength of 28\,$\mu$G. A similarly weak field strength of 10\,$\mu$G was reported in DR 21(OH) by \citet{Cortes2005}.
Despite the early success in detecting spectral-line polarization primarily in DR 21(OH), very few protostellar sources have shown detectable spectral-line linear polarization since those early results. In the survey of 14 high-mass protostellar clumps by \citet{Zhang2014}, only DR 21(OH) had detectable polarization in the CO\,($J$\,$=$\,3\,$\rightarrow$\,2) transition \citep{Girart2013}. The limited sensitivity of the SMA may be a contributing factor to these non-detections. Furthermore, the CO emission is typically spatially extended, which presents an additional challenge when imaging linear polarization, since Stokes $Q$ and $U$ can be either positive or negative, and a lack of short-spacing information in sparsely sampled interferometric data can produce negative emission that may be confused with the polarization signal in Stokes $Q$ and $U$. Both factors are mitigated by ALMA, which provides significant improvements in sensitivity and imaging fidelity over BIMA, CARMA, and the SMA.
Probing magnetic fields in accretion disks around high-mass protostars is challenging in two regards. First, extremely high angular resolution observations are required to achieve the $\sim$\,100\,au linear scales of a disk at a typical source distance of several kpc. Second, at the typical high densities and optical depths in these disks, the polarized emission may be contaminated (and sometimes dominated) by self-scattering of large dust grains (see Section \ref{sec:scattering}). For example, \citet{Girart2018} observed polarized emission from the disk associated with the radio jet HH 80-81. These deep ALMA observations in the 1.14\,mm continuum emission spatially resolved a flattened disk with a radius of 291\,au. The disk is partially optically thick and its polarized emission appears to be dominated by self-scattering of large dust grains.
In the absence of magnetic field information in protostellar outflows and disks, the comparison of magnetic fields in cores with outflow axes offers an alternative way to assess the importance of magnetic fields in the formation of disks and outflows in high-mass systems. As discussed in Section \ref{sec:collapse}, when collapsing to form stars, strongly magnetized cores are expected to form a disk and outflow system with the major axis of the outflow parallel to the magnetic field. \citet{Zhang2014} compared outflow axes with the plane-of-the-sky component of magnetic fields in the cores from which the outflows originate. They found a slight preference for angles around $0^\circ$ and $90^\circ$ between the magnetic field and the outflow axis. However, due to the small sample size, the data are consistent with a random orientation of magnetic fields and outflows. This lack of correlation, if confirmed by more statistically significant samples, suggests that angular momentum in accretion disks is not dictated by the orientation of the magnetic field in the cores. The dynamical importance of the magnetic field thus appears to weaken relative to gravity and angular momentum from the core to the disk scales.
\subsection{Fragmentation and cluster formation}
\label{sec:fragmentation_high}
The clustering nature of massive stars implies that there must be considerable fragmentation in parsec-scale molecular clumps in order to produce a cluster of stars. Gravity, turbulence, magnetic fields, and stellar feedback all influence fragmentation and the outcome of cluster formation. The dynamical role of the magnetic field during the fragmentation of molecular clumps can be assessed if the magnetic field strength is measured directly. As mentioned in Section \ref{sec:intro_tracers_LOS}, this can be achieved by observing circular polarization from the Zeeman effect. While observations of the Zeeman effect have been carried out in {\rm H}{\sc i}\xspace and OH line emission using the VLA \citep{Crutcher2012, CrutcherKemball2019}, these observations probe a low-density medium that may not be directly involved in gravitational collapse. There are no reported interferometric observations of the Zeeman effect in dense molecular gas, although ALMA is likely to reshape this field once precise measurements of circular polarization become available to the user community. Future interferometric observations of the Zeeman effect (measuring the line-of-sight magnetic field), when combined with linear dust and spectral-line polarization observations (measuring the plane-of-sky field), will allow much more robust estimates of the total magnetic field strengths in star-forming regions.
A powerful indirect method that can be used to assess the dynamical role of magnetic fields in protocluster formation is the analysis of the distribution of magnetic fields within cluster-forming molecular clumps. Numerical simulations of turbulent, magnetized molecular clouds offer clues about magnetic field topologies in strong and weak field regimes. When the magnetic field is strong relative to turbulence, the field is less disturbed and appears to be ordered \citep[e.g.][]{Ostriker2001}.
Observationally, such a study becomes meaningful only when a statistically significant sample is involved. \citet{Zhang2014} compared dust polarization in dense cores probed by the SMA with the polarization in the parental molecular clumps observed by single-dish telescopes in a sample of 14 high-mass star forming regions; the results show a bimodal distribution in polarization angles. As shown in Figure \ref{fig:Bclump_Bcore}, magnetic fields on dense-core scales are not randomly distributed, but are either parallel or perpendicular to the field orientations in their parental clumps. A later study of a larger sample of 50 primarily high-mass sources by \citet{Koch2014} compared the magnetic field orientation with the gradient of the total dust emission and came to the same conclusion. These findings indicate that the magnetic fields are dynamically important in cluster-forming clumps, and that the field is strong enough on the clump scale to channel the material along the field lines into dense cores during the gravitational collapse. While \citet{Hull2014} found hints of consistency in the magnetic field orientation from $\sim$\,0.1\,pc to $\sim$\,1000\,au scales in a few low-mass sources (see Section \ref{sec:collapse}), the results at the $\sim$\,1\,--\,0.1\,pc scales from \citet{Koch2014} and \citet{Zhang2014} are much more significant, suggesting that the magnetic field may be more dynamically important at parsec scales.
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=0.8\textwidth, clip, trim=3.5cm 2.5cm 2.75cm 3cm]{PA_diff_B.pdf}
\caption{
\small
The distribution of polarization angles in dense cores with respect to the polarization angles of their parental clumps. The dashed line represents the probability distribution of the plane-of-the-sky-projected orientations of pairs of vectors with $0^\circ - 40^\circ$ opening angles randomly oriented in space. The dotted line represents the probability distribution where the vectors are preferentially perpendicular, with $80^\circ - 90^\circ$ opening angles. The dash-dotted line represents the combined contribution of the two distributions. Adapted from \citet{Zhang2014}. Reproduced with permission from the AAS.}
\label{fig:Bclump_Bcore}
\end{center}
\end{figure}
\subsection{Virial equilibrium in massive cores and cluster formation}
\label{sec:virial_high}
An essential component of the classical view of star formation is that dense cores should be in a state of approximate virial equilibrium \citep{Larson1981, Shu1987, McKee2003}. However, there is increasing evidence from recent observations that cores forming high-mass protostars may not be in virial equilibrium. \citet{Pillai2011} carried out a stability analysis in two infrared dark clouds (IRDCs) and found that the virial parameter $\alpha_{\textrm{vir}}$, defined as the ratio between the virial mass $M_\textrm{Virial}$ and the gas mass $M_\textrm{gas}$, is typically between 0.1 and 0.3. These results are corroborated by \citet{Kauffmann2013}, who compiled a large sample of massive clumps from surveys and found that a significant fraction of them have virial parameters $\ll$\,2, where 2 is the value expected for a pressure-confined, self-gravitating Bonnor-Ebert sphere.
More recently, \citet{Traficante2018} pointed out that the virial mass can be underestimated because the spectral-line emission of a tracer molecule preferentially probes sections of a cloud with gas densities above the critical density of the line transition. This effect may result in an underestimate of the observed line width, and hence lead to a small virial parameter. However, since the effective excitation density of a line transition is typically lower than the critical density due to radiative trapping of photons when the optical depth is non-negligible, this effect may not be as significant. For example, many of the virial analyses in the literature use the NH$_3$ and N$_2$H$^+$ lines, which often exhibit consistent line widths despite having critical densities that differ by more than a factor of 5.
These small virial parameters appear to challenge the picture of star formation in which gas evolves in a state of equilibrium. However, the virial analyses discussed above do not include contributions from magnetic fields, which may provide significant support in dense cores. \citet{Frau2014} carried out continuum polarization measurements of NGC 7538 with the SMA and found a magnetic field strength of 2.3--2.6\,mG in the region. \citeauthor{Frau2014} also performed a detailed energetics comparison of the gravitational potential energy, turbulent support, thermal pressure, and magnetic support. Among the 13 dense cores analyzed, the magnetic support amounts to 0.2 to 2.4\,$\times$ the combined effect from turbulent and thermal pressure. More than half (eight out of 14) of the cores have magnetic support comparable to the turbulent and thermal support. In addition, the virial parameters including the magnetic support vary from 0.1 in the central region where star formation takes place to 8 in cores that are influenced significantly by molecular outflows.
The studies of NGC 7538 by \citet{Frau2014} and \citet{Wright2014} demonstrate that magnetic fields can indeed be a significant contributor to the support of dense cores. Such an analysis should be extended to high-mass star-forming regions at earlier evolutionary stages when stellar feedback has not significantly altered the initial physical conditions. To that end, \citet{Pillai2011, Zhang2015, Ohashi2016, Sanhueza2017}; and \citet{Lu2018} performed virial analyses in high-mass star forming regions prior to the development of strong {\rm H}{\sc ii}\xspace regions. The interferometric observations allowed the identification of structures $<$\,0.1\,pc in size within massive clumps. Figure \ref{fig:virial} presents a comparison of the virial mass and the gas mass for structures identified in 12 high-mass star forming regions \citep{Pillai2011, Zhang2015, Ohashi2016, Lu2018}. The virial mass is computed as
\begin{equation}
M_\textrm{Virial} = \frac{3 k R \sigma_v^2 }{G}\,\,,
\end{equation}
\noindent
where $\sigma_v$ is the line-of-sight velocity dispersion due to both non-thermal and thermal broadening, $R$ is the radius, and $G$ the gravitational constant. $k = \frac{5 - 2a}{3 - a}$ is a correction factor related to the density profile $\rho \propto r^{-a}$. We assume a constant density in the structure, i.e., $a = 0$, which leads to the maximum value in the correction factor ($k = 5/3$) and thus the maximum virial mass. The gas mass is derived from the dust continuum emission (see Section \ref{sec:core_mass}). Figure \ref{fig:virial} reveals that the dense gas structures in these high-mass star forming regions have virial parameters $<$\,2. Furthermore, a large fraction of the dense gas structures have virial parameters $\ll$\,2. Since there are no direct measurements of magnetic fields for this sample, we compute the magnetic field required to increase the virial parameter from 0.5 to 1.0. The magnetic virial mass \citep{Hennebelle2008c} is computed as
\begin{equation}
M_\textrm{Mag} = \frac{5 R \sigma_A^2 }{6 G}\,\,,
\end{equation}
\noindent
where $\sigma_A = \frac{B}{\sqrt{4 \pi \rho}}$ is the Alfv\'enic velocity corresponding to a magnetic field strength $B$ and density $\rho$. We compute the magnetic field strengths for which $M_\textrm{Mag} = M_\textrm{Virial} = 0.5\,M_\textrm{gas}$. Assuming a representative gas mass $M_\textrm{gas}$ and radius $R$ for the clumps ($M_\textrm{gas} = 1000$\,M$_\odot$, $R = 0.5$\,pc), cores ($M_\textrm{gas} = 50$\,M$_\odot$, $R = 0.05$\,pc), and condensations ($M_\textrm{gas} = 5$\,M$_\odot$, $R = 0.005$\,pc), we find required magnetic field strengths of 0.29\,mG, 1.46\,mG and 14.4\,mG, respectively. Under these conditions, the virial parameters would be 0.5 without the contribution of magnetic fields, and would increase to 1 after the inclusion of magnetic fields. The required field strengths are in broad agreement with typical literature values of magnetic fields that were derived using the DCF method based on polarization observations of massive star-forming regions.
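The quoted field strengths follow directly from the two expressions above: taking $\rho$ to be the mean density $3M_\textrm{gas}/(4\pi R^3)$ of a uniform sphere, the condition $M_\textrm{Mag} = 0.5\,M_\textrm{gas}$ reduces to $B = 3\,M_\textrm{gas}\sqrt{G/5}/R^2$. The Python sketch below reproduces the three values to within rounding of the adopted constants.

\begin{verbatim}
import numpy as np

G = 6.674e-8      # gravitational constant [cgs]
M_sun = 1.989e33  # solar mass [g]
pc = 3.086e18     # parsec [cm]

def required_B_mG(M_gas_Msun, R_pc):
    """Field strength [mG] such that M_Mag = 0.5 * M_gas, i.e. the
    field raises alpha_vir from 0.5 to 1.0, assuming a uniform-density
    sphere (rho = 3 M_gas / (4 pi R^3)) in the M_Mag expression above."""
    M = M_gas_Msun * M_sun
    R = R_pc * pc
    return 3.0 * M * np.sqrt(G / 5.0) / R**2 * 1.0e3

for name, M, R in [("clump", 1000.0, 0.5),
                   ("core", 50.0, 0.05),
                   ("condensation", 5.0, 0.005)]:
    print(f"{name:13s} B = {required_B_mG(M, R):5.2f} mG")
# clump ~0.29 mG, core ~1.45 mG, condensation ~14.5 mG
\end{verbatim}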
Recent measurements of the thermodynamic properties in high-mass star-forming regions reveal small virial parameters that appear to challenge the assumption of equilibrium star formation. However, a lack of magnetic field measurements in these same regions leaves open the possibility of virialized star formation, since the field strength of a fraction of a mG to several mG could bring the dense gas close to a state of equilibrium. With the advent of ALMA, we expect significant progress to be made on this vital question in high-mass star formation as more observations of both spectral lines and polarization are carried out over the coming years.
\begin{figure}[hbt!]
\begin{center}
\includegraphics[width=0.8\textwidth, clip, trim=4.5cm 2.5cm 5cm 3.5cm]{high_mass_virial_data_mcore.pdf}
\caption{
\small
Virial mass $M_\textrm{Virial}$ versus gas mass $M_\textrm{gas}$ for clumps and cores in 12 high-mass star forming regions. The data were taken from \citet{Pillai2011, Zhang2015,Ohashi2016}; and \citet{Lu2018}. The virial mass includes contributions from thermal and non-thermal line widths. The gas mass was derived from dust continuum emission. The dashed, solid, and dotted lines denote virial parameters $\alpha_{\textrm{vir}} = M_\textrm{Virial}/M_\textrm{gas}$ of 2, 1 and 0.1, respectively. The majority of the data points have $\alpha_{\textrm{vir}} < 1$, indicating that clumps and cores are far from virial equilibrium when contributions from magnetic fields are not included. The red arrows indicate the magnetic field strengths (in mG) required to increase $\alpha_{\textrm{vir}}$ from 0.5 to 1.0 (see Section \ref{sec:virial_high}).}
\label{fig:virial}
\end{center}
\end{figure}
\section{Constraining strong-field star formation with statistics of hourglass-shaped magnetic fields}
\label{sec:stat}
There are approximately 32 unique interferometric observations of low-mass (Class 0, 0/I, or I) star-forming cores exhibiting significant polarization detections that are not obviously dominated by dust scattering. They are
L1448 IRS 2,
L1448N(B),
L1448C,
NGC 1333-IRAS 2A,
SVS 13A,
SVS 13B,
NGC 1333-IRAS 4A,
Per-emb-21,
NGC 1333-IRAS 7,
NGC 1333-IRAS 4B,
IRAS 03282+3035,
IRAS 03292+3039,
B1-c,
B1-b,
HH 211 mm,
HH797,
L1551 NE,
NGC 2024 FIR 5,
OMC3-MMS5,
OMC3-MMS6,
OMC2-FIR3,
OMC2-FIR4,
VLA 1623,
IRAS 16293A,
IRAS 16293B,
Ser-emb 8,
Ser-emb 8 (N),
Serpens SMM1-a,
Serpens SMM1-b,
B335,
L1157,
and CB 230
\citep{Girart1999, Lai2002, Girart2006, Rao2009, Alves2011, Stephens2013, Hull2013, Hull2014, Hull2017a, Hull2017b, SeguraCox2015, Cox2018, Galametz2018, Harris2018, Kwon2018, Maury2018, Sadavoy2018a, Sadavoy2018c}.
Among them,
NGC~1333-IRAS~4A,
IRAS~16293A,
L1157,
NGC 2024 FIR 5,
OMC3 MMS 6,
L1448 IRS 2,
B335,
VLA 1623,
and B1-c
(9/32 $\approx$ 28\%)
exhibit hourglass morphologies, consistent with magnetically dominant star formation
\citep{Girart1999, Lai2002, Girart2006, Rao2009, Alves2011, Stephens2013, Hull2014, Cox2018, Galametz2018, Kwon2018, Maury2018, Sadavoy2018a}.
On the high-mass side, there are 24 unique high-mass star forming clumps observed in polarization with interferometers. They are
Orion KL,
NGC 2071,
W3,
W3(OH),
DR 21(OH),
DR 21 filament,
G192,
G30.79,
NGC 6334 I/In/IV/V,
W51 e and N,
IRAS 18306,
IRAS 18089,
W43,
NGC 7538,
G5.89,
NGC 2264C1,
G34.43+0.24,
G35.2N,
G31.41+0.31,
and G240.31+0.07
\citep{Rao1998,Lai2001,Lai2003,Cortes2006a,Cortes2006b,Cortes2008,Girart2009,Tang2009a,Tang2009b,Beuther2010,Tang2010,HRChen2012,Girart2013,Liu2013,Qiu2013,Frau2014,Hull2014,Qiu2014,Sridharan2014,Wright2014,Zhang2014,HBLi2015,Cortes2016,Houde2016,Ching2017,Juarez2017,Koch2018}.
Of these,
G31.41+0.31,
G240.31+0.07,
NGC 6334 I and In,
and W51 e2
(5/24 $\approx$ 21\%)
exhibit hourglass magnetic field configurations
\citep{Girart2009,Tang2009a,Qiu2014,HBLi2015}.
A pinched/hourglass-shaped magnetic field configuration can yield a range of magnetic-field morphologies when projected in the plane of the sky \citep{Frau2011}. When the main axis of the system is aligned with the line of sight, the plane-of-sky component of the field is oriented radially outward. This scenario was proposed by \citet{Girart2013} to explain the magnetic field topology in DR 21(OH). Among the low- and high-mass objects with robust detections of polarization (and thus inferred magnetic fields), 28\% of the low-mass sources and 21\% of the high-mass sources exhibit hourglass configurations. Assuming that the sample is randomly oriented in three dimensions,
the probability that an hourglass axis will be oriented within 30$^\circ$ of the line of sight is $\sim$\,13\% (\citealt{Frau2011} show that an angle closer to the line of sight than $\sim$\,30$^\circ$ yields a mostly radial pattern). Therefore, the majority of the objects ($\sim$\,87\%) should display an hourglass shape if the variations in magnetic field morphologies were solely due to projection effects.
The fact that only a small fraction of the objects exhibit hourglass morphologies suggests that the scenario of magnetically dominant core collapse is not the predominant mode of low- or high-mass star formation. However, we should note that when taking the projection effect into account, the fraction of detected hourglasses reaches 28\%/87\% $\approx$\,1/3 for low-mass and 21\%/87\% $\approx$\,1/4 for high-mass sources. Such fractions are not negligible, and are even more significant considering the various environmental and dynamical differences between sources, as well as the observational limitations that may hinder the detection of an hourglass---in particular the sensitivity and dynamic range limitations of the pre-ALMA interferometers.
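The projection correction applied here is pure solid-angle geometry: for randomly oriented axes, the probability that an axis lies within an angle $\theta$ of the line of sight is $1 - \cos\theta$, which gives $\approx$\,13\% for $\theta = 30^\circ$; the complementary $\approx$\,87\% sets the denominator of the corrected fractions. A quick numerical check, assuming only this geometry:

\begin{verbatim}
import numpy as np

theta = np.radians(30.0)
p_pole_on = 1.0 - np.cos(theta)  # axis within 30 deg of LOS: radial pattern
p_visible = np.cos(theta)        # ~0.87: hourglass survives projection

print(f"P(radial pattern)       = {p_pole_on:.2f}")        # ~0.13
print(f"low-mass corrected  f   = {0.28 / p_visible:.2f}")  # ~1/3
print(f"high-mass corrected f   = {0.21 / p_visible:.2f}")  # ~1/4
\end{verbatim}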
Of the sources observed by ALMA that clearly lack hourglass magnetic field morphologies, only one so far (Ser-emb 8 in \citealt{Hull2017a}; see Figure \ref{fig:ser8_multiscale}) shows a truly chaotic morphology, presumably having been influenced by turbulence and infall. However, the few ALMA polarization observations published to date have shown more examples of magnetic fields that seem to have been shaped by bipolar outflows. These observations are challenging to interpret, as magnetic fields aligned with the walls of an outflow cavity can look quite similar to an hourglass when observed with low resolution. Thus, future studies will need to carefully consider how to determine which ``hourglasses'' are preserved from the natal collapse phase, such as in NGC 1333-IRAS 4A, where at $\sim$\,1000\,au scales the hourglass appears to be unperturbed by the two outflows emanating at different angles from the embedded binary \citep{Girart2006}; versus which are caused by outflow/jet feedback, such as in Serpens SMM1 \citep{Hull2017b} and B335 \citep{Maury2018}.
We further caution that the above values are lower limits that are limited by telescope resolution: i.e., in the high-mass case, we consider each source as unique, despite the fact that each clump is likely to have more than one protostellar object. This is even true in the low-mass case: for example, SMA observations were able to distinguish the magnetic fields in SVS 13A and B, whereas CARMA observations were not; and ALMA observations were able to map the magnetic fields in Serpens SMM1-a and b separately, whereas CARMA and SMA observations were not. Finally, and perhaps most important, the number of objects observed with interferometers remains small: observations by ALMA are likely to increase the sample size significantly in the near future, enabling studies that will establish far better statistical constraints.
\section{Polarization from dust scattering}
\label{sec:scattering}
While polarized emission from dust scattering is independent of the magnetic field, we briefly mention it here because this young, quickly growing subfield emerged from studies that were attempting to make resolved maps of the magnetic field in protoplanetary disks, which is one of the longstanding goals of disk- and star-formation studies. \citet{Rao2014} reported a tentative detection of a toroidally wrapped magnetic field in the Class 0 protostar IRAS~16293B at $\sim$\,75\,au resolution using the SMA. However, on the whole, pre-ALMA full polarization observations of disks did not have the sensitivity or the resolution necessary to make well resolved maps of low-level ($\lesssim$\,1\%) polarized dust emission \citep{Hughes2009b,Hughes2013}.
Around the same time that ALMA started producing polarization results, several theoretical studies suggested that polarized (sub)millimeter-wave emission from protoplanetary disks could be caused by the self-scattering of emission from (sub)millimeter-sized dust grains \citep{Kataoka2015, Kataoka2016, Pohl2016, Yang2016a, Yang2016b}, consistent with previous work by \citet{Cho2007}. \citet{Kataoka2016} and \citet{Yang2016b} explained the 1.3\,mm CARMA polarization observations by \citet{Stephens2014} of the Class I/II source HL~Tau in the context of this self-scattering mechanism. Self-scattering can also explain the polarization pattern observed in the Class II protoplanetary disk IM Lup in 870\,$\mu$m ALMA observations by \citet{Hull2018a}, and similar observations of HD 163296 by \citet{Dent2019}.
The situation became even more complex when \citet{Tazaki2017} proposed yet another mechanism that can cause polarization in disks: dust grains whose minor axes are aligned with the gradient in the dust emission (this work is rooted in the radiative torque model by \citealt{Lazarian2007}). This mechanism, which is sometimes referred to as ``$k$-RATs'' (i.e., where $k$ is the orientation of the gradient in the dust emission; see also Footnote \ref{footnote:RATs}), is different from self-scattering by dust grains. However, similar to self-scattering, it is independent of any magnetic field that might be in the disk. Polarization from $k$-RATs, which has a morphology that is azimuthal, is broadly consistent with 3\,mm ALMA observations of polarization in the HL~Tau disk \citep{Kataoka2017}, although \citet{Yang2019} found that their $k$-RAT model did not reproduce the data when examined in detail. The transition from possible $k$-RAT alignment at long (3\,mm) wavelengths to scattering at short (870\,$\mu$m) wavelengths was shown clearly by \citet{Stephens2017b}, who reported high-resolution ALMA polarization results at 870\,$\mu$m and 1.3\,mm, complementing the 3\,mm observations reported by \citet{Kataoka2017}. The intermediate-wavelength 1.3\,mm data exhibit a morphology consistent with roughly equal contributions of self-scattering and $k$-RATs.
Other recent results have interpretations that are not as clear. These include observations of the edge-on HH 111 and HH 212 disks \citep{CFLee2018a}, the high-mass protostellar disk Cepheus A HW2 \citep{FernandezLopez2016}, and the transition disk HD 142527 \citep{Kataoka2016b, SOhashi2018}; the polarized emission from these objects may be caused by either scattering or magnetically aligned grains ($B$-RATs). Observations of the low-mass disks CW Tau and DG Tau \citep{Bacciotti2018} and the massive GGD27 MM1 disk \citep{Girart2018} associated with the iconic HH 80-81 radio jet show polarization that may be caused by dust self-scattering and/or $k$-RATs. Polarization observations of a number of embedded Class 0 protostars by \citet{SeguraCox2015, Cox2018, Harris2018, Sadavoy2018a, Sadavoy2018c} and \citet{Takahashi2018} sometimes show signs of scattering in the inner regions of sources and magnetic alignment in the outer regions. Finally, \citet{Alves2018} reported $\sim$\,30\,au-resolution observations of the Class I binary source BHB07-11, which exhibits consistent polarization patterns at 3\,mm, 1.3\,mm, and 850\,$\mu$m. They argue that the polarized emission most likely arises from magnetically aligned dust grains.
We emphasize that studies of (sub)millimeter polarization from disks are in their infancy. More observations and more mature, detailed models are necessary before we will be able to determine whether there is, in fact, any contribution from magnetic fields in the polarized emission from protostellar disks.
\section{Summary}
The steady improvement in telescope sensitivity at the (sub)millimeter-wave bands where dust continuum emission is strong has led to a revolution in interferometric observations of linear polarization over the past two decades. In recent years, both CARMA and the SMA conducted surveys of magnetic fields in samples of $>$\,10 star-forming cores/clumps. These studies have proven insightful in revealing the statistical behavior of magnetic fields in star formation. However, while individual case studies continue to be fruitful, large, less biased surveys that target statistically significant samples of sources are the only way to further constrain the dynamical role of magnetic fields in the star formation process.
Studies of low-mass star formation have found that while there are a few cases where the magnetic field seems to regulate the collapse of star-forming material across the scales from clouds/clumps to cores to envelopes, there are also many cases where the multi-scale magnetic field morphology shows no consistency. Furthermore, the resolution and sensitivity of ALMA have yielded new observations that show evidence of shaping of the magnetic field by bipolar outflows at the $\sim$\,100\,au scale in some (but not all) sources. A few recent observations of spectral-line polarization by the SMA and ALMA have shed light on the importance of magnetic fields in the launching of outflows. More observations of this type with ALMA will enhance this subfield significantly in the coming years.
Studies of high-mass star formation have found that magnetic fields are not the dominant force within the parsec scale of molecular clumps. However, fields appear to be dynamically important in the fragmentation of clumps into 0.1\,pc-scale cores. Furthermore, molecular gas with densities $>$ $10^4$\,cm$^{-3}$ in high-mass star formation appears to be far from virial equilibrium if magnetic fields do not contribute significantly to the dynamical process. This significant departure from the state of virialization appears to challenge the basic assumption of equilibrium star formation. Direct measurements of magnetic fields will shed more light on this important issue.
In both the low- and high-mass regimes, comparisons of outflow versus magnetic field orientations have yielded random alignment. The outflow is a probe of the angular momentum at the smallest scales in the source, and thus the limited data currently available point to a scenario where angular momentum is far more important than the magnetic field at the small ($<$\,1000\,au) scales of individual forming protostars.
The role of magnetic fields in the formation of high- and low-mass disks is less clear due to a small number of observations, and the issue of scattering by dust. Near-future studies targeting the Zeeman and G-K effects may finally be able to access information about the magnetic field in disks \citep[e.g.,][]{Brauer2017}. Furthermore, high resolution studies at longer wavelengths in regions that are optically thick (and thus dominated by scattering) in the ALMA data will be made possible by future telescopes such as the Next Generation Very Large Array (ngVLA; \citealt{Carilli2015}). The ngVLA will be a powerful tool for studying magnetized star formation \citep{Isella2015, Hull2018b}, as it will offer dramatic improvements in the sensitivity, resolution, and image fidelity over the current VLA, which has enabled just a few studies of polarization in the very brightest low- and high-mass star-forming sources \citep{CarrascoGonzalez2010, Cox2015, Liu2016, Liu2018}.
Despite major advances in recent years, the studies presented in this review---in particular the survey studies---are biased toward objects with detectable polarization. Nearly all low- and high-mass star forming objects have embedded protostars (and sometimes {\rm H}{\sc ii}\xspace regions, in the case of high-mass sources). A crucial, under-explored area of star formation involves studying the initial conditions of the magnetic field before feedback (e.g., infall, outflows, and radiation) has altered its morphology. However, this has rarely been achieved due to a lack of sensitivity. In the coming years, large, multi-scale, multi-telescope surveys of magnetic fields in star-forming clouds covering the full range of evolutionary stages will allow us to break new ground in our studies of magnetized star formation.
\section*{Author Contributions}
CH led the writing of the sections about low-mass star formation and scattering. QZ led the section about high-mass star formation. Both authors wrote the abstract, introduction, hourglass-statistics section, and summary.
\section*{Funding}
CH acknowledges the support of both the NAOJ Fellowship as well as JSPS KAKENHI grant 18K13586. QZ acknowledges the support of the Smithsonian Institution SSA grant, ``Are Magnetic Fields Dynamically Important in Massive Star Formation?''
\section*{Acknowledgments}
CH and QZ acknowledge the two anonymous referees for their careful reading of the manuscript and for the insightful comments, which led to a clearer and more thorough presentation of this review.
CH acknowledges Paulo Cort\'es, Martin Houde, and Josep Miquel Girart for the useful discussion.
The BIMA array was operated by the Berkeley-Illinois-Maryland Association with support from the National Science Foundation.
Support for CARMA construction was derived from the states of California, Illinois, and Maryland, the James S. McDonnell Foundation, the Gordon and Betty Moore Foundation, the Kenneth T. and Eileen L. Norris Foundation, the University of Chicago, the Associates of the California Institute of Technology, and the National Science Foundation.
The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics, and is funded by the Smithsonian Institution and the Academia Sinica.
ALMA is a partnership of ESO (representing
its member states), NSF (USA) and NINS (Japan), together with NRC
(Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea),
in cooperation with the Republic of Chile. The Joint ALMA
Observatory is operated by ESO, AUI/NRAO and NAOJ.
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
\textit{Facilities:} ALMA, APEX, BIMA, BLAST, BLAST-TNG, CARMA, CSO, IRAM 30\,m, JCMT, LMT, NRAO 12\,m, SMA, SOFIA, Viper, VLA.
\textit{Software:} APLpy, an open-source plotting package for Python hosted at \href{http://aplpy.github.com}{\texttt{aplpy.github.com}} \citep{Robitaille2012}. Common Astronomy Software Applications (CASA; \citealt{McMullin2007}). Astropy \citep{Astropy2018}.
\section*{Conflict of Interest Statement}
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Graph sequences whose limits can be represented by an unbounded $L^p$ kernel for some $p<\infty$ arise naturally from various random graph models, among others from the dense Preferential Attachment Graph random multigraph model. The second author showed in \cite{KKD} that even if all densities of simple graphs are finite, they do not necessarily determine the unbounded (non-negative) kernel uniquely up to weak isomorphism. In fact, counter-examples can also be given that are non-negative integer valued, which we think of as multigraphs in this context. In particular, this shows that density convergence and convergence in cut distance are not equivalent. While it was shown that even for an unbounded kernel $W$ the $W$-random graph sequence $\mathbb{G}(W,n)$ converges to $W$ in density with probability 1, this does not imply convergence in cut distance. For bounded kernels, the latter is quantified by the Second Sampling Lemma. Ultimately, our goal is to provide a similar result for unbounded kernels; here we first present high probability bounds on the closeness of the cut norms of the kernel and its samples, i.e., a First Sampling Lemma for unbounded kernels.
Under mild conditions, we provide an upper bound that is arbitrarily close in order to the bound available for $L^\infty$ graphons, at the cost of a much larger, only polynomially small (as opposed to exponentially small) exceptional event set. We note, however, that such a polynomially small exceptional set is still adequate for most applications.
Together with Gábor Kun, the authors have also applied the results of this paper to build a theory of property testing for multigraphs. They showed that $L^m$ multigraphs admit a Szemerédi-type regular partition that can be approximated by sampling a constant number of vertices; this in turn characterizes the testable properties of a multigraph.
\section{Main results}\label{Section:Theorems}
For some definitions and notation, we refer to the monograph \cite{Lovasz}.
Given an ordered \(k\)-subset \(X\) of \([0, 1]\), let $U[X]$ denote the step-function on $[0,1]^2$ with uniform steps of length $1/k$ in each variable, and values given by $(U(X_i,X_j))_{i\neq j\in[k]}$, and 0 on the main diagonal (this latter is in order to avoid having to use values concentrated on a null-set). Throughout this paper we shall assume that $X$ is obtained by sampling $k$ independent and identically distributed points uniformly from $[0,1]$, and then rearranging them in increasing order.
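To make the construction concrete, here is a minimal Python sketch of the sampling step (purely illustrative; the kernel is passed as an assumed callable \texttt{U}, and the function name is our own choice):
\begin{verbatim}
import numpy as np

def sample_step_kernel(U, k, rng=np.random.default_rng()):
    # Draw k i.i.d. uniform points and rearrange them increasingly,
    # as in the definition of the ordered sample X above.
    X = np.sort(rng.uniform(0.0, 1.0, size=k))
    # Value matrix of U[X]: entry (i, j) equals U(X_i, X_j) for
    # i != j; the main diagonal is set to 0 by convention.
    B = np.array([[U(x, y) for y in X] for x in X])
    np.fill_diagonal(B, 0.0)
    return X, B
\end{verbatim}
For instance, \texttt{U = lambda x, y: (x*y)**(-0.2)} yields an unbounded kernel that lies in $L^m$ for every $m<5$.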
Recall that given an $n\times n$ matrix $A$ and a function $U\in L^1([0,1]^2)$, their cut norms are defined as:
\[\|A\|_{{}_\square} = \frac{1}{n^2} \max_{S,T \subseteq[n]} \left|\sum_{i\in S,j\in T} A_{ij}\right| \quad \text{ and } \quad \|U\|_{{}_\square} = \sup_{S,T \subseteq[0,1]} \left| \int_{S\times T} U(x, y) dx dy\right|,\] respectively. Our goal is to provide a high probability bound on the difference of the norm
$
\big\|U[X]\big\|_{{}_\square}$ of the random sample and the original norm $\big\|U\big\|_{{}_\square}
$. The typical application would be to check with very high probability that $\|U\|_{{}_\square}$ is small, via looking at the samples $U[X]$.
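For intuition, the matrix cut norm can be evaluated exactly for very small $n$ by exhausting all subset pairs, directly following the definition; the following sketch (ours, and infeasible beyond roughly $n=12$, since $4^n$ pairs are enumerated) does precisely this:
\begin{verbatim}
import itertools
import numpy as np

def cut_norm_bruteforce(A):
    # ||A||_square = (1/n^2) * max_{S,T} |sum_{i in S, j in T} A_ij|,
    # via enumeration of all 2^n x 2^n pairs of indicator vectors.
    n = A.shape[0]
    best = 0.0
    for S in itertools.product([0.0, 1.0], repeat=n):
        s = np.array(S)
        for T in itertools.product([0.0, 1.0], repeat=n):
            best = max(best, abs(s @ A @ np.array(T)))
    return best / n**2
\end{verbatim}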
Our main result is the following generalization of the graphon sampling lemma (\cite{Alon}, \cite{Borgs}).
\begin{theorem}{(First Sampling Lemma for Unbounded Kernels).}\label{Thm:First_sampling} Let $\gamma>0$ and $m>4$ be constants such that $m\gamma>2$.
Then for any function \(U \in L^m_{sym}([0,1]^2)\), there exist constants $C,C_0, C_1>0$ depending only on $\gamma, m$ and $\|U\|_m$ such that with probability at least \(1 - C_1k^{2-\gamma m}\),
\[-C_0 k^{-1/2+\gamma}(\ln k)^{1/2}\leq \big\|U[X]\big\|_{{}_\square} - \big\|U\big\|_{{}_\square} \leq C\cdot k^{-1/4+\gamma/2}(\ln k)^{1/4}.\]
\end{theorem}
In proving this result, we shall proceed by establishing intermediate steps, splitting the error into the deterministic ``systematic error'' $\ensuremath{\mathbb{E}}_X\left[ \|U[X]\|_{{}_\square} \right] -\|U\|_{{}_\square}$ (i.e., comparing the expected value of the random norm to the original norm), and the random ``dispersion'' term $\ensuremath{\mathbb{E}}_X\left[ \|U[X]\|_{{}_\square} \right] -\|U[X]\|_{{}_\square}$ (i.e., exploiting concentration inequalities to bound with high probability the deviation of the random norm from its expectation).
For the lower bound, we have the following systematic error bound, matching the result from the bounded kernel case.
\begin{prop} For any kernel $U$ and integer $k$, we have
\begin{equation}\label{Ineq:Elowerbound2}
\ensuremath{\mathbb{E}}_X\left[ \|U[X]\|_{{}_\square} \right] -\|U\|_{{}_\square} \geq - \frac{1}{k}\|U\|_{{}_\square}.
\end{equation}
\end{prop}
Next, we wish to establish an upper bound on the systematic error, which is of a significantly larger order of magnitude, and no longer matches the bounded kernel case.
It will actually be more convenient to work with the following one-sided version of the cut norm:
\[\|A\|^+_{{}_\square} = \frac{1}{n^2} \max_{S,T \subseteq[n]} \sum_{i\in S,j\in T} A_{ij}\]
for an \(n\times n\) matrix \(A\), and
\[\|U\|^+_{{}_\square} = \sup_{S,T \subseteq[0,1]} \int_{S\times T} U(x, y) dx dy\]
for a function $U\in L^1([0,1]^2)$. We note that
\begin{equation*}
\|A\|_{{}_\square}= \max\{\|A\|^+_{{}_\square}, \|-A\|^+_{{}_\square}\}, \quad \text{ and } \quad \|U\|_{{}_\square}= \max\{\|U\|^+_{{}_\square}, \|-U\|^+_{{}_\square}\}.
\end{equation*}
In terms of this norm, we are going to prove the following bound:
\begin{prop}\label{Prop:Plus_upper}
Let $m\geq 2,\gamma>0$ such that $m\gamma>2$ and let \(U \in L^m_{sym} \). Then
with probability at least \(1-C_{1}\cdot k^{2-\ensuremath{\gamma} m}\)
\begin{align}\label{Ineq:E_upper2}
\nonumber
& \ensuremath{\mathbb{E}}_{X} \left[\|U[X]\|^+_{{}_\square}\right] - \|U\|_{{}_\square}^+ \\
\nonumber
& \leq \frac{2 k^{-\frac{1}{2} - \ensuremath{\gamma}}}{\sqrt{\ln k}}\|U\|_{1} + C_{M,0} \cdot \sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}} \cdot \left(1 - \frac{C_{M,1}}{1-\frac{2m}{3}}\right) + O\left( \left(\ln k\right)^{\frac{3}{4}} \cdot k^{-\frac{3}{4}}\right)\\
\nonumber
& \quad \quad + 2 \sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}} \|U\|_2 + C_{0} \cdot \sqrt{\ln k} k^{-\frac{1}{2} + \ensuremath{\gamma}} \\
& = O\left(\sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}}\right).
\end{align}
where $C_{M,0}$ and $C_{M,1}$ are constants depending on $U$ and $m$, and \(C_0\) and \(C_1\) are constants.
\end{prop}
Finally, we need to show that the random variable $\|U[X]\|_{{}_\square}$ is highly concentrated around its expectation. The ``bad set'' is no longer exponentially small (as is the case for bounded kernels), but only polynomially so (typically still sufficient for many applications), unless we impose much stronger assumptions on $U$. Let us introduce the notations
\[c(U, m):=\sum_{\ell=0}^{m} \binom{m}{\ell} \|U\|_{m-\ell}^{m-\ell} \|U\|_1^{\ell}
\]
and
\[
c_*(U,m):=e^{-m\|U\|_1 } \cdot \sum\limits_{n=0}^\infty \frac{ m^n \|U\|^n_n}{n!}.
\]
Note that the former is finite as long as $U\in L^m$, whereas the latter equals $e^{-m\|U\|_1 }\, \ensuremath{\mathbb{E}}\left[e^{m|U(x,y)|}\right]$, so its finiteness requires even $e^{m|U|}$ to have a finite integral.
\begin{prop}\label{Prop:Concentration} Assume $\nu>1$ and $\gamma,\epsilon>0$.
Letting $\|U[X]\|^*_{{}_\square}$ denote either of $\|U[X]\|_{{}_\square}$ and $\|U[X]\|_{{}_\square}^+$
and \(m>\max\left\{1,\frac{2}{\ensuremath{\gamma}}\right\}\), for all \(k\) large enough, we have the following inequalities.
\begin{equation}\label{Ineq:Azumalower}
\begin{split}
\Prob{\left|\Big.\|U[X]\|^*_{{}_\square} - \ensuremath{\mathbb{E}}_X\left[ \|U[X]\|^*_{{}_\square}\right]\right| \geq \frac{8\nu\sqrt{m\gamma-2}}{\sqrt2} \cdot \sqrt{\ln k} \cdot k^{-\frac{1}{2}+\ensuremath{\gamma}}}
& \leq C_{1}\cdot k^{2-\ensuremath{\gamma} m},
\end{split}
\end{equation}
where \(C_{1}=2+ \frac{c(U,m)}{\ensuremath{\nu}^m}\)
and
\begin{equation}\label{Ineq:Azumalower_Chernoff2}
\begin{split}
\Prob{\left|\Big.\|U[X]\|_{{}_\square}^* - \ensuremath{\mathbb{E}}_X\left[ \|U[X]\|_{{}_\square}^*\right]\right| \geq \frac{8\nu}{\sqrt2} k^{-\frac{1}{2}+\ensuremath{\gamma} + \epsilon}}
& \leq 2e^{-k^{2\epsilon}} + k^{2}e^{-m\ensuremath{\nu} k^{\ensuremath{\gamma}}}c_*(U,m).
\end{split}
\end{equation}
\end{prop}
Optimizing over $q$ and $\nu$ and putting together the three propositions will yield our main result.
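Schematically, the proof rests on the decomposition
\[\big\|U[X]\big\|_{{}_\square} - \big\|U\big\|_{{}_\square} = \Big(\big\|U[X]\big\|_{{}_\square} - \ensuremath{\mathbb{E}}_X\left[\big\|U[X]\big\|_{{}_\square}\right]\Big) + \Big(\ensuremath{\mathbb{E}}_X\left[\big\|U[X]\big\|_{{}_\square}\right] - \big\|U\big\|_{{}_\square}\Big):\]
the first bracket is controlled by Proposition~\ref{Prop:Concentration}, while the second one is at least \(-\frac{1}{k}\|U\|_{{}_\square}\) by the first proposition above, and is bounded from above (after passing to the one-sided norm) via Proposition~\ref{Prop:Plus_upper}.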
\medskip
\section{Main Tools} \label{Section: Preliminaries}
We shall be making use of the following generalised version of Azuma’s Inequality.
\begin{lemma}\label{Lemma:generalised_azuma}
Let \((\Omega, A, \pi)\) be a probability space, and let \(f_1, \dots, f_k : \Omega^n \rightarrow \ensuremath{\mathbb{R}}\) be measurable functions, such that for any \(1\leq a\leq n\) the events
\[H_{a,i} :=\Bigg\{\bigg | \ensuremath{\mathbb{E}}\left[f_i(X)\Big|(X_1, \dots X_a)\right]-\ensuremath{\mathbb{E}}\left[f_i(X) \Big| (X_1, \dots X_{a-1})\right] \bigg|\geq \alpha_a\Bigg\}\]
for all \(1\leq i\leq k\) are contained in a set \(H_a\). Let \(X\) be a random point of \(\Omega^n\) (chosen according to the product measure). Then
\begin{equation*}
\begin{split}
\Prob{\Big|f_i(X) - \ensuremath{\mathbb{E}}[f_i(X)] \Big| \geq \lambda \sqrt{\sum_{a=1}^n \alpha_a^2} \quad \text{for some } i\in [k]} \leq 2 k \cdot \exp(-2\lambda^2) + \Prob{\cup_{a \in [n]}H_a}
\end{split}
\end{equation*}
for any \(\lambda, \alpha_a \geq 0\).
\end{lemma}
The proof of this claim goes similarly to the one in \cite[Prop. 34]{TV}: outside the exceptional set \(\cup_a H_a\) the martingale differences of each \(f_i\) are bounded by the \(\alpha_a\), so the usual Azuma--Hoeffding argument applies, and a union bound over the \(k\) functions and over the exceptional set yields the statement.
The Chernoff bound (also known as the exponential Chebyshev's inequality) will also be useful:
\begin{lemma}\label{Lemma:Chernoff_absval}
For a non-negative random variable \(Z\) with \(\ensuremath{\mathbb{E}}[Z] > 0\) and any \(\delta \geq \ensuremath{\mathbb{E}}[Z], m>0\) we have that
\begin{equation*}
\begin{split}
\Prob{\left|\big.Z-\ensuremath{\mathbb{E}}[Z]\right|>\delta}\leq & \min \left\{1, \quad e^{-m \delta}e^{-m\ensuremath{\mathbb{E}}[Z]} \cdot \sum\limits_{n=0}^\infty \frac{ m^n\ensuremath{\mathbb{E}}\left[Z^n\right]}{n!}\right\}.
\end{split}
\end{equation*}
\end{lemma}
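The series appearing in the bound is simply the exponential moment of \(Z\): for \(Z\geq 0\), monotone convergence gives
\[\sum\limits_{n=0}^\infty \frac{m^n \ensuremath{\mathbb{E}}\left[Z^n\right]}{n!} = \ensuremath{\mathbb{E}}\left[e^{mZ}\right],\]
so the lemma is the usual Chernoff bound \(\Prob{Z>\ensuremath{\mathbb{E}}[Z]+\delta} \leq e^{-m(\delta+\ensuremath{\mathbb{E}}[Z])}\ensuremath{\mathbb{E}}\left[e^{mZ}\right]\), the lower tail being empty when \(\delta \geq \ensuremath{\mathbb{E}}[Z]\) and \(Z\geq 0\).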
Another useful statement is a generalisation of the well-known Chebyshev inequality.
\begin{lemma}\label{Lemma:generalised_Chebyshev}
For a random variable \(Z\) and any \(\delta>0, m\in \ensuremath{\mathbb{Z}}^+\) we have that
\begin{equation*}
\begin{split}
\Prob{\Big|Z-\ensuremath{\mathbb{E}}[Z]\Big|>\delta}\leq & \min \left\{1, \frac{\ensuremath{\mathbb{E}}\bigg[\Big|Z-\ensuremath{\mathbb{E}}[Z]\Big|^m\bigg]}{\delta^m}\right\}
\leq
\min \left\{1, \frac{\sum_{\ell=0}^m \binom{m}{\ell}\ensuremath{\mathbb{E}}\Big[|Z|^{m-\ell}\Big] \cdot \Big|\ensuremath{\mathbb{E}}[Z]\Big|^\ell}{\delta^m}\right\}.
\end{split}
\end{equation*}
\end{lemma}
The proof is similar to that of Chebyshev's inequality.
\section{Notations, set \(L\) and upper bounds on its measure}\label{Section:notations_measure}
For any \(x\in [0,1]\) let us define
\[ U_x:=\ensuremath{\mathbb{E}}_{z\in [0,1]}\left[ \big|U(x,z)\big|\right] =\int\limits_0^1 \big|U(x,z)\big| dz.\]
\begin{definition}\label{def:L}
Let \(L:= L_{\ensuremath{\nu},\ensuremath{\gamma}}\subseteq [0,1]^k\) be
\begin{equation*}
\begin{split}
L_{\ensuremath{\nu}, \ensuremath{\gamma}} = & \left\{X\in [0,1]^k \quad \Big| \quad \big| U_{X_j}- \|U\|_1\big| \leq \delta_1 \text{ for } j\in [k], \quad \big|\left|U(X_i,X_j)\right|- \|U\|_{1}\big| \leq \delta_0 \text{ for } i,j\in [k] \right\}, \\
& \text{ where } \delta_0 := \delta_1 := \ensuremath{\nu} k^{\ensuremath{\gamma}} \text{ and } \ensuremath{\nu}, \ensuremath{\gamma} \text{ are positive constants.}
\end{split}
\end{equation*}
\end{definition}
\bigskip
Next we show that \(\Prob{L}\)
is very close to \(1\),
i.e. \(U_x\) and \(|U(x,y)|\) are both highly concentrated around their expectation
\[\ensuremath{\mathbb{E}}_{(x,y)\in [0,1]^2} \left[\big. |U(x,y)|\right] = \ensuremath{\mathbb{E}}_{x\in[0,1]}\left[U_{x}\right]=\int\limits_0^1 \int\limits_0^1 \big|U(x,z)\big| dz dx= \|U\|_1.\]
Let \(\lambda(L)\) be the Lebesgue measure of \(L\).
\begin{prop}\label{Prop:L_Chebyshev}
For arbitrary \(m > \max\left\{1,\frac{2}{\ensuremath{\gamma}}\right\}\) and some \(c(U,m)\) constant in \(k\) we have
\begin{equation*}
\begin{split}
\lambda(L)= \Prob{X\in L} \geq & 1 -k^2 \cdot \frac{c(U,m)}{\ensuremath{\nu}^m k^{\ensuremath{\gamma} m}} =: p(L, m) \rightarrow 1 \quad \text{ as } k\rightarrow \infty. \\
\end{split}
\end{equation*}
\end{prop}
\begin{prop}\label{Prop:L_Chernoff}
For arbitrary \(m \geq 1\) and some \(c^*(U,m)\) constant in \(k\) we have
\begin{equation*}
\begin{split}
\lambda(L)= \Prob{X\in L} \geq & 1-k^2\cdot \frac{c^*(U,m)}{e^{m\ensuremath{\nu} k^{\ensuremath{\gamma}}}} =: p^*(L, m) \rightarrow 1 \quad \text{ as } k\rightarrow \infty. \\
\end{split}
\end{equation*}
\end{prop}
\begin{proof}{(Proposition~\ref{Prop:L_Chebyshev}.)}
Substituting \(Z=\left|U(x,y)\right|\) into the generalised Chebyshev inequality (Lemma~\ref{Lemma:generalised_Chebyshev}) we get that
\begin{equation*}
\begin{split}
\Prob{\left|\Big. \left|U(x,y)\right|- \|U\|_1 \right|>\delta_0}
& \leq \min\left\{ 1,\frac{ \ensuremath{\mathbb{E}}\left[\left| \Big. \left|U(x,y)\right|-\|U\|_1\right|^m\right]}{\delta_0^m}\right\}. \\
\end{split}
\end{equation*}
According to the binomial theorem (using the linearity of the expected value) we get that
\begin{equation*}
\begin{split}
\ensuremath{\mathbb{E}}\left[\left|\Big. \left|U(x,y)\right| - \|U\|_1\right|^m\right]
\leq & \ensuremath{\mathbb{E}}\left[\left(\Big. \left|U(x,y)\right| + \|U\|_1\right)^m\right] \quad \text{ using the triangle inequality}\\
= & \sum_{\ell=0}^m \binom{m}{\ell} \ensuremath{\mathbb{E}}\left[\left|U(x,y)\right|^{m-\ell}\right] \|U\|_1^\ell\\
= & \sum_{\ell=0}^{m} \binom{m}{\ell} \|U\|_{m-\ell}^{m-\ell} \|U\|_1^{\ell} =:c(U, m) \quad \text{ a constant in } k.
\end{split}
\end{equation*}
Hence for all \(m\geq 1\)
\begin{equation*}
\begin{split}
\Prob{\left|\Big. \left|U(x,y)\right|- \|U\|_1 \right|>\delta_0}
& \leq \min\left\{ 1,\frac{ c(U,m)}{\delta_0^m}\right\}. \\
\end{split}
\end{equation*}
For fixed \(x\) substituting \(Z= U_{x}\) into the generalised Chebyshev's inequality (Lemma~\ref{Lemma:generalised_Chebyshev}) we get that
\begin{equation*}
\begin{split}
\Prob{\big| U_{x}- \|U\|_1 \big|>\delta_1}
\leq \min\left\{ 1,\frac{ \ensuremath{\mathbb{E}}\left[\left| U_{x}-\ensuremath{\mathbb{E}}\left[U_{x}\right]\right|^m\right]}{\delta_1^m}\right\}.
\end{split}
\end{equation*}
We would like to estimate the right hand side with a constant (depending on \(U\)). First of all observe that for \(p\geq 1\) we have \(\ensuremath{\mathbb{E}}\left[\left(U_{x}\right)^p\right] \leq \|U\|_{p}^{p}\), because
\begin{equation*}
\begin{split}
\ensuremath{\mathbb{E}}\left[\left(U_{x}\right)^p\right]= & \int_{x \in [0,1]} \left(U_{x}\right)^p dx = \int_{x \in [0,1]} \left(\int_{y \in[0,1]} |U(x,y)| dy\right)^p dx \\
\leq & \int_{x \in [0,1]} \int_{y \in [0,1]} |U(x,y)|^{p} dy dx = \|U\|_{p}^{p} \quad \text{ by Jensen's inequality for } z^p, p \geq 1.
\end{split}
\end{equation*}
According to the binomial theorem (using the linearity of the expected value) we get that
\begin{equation*}
\begin{split}
\ensuremath{\mathbb{E}}\left[\Big| U_{x} - \ensuremath{\mathbb{E}}\left[U_{x}\right]\Big|^m\right]
\leq & \ensuremath{\mathbb{E}}\left[\left(\left| U_{x}\right|+ \left|\ensuremath{\mathbb{E}}\left[U_{x}\right]\right|\right)^m\right] \quad \text{ using the triangle inequality}\\
= & \sum_{\ell=0}^m \binom{m}{\ell} \ensuremath{\mathbb{E}}\left[\left(U_{x}\right)^{m-\ell}\right] \ensuremath{\mathbb{E}}\left[U_{x}\right]^\ell \quad \text{since } U_x \geq 0 \quad\forall x\in[0,1]\\
\leq & \sum_{\ell=0}^{m} \binom{m}{\ell} \|U\|_{m-\ell}^{m-\ell} \|U\|_1^{\ell} =c(U, m) \quad \text{ a constant.}
\end{split}
\end{equation*}
Note that \(c(U,m)\) is the same constant as above.
Thus for any \(m \geq 1\)
\begin{equation*}
\begin{split}
\Prob{\big| U_{x}- \|U\|_1 \big|>\delta_1}
& \leq \min\left\{ 1,\frac{ c(U,m)}{\delta_1^m}\right\}.\\
\end{split}
\end{equation*}
Finally
\begin{equation*}
\begin{split}
\Prob{X\in L^c}= & \Prob{\exists j\in [k] : \left|U_{X_j}-\|U\|_1\right| > \delta_{1} \text{ or } \exists i<j\in [k]: \left|\big.\left|U(X_i,X_j)\right|-\|U\|_1\right| > \delta_{0}} \\
\leq & k \cdot \Prob{\big|U_{x}-\|U\|_1\big| >\delta_{1}} + k(k-1)\cdot \Prob{\left|\big.\left|U(x,y)\right|-\|U\|_1\right| > \delta_{0}} \\
\leq & k \cdot \min\left\{1, \frac{c(U,m)}{\delta_{1}^m}\right\}+ k(k-1)\cdot \min\left\{1, \frac{c(U,m)}{\delta_{0}^m}\right\} = k^2 \cdot \min\left\{1, \frac{c(U,m)}{\delta_{1}^m}\right\}.
\end{split}
\end{equation*}
Hence
\begin{equation*}
\begin{split}
\lambda(L)= & \Prob{X\in L}= 1-\Prob{X\in L^c} \geq 1 - k^2 \cdot \min\left\{1, \frac{c(U,m)}{\delta_{1}^m}\right\} \\
= & 1-k^2\cdot \frac{c(U,m)}{\ensuremath{\nu}^m k^{\ensuremath{\gamma} m}} \rightarrow 1 \quad \text{ as } k\rightarrow \infty \text{ if } m>\frac{2}{\ensuremath{\gamma}}.
\end{split}
\end{equation*}
\end{proof}
\begin{proof}{(Proposition~\ref{Prop:L_Chernoff}.)}
Substituting \(Z=\left|U(x,y)\right|\) into the absolute value version of the Chernoff bound (Lemma~\ref{Lemma:Chernoff_absval}) we get that
\begin{equation*}
\begin{split}
\Prob{\left|\Big. \left|U(x,y)\right|- \|U\|_1 \right|>\delta_0}
& \leq \min\left\{ 1, e^{-m_*\delta_0} \cdot c^*(U,m_*) \right\} \quad \text{ where } c^*(U, m_*) := c_*(U,m_*) \text{ is the constant from Section~\ref{Section:Theorems}.}
\end{split}
\end{equation*}
Here we used that \(\ensuremath{\mathbb{E}}\left[\left|U(x,y)\right|^n\right]=\|U\|_n^n\), and that \(\delta_0 \geq \|U\|_1\) for \(k\) large enough, as required in Lemma~\ref{Lemma:Chernoff_absval}.
For fixed \(x\) substituting \(Z= U_{x}\) into the absolute value version of the Chernoff bound (Lemma~\ref{Lemma:Chernoff_absval}) we get that
\begin{equation*}
\begin{split}
\Prob{\big| U_{x}- \|U\|_1 \big|>\delta_1}
\leq \min\left\{ 1,e^{-m_* (\delta_1 + \|U\|_1 )} \cdot \sum\limits_{n=0}^\infty \frac{ m_*^n \ensuremath{\mathbb{E}}\left[U_{x}^n\right]}{n!}\right\}.
\end{split}
\end{equation*}
Again we would like to bound the right hand side with a constant (depending on \(U\)). Using observations from the previous proof
\begin{equation*}
\begin{split}
\Prob{\big| U_{x}- \|U\|_1 \big|>\delta_1}
& \leq \min\left\{ 1, e^{-m_*\delta_1} \cdot c^*(U,m_*) \right\}.
\end{split}
\end{equation*}
Note that \(c^*(U,m_*)\) is the same constant as above.
Finally
\begin{equation*}
\begin{split}
\Prob{X\in L^c}= & \Prob{\exists j\in [k] : \left|U_{X_j}-\|U\|_1\right| > \delta_{1} \text{ or } \exists i<j\in [k]: \left|\big.\left|U(X_i,X_j)\right|-\|U\|_1\right| > \delta_{0}} \\
\leq & k \cdot \Prob{\big|U_{x}-\|U\|_1\big| >\delta_{1}} + k(k-1)\cdot \Prob{\left|\big.\left|U(x,y)\right|-\|U\|_1\right| > \delta_{0}} \\
\leq & k \cdot \min\left\{ 1, e^{-m_*\delta_{1}} \cdot c^*(U,m_*) \right\} + k(k-1)\cdot \min\left\{ 1, e^{-m_*\delta_{0}} \cdot c^*(U,m_*) \right\} \\
= & k^2 \cdot \min\left\{ 1, e^{-m_*\delta_{1}} \cdot c^*(U,m_*) \right\}.
\end{split}
\end{equation*}
Hence
\begin{equation*}
\begin{split}
\lambda(L)= & \Prob{X\in L}= 1-\Prob{X\in L^c} \\
\geq & 1 - k^2 \cdot \min\left\{ 1, e^{-m_*\delta_{1}} \cdot c^*(U,m_*) \right\} \geq 1-k^2\cdot e^{-m_*\delta_{1}} \cdot c^*(U,m_*) \\
= & 1-k^2\cdot e^{-m_*\ensuremath{\nu} k^{\ensuremath{\gamma}}} \cdot c^*(U,m_*) \rightarrow 1 \quad \text{ as } k\rightarrow \infty \text{ if } \ensuremath{\gamma}>0.
\end{split}
\end{equation*}
\end{proof}
\bigskip
Let us define
\[U(Z_1, Z_2) = \sum_{x\in Z_1,y\in Z_2} U(x, y)\]
for finite subsets \(Z_1, Z_2 \subset [0, 1]\).
Let \(B := B_X\) denote the \(k\times k\) matrix with entries \(B_{ij}=U(X_i,X_j)\) for \(i\neq j\) and zeros on the diagonal, so that \(U[X]\) is the corresponding step function and the two cut norms coincide. Similarly to the above, for any set \(Q_1\) of rows and any set \(Q_2\) of columns, we set \[B(Q_1, Q_2) = \sum_{i\in Q_1,j \in Q_2} B_{ij}.\]
Note that this means
\[\|B\|_{{}_\square} = \frac{1}{k^2} \max_{S,T \subseteq[k]} \left|B(S,T)\right| \quad \text{ and }\quad \|B\|^+_{{}_\square} = \frac{1}{k^2} \max_{S,T \subseteq[k]} B(S,T).\]
\section{Proof of the lower bound in the First Sampling Lemma for Kernels (Theorem~\ref{Thm:First_sampling})}\label{Section:lower_bound}
To bound the difference \(\|B\|_{{}_\square} - \|U\|_{{}_\square}\), we first bound its expectation. From the definition we know that for any two measurable subsets \(S_1, S_2 \subset [0, 1]\), we have
\[\|B\|_{{}_\square} \geq \frac{1}{k^2} \left|U(S_1 \cap X, S_2 \cap X)\right|.\]
Fixing \(S_1,S_2\) for a moment and choosing the \(k\)-subset \(X\) randomly, we get
\begin{equation*}
\begin{split}
\ensuremath{\mathbb{E}}_X\left[ \|B\|_{{}_\square}\right] \geq & \frac{1}{k^2} \ensuremath{\mathbb{E}}_X \left|\Big. U(S_1 \cap X, S_2 \cap X) \right| \geq \frac{1}{k^2} \left|\Big.\ensuremath{\mathbb{E}}_X\left[U(S_1 \cap X, S_2 \cap X)\right] \right| \\
= & \frac{1}{k^2} \left|\sum_{i \neq j \in [k]} \ensuremath{\mathbb{E}}\left[\mathbf{1}_{\{X_i \in S_1\}}\mathbf{1}_{\{X_j \in S_2\}} U(X_i,X_j)\right] \right| = \frac{k(k-1)}{k^2} \left| \int_{S_1 \times S_2} U(x, y) dx dy \right| \\
= & \frac{k - 1}{k} \left| \int_{S_1 \times S_2} U(x, y) dx dy \right|,
\end{split}
\end{equation*}
where the diagonal terms \(i=j\) vanish because \(U(x,x)=0\) by definition.
Now taking the supremum over all measurable sets \(S_1, S_2\) we get
\begin{equation*}
\ensuremath{\mathbb{E}}_X\left[ \|B\|_{{}_\square} \right] \geq \frac{k-1}{k}\|U\|_{{}_\square},
\end{equation*}
and hence
\begin{equation}\label{Ineq:Elowerbound}
\ensuremath{\mathbb{E}}_X\left[ \|B\|_{{}_\square} \right] -\|U\|_{{}_\square} \geq - \frac{1}{k}\|U\|_{{}_\square}.
\end{equation}
Now we would like to get rid of the expected value, using the generalised version of Azuma's inequality (Lemma~\ref{Lemma:generalised_azuma}).
\subsection{Using Azuma's inequality}\label{Section:lower_Azuma}
For any \(S,T\subseteq [k]\) the function \(B_X\left(S,T\right)\) is a function of the independent random variables \(X_\ell, \ell \in [k]\), and if we change the value of one of these \(X_a\) to \(X^*_a\), the sum \(B_X(S,T)\) changes to \(B_{X^*}(S,T)\). Note that
\begin{equation}\label{Ineq:B_R_also}
\begin{split}
\left|\left|B_{X}(S, T)\right|-\left|B_{X^*}(S,T)\right| \Big.\right|
\leq & \left| B_{X}(S, T) - B_{X^*}(S,T) \Big.\right| \quad \quad \text{ (triangle inequality) }\\
\leq &\left|\sum_{i\in S, j\in T} \left( U(X_i,X_j)-U(X^*_i, X^*_j)\right)\right|\\
\leq & \sum_{i\in S, j\in T} \left|\Big.U(X_i,X_j)-U(X^*_i, X^*_j)\right| \quad \quad \text{ (triangle inequality)}\\
\leq & 2 \sum_{i\neq a} \big| U(X_a,X_i) \big| + 2\sum_{i\neq a} \big|U(X^*_a,X_i)\big|.
\end{split}
\end{equation}
The last inequality holds because for example if \(a \in S\) and \(a \notin T\) then the sum changes by at most
\begin{equation*}
\begin{split}
\sum_{ j\in T} \left|\Big.U(X^*_a,X_j)-U(X_a,X_j) \right| & \leq \sum_{ j\in T} \left(\left|U(X^*_a,X_j)\right|+ \left|U(X_a,X_j)\right|\right) \\
& \leq \sum_{ j\neq a} \left|U(X^*_a,X_j)\right|+ \sum_{j\neq a} \left|U(X_a,X_j) \right|.
\end{split}
\end{equation*}
The other cases work similarly.
Hence
\begin{equation*}
\begin{split}
& \left| k^2 \cdot \|B_X\|_{{{}_\square}}-k^2 \cdot \|B_{X^*}\|_{{{}_\square}} \right|= \left|\max_{S,T \subseteq[k]} \left|B_X(S,T)\right| - \max_{S,T \subseteq[k]} \left|B_{X^*}(S,T)\right|\right| \\
& \quad \quad \quad \leq 2 \sum_{ j\neq a} \left|U(X^*_a,X_j)\right|+ 2 \sum_{ j\neq a} \left|U(X_a,X_j) \right|.
\end{split}
\end{equation*}
Let us suppose that \(X_1,X_2, \dots, X_{a-1}\) are fixed and that we are now fixing the value at index \(a\); we take the expected value over the rest.
\begin{equation*}
\begin{split}
& \left|\ensuremath{\mathbb{E}}_{X_i, i>a}\left[ \|B_X\|_{{}_\square} - \|B_{X^*}\|_{{}_\square}\right]\right| \leq \ensuremath{\mathbb{E}}_{X_i, i>a}\left| \|B_X\|_{{}_\square} - \|B_{X^*}\|_{{}_\square}\right| \\
& \quad \leq \frac{2}{k^2} \ensuremath{\mathbb{E}}_{X_i, i>a} \left[\sum_{ j\neq a} \left(\big|U(X^*_a,X_j)\big|+ \big|U(X_a,X_j) \big|\right)\right]\\
& \quad = \frac{2}{k^2} \left[\sum_{ j < a} \left(\big|U(X^*_a,X_j)\big|+ \big|U(X_a,X_j) \big|\right) +\ensuremath{\mathbb{E}}_{X_i, i>a} \left[\sum_{ j> a} \left(\big|U(X^*_a,X_j)\big|+ \big|U(X_a,X_j) \big|\right)\right] \right] \\
& \quad = \frac{2}{k^2} \left[\sum_{ j < a} \left(\big|U(X^*_a,X_j)\big|+ \big|U(X_a,X_j) \big|\right) + \left|\{i\in[k] : i>a\}\right| \cdot \left(U_{X^*_a}+ U_{X_a}\right)\bigg.\right] \\
& \quad = \frac{2}{k^2} \left[\sum_{ j < a} \left(\big|U(X^*_a,X_j)\big|+ \big|U(X_a,X_j) \big|\right) + (k-a) \cdot \left(U_{X^*_a}+ U_{X_a}\right)\right]. \\
\end{split}
\end{equation*}
We would like to use Azuma's inequality, so we are interested in the following difference, where we fix \(X_a^*\):
\begin{equation*}
\begin{split}
& \left|\Big.\ensuremath{\mathbb{E}}_{X_i, i\geq a} \|B_X\|_{{}_\square} - \ensuremath{\mathbb{E}}_{X_i, i>a} \|B_{X^*}\|_{{}_\square} \right| =\left| \ensuremath{\mathbb{E}}_{X_a} \left[\Big.\ensuremath{\mathbb{E}}_{X_i, i> a}\left[ \|B_X\|_{{}_\square} - \|B_{X^*}\|_{{}_\square} \right]\right]\right| \\
& \quad \quad \leq \ensuremath{\mathbb{E}}_{X_a} \left|\Big.\ensuremath{\mathbb{E}}_{X_i, i> a}\left[ \|B_X\|_{{}_\square} - \|B_{X^*}\|_{{}_\square}\right]\right| \\
&\quad \quad \leq \frac{2}{k^2} \ensuremath{\mathbb{E}}_{X_a}\left[\sum_{ j < a} \left(\big|U(X^*_a,X_j)\big|+ \big|U(X_a,X_j) \big|\right) + (k-a) \cdot \left(U_{X^*_a}+ U_{X_a}\right)\right]\\
& \quad \quad = \frac{2}{k^2} \left[\sum_{ j < a} \big|U(X^*_a,X_j)\big|+ \ensuremath{\mathbb{E}}_{X_a}\left[\sum_{ j < a}\big|U(X_a,X_j) \big|\right] + (k-a) \cdot U_{X^*_a}+ (k-a) \cdot \ensuremath{\mathbb{E}}_{X_a}\left[U_{X_a}\right]\right]\\
& \quad \quad = \frac{2}{k^2} \left[\sum_{ j < a} \big|U(X^*_a,X_j)\big|+ \sum_{ j < a} U_{X_j} + (k-a) \cdot U_{X^*_a}+ (k-a) \cdot \|U\|_1\right].\\
\end{split}
\end{equation*}
Let us denote by \(X^*\) the vector which we get by changing \(X_a\) to \(X_a^*\) and keeping everything else the same. Suppose that \(X^* \in L\) (for definition see Definition~\ref{def:L}). Then
\begin{equation*}
\begin{split}
& \left|\Big.\ensuremath{\mathbb{E}}_{X_i, i\geq a} \|B_X\|_{{}_\square} - \ensuremath{\mathbb{E}}_{X_i, i>a} \|B_{X^*}\|_{{}_\square} \right| \\
& \quad \quad \leq \frac{2}{k^2} \left[(a-1) \left(\|U\|_1 + \delta_{1}\right)+ (a-1) \left(\|U\|_1 + \delta_{1}\right) + (k-a) \cdot \left(\|U\|_1 + \delta_{1}\right) + (k-a) \cdot \|U\|_1\right]\\
& \quad \quad = \frac{2}{k^2} \left[2 (k-1) \|U\|_1 + (k+a-2) \delta_{1} \right] < \frac{4k}{k^2} \left(\|U\|_1 + \delta_{1}\right) = \frac{4}{k} \|U\|_1 + \frac{4}{k} \ensuremath{\nu} k^{\ensuremath{\gamma}}:=\alpha.
\end{split}
\end{equation*}
Note that \(\alpha\) depends only on \(k, \ensuremath{\nu}, \ensuremath{\gamma}\) and \(\|U\|_1\) (and not on \(a\)).
Now we would like to use the generalised version of Azuma's inequality (Lemma~\ref{Lemma:generalised_azuma}) with the probability space \(([0,1]^k, A, \pi)\), the measurable function \(f(X)=\|B_X\|_{{}_\square}: [0,1]^k \rightarrow \ensuremath{\mathbb{R}}\) and the events
\begin{equation*}
\begin{split}
H_{\alpha,a}:= \left\{X\in [0,1]^k \quad \bigg| \quad \left|\Big.\ensuremath{\mathbb{E}}_{X_i, i\geq a} \|B_X\|_{{}_\square} - \ensuremath{\mathbb{E}}_{X_i, i>a} \|B_{X^*}\|_{{}_\square}\right| \geq \alpha\right\}.
\end{split}
\end{equation*}
Note that above we just proved that \(L \subseteq \left(\cup_{a\in [k]} H_{\alpha,a}\right)^c\), i.e. \(\Prob{L} \leq \Prob{\left(\cup_{a\in[k]}H_{\alpha, a}\right)^c }\) and \(\Prob{L^c} \geq \Prob{\cup_{a\in[k]}H_{\alpha, a}}\).
Let \(X\in [0,1]^k\) be random and chosen according to the product measure. Then for some \(\lambda>0\) and \(m>\max\left\{1,\frac{2}{\ensuremath{\gamma}}\right\}\)
\begin{equation*}
\begin{split}
\Prob{\left|\|B\|_{{}_\square} - \ensuremath{\mathbb{E}}_X\left[ \|B\|_{{}_\square}\right]\right| \geq \lambda \sqrt{k\cdot \alpha^2}}
& \leq 2 \exp(-2 \lambda^2) + \Prob{\cup_{a\in[k]}H_{\alpha,a}} \\
& \leq 2 \exp(-2 \lambda^2) + \Prob{L^c} \\
& \leq 2 \exp(-2 \lambda^2) + k^2 \min\left\{1, \frac{c(U,m)}{\ensuremath{\nu}^m k^{\ensuremath{\gamma} m}}\right\},
\end{split}
\end{equation*}
where we used the result of Proposition~\ref{Prop:L_Chebyshev}.
Using Proposition~\ref{Prop:L_Chernoff} for some \(\lambda_*>0\) and \(m_* \geq 1\)
\begin{equation*}
\begin{split}
\Prob{\left|\|B\|_{{}_\square} - \ensuremath{\mathbb{E}}_X\left[ \|B\|_{{}_\square}\right]\right| \geq \lambda_* \sqrt{k\cdot \alpha^2}}
& \leq 2 \exp(-2 \lambda_*^2) + k^2 \min\left\{1, \frac{c^*(U,m_*)}{e^{m_* \ensuremath{\nu} k^{\ensuremath{\gamma}}}}\right\}.
\end{split}
\end{equation*}
Let \(\beta = \sqrt{m\ensuremath{\gamma} -2}\) (well defined since \(m\ensuremath{\gamma} -2 > 0\) is assumed in the polynomial case), \(\lambda = \sqrt{\ln k} \cdot \frac{\beta}{\sqrt{2}}\) and \(\lambda_* = \frac{k^\epsilon}{\sqrt{2}}\). Then for \(k\) large enough (so that \(\|U\|_1 \leq \delta_{1} =\ensuremath{\nu} k^{\ensuremath{\gamma}}\))
\begin{equation*}
\begin{split}
\lambda\cdot \sqrt{k\cdot \alpha^2}
& \leq C_{0} k^{-\frac{1}{2}+\ensuremath{\gamma} } \cdot \sqrt{\ln{k}} \quad \quad \text{ where } C_{0} := 8 \ensuremath{\nu} \cdot \frac{\beta}{\sqrt{2}}
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\lambda_*\cdot \sqrt{k\cdot \alpha^2}
& \leq C_{0,*} k^{-\frac{1}{2}+\ensuremath{\gamma} +\epsilon} \quad \quad \text{ where } C_{0,*} := 8 \ensuremath{\nu} \cdot \frac{1}{\sqrt{2}}.
\end{split}
\end{equation*}
Hence we obtain that for \(m>\max\left\{1,\frac{2}{\ensuremath{\gamma}}\right\}\) and \(k\) large enough
\begin{equation}\label{Ineq:Azumalower_Chebyshev}
\begin{split}
\Prob{\left|\Big.\|B\|_{{}_\square} - \ensuremath{\mathbb{E}}_X\left[ \|B\|_{{}_\square}\right]\right| \geq C_{0} k^{-\frac{1}{2}+\ensuremath{\gamma}}\cdot \sqrt{\ln{k}}}
& \leq C_{1}\cdot k^{2-\ensuremath{\gamma} m},
\end{split}
\end{equation}
where \(C_{1}:=2+ \frac{c(U,m)}{\ensuremath{\nu}^m}\), and also
\begin{equation}\label{Ineq:Azumalower_Chernoff}
\begin{split}
\Prob{\left|\Big.\|B\|_{{}_\square} - \ensuremath{\mathbb{E}}_X\left[ \|B\|_{{}_\square}\right]\right| \geq C_{0,*} k^{-\frac{1}{2}+\ensuremath{\gamma} + \epsilon}}
& \leq 2e^{-k^{2\epsilon}} + k^{2}e^{-m_*\ensuremath{\nu} k^{\ensuremath{\gamma}}}\cdot c^*(U,m_*),
\end{split}
\end{equation}
matching the second bound in Proposition~\ref{Prop:Concentration}.
\subsection{End of proof of lower bound in Theorem~\ref{Thm:First_sampling}}
Now we look back to inequality~(\ref{Ineq:Elowerbound}):
\begin{equation*}
\ensuremath{\mathbb{E}}_X\left[ \|B\|_{{}_\square} \right] -\|U\|_{{}_\square} \geq - \frac{1}{k}\|U\|_{{}_\square}.
\end{equation*}
Using the generalised version of Azuma's inequality we proved (inequality~(\ref{Ineq:Azumalower_Chebyshev})) that with probability at least \(1-C_{1} \cdot k^{2-\ensuremath{\gamma} m}\)
\begin{equation*}
\ensuremath{\mathbb{E}}_X\left[\|B\|_{{}_\square}\right] < \|B\|_{{}_\square} + C_{0} \cdot k^{-\frac{1}{2} + \ensuremath{\gamma}} \cdot \sqrt{\ln k}.
\end{equation*}
Put these together to get the required result: with probability at least \(1-C_{1}\cdot k^{2-\ensuremath{\gamma} m}\)
\begin{equation*}
\|B\|_{{}_\square} - \|U\|_{{}_\square} \geq -\frac{1}{k} \|U\|_{{}_\square} - C_{0}\cdot k^{-\frac{1}{2}+\ensuremath{\gamma}} \cdot \sqrt{\ln k}.
\end{equation*}
\section{Proof of Proposition~\ref{Prop:Plus_upper}}\label{Section:upper_bound}
We denote by \(Q^+_1\) the set of columns \(j \in [k]\) for which \(B(Q_1, \{j\}) > 0\). We define the set of columns \(Q^-_1\) and the sets of rows
\(Q^+_2, Q^-_2\) analogously. Note that \(B(Q_1, Q^+_1), B(Q^+_2, Q_2) \geq 0\) by this definition.
\begin{lemma}(\cite[Lemma 3]{Alon})\label{Lemma:BS12} Let \(S_1, S_2 \subseteq[k]\), and let \(Q\) be random \(q\)-subset of \([k]\) \((1 \leq q \leq k)\). Then for any \(X\in [0,1]^k\)
\[B(S_1, S_2) \leq \ensuremath{\mathbb{E}}_Q\bigg[B\left((Q \cap S_2)^+, S_2\right)\bigg] + \frac{k}{\sqrt{q}}\|B\|_F.\]
\end{lemma}
Reminder:
\[\|B\|_F = \sqrt{\sum_{i,j \in [k]} B_{i,j}^2}.\]
The following lemma gives an upper bound on the one-sided cut norm, using the sampling procedure from Lemma~\ref{Lemma:BS12} (i.e. by maximizing only over certain rectangles, at the cost of averaging these estimates). The main point for our purposes will be that (for a fixed \(Q_1\) and \(Q_2\)), the number of rectangles to consider is only \(4^q\), as opposed to \(4^k\) in the definition of the cut norm.
\begin{lemma}\label{Lemma:Bplus_upper} Let \(Q_1\) and \(Q_2\) be random \(q\)-subsets of \([k], (1 \leq q \leq k)\). Then for any \(X\in [0,1]^k\)
\begin{equation*}
\begin{split}
\|B\|^+_{{}_\square} \leq & \frac{1}{k^2} \ensuremath{\mathbb{E}}_{Q_1,Q_2} \left[\max_{R_i\subseteq Q_i} B(R_2^+, R_1^+)\right]+\frac{2}{k\sqrt{q}} \|B\|_F.
\end{split}
\end{equation*}
\end{lemma}
The proof follows by applying Lemma~\ref{Lemma:BS12} twice, once for the rows and once for the columns; a Monte Carlo version of the resulting estimator is sketched below.
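As an aside, the bound of Lemma~\ref{Lemma:Bplus_upper} suggests a simple randomized estimator; the Python sketch below (an illustrative implementation of our own, with the number of draws chosen arbitrarily) averages over a few random pairs \(Q_1,Q_2\) and inspects only the \(4^q\) rectangles per draw:
\begin{verbatim}
import itertools
import numpy as np

def plus_cut_norm_upper(B, q, draws=20, rng=np.random.default_rng()):
    # Empirical version of the bound above: average over random
    # q-subsets Q1, Q2 of max_{R_i subset Q_i} B(R2^+, R1^+),
    # plus the error term (2 / (k sqrt(q))) * ||B||_F.
    k = B.shape[0]
    total = 0.0
    for _ in range(draws):
        Q1 = rng.choice(k, size=q, replace=False)
        Q2 = rng.choice(k, size=q, replace=False)
        best = 0.0
        for s1 in range(q + 1):
            for R1 in itertools.combinations(Q1, s1):
                # R1^+ : columns j with B(R1, {j}) > 0
                cols = np.flatnonzero(B[list(R1), :].sum(axis=0) > 0)
                for s2 in range(q + 1):
                    for R2 in itertools.combinations(Q2, s2):
                        # R2^+ : rows i with B({i}, R2) > 0
                        rows = np.flatnonzero(
                            B[:, list(R2)].sum(axis=1) > 0)
                        best = max(best, B[np.ix_(rows, cols)].sum())
        total += best
    return (total / (draws * k ** 2)
            + 2 * np.linalg.norm(B) / (k * np.sqrt(q)))
\end{verbatim}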
Hence, what we have obtained so far is
\begin{equation}\label{Ineq:EB_upper}
\begin{split}
\ensuremath{\mathbb{E}}_{X} \left[\|B\|^+_{{}_\square}\right] \leq & \frac{1}{k^2} \ensuremath{\mathbb{E}}_X\left[\ensuremath{\mathbb{E}}_{Q_1,Q_2} \left[\max_{R_i\subseteq Q_i} B(R_2^+, R_1^+)\right]\right]+\frac{2}{k\sqrt{q}} \ensuremath{\mathbb{E}}_X\left[\|B\|_F\right].
\end{split}
\end{equation}
Now we give upper bounds on the two summands of the above inequality (Inequality~(\ref{Ineq:EB_upper})) separately, because similarly to the lower bound we first give an upper bound on \(\ensuremath{\mathbb{E}}_{X\in [0,1]^k} \left[\|B\|_{{}_\square}^+\right]\) and then remove the expected value using Azuma's inequality.
\subsection{Upper bound on \(\ensuremath{\mathbb{E}}_X\left[\ensuremath{\mathbb{E}}_{Q_1, Q_2}\left[ \max\limits_{R_i\subseteq Q_i} B(R_2^+, R_1^+)\right]\right]\)}\label{Section:upper_term1}
Let \(Q_1\) and \(Q_2\) be random \(\ensuremath{q}\)-subsets of \([k]\), with \(\ensuremath{q}=\left\lfloor\frac{\ensuremath{c}^2}{4} \cdot k^{2\ensuremath{\rho}}\right\rfloor +1\) for some constants \(\ensuremath{c},\ensuremath{\rho}>0\).
First of all we prove an upper bound on \(B\left(R_2^+,R_1^+\right)\) with high probability (using Azuma's inequality), then we bound \(\max\limits_{R_i \subseteq Q_i} B\left(R_2^+,R_1^+\right)\) for arbitrary \(Q_1,Q_2\).
\subsubsection{Upper bound on \(B\left(R_2^+,R_1^+\right)\) using Azuma's inequality}\label{Section:B_upper_Azuma}
To prove an upper bound on \(B\left(R_2^+,R_1^+\right)\) first we take the expected value of $B(R_2^+, R_1^+)$ (for arbitrary \(R_i \subseteq Q_i, i=1,2\)) over the choice of \(X\in [0,1]^k\). More precisely, fix those points \(X_i\) for which \(i \in Q := Q_1 \cup Q_2\) and take the expected value over the rest. Note that \(|Q| \leq 2\ensuremath{q}\).
Let \(X_A\) denote the restriction of \(X\) to the indices in set \(A\) (i.e. \(X_A\) is a vector of length \(|A|\), which we get by keeping the coordinates in the set \(A\) and deleting all other coordinates in \(X\)).
Define
\begin{equation*}
\begin{split}
Y_1 = & \left\{y \in [0,1] : \sum_{i\in R_1} U(X_i, y) > 0\right\},\\
Y_2 = & \left\{y \in [0,1] : \sum_{i\in R_2} U(y, X_i) > 0\right\}.
\end{split}
\end{equation*}
For every \(i \in [k] \setminus Q\) and \(j \in [k] \setminus Q\), the contribution of a term \(U(X_i, X_j )\) to the value of \(E_{X}\left[B(R_2^+, R_1^+) \bigg| X_Q\right]\) is
\[\int_{Y_2\times Y_1} U(x,y) dx dy \leq \|U\|^+_{{}_\square}.\]
The contribution of the remaining terms \(U(X_i, X_j )\) with \(i \in Q\) or \(j \in Q\)
is less than
\[ 2 \cdot k \cdot \sum_{i \in Q} U_{X_i},\]
because for example, assuming \(i\in Q\) and \(j \in [k] \setminus Q\),
\begin{equation*}
\begin{split}
E_{X}\left[U(X_i,X_j) \bigg| X_Q\right] = & \int_{[0,1]^{k-|Q|}} U(X_i,X_j) \prod_{\alpha\in [k] \setminus Q} dX_\alpha \\
= & \int_0^1 U(X_i,X_j)dX_j \quad \text{ using independence} \\
\leq & \int_0^1 \Big|U(X_i,X_j)\Big| dX_j = U_{X_i}.
\end{split}
\end{equation*}
Hence
\begin{equation}\label{Ineq:EBR12}
\begin{split}
E_{X}\left[B\left(R_2^+, R_1^+\right) \bigg| X_Q\right] \leq & (k-|Q|)^2 \|U\|^+_{{}_\square} + 2k \sum_{i \in Q} U_{X_i} \\
\end{split}
\end{equation}
Next we show that with high probability the value of \(B(R_2^+, R_1^+)\) is highly concentrated around its expectation. (Reminder: \(B=B_X=U[X]\), so \(B\) depends on \(X\).)
The function \(B_X\left(R_2^+,R_1^+\right)\) is a function of the independent random variables \(X_\ell, \ell \in [k] \setminus Q\) (as we fixed the other values), and if we change exactly one coordinate of \(X\), say \(a\) (i.e. we change the value of \(X_a\) to \(Y_a \in [0,1]\) and \(Y_i=X_i\) for \(i\neq a\)), the sum \(B_X(R_2^+, R_1^+)\) changes to \(B_{Y}(R_2^+, R_1^+)\). Note that
\begin{equation}\label{Ineq:B_R_difference}
\begin{split}
& \left|B_{X}(R_2^+, R_1^+)-B_{Y}(R_2^+, R_1^+)\right| \leq \sum_{j\neq a} \big| U(X_a,X_j) \big| + \sum_{j\neq a} \big|U(Y_a,Y_j)\big| \\
& \quad = \sum_{j\neq a, j \in [k]\setminus Q} \big| U(X_a,X_j) \big|+ \sum_{ j\in Q} \big| U(X_a,X_j) \big| + \sum_{j\neq a, j \in [k]\setminus Q} \big|U(Y_a,X_j)\big| + \sum_{j \in Q} \big|U(Y_a,X_j)\big|,
\end{split}
\end{equation}
using similar argument as in the lower bound.
Let \(Z\in [0,1]^k\) be fixed, chosen uniformly at random. Let us suppose that \(X_1 = Z_1, X_2=Z_2, \dots , X_{a-1}=Z_{a-1}\) are the fixed values (and also \(X_Q=Z_Q\), but \(a\notin Q\)) and we are fixing the value at index \(a\), so we take the expected value over \(X_{\{i>a\} \setminus Q}\). Note that \(Y_1=Z_1, Y_2=Z_2, \dots, Y_{a-1} = Z_{a-1}\) also holds.
\begin{equation*}
\begin{split}
& \left|\ensuremath{\mathbb{E}}_{X_i, i>a, i\notin Q} \Big. \right. \left.\left[ B_X\left(R_2^+, R_1^+\right) - B_{Y} \left(R_2^+, R_1^+\right) \right] \Big.\right| \leq \ensuremath{\mathbb{E}}_{X_i, i>a, i\notin Q} \left[ \left| B_X\left(R_2^+, R_1^+\right) - B_{Y} \left(R_2^+, R_1^+\right) \right| \right] \\
& \quad \leq \ensuremath{\mathbb{E}}_{X_i, i>a, i\notin Q}\left[\sum_{j\neq a, j \in [k]\setminus Q} \big| U(X_a,X_j) \big| \right]+ \ensuremath{\mathbb{E}}_{X_i, i>a, i\notin Q}\left[\sum_{ j\in Q} \big| U(X_a,X_j) \big| \right]\\
&\quad \quad + \ensuremath{\mathbb{E}}_{X_i, i>a, i\notin Q}\left[\sum_{j\neq a, j \in [k]\setminus Q} \big|U(Y_a,X_j)\big| \right]+ \ensuremath{\mathbb{E}}_{X_i, i>a, i\notin Q}\left[\sum_{j \in Q} \big|U(Y_a,X_j)\big|\right] \\
& \quad = \ensuremath{\mathbb{E}}_{X_i, i>a, i\notin Q}\left[\sum_{j\neq a, j \in [k]\setminus Q} \left(\big| U(X_a,X_j) \big| + \big|U(Y_a,X_j)\big| \right)\right]\\
& \quad \quad + \ensuremath{\mathbb{E}}_{X_i, i>a, i\notin Q}\left[\sum_{j \in Q} \left(\big|U(X_a,X_j)\big| + \big| U(Y_a,X_j) \big| \right)\right] \\
& \quad = \sum_{j< a, j \in [k]\setminus Q} \left(\big| U(X_a,X_j) \big| + \big|U(Y_a,X_j)\big| \right) + \sum_{j> a, j \in [k]\setminus Q} \ensuremath{\mathbb{E}}_{X_i, i>a, i\notin Q}\left[\big| U(X_a,X_j) \big| + \big|U(Y_a,X_j)\big| \right]\\
& \quad \quad + \sum_{j \in Q} \left(\big|U(X_a,X_j)\big| + \big| U(Y_a,X_j) \big| \right) \\
& \quad = \sum_{j< a, j \in [k]\setminus Q} \left(\big| U(X_a,X_j) \big| + \big|U(Y_a,X_j)\big| \right) + \sum_{j> a, j \in [k]\setminus Q} \left(U_{X_a} + U_{Y_a}\right)\\
& \quad \quad + \sum_{j \in Q} \left(\big|U(X_a,X_j)\big| + \big| U(Y_a,X_j) \big| \right). \\
\end{split}
\end{equation*}
We would like to use Azuma's inequality, so we are interested in the following difference, where we fix \(Y_a=Z_a\), but we don't fix \(X_a\):
\begin{equation*}
\begin{split}
& \Big|\ensuremath{\mathbb{E}}_{X_i, i\geq a, i\notin Q} \left[ B_X\left(R_2^+, R_1^+\right)\right] - \ensuremath{\mathbb{E}}_{X_i, i> a, i\notin Q} \left[B_{Y} \left(R_2^+, R_1^+\right) \right] \Big| \\
& \quad = \left|\ensuremath{\mathbb{E}}_{X_a} \left[ \ensuremath{\mathbb{E}}_{X_i, i> a, i\notin Q} \left[ B_X\left(R_2^+, R_1^+\right) - B_{Y} \left(R_2^+, R_1^+\right) \right]\right]\right|\\
& \quad \leq \ensuremath{\mathbb{E}}_{X_a} \left| \ensuremath{\mathbb{E}}_{X_i, i> a, i\notin Q} \left[ B_X\left(R_2^+, R_1^+\right) - B_{Y} \left(R_2^+, R_1^+\right) \right]\right| \\
& \quad \leq \ensuremath{\mathbb{E}}_{X_a} \left[\sum_{j< a, j \in [k]\setminus Q} \left(\big| U(X_a,X_j) \big| + \big|U(Y_a,X_j)\big| \right) \right] + \ensuremath{\mathbb{E}}_{X_a} \left[ \sum_{j> a, j \in [k]\setminus Q} \left(U_{X_a} + U_{Y_a}\right)\right] \\
& \quad \quad + \ensuremath{\mathbb{E}}_{X_a} \left[\sum_{j \in Q} \left(\big|U(X_a,X_j)\big| + \big| U(Y_a,X_j) \big| \right) \right] \\
& \quad = \sum_{j< a, j \in [k]\setminus Q} U_{X_j} + \sum_{j< a, j \in [k]\setminus Q} \big| U(Y_a,X_j)\big| + \sum_{j> a, j \in [k]\setminus Q} \|U\|_1 + \sum_{j> a, j \in [k]\setminus Q} U_{Y_a} \\
& \quad \quad + \sum_{j \in Q} U_{X_j} + \sum_{j \in Q} \big| U(Y_a,X_j) \big| \\
\end{split}
\end{equation*}
Suppose that \(Y \in L\) (see Definition~\ref{def:L}). Then (because \(Y_i=X_i\) for \(i\neq a\))
\begin{equation*}
\begin{split}
& \Big|\ensuremath{\mathbb{E}}_{X_i, i\geq a, i\notin Q} \left[ B_X\left(R_2^+, R_1^+\right)\right] - \ensuremath{\mathbb{E}}_{X_i, i> a, i\notin Q} \left[B_{Y} \left(R_2^+, R_1^+\right) \right] \Big| \\
& \quad \leq \sum_{j< a, j \in [k]\setminus Q} \left(\|U\|_1 + \delta_{1}\right) + \sum_{j< a, j \in [k]\setminus Q} \left(\|U\|_1 + \delta_{0}\right) + \sum_{j> a, j \in [k]\setminus Q} \|U\|_1 + \sum_{j> a, j \in [k]\setminus Q} \left(\|U\|_1 + \delta_{1}\right) \\
& \quad \quad + \sum_{j \in Q} \left(\|U\|_1 + \delta_{1}\right) + \sum_{j \in Q} \left( \|U\|_1 + \delta_{0} \right) \\
& \quad \leq (2k-2)\cdot \|U\|_1 + (k-1) \cdot \delta_{0} + (k-1) \cdot \delta_{1}
< 2k\cdot \left(\|U\|_1 + \delta_{1}\right)=: \beta. \\
\end{split}
\end{equation*}
Note that the upper bound is independent of \(a\); it depends only on \(\ensuremath{\nu},\ensuremath{\gamma},U,k\).
Now we would like to use the modified version of the generalised version of Azuma's inequality (Lemma~\ref{Lemma:generalised_azuma}) with the probability space \(\left([0,1]^{k-|Q|}, A, \pi \right)\), the measurable functions \(f_{R_1,R_2}(X\mid X_Q) = B\left(R_2^+,R_1^+\right)\colon [0,1]^{k-|Q|} \rightarrow \ensuremath{\mathbb{R}}\) (note that \(X_Q\) is still fixed, and there are \(4^q\) such functions), and the events
\begin{equation*}
\begin{split}
H_{\beta,a, R_1,R_2}:= & \left\{X\in [0,1]^k \quad \Big| \left|\ensuremath{\mathbb{E}}_{X_i, i\geq a, i\notin Q} \left[ B_X\left(R_2^+, R_1^+\right)\right] - \ensuremath{\mathbb{E}}_{X_i, i> a, i\notin Q} \left[B_{Y} \left(R_2^+, R_1^+\right) \right] \right| \geq \beta\right\}.
\end{split}
\end{equation*}
Above we proved that \(L \subseteq \left(\cup_{R_i\subseteq Q_i}\cup_{a\in [k]}H_{\beta,a}\right)^c\subseteq \left(\cup_{a\in [k]}H_{\beta,a}\right)^c\), i.e.
\[\Prob{L} \leq \Prob{\left(\cup_{R_i\subseteq Q_i} \cup_{a\in[k]} H_{\beta,a}\right)^c} \leq \Prob{\left(\cup_{a\in [k]}H_{\beta,a}\right)^c}\]
and \(\Prob{L^c} \geq \Prob{\cup_{R_i\subseteq Q_i}\cup_{a\in[k]} H_{\beta,a}} \geq \Prob{\cup_{a\in [k]}H_{\beta,a}}\).
Let \(X\in [0,1]^k\) be random and chosen according to the product measure (\(X_Q=Z_Q\) is already given). Then for some \(\lambda>0\) and \(m>\max\left\{2,\frac{2}{\ensuremath{\gamma}}\right\}\)
\begin{equation*}
\begin{split}
& \Prob{\left|B_X\left(R_2^+, R_1^+\right) - \ensuremath{\mathbb{E}}_{X} \left[B_X\left(R_2^+, R_1^+\right) \Big| X_Q\right] \right| \geq \lambda \sqrt{k\cdot \beta^2} \quad \text{for some } R_1\subseteq Q_1, R_2\subseteq Q_2} \\
& \quad \leq 4^q \cdot 2 \exp\left(-2 \lambda^2\right) +\Prob{\cup_{R_i\subseteq Q_i}\cup_{a\in[k]}H_{\beta,a}} \leq 4^q \cdot 2 \exp\left(-2 \lambda^2\right) + \Prob{L^c}. \\
\end{split}
\end{equation*}
Using the result of Proposition~\ref{Prop:L_Chebyshev} we get that
\begin{equation*}
\begin{split}
& \Prob{\left|B_X\left(R_2^+, R_1^+\right) - \ensuremath{\mathbb{E}}_{X} \left[B_X\left(R_2^+, R_1^+\right) \Big| X_Q\right] \right| \geq \lambda \sqrt{k\cdot \beta^2} \quad \text{for some } R_1\subseteq Q_1, R_2\subseteq Q_2} \\
& \quad \leq 4^q \cdot 2\exp\left(-2 \lambda^2\right) + k^2\cdot \frac{c(U,m)}{\ensuremath{\nu}^{m} k^{\ensuremath{\gamma} m}}.
\end{split}
\end{equation*}
Let \(\lambda = \sqrt{\ln k \cdot q \cdot \ensuremath{\nu} \cdot m \cdot \frac{ \ensuremath{\gamma}+2}{2}}\).
Then
\begin{equation*}
\begin{split}
\exp\left(-2 \lambda^2\right)
& = \exp\left(-\ln k \cdot q \cdot \ensuremath{\nu} \cdot m \cdot (\ensuremath{\gamma}+2)\right) = k^{-q \ensuremath{\nu} m \ensuremath{\gamma}}\cdot k^{-q \ensuremath{\nu} m } \cdot k^{-q \ensuremath{\nu} m} \leq k^{-\ensuremath{\gamma} m} \cdot \ensuremath{\nu}^{-m} \cdot 4^{-q},
\end{split}
\end{equation*}
because \(q \ensuremath{\nu} \geq 1\) and \(q\geq 1\), because \(k^{\ensuremath{\nu}} \geq \ensuremath{\nu}\) for \(k>e\), and finally because \(\log_k 4\leq 2 \leq \ensuremath{\nu} m\) since \(k\geq 2\), \(\ensuremath{\nu} \geq 1\), and \(m\geq 2\).
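Spelled out, these three observations bound the three factors separately:
\begin{equation*}
k^{-q \ensuremath{\nu} m \ensuremath{\gamma}} \leq k^{-\ensuremath{\gamma} m}, \quad \quad k^{-q \ensuremath{\nu} m} \leq k^{-\ensuremath{\nu} m} \leq \ensuremath{\nu}^{-m}, \quad \quad k^{-q \ensuremath{\nu} m} = \left(k^{\ensuremath{\nu} m}\right)^{-q} \leq 4^{-q}.
\end{equation*}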
On the other hand for \(k\) large enough (so that \(\|U\|_1 \leq \delta_{1} =\ensuremath{\nu} k^{\ensuremath{\gamma}}\))
\begin{equation*}
\begin{split}
\lambda\cdot \sqrt{k\cdot \beta^2}
< C_{0,u} \sqrt{\ln k} \cdot k^{\frac{3}{2}+\ensuremath{\gamma}} \sqrt{q}\ensuremath{\nu}^{\frac{3}{2}} \quad \quad \text{ where } C_{0,u} := 4 \cdot \sqrt{\frac{m(\ensuremath{\gamma} + 2)}{2}}.
\end{split}
\end{equation*}
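For the record, the computation behind this bound reads as follows (using \(\beta = 2k\left(\|U\|_1 + \delta_{1}\right) \leq 4\ensuremath{\nu} k^{1+\ensuremath{\gamma}}\) for such \(k\)):
\begin{equation*}
\lambda\cdot \sqrt{k\cdot \beta^2} \leq \sqrt{\ln k \cdot q \cdot \ensuremath{\nu} \cdot m \cdot \frac{\ensuremath{\gamma}+2}{2}} \cdot \sqrt{k} \cdot 4\ensuremath{\nu} k^{1+\ensuremath{\gamma}} = 4\sqrt{\frac{m(\ensuremath{\gamma}+2)}{2}} \cdot \sqrt{\ln k} \cdot k^{\frac{3}{2}+\ensuremath{\gamma}} \sqrt{q}\, \ensuremath{\nu}^{\frac{3}{2}}.
\end{equation*}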
Hence we have shown that, for \(m>\max\left\{1,\frac{2}{\ensuremath{\gamma}}\right\}\) and \(k > e\) large enough,
\begin{equation}\label{Ineq:Azumaupper}
\begin{split}
& \Prob{\left|B_X\left(R_2^+, R_1^+\right) - \ensuremath{\mathbb{E}}_X\left[ B_X\left(R_2^+, R_1^+\right) \Big| X_Q\right]\right| > C_{0,u} \sqrt{\ln k} \cdot k^{\frac{3}{2}+\ensuremath{\gamma}} \sqrt{q} \ensuremath{\nu}^{\frac{3}{2}} \quad \text{ for some } R_1\subseteq Q_1, R_2\subseteq Q_2} \\
& \quad \quad \quad \quad \quad < C_{1,u}\cdot k^{2-\ensuremath{\gamma} m} \ensuremath{\nu}^{-m}, \quad \quad \quad \text{ where } C_{1,u}:= 2+c(U,m) >\frac{2}{k^2} + c(U,m).
\end{split}
\end{equation}
Hence, using the modified version of the generalised Azuma inequality, we have proved (inequality~(\ref{Ineq:Azumaupper})) that with probability at least \(1-C_{1,u}\cdot k^{2-\ensuremath{\gamma} m}\ensuremath{\nu}^{-m}\), for every \(R_1\subseteq Q_1\) and \(R_2\subseteq Q_2\),
\begin{equation*}
\begin{split}
B_X\left(R_2^+, R_1^+\right) < \ensuremath{\mathbb{E}}_{X}\left[B_X\left(R_2^+, R_1^+\right) \bigg| X_Q\right] + C_{0,u} \cdot \sqrt{\ln k} \cdot k^{\frac{3}{2}+\ensuremath{\gamma}} \sqrt{q} \ensuremath{\nu}^{\frac{3}{2}}.
\end{split}
\end{equation*}
We now return to inequality~(\ref{Ineq:EBR12}):
\begin{equation*}
\begin{split}
\ensuremath{\mathbb{E}}_{X}\left[B\left(R_2^+, R_1^+\right) \bigg| X_Q\right]
\leq & (k-|Q|)^2 \|U\|^+_{{}_\square} + 2k \sum_{i \in Q} U_{X_i}. \\
\end{split}
\end{equation*}
Putting these together we get the following:
on a set of \(X\in[0,1]^k\) (with \(X_Q\) given) of measure at least \(1-C_{1,u}\cdot k^{2-\ensuremath{\gamma} m}\ensuremath{\nu}^{-m}\), the following holds for all \(R_1,R_2\):
\begin{equation}\label{Ineq:BR_upper}
\begin{split}
B\left(R_2^+, R_1^+\right) < (k-|Q|)^2 \|U\|^+_{{}_\square} + 2k \sum_{i \in Q} U_{X_i} + C_{0,u} \cdot \sqrt{\ln k} \cdot k^{\frac{3}{2}+\ensuremath{\gamma}} \sqrt{q} \ensuremath{\nu}^{\frac{3}{2}}.
\end{split}
\end{equation}
\subsubsection{Upper bound on \(\ensuremath{\mathbb{E}}_{X\in [0,1]^k} \left[\max\limits_{R_i\subseteq Q_i} B\left(R_2^+, R_1^+\right) \right]\)}\label{Section:upper_term2}
From the above we also know that with probability at least \(1-C_{1,u}\cdot k^{2-\ensuremath{\gamma} m} \ensuremath{\nu}^{-m}\)
(i.e. if we are on the set of \(X\in[0,1]^k\) where inequality~(\ref{Ineq:BR_upper}) holds for all \(R_1\subseteq Q_1, R_2 \subseteq Q_2\))
\begin{equation*}
\begin{split}
\max\limits_{R_i\subseteq Q_i} B\left(R_2^+, R_1^+\right) < (k-|Q|)^2 \|U\|^+_{{}_\square} + 2k \sum_{i \in Q} U_{X_i} + C_{0,u} \cdot \sqrt{\ln k} \cdot k^{\frac{3}{2}+\ensuremath{\gamma}} \sqrt{q} \ensuremath{\nu}^{\frac{3}{2}}.
\end{split}
\end{equation*}
Let
\[Z:=\max\left\{0, \max\limits_{R_i\subseteq Q_i} B\left(R_2^+, R_1^+\right) - (k-|Q|)^2 \|U\|^+_{{}_\square} - 2k \sum_{i \in Q} U_{X_i}\right\}.\]
Then we know that
\begin{equation*}
\begin{split}
\mathds{P}_X\left(Z \geq C_{0,u} \cdot \sqrt{\ln k} \cdot k^{\frac{3}{2}+\ensuremath{\gamma}} \sqrt{q} \ensuremath{\nu}^{\frac{3}{2}}\right) \leq C_{1,u}\cdot k^{2-\ensuremath{\gamma} m} \ensuremath{\nu}^{-m}.
\end{split}
\end{equation*}
Let us use the following notation: \begin{equation*}
\begin{split}
t =\ensuremath{\nu}^{\frac{3}{2}} \quad \quad \quad \text{ and }\quad \quad \quad
f =C_{0,u} \cdot \sqrt{\ln k} \cdot k^{\frac{3}{2}+\ensuremath{\gamma}} \sqrt{q}.
\end{split}
\end{equation*}
Since \(\ensuremath{\nu}\geq 1\) was arbitrary, the tail bound holds for every \(t\geq 1\), and the above inequality becomes
\begin{equation*}
\begin{split}
\mathds{P}_X\left(Z \geq f\cdot t \right) \leq C_{1,u}\cdot k^{2-\ensuremath{\gamma} m} t^{-\frac{2m}{3}}.
\end{split}
\end{equation*}
Note that
\begin{align}
\nonumber \ensuremath{\mathbb{E}}_X\left[Z\right] & = \int\limits_0^\infty \mathds{P}_X(Z\geq s) ds = \int\limits_0^\infty \mathds{P}_X(Z\geq t\cdot f) \cdot f dt \\
\label{Ineq:kiintegralas} & =\int\limits_0^1 \mathds{P}_X(Z\geq t\cdot f) \cdot f dt + \int\limits_1^\infty \mathds{P}_X(Z\geq t\cdot f) \cdot f dt \\
\nonumber & \leq \int\limits_0^1 1 \cdot f dt + f \cdot \int\limits_1^\infty \mathds{P}_X(Z\geq t\cdot f) dt \leq f + f \cdot \int\limits_1^\infty C_{1,u}\cdot k^{2-\ensuremath{\gamma} m} t^{-\frac{2m}{3}} dt \\
\nonumber & = f + f \cdot C_{1,u}\cdot k^{2-\ensuremath{\gamma} m} \cdot \left[ \frac{t^{1-\frac{2m}{3}}}{1-\frac{2m}{3}}\right]_1^\infty \\
\nonumber & = C_{0,u} \cdot \sqrt{\ln k} \cdot k^{\frac{3}{2}+\ensuremath{\gamma}} \sqrt{q} \cdot \left(1 - \frac{C_{1,u}\cdot k^{2-\ensuremath{\gamma} m}}{1-\frac{2m}{3}}\right) \quad \quad \text{ given that } \frac{3}{2}<m.
\end{align}
In particular, if \(m>\max\left\{1,\frac{2}{\ensuremath{\gamma}},\frac{3}{2}\right\}\), then
\begin{equation*}
\begin{split}
& \ensuremath{\mathbb{E}}_X \left[ \max\limits_{R_i\subseteq Q_i} B\left(R_2^+, R_1^+\right) \right] - (k-|Q|)^2 \|U\|^+_{{}_\square} - 2kq \|U\|_1 \\
& \quad \quad = \ensuremath{\mathbb{E}}_X \left[ \max\limits_{R_i\subseteq Q_i} B\left(R_2^+, R_1^+\right) - (k-|Q|)^2 \|U\|^+_{{}_\square} - 2k \sum_{i \in Q} U_{X_i}\right] \\
& \quad \quad \leq \ensuremath{\mathbb{E}}_X \left[\max\left\{0, \max\limits_{R_i\subseteq Q_i} B\left(R_2^+, R_1^+\right) - (k-|Q|)^2 \|U\|^+_{{}_\square} - 2k \sum_{i \in Q} U_{X_i}\right\}\right] \\
& \quad \quad \leq C_{0,u} \cdot \sqrt{\ln k} \cdot k^{\frac{3}{2}+\ensuremath{\gamma}} \sqrt{q} \cdot \left(1 - \frac{C_{1,u}\cdot k^{2-\ensuremath{\gamma} m}}{1-\frac{2m}{3}}\right)
\end{split}
\end{equation*}
and hence
\begin{equation*}
\begin{split}
\ensuremath{\mathbb{E}}_X \left[ \max\limits_{R_i\subseteq Q_i} B\left(R_2^+, R_1^+\right) \right] & \leq (k-|Q|)^2 \|U\|^+_{{}_\square} + 2kq \|U\|_1 + C_{0,u} \cdot \sqrt{\ln k} \cdot k^{\frac{3}{2}+\ensuremath{\gamma}} \sqrt{q} \cdot \left(1 - \frac{C_{1,u}\cdot k^{2-\ensuremath{\gamma} m}}{1-\frac{2m}{3}}\right).
\end{split}
\end{equation*}
The right hand side is independent of \(Q_1, Q_2\), so it also holds if we take the expected value over \(Q_1,Q_2\):
\begin{equation*}
\begin{split}
\ensuremath{\mathbb{E}}_X & \left[\ensuremath{\mathbb{E}}_{Q_1,Q_2} \left[ \max\limits_{R_i\subseteq Q_i} B\left(R_2^+, R_1^+\right) \right]\right] = \ensuremath{\mathbb{E}}_{Q_1,Q_2} \left[\ensuremath{\mathbb{E}}_X \left[ \max\limits_{R_i\subseteq Q_i} B\left(R_2^+, R_1^+\right) \right]\right]\\
& \leq (k-|Q|)^2 \|U\|^+_{{}_\square} + 2kq \|U\|_1 + C_{0,u} \cdot \sqrt{\ln k} \cdot k^{\frac{3}{2}+\ensuremath{\gamma}} \sqrt{q} \cdot \left(1 - \frac{C_{1,u}\cdot k^{2-\ensuremath{\gamma} m}}{1-\frac{2m}{3}}\right)\\
& \leq (k-|Q|)^2 \|U\|^+_{{}_\square} + 2kq \|U\|_1 + C_{0,u} \cdot \sqrt{\ln k} \cdot k^{\frac{3}{2}+\ensuremath{\gamma}} \sqrt{q} \cdot \left(1 - \frac{C_{1,u}}{1-\frac{2m}{3}}\right) \\
& \leq k^2 \|U\|^+_{{}_\square} + 2kq \|U\|_1 + C_{0,u} \cdot \sqrt{\ln k} \cdot k^{\frac{3}{2}+\ensuremath{\gamma}} \sqrt{q} \cdot \left(1 - \frac{C_{1,u}}{1-\frac{2m}{3}}\right),
\end{split}
\end{equation*}
where we also used that \(|Q| \in [0,k]\) and \(k^{2-\ensuremath{\gamma} m} \leq 1\).
\subsection{Upper bound on \(\ensuremath{\mathbb{E}}_X\left[\|B\|_F\right]\)}\label{Section:B_F}
Reminder:
\[\|B\|_F^2=\sum_{i \in [k]} \sum_{j\in [k]} B_{ij}^2 \quad \text{ and } \quad \|U\|_2^2=\int_{x\in [0,1]} \int_{y\in [0,1]} U(x,y)^2 dy dx.\]
\begin{lemma}\label{Lemma:Frobenius}
\[
\ensuremath{\mathbb{E}}_X\left[\|B\|_F\right]\leq \sqrt{k(k-1)} \|U\|_2 <k \|U\|_2.
\]
\end{lemma}
\begin{proof}
Note that \[\ensuremath{\mathbb{E}}_X\left[\|B\|_F^2\right]=\ensuremath{\mathbb{E}}_X\left[\sum_{i=1}^k \sum_{j=1}^k B_{ij}^2\right]= k(k-1) \cdot \int_0^1 \int_0^1 U(x,y)^2 dx dy = k(k-1) \|U\|_2^2,\]
since \(U(x,x)=0\) and hence \(B_{ii}=0\).
Also, by convexity $\left(\ensuremath{\mathbb{E}}_X\left[\|B\|_F\right]\right)^2\leq\ensuremath{\mathbb{E}}_X\left[\|B\|_F^2\right]$.
\end{proof}
\subsection{Using Azuma's inequality}\label{Section:upper_Azuma}
Let us take stock of our aims and of what we have proved so far. Our main goal is to give an upper bound on \(\|U[X]\|_{{}_\square} - \|U\|_{{}_\square}\), and we do it by first giving an upper bound on the one sided version.
We showed that the following holds (Inequality~(\ref{Ineq:EB_upper}), Section~\ref{Section:upper_term1}, Section~\ref{Section:B_F}), writing \(C_{M,0}:=C_{0,u}\) and \(C_{M,1}:=C_{1,u}\):
\begin{align}\label{Ineq:E_upper}
\nonumber \ensuremath{\mathbb{E}}_{X} \left[\|B\|^+_{{}_\square}\right] & \leq \frac{1}{k^2} \ensuremath{\mathbb{E}}_X\left[\ensuremath{\mathbb{E}}_{Q_1,Q_2} \left[\max_{R_i\subseteq Q_i} B(R_2^+, R_1^+)\right]\right]+\frac{2}{k\sqrt{q}} \ensuremath{\mathbb{E}}_X\left[\|B\|_F\right] \\
& \leq \|U\|^+_{{}_\square} + \frac{2q}{k} \|U\|_1 + C_{M,0} \cdot \sqrt{\ln k} \cdot k^{-\frac{1}{2}+\ensuremath{\gamma}} \sqrt{q} \cdot \left(1 - \frac{C_{M,1}}{1-\frac{2m}{3}}\right) + \frac{2}{\sqrt{q}} \|U\|_2.
\end{align}
Similarly to the proof in Section~\ref{Section:lower_Azuma} we use Azuma's inequality to get rid of the expected value:
For any \(S,T\subseteq [k]\) the function \(B_X\left(S,T\right)\) is a function of the independent random variables \(X_\ell, \ell \in [k]\), and if we change the value of one of these \(X_a\) to \(X^*_a\), the sum \(B_X(S,T)\) changes to \(B_{X^*}(S,T)\). Note that
\begin{equation*}
\begin{split}
\left| B_{X}(S, T) - B_{X^*}(S,T) \Big.\right| \leq &\left|\sum_{i\in S, j\in T} \left( U(X_i,X_j)-U(X^*_i, X^*_j)\right)\right|\\
\leq & \sum_{i\in S, j\in T} \left|\Big.U(X_i,X_j)-U(X^*_i, X^*_j)\right| \quad \quad \text{ (triangle inequality)}\\
\leq & 2 \sum_{i\neq a} \big| U(X_a,X_i) \big| + 2\sum_{i\neq a} \big|U(X^*_a,X_i)\big|.
\end{split}
\end{equation*}
The last inequality holds because only the pairs with \(i=a\) or \(j=a\) can contribute, and each such pair contributes at most \(\big|U(X_a,X_i)\big|+\big|U(X^*_a,X_i)\big|\); the factor \(2\) accounts for the two possible roles of the index \(a\).
Hence
\begin{equation*}
\begin{split}
& \left| k^2 \cdot \|B_X\|^+_{{{}_\square}}-k^2 \cdot \|B_{X^*}\|^+_{{{}_\square}} \right| \leq \left|\max_{S,T \subseteq[k]} B_X(S,T) - \max_{S,T \subseteq[k]} B_{X^*}(S,T)\right| \\
& \quad \quad \quad \leq 2 \sum_{ j\neq a} \left|U(X^*_a,X_j)\right|+ 2 \sum_{ j\neq a} \left|U(X_a,X_j) \right|.
\end{split}
\end{equation*}
Let us suppose that \(X_1,X_2, \dots X_{a-1}\) are fixed and we are fixing the value at index \(a\) now, so we take the expected value over the rest.
\begin{equation*}
\begin{split}
& \left|\ensuremath{\mathbb{E}}_{X_i, i>a}\left[ \|B_X\|^+_{{}_\square} - \|B_{X^*}\|^+_{{}_\square}\right]\right| \leq \ensuremath{\mathbb{E}}_{X_i, i>a}\left| \|B_X\|^+_{{}_\square} - \|B_{X^*}\|^+_{{}_\square}\right| \\
& \quad \leq \frac{2}{k^2} \ensuremath{\mathbb{E}}_{X_i, i>a} \left[\sum_{ j\neq a} \left(\big|U(X^*_a,X_j)\big|+ \big|U(X_a,X_j) \big|\right)\right]\\
& \quad = \frac{2}{k^2} \left[\sum_{ j < a} \left(\big|U(X^*_a,X_j)\big|+ \big|U(X_a,X_j) \big|\right) +\ensuremath{\mathbb{E}}_{X_i, i>a} \left[\sum_{ j> a} \left(\big|U(X^*_a,X_j)\big|+ \big|U(X_a,X_j) \big|\right)\right] \right] \\
& \quad = \frac{2}{k^2} \left[\sum_{ j < a} \left(\big|U(X^*_a,X_j)\big|+ \big|U(X_a,X_j) \big|\right) + \left|\{i\in[k] : i>a\}\right| \cdot \left(U_{X^*_a}+ U_{X_a}\right)\bigg.\right] \\
& \quad = \frac{2}{k^2} \left[\sum_{ j < a} \left(\big|U(X^*_a,X_j)\big|+ \big|U(X_a,X_j) \big|\right) + (k-a) \cdot \left(U_{X^*_a}+ U_{X_a}\right)\right]. \\
\end{split}
\end{equation*}
We would like to use Azuma's inequality, so we are interested in the following difference, where we fix \(X_a^*\):
\begin{equation*}
\begin{split}
& \left|\Big.\ensuremath{\mathbb{E}}_{X_i, i\geq a} \|B_X\|^+_{{}_\square} - \ensuremath{\mathbb{E}}_{X_i, i>a} \|B_{X^*}\|^+_{{}_\square} \right| =\left| \ensuremath{\mathbb{E}}_{X_a} \left[\Big.\ensuremath{\mathbb{E}}_{X_i, i> a}\left[ \|B_X\|^+_{{}_\square} - \|B_{X^*}\|^+_{{}_\square} \right]\right]\right| \\
& \quad \quad \leq \ensuremath{\mathbb{E}}_{X_a} \left|\Big.\ensuremath{\mathbb{E}}_{X_i, i> a}\left[ \|B_X\|^+_{{}_\square} - \|B_{X^*}\|^+_{{}_\square}\right]\right| \\
&\quad \quad \leq \frac{2}{k^2} \ensuremath{\mathbb{E}}_{X_a}\left[\sum_{ j < a} \left(\big|U(X^*_a,X_j)\big|+ \big|U(X_a,X_j) \big|\right) + (k-a) \cdot \left(U_{X^*_a}+ U_{X_a}\right)\right]\\
& \quad \quad = \frac{2}{k^2} \left[\sum_{ j < a} \big|U(X^*_a,X_j)\big|+ \ensuremath{\mathbb{E}}_{X_a}\left[\sum_{ j < a}\big|U(X_a,X_j) \big|\right] + (k-a) \cdot U_{X^*_a}+ (k-a) \cdot \ensuremath{\mathbb{E}}_{X_a}\left[U_{X_a}\right]\right]\\
& \quad \quad = \frac{2}{k^2} \left[\sum_{ j < a} \big|U(X^*_a,X_j)\big|+ \sum_{ j < a} U_{X_j} + (k-a) \cdot U_{X^*_a}+ (k-a) \cdot \|U\|_1\right].\\
\end{split}
\end{equation*}
Let us denote by \(X^*\) the vector which we get by changing \(X_a\) to \(X_a^*\) and keeping everything else the same. Suppose that \(X^* \in L\) (for definition see Definition~\ref{def:L}). Then
\begin{equation*}
\begin{split}
& \left|\Big.\ensuremath{\mathbb{E}}_{X_i, i\geq a} \|B_X\|^+_{{}_\square} - \ensuremath{\mathbb{E}}_{X_i, i>a} \|B_{X^*}\|^+_{{}_\square} \right| \\
& \quad \quad \leq \frac{2}{k^2} \left[(a-1) \left(\|U\|_1 + \delta_{M,1}\right)+ (a-1) \left(\|U\|_1 + \delta_{M,1}\right) + (k-a) \cdot \left(\|U\|_1 + \delta_{M,1}\right) + (k-a) \cdot \|U\|_1\right]\\
& \quad \quad = \frac{2}{k^2} \left[2 (k-1) \|U\|_1 + (k+a-2) \delta_{M,1} \right] < \frac{4k}{k^2} \left(\|U\|_1 + \delta_{M,1}\right) = \frac{4}{k} \|U\|_1 + \frac{4}{k} \ensuremath{\nu} k^{\ensuremath{\gamma}}=: \alpha.
\end{split}
\end{equation*}
Note that \(\alpha\) depends only on \(k, \ensuremath{\nu}, \ensuremath{\gamma}\) and \(U\) (and not on \(a\)), and is exactly the same as in Section~\ref{Section:lower_Azuma}.
Now we would like to use the modified version of the generalised version of Azuma's inequality (Lemma~\ref{Lemma:generalised_azuma}) with the probability space \(([0,1]^k, A, \pi)\), the measurable function \(f(X)=\|B_X\|^+_{{}_\square}\colon [0,1]^k \rightarrow \ensuremath{\mathbb{R}}\), and the events
\begin{equation*}
\begin{split}
H^+_{\alpha,a}:= \left\{X\in [0,1]^k \quad \bigg| \quad \left|\Big.\ensuremath{\mathbb{E}}_{X_i, i\geq a} \|B_X\|^+_{{}_\square} - \ensuremath{\mathbb{E}}_{X_i, i>a} \|B_{X^*}\|^+_{{}_\square}\right| \geq \alpha\right\}.
\end{split}
\end{equation*}
Note that above we just proved that \(L \subseteq \left(\cup_{a\in [k]} H^+_{\alpha,a}\right)^c\), i.e. \(\Prob{L} \leq \Prob{\left(\cup_{a\in[k]}H^+_{\alpha, a}\right)^c }\) and \(\Prob{L^c} \geq \Prob{\cup_{a\in[k]}H^+_{\alpha, a}}\).
Let \(X\in [0,1]^k\) be random and chosen according to the product measure. Then for some \(\lambda>0\) and \(m>\max\left\{1,\frac{2}{\ensuremath{\gamma}}\right\}\)
\begin{equation*}
\begin{split}
\Prob{\left|\|B\|^+_{{}_\square} - \ensuremath{\mathbb{E}}_X\left[ \|B\|^+_{{}_\square}\right]\right| \geq \lambda \sqrt{k\cdot \alpha^2}}
& \leq 2 \exp(-2 \lambda^2) + \Prob{\cup_{a\in[k]}H^+_{\alpha,a}} \\
& \leq 2 \exp(-2 \lambda^2) + \Prob{L^c} \\
& \leq 2 \exp(-2 \lambda^2) + k^2 \min\left\{1, \frac{c(U,1,m)}{\ensuremath{\nu}^m k^{\ensuremath{\gamma} m}}\right\},
\end{split}
\end{equation*}
where we used the result of Proposition~\ref{Prop:L_Chebyshev}.
Let \(\lambda = \sqrt{\ln k \cdot \frac{m \ensuremath{\gamma}-2}{2}}\). Then for \(k\) large enough (so that \(\|U\|_1 \leq \delta_{L,1} =\ensuremath{\nu} k^{\ensuremath{\gamma}}\))
\begin{equation*}
\begin{split}
\lambda\cdot \sqrt{k\cdot \alpha^2}
\leq C_{0} \cdot \sqrt{\ln k} \cdot k^{-\frac{1}{2}+\ensuremath{\gamma}}, \quad \quad \text{ where } C_{0} := 8 \ensuremath{\nu} \cdot \sqrt{\frac{m\ensuremath{\gamma} - 2}{2}}.
\end{split}
\end{equation*}
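Indeed, with \(\alpha \leq \frac{8}{k} \ensuremath{\nu} k^{\ensuremath{\gamma}}\) for such \(k\),
\begin{equation*}
\lambda\cdot \sqrt{k\cdot \alpha^2} \leq \sqrt{\ln k \cdot \frac{m\ensuremath{\gamma}-2}{2}} \cdot \sqrt{k} \cdot \frac{8\ensuremath{\nu} k^{\ensuremath{\gamma}}}{k} = 8\ensuremath{\nu} \sqrt{\frac{m\ensuremath{\gamma}-2}{2}} \cdot \sqrt{\ln k} \cdot k^{-\frac{1}{2}+\ensuremath{\gamma}}.
\end{equation*}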
Hence we have shown that, for \(m>\max\left\{1,\frac{2}{\ensuremath{\gamma}}\right\}\) and \(k\) large enough,
\begin{equation}\label{Ineq:Azumalower2}
\begin{split}
\Prob{\left|\Big.\|B\|^+_{{}_\square} - \ensuremath{\mathbb{E}}_X\left[ \|B\|^+_{{}_\square}\right]\right| \geq C_{0} \cdot \sqrt{\ln k} \cdot k^{-\frac{1}{2}+\ensuremath{\gamma}}}
& \leq C_{1}\cdot k^{2-\ensuremath{\gamma} m},
\end{split}
\end{equation}
where \(C_{1}=2+ \frac{c(U,m)}{\ensuremath{\nu}^m}\), i.e. the same holds for the one sided cut norm as for the cut norm (see Section~\ref{Section:lower_Azuma}).
\subsection{End of proof of Proposition~\ref{Prop:Plus_upper}}
We now return to inequality~(\ref{Ineq:E_upper}):
\begin{equation*}
\begin{split}
\ensuremath{\mathbb{E}}_{X} \left[\|B\|^+_{{}_\square}\right] & \leq \|U\|^+_{{}_\square} + \frac{2q}{k} \|U\|_1 + C_{M,0} \cdot \sqrt{\ln k} \cdot k^{-\frac{1}{2}+\ensuremath{\gamma}} \sqrt{q} \cdot \left(1 - \frac{C_{M,1}}{1-\frac{2m}{3}}\right) + \frac{2}{\sqrt{q}} \|U\|_2.
\end{split}
\end{equation*}
Using the generalised version of Azuma's inequality we proved (inequality~(\ref{Ineq:Azumalower2})) that with probability at least \(1-C_{1} \cdot k^{2-\ensuremath{\gamma} m}\)
\begin{equation*}
\ensuremath{\mathbb{E}}_X\left[\|B\|^+_{{}_\square}\right] > \|B\|^+_{{}_\square} - C_{0} \cdot \sqrt{\ln k} \cdot k^{-\frac{1}{2} + \ensuremath{\gamma}}.
\end{equation*}
Let \(q=\left\lceil\frac{ k^{\frac{1}{2} - \ensuremath{\gamma}}}{\sqrt{\ln k}}\right\rceil\), assuming \(\ensuremath{\gamma} < \frac{1}{2}\). Then
\[\sqrt{q} = \sqrt{\frac{ k^{\frac{1}{2} - \ensuremath{\gamma} }}{\sqrt{\ln k}}} + O \left(\frac{\sqrt[4]{\ln k}}{\sqrt{ k^{\frac{1}{2} - \ensuremath{\gamma}}}}\right) = \frac{k^{\frac{1}{4}-\frac{\ensuremath{\gamma}}{2}}}{\sqrt[4]{\ln k}} + O\left(\sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}}\right) \geq \sqrt{\frac{ k^{\frac{1}{2} - \ensuremath{\gamma} }}{\sqrt{\ln k}}}.\]
Put these together to get the required result: with probability at least \(1-C_{1}\cdot k^{2-\ensuremath{\gamma} m}\)
\begin{equation*}
\begin{split}
\|B\|^+_{{}_\square} - \|U\|^+_{{}_\square} & \leq \frac{2q}{k} \|U\|_{1} + C_{M,0} \cdot \sqrt{\ln k} \cdot k^{-\frac{1}{2}+\ensuremath{\gamma}} \sqrt{q} \cdot \left(1 - \frac{C_{M,1}}{1-\frac{2m}{3}}\right) + \frac{2}{\sqrt{q}} \|U\|_2 + C_{0} \cdot \sqrt{\ln k} \cdot k^{-\frac{1}{2} + \ensuremath{\gamma}}\\
& \leq \frac{2k^{\frac{1}{2} - \ensuremath{\gamma}}}{\sqrt{\ln k} \cdot k} \|U\|_{1} + C_{M,0} \cdot \sqrt{\ln k} \cdot k^{-\frac{1}{2}+\ensuremath{\gamma}} \left(\frac{k^{\frac{1}{4}-\frac{\ensuremath{\gamma}}{2}}}{\sqrt[4]{\ln k}} + O\left(\sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}}\right)\right) \cdot \left(1 - \frac{C_{M,1}}{1-\frac{2m}{3}}\right)\\
& \quad \quad + \frac{2 \sqrt[4]{\ln k}}{k^{\frac{1}{4}-\frac{\ensuremath{\gamma}}{2}}} \|U\|_2 + C_{0} \cdot \sqrt{\ln k} \cdot k^{-\frac{1}{2} + \ensuremath{\gamma}} \\
& = \frac{2 k^{-\frac{1}{2} - \ensuremath{\gamma}}}{\sqrt{\ln k}}\|U\|_{1} + C_{M,0} \cdot \sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}} \cdot \left(1 - \frac{C_{M,1}}{1-\frac{2m}{3}}\right) + O\left( \left(\ln k\right)^{\frac{3}{4}} \cdot k^{-\frac{3}{4}+\frac{3\ensuremath{\gamma}}{2}}\right)\\
& \quad \quad + 2 \sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}} \|U\|_2 + C_{0} \cdot \sqrt{\ln k} k^{-\frac{1}{2} + \ensuremath{\gamma}} \\
& = O\left(\sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}}\right).
\end{split}
\end{equation*}
\section{Proof of Theorem~\ref{Thm:First_sampling}}
Note that Proposition~\ref{Prop:Plus_upper} holds for both \(U\) and \(-U\) and thus by the union bound we get that with probability at least \(1-2C_1 \cdot k^{2-\ensuremath{\gamma} m}\)
\begin{align*}
\nonumber
& \|U[X]\|^+_{{}_\square} - \|U\|_{{}_\square}^+ \\
\nonumber
& \leq \frac{2 k^{-\frac{1}{2} - \ensuremath{\gamma}}}{\sqrt{\ln k}}\|U\|_{1} + C_{M,0} \cdot \sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}} \cdot \left(1 - \frac{C_{M,1}}{1-\frac{2m}{3}}\right) + O\left( \left(\ln k\right)^{\frac{3}{4}} \cdot k^{-\frac{3}{4}+\frac{3\ensuremath{\gamma}}{2}}\right)\\
\nonumber
& \quad \quad + 2 \sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}} \|U\|_2 + C_{0} \cdot \sqrt{\ln k} k^{-\frac{1}{2} + \ensuremath{\gamma}} \\
& = O\left(\sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}}\right) \\
\end{align*}
and
\begin{align*}
\nonumber
& \|-U[X]\|^+_{{}_\square} - \|-U\|_{{}_\square}^+ \\
\nonumber
& \leq \frac{2 k^{-\frac{1}{2} - \ensuremath{\gamma}}}{\sqrt{\ln k}}\|-U\|_{1} + C_{M,0, -} \cdot \sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}} \cdot \left(1 - \frac{C_{M,1, -}}{1-\frac{2m}{3}}\right) + O\left( \left(\ln k\right)^{\frac{3}{4}} \cdot k^{-\frac{3}{4}+\frac{3\ensuremath{\gamma}}{2}}\right)\\
\nonumber
& \quad \quad + 2 \sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}} \|-U\|_2 + C_{0} \cdot \sqrt{\ln k} k^{-\frac{1}{2} + \ensuremath{\gamma}} \\
& = O\left(\sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}}\right).
\end{align*}
As noted in Section~\ref{Section:Theorems}, \(\|U\|_{{}_\square}= \max\{\|U\|^+_{{}_\square}, \|{-U}\|^+_{{}_\square}\}\), and thus the upper bound of Theorem~\ref{Thm:First_sampling} holds: with probability at least \(1-2C_1\cdot k^{2-\ensuremath{\gamma} m}\)
\begin{align*}
& \|U[X]\|_{{}_\square} - \|U\|_{{}_\square} \leq O\left(\sqrt[4]{\ln k} \cdot k^{-\frac{1}{4}+\frac{\ensuremath{\gamma}}{2}}\right). \\
\end{align*}
\section*{Acknowledgement.}
Prepared with the professional support of the doctoral student scholarship program of the Co-operative Doctoral Program of the Ministry of Innovation and Technology financed from the National Research, Development and Innovation Fund. The second author was also supported by ERC Synergy Grant No.~810115.
Unless specified otherwise, `sequence in $X$' means a finite sequence whose terms are elements of the set $X$. For ${\bf f} := \langle x_0,x_1,\ldots,x_k\rangle$ a sequence, $x_i<_{\bf f} x_j$ \ iff \
$0\le i<j\le k$, and $x_i\le_{\bf f} x_j$ \ iff \ $0\le i\le j\le k$.
For $n$ a positive integer, Sym$(n)$ and Alt$(n)$ denote respectively the symmetric group and the alternating group on the set $n := \{0,1,\ldots, n-1\}$. When ${\bf s} :=
\langle s_0,s_1,\ldots, s_{k-1}\rangle$ is a {\em permutational sequence}, i.e., a sequence in Sym$(n)$, then $|{\bf s}|=k$ is its {\em length}, and $\bigcirc{\bf s}:=
s_0\circ s_1\circ\cdots\circ s_{k-1}$ is its compositional product.\footnote{We compose permutations from left to right. That is, when $\{f,g\}\subseteq$ Sym$(n)$ and $x\in n$, then
$x(f\circ g) = (xf)g = xfg$.}
Seq$({\bf s})$ denotes the set of sequences that are arrangements of the terms of {\bf s}. That is to say, Seq$({\bf s})$ denotes the set
$\{{\bf r}: {\bf r} :=\langle s_{\psi(0)},s_{\psi(1)},\ldots, s_{\psi(k-1)}\rangle$, where $\psi\in$ Sym$(k)\}$. Obviously ${\bf r}\in$ Seq$({\bf s}) \Rightarrow |{\bf r}|=|{\bf s}|$.\vspace{.5em}
Our general subject is the family of sets Prod$({\bf s}) := \{\bigcirc{\bf r}: {\bf r}\in$ Seq$({\bf s})\}$ for the sequences {\bf s} in Sym$(n)$. However, fully characterizing the family of all such Prod$({\bf s})$ seems daunting.
So we confine ourselves to the subclass of such {\bf s} treated in the papers \cite{Denes, Eden, Polya, Silberger}, whose results we extend.
Plainly, either Prod$({\bf s})\subseteq$ Alt$(n)$ or Prod$({\bf s})\subseteq$ Sym$(n)\setminus$Alt$(n)$. Also, $|$Prod$({\bf s})|\le|$Seq$({\bf s})|\le |{\bf s}|!$ \vspace{.5em}
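To fix these conventions, consider $n=3$ and ${\bf s} := \langle(0\, 1),(1\, 2)\rangle$. The two arrangements of {\bf s} yield \[(0\, 1)\circ(1\, 2) = (0\, 2\, 1)\quad\mbox{ and }\quad (1\, 2)\circ(0\, 1) = (0\, 1\, 2),\] so Prod$({\bf s}) = \{(0\, 2\, 1),(0\, 1\, 2)\}$; here $|$Prod$({\bf s})| = |$Seq$({\bf s})| = |{\bf s}|! = 2$, and both products are cyclic permutations of the set $3$.\vspace{.5em}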
When $f\in$ Sym$(n)$, the expression supp$(f)$ denotes the set of $x\in n$ for which $xf\not=x$. If {\bf s} is a permutational sequence then Supp$({\bf s})$ denotes the family of
all supp$(g)$ for which $g$ is a term in {\bf s}.\vspace{.5em}
By a {\em transposition} we mean a permutation $f\in$ Sym$(n)$ for which there exist elements $a\not=b$ in $n$ with $af=b$, with $bf =a$ and with $xf =x$ for all $x\in n\setminus\{a,b\}$.
For $n\ge2$, the set of transpositions in Sym$(n)$ is written $1^{n-2}2^1$. By a {\em transpositional sequence} we mean a sequence in the subset $1^{n-2}2^1$ of Sym$(n)$.
By the {\em transpositional multigraph} ${\cal T}({\bf u})$ of a sequence {\bf u} in $1^{n-2}2^1$, we mean the labeled multigraph on the vertex set $n$ that has $(x\, y)$ as a multiedge
of multiplicity $\mu(g)\ge0$ if and only if the transposition $g := (x\, y)$ occurs exactly $\mu(g)$ times as a term in {\bf u}. For convenience, we will usually take it that ${\cal T}({\bf u})$ is connected, in which event of course
$\bigcup$Supp$({\bf u})=$ supp$({\bf u})=n$. It is obvious that ${\cal T}({\bf r})={\cal T}({\bf u})$, which is to say that ${\cal T}({\bf r})$ is the same labeled multigraph as ${\cal T}({\bf u})$, if and only if
${\bf r}\in$ Seq$({\bf u})$.
The multigraph ${\cal T}({\bf u})$ is simple, i.e., is a graph, if and only if {\bf u} is injective.\footnote{We call {\bf s} {\em injective} iff $s_i=s_j \Leftrightarrow i=j$ for $s_i$ and $s_j$ terms in {\bf s};
i.e., iff the function ${\bf s}:j\mapsto s_j \in {\rm Sym}(n)$ is injective.} Where we omit the prefix ``multi'' from ``multientity'', we are tacitly indicating that the entity is simple; but our writing that
$X$ is a multithing does not prohibit $X$ from being a simple thing. (E.g., a multiedge can be of multiplicity $1$.)
For ${\cal T}({\bf u})$ a simple tree, this graph has been used in \cite{Eden} to specify the set of all ${\bf r}\in$ Seq$({\bf u})$ for which $\bigcirc{\bf r}=\bigcirc{\bf u}$, thus inducing a natural
partition of Seq$({\bf u})$. Also, both \cite{Denes} and \cite{Silberger} show that, if the multigraph ${\cal T}({\bf u})$ is simple, then every element in Prod$({\bf u})$ is a cyclic permutation of the set
$n$ if and only if ${\cal T}({\bf u})$ is a tree. See also \cite{Polya}.
\begin{defn}\label{DefPermComplete} A sequence {\bf s} in Sym$(n)$ is {\em permutationally complete} iff Prod$({\bf s})\in \{{\rm Alt}(n),{\rm Blt}(n)\}$ where ${\rm Blt}(n) :=
{\rm Sym}(n)\setminus{\rm Alt}(n)$; that is to say, ${\bf s}$ is permutationally complete iff ${\rm Prod}({\bf s})\in{\rm Sym}(n)/{\rm Alt}(n)$. `Permutationally complete' is abbreviated
perm-complete.\end{defn}
A sequence ${\bf s}$ in ${\rm Sym}(n)$ is perm-complete if and only if ${\rm Prod}({\bf s})$ is of largest possible size, $|{\rm Prod}({\bf s})| = n!/2$.
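For instance, the triangle sequence ${\bf u} := \langle(0\, 1),(0\, 2),(1\, 2)\rangle$ in Sym$(3)$ is perm-complete: $|{\bf u}|=3$ is odd, so Prod$({\bf u})\subseteq{\rm Blt}(3)$, while checking the six arrangements gives, e.g., $(0\, 1)\circ(0\, 2)\circ(1\, 2) = (0\, 2)$, \ $(0\, 2)\circ(0\, 1)\circ(1\, 2) = (0\, 1)$, and $(0\, 1)\circ(1\, 2)\circ(0\, 2) = (1\, 2)$; hence Prod$({\bf u})={\rm Blt}(3)$, which has size $3!/2=3$.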
In \S2 we elaborate criteria that imply the perm-completeness of a sequence {\bf u} in $1^{n-2}2^1$, and we provide other criteria which entail that such a {\bf u} cannot be perm-complete.
If the product function $\bigcirc$ maps Seq$({\bf s})$ onto an element in the family ${\rm Sym}(n)/{\rm Alt}(n)$, and if {\bf r} is a sequence produced by inserting into {\bf s} an additional term
$f\in{\rm Sym}(n)$, then plainly $\bigcirc$ maps ${\rm Seq}({\bf r})$ onto an element in ${\rm Sym}(n)/{\rm Alt}(n)$; {\it viz} Theorem \ref{SupSeq}. So we can confine our attention in \S2 to
those transpositional {\bf u} which are injective, and whose transpositional multigraphs are consequently simple; i.e., they are ``graphs''.
These graphs facilitate the identification of infinite classes of {\bf u} which are perm-complete and also of infinite classes of {\bf u} which fail to be perm-complete. For instance, if ${\cal T}({\bf u})$
is the complete graph ${\cal K}_n$ then {\bf u} is perm-complete, but if ${\cal T}({\bf u})$ is a tree with $n\ge3$ then {\bf u} is not perm-complete. Therefore, every injective perm-complete
transpositional sequence {\bf u} has a minimal perm-complete subsequence.
In \S2 we will specify, for each $n\ge2$, a family of minimal perm-complete injective sequences in $1^{n-2}2^1$.
\begin{defn}\label{DefCI} We call a permutational sequence {\bf s} {\em conjugacy invariant}, aka CI, iff every element in Prod$({\bf s})$ is conjugate to $\bigcirc{\bf s}$.\end{defn}
We lose no generality if we ignore the fact that ${\cal T}({\bf u})$ is labeled. Indeed, we call an unlabeled multigraph ${\cal G}$ perm-complete if ${\cal G}$ is isomorphic to ${\cal T}({\bf u})$
for some perm-complete ${\bf u}$. Likewise, ${\cal G}$ is CI if some such ${\bf u}$ is CI.
\section{Permutational completeness}
\begin{thm} \label{SupSeq} Every supersequence in {\rm Sym}$(n)$ of a perm-complete sequence {\bf s} in {\rm Sym}$(n)$ is perm-complete. \end{thm}
\begin{proof} Without loss of generality, let ${\rm Prod}({\bf s})={\rm Alt}(n)$. Pick $g\in{\rm Sym}(n)$. The mapping ${\rm Alt}(n)\rightarrow{\rm Sym}(n)$ defined by $f\mapsto f\circ g$ takes
${\rm Prod}({\bf s})$ either into ${\rm Alt}(n)$ or into ${\rm Blt}(n)$, and it is bijective. Since $|{\rm Alt}(n)|=n!/2=|{\rm Blt}(n)|$, we conclude that ${\rm Prod}({\bf w})\in{\rm Sym}(n)/{\rm Alt}(n)$
where ${\bf w} := \langle{\bf s},g\rangle=\langle s_0,s_1,\ldots, s_{k-1},g\rangle$: indeed, ${\rm Prod}({\bf w})$ lies within a single member of ${\rm Sym}(n)/{\rm Alt}(n)$ and contains the $n!/2$-element set $\{f\circ g: f\in{\rm Prod}({\bf s})\}$. Induction on the number of adjoined terms completes the proof. \end{proof}
Call a family ${\cal A}$ {\em connected} if $\bigcup{\cal A}$ cannot be expressed as the disjoint union, $\bigcup{\cal A} = \bigcup{\cal B}\,\dot{\cup}\bigcup{\cal C}$, of nonempty subfamilies ${\cal B}$
and ${\cal C}$ of ${\cal A}$ with ${\cal B}\cup{\cal C}={\cal A}$. Observe that if {\bf s} is perm-complete then ${\rm Supp}({\bf s})$ is connected.\vspace{.5em}
In \S2 we restrict our concern to those ${\bf u} := \langle u_0,u_1,\ldots, u_{k-1}\rangle$ in $1^{n-2}2^1$ for which $\bigcup{\rm Supp}({\bf u}) = n$, and for which the family ${\rm Supp}({\bf u})$ is connected.
It is easy to see that if $g\in{\rm Prod}({\bf u})$ then $g^-\in{\rm Prod}({\bf u})$ too.\footnote{We write $g^-$ to designate the inverse of $g$, where other people may prefer instead to write $g^{-1}$.}
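Indeed, reversing the order of the terms inverts the product, and every transposition is its own inverse: if ${\bf r} = \langle r_0,r_1,\ldots, r_{k-1}\rangle\in{\rm Seq}({\bf u})$, then \[\left(\bigcirc{\bf r}\right)^- = r_{k-1}^-\circ r_{k-2}^-\circ\cdots\circ r_0^- = r_{k-1}\circ r_{k-2}\circ\cdots\circ r_0,\] which is again the product of an element of Seq$({\bf u})$.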
\subsection{Criteria ensuring that {\bf u} is not perm-complete}
\begin{thm}\label{Deg1} Let ${\cal G}$ be a connected graph with vertex set $n\ge3$, and which has a vertex $a$ of degree $1$. Then ${\cal G}$ is not perm-complete. Consequently, no tree having three
or more vertices is perm-complete. \end{thm}
\begin{proof} Pretend that ${\cal G}$ is perm-complete, and let ${\bf u}$ be a transpositional sequence for which ${\cal G = T}({\bf u})$. Since $n\ge3$, each member of ${\rm Sym}(n)/{\rm Alt}(n)$ contains a permutation fixing $a$, so there exists ${\bf r}\in{\rm Seq}({\bf u})$ with
$a = a\bigcirc{\bf r}$. There are subsequences ${\bf f}$ and ${\bf g}$ of ${\bf r}$ such that ${\bf r} = \langle{\bf f},(a\,b), {\bf g}\rangle$ for some $b\in n\setminus\{a\}$. Since by hypothesis
deg$_{\cal G}(a)=1$, the transposition $(a\,b)$ is the only term in the injective sequence ${\bf r}$ with $a$ in its support, and so we get that
$a = a\bigcirc{\bf r} = a[\bigcirc{\bf f}\circ(a\,b)\circ\bigcirc{\bf g}] = a[(a\,b)\circ\bigcirc{\bf g}] = b\bigcirc{\bf g} \not= a$, because $\bigcirc{\bf g}$ fixes $a$ and is injective. \end{proof}
A transpositional sequence {\bf u} in Sym$(n)$ with $3\le|{\bf u}|<n$ fails to be perm-complete, since $|$Prod$({\bf u})|\le |$Seq$({\bf u})|\le |{\bf u}|!<n!/2$. And Theorem \ref{Deg1} implies that
there also exist non-perm-complete injective {\bf u} of length \[{{n-1}\choose{2}}+1.\]
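One such ${\bf u}$: take the ${{n-1}\choose{2}}$ edges of the complete graph on $\{1,2,\ldots, n-1\}$ together with the single pendant edge $(0\, 1)$. The resulting graph is connected, and its vertex $0$ has degree $1$, so Theorem \ref{Deg1} applies.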
Although $|{\rm Seq}({\bf r})|=|{\bf r}|!$ when {\bf r} is an injective sequence in ${\rm Sym}(n)$, it is rare that $|{\rm Prod}({\bf r})|=|{\bf r}|!$.
\begin{thm}\label{Deg2} Let ${\cal G}$ be a connected graph\footnote{For the notion of a connected graph, one may consult \cite{Harary} or almost any other textbook on graph theory.} on the vertex set $n\ge4$, and let ${\cal G}$ have adjacent vertices $x$ and $y$ each of which is of degree $2$. Then ${\cal G}$ is
not perm-complete. \end{thm}
\begin{proof} Pretend that ${\cal G}$ is perm-complete, and assume that ${\bf u}$ is an injective sequence in $1^{n-2}2^1$ with ${\cal T}({\bf u})$ isomorphic to ${\cal G}$. There are\footnote{not necessarily distinct} elements $a$ and $b$ in $n\setminus\{x,y\}$ such that $(a\,x),\,(x\,y)$, and $(y\,b)$ are edges
of ${\cal G}$. Since $n\ge4$, each member of ${\rm Sym}(n)/{\rm Alt}(n)$ contains a permutation fixing both $x$ and $y$, so we may let ${\bf r}\in{\rm Seq}({\bf u})$ satisfy both $x = x\bigcirc{\bf r}$ and $y = y\bigcirc{\bf r}$. Let ${\bf r'}$ be the sequence of length $|{\bf u}|-3$ obtained by removing the terms $(a\,x), (x\,y)$ and $(y\,b)$ from ${\bf r}$. Let
${\bf r' = fghk}$ be the factorization of ${\bf r'}$ into the four\footnote{some of which may be empty} consecutive segments engendered by the removal from ${\bf r}$ of those three terms. Of
course $\{x,y\}\cap\big({\rm supp}(\bigcirc{\bf f})\cup{\rm supp}(\bigcirc{\bf g})\cup{\rm supp}(\bigcirc{\bf h})\cup{\rm supp}(\bigcirc{\bf k})\big) = \emptyset$. There are essentially three cases. \vspace{.5em}
\underline{Case}: \ $(a\,x) <_{\bf r} (x\,y) <_{\bf r} (y\,b)$. So ${\bf r} = \langle{\bf f},(a\,x),{\bf g},(x\,y),{\bf h},(y\,b),{\bf k}\rangle$. Then
$x\bigcirc{\bf r} = x[\bigcirc{\bf f}\circ(a\,x)\circ\bigcirc{\bf g}\circ(x\,y)\circ\bigcirc{\bf h}\circ(y\,b)\circ\bigcirc{\bf k}] =
a[\bigcirc{\bf g}\circ(x\,y)\circ\bigcirc{\bf h}\circ(y\,b)\circ\bigcirc{\bf k}] = c[(y\,b)\circ\bigcirc{\bf k}]$ where $c := a\bigcirc{\bf gh}$; here $(x\,y)$ fixes $a\bigcirc{\bf g}$, and $c\notin\{x,y\}$, because $\bigcirc{\bf g}$ and $\bigcirc{\bf h}$ fix $\{x,y\}$ pointwise while $a\notin\{x,y\}$.
{\it Subcase}: \ $c=b$. Then $x\bigcirc{\bf r} = y\bigcirc{\bf k} = y \not= x$ since $y\not\in{\rm supp}(\bigcirc{\bf k})$.
{\it Subcase}: \ $c\not=b$. Then $x\bigcirc{\bf r} = c\bigcirc{\bf k} \not= x$, since $\bigcirc{\bf k}$ fixes $x$ and $c\not=x$.\vspace{.5em}
\underline{Case}: \ $(a\,x) <_{\bf r} (y\,b) <_{\bf r} (x\,y)$. Here, ${\bf r} = \langle{\bf f},(a\,x),{\bf g},(y\,b),{\bf h},(x\,y),{\bf k}\rangle$. Now $y\bigcirc{\bf r} = b[\bigcirc{\bf h}\circ(x\,y)\circ\bigcirc{\bf k}]
= b\bigcirc{\bf hk} \not= y$.\vspace{.5em}
\underline{Case}: \ $(y\,b) <_{\bf r} (a\,x) <_{\bf r} (x\,y)$. So ${\bf r = \langle f},(y\,b),{\bf g},(a\,x),{\bf h},(x\,y),{\bf k}\rangle$, and so $x\bigcirc{\bf r} = a\bigcirc{\bf hk} \not= x$.\vspace{.5em}
\noindent In each of these three cases we see that either $x\bigcirc{\bf r} \not= x$ or $y\bigcirc{\bf r} \not= y$, contrary to our requirement on ${\bf r}$. \end{proof}
\noindent{\bf Remarks.} Surely both of the complete graphs ${\cal K}_2$ and ${\cal K}_3$ are perm-complete. In fact, the triangle ${\cal K}_3$ is minimally so, in the sense that
the removal of one edge produces a graph which is not perm-complete.\vspace{1em}
Next, we prepare the way for two more-general theorems, each of which provides sufficient conditions for non-perm-completeness. \vspace{.5em}
Since, in the present context, the transpositional multigraph of each minimal perm-complete sequence is simple, we let ${\cal G}$ be a simple connected graph whose vertex set is $n$.
Fix a sequence ${\bf u} := \langle u_0,u_1,\ldots, u_k\rangle$ for which ${\cal G} = {\cal T}({\bf u})$, where $u_i := (x_i\,y_i)$ for each $i\le k$.
$\overrightarrow{\cal G}$ denotes the digraph obtained by replacing each edge $(x\, y)$ of ${\cal G}$ with the two {\em arcs} $x\rightarrow y$ and $x\leftarrow y$.
For $x\in n$, the {\bf u}-{\em path from $x$} is the subdigraph, $\overrightarrow{{\bf u}_x} := x\rightarrow z_1\rightarrow z_2\rightarrow\cdots\rightarrow z_m\rightarrow y$, of $\overrightarrow{\cal G}$,
where the vertices of this path are chosen (and given new names) in the following fashion:
Let $j(1)$ be the least integer $\ell$ such that $x\in{\rm supp}(u_\ell)$. So $u_{j(1)} = (x\, z_1)$ for some $z_1\in n$. Supposing the integers $j(1)<j(2)<\cdots<j(i)$
to have been chosen with $(z_{d-1}\, z_d) = u_{j(d)}$ for each $d\in\{2,3,\ldots, i\}$, let $j(i+1)$ be the smallest integer $v>j(i)$ with $z_i\in$ supp$(u_v)$ if any such $v$ exists,
and in this event, define $(z_i\, z_{i+1}) := u_{j(i+1)}$; but if there is no such $v$ then define $\langle z_m,y\rangle := \langle z_{i-1},z_i\rangle$.
Let ${\bf u}_x$ be the subsequence of ${\bf u}$ whose terms contribute the respective arcs that comprise $\overrightarrow{{\bf u}_x}$.
\begin{lem}\label{PartitionArcs} \ $\{\overrightarrow{{\bf u}_x}: x\in n\}$ is a partition of the set of arcs comprising the digraph $\overrightarrow{\cal T}({\bf u})$. \end{lem}
\begin{proof} Since $n = \bigcup{\rm Supp}({\bf u})$, we have that ${\bf u}_x\not=\emptyset$ for every $x\in n$. Let $a\rightarrow b$ be an arc in $\overrightarrow{\cal T}({\bf u})$. Then
$(a\, b)$ is a term $u_i$ in the sequence {\bf u}. If $i=0$ then let $g$ be the identity permutation $\iota\restrict n$; but, if $i>0$, let $g := u_0\circ u_1\circ\cdots\circ u_{i-1}$. Let $v:=ag^-$. Then
$vg=a$, and so $a\rightarrow b$ is an arc of ${\bf u}_v$. Furthermore, if $v\not= q\in n$, then $qg\not=a$ and thus $a\rightarrow b$ is not an arc of ${\bf u}_q$. \end{proof}
\begin{defn}\label{Minimal} A perm-complete sequence ${\bf s}$ in ${\rm Sym}(n)$ is {\em minimally perm-complete} iff the removal of any term of ${\bf s}$ results in a sequence which is not perm-complete. \end{defn}
\begin{defn}\label{Cutset} A set $C$ of edges of a connected graph ${\cal G}$ is a {\em cut set} of ${\cal G}$ iff the removal of $C$ from the edge set of ${\cal G}$ results in a graph that is the union ${\cal G}_0\,\dot\cup\, {\cal G}_1$ of two disjoint subgraphs of ${\cal G}$, with each edge in $C$ having one vertex in ${\cal G}_0$ and the other in ${\cal G}_1$.\end{defn}
The next two theorems facilitate the identification of non-perm-complete transpositional sequences.
\begin{thm}\label{NotPermComplete1} Let ${\cal G}:=\langle n;E\rangle$ be a simple connected graph whose vertex set is $n$ and whose edge set is $E$, and which has a cut set
$C\subseteq E$. Let ${\cal G}_0 :=\langle V_0;E_0\rangle$ and ${\cal G}_1 := \langle V_1;E_1\rangle$ be the disjoint subgraphs of ${\cal G}$ gained by the removal of $C$ from $E$,
where $V_i$ and $E_i$ are respectively the vertex sets and the edge sets of the two ${\cal G}_i$. Let ${\cal G}_0$ be a forest, let $2|C|<|V_0|$, and let $|V_1|\ge2$. Then ${\cal G}$ fails
to be perm-complete. \end{thm}
\begin{proof} Assume that ${\cal G}$ is perm-complete. Then ${\cal G} = {\cal T}({\bf s})$ for some injective sequence ${\bf s} := \langle s_0,s_1,\ldots, s_{k-1}\rangle$ of transpositions in
${\rm Sym}(n)$, where of course $|E|=k$. If $|E|$ is even then ${\rm Prod}({\bf s})={\rm Alt}(n)$; so $\iota\restrict n\in{\rm Prod}({\bf s})$. But if $|E|$ is odd then
$(u\, w)\in{\rm Prod}({\bf s})$ for some elements $u\not= w$ in $V_1$. In both cases there exists $f\in{\rm Prod}({\bf s})$ with $xf=x$ for every $x\in V_0$. The perm-completeness of
${\bf s}$ implies that $f=\bigcirc{\bf t}$ for some ${\bf t}\in{\rm Seq}({\bf s})$.
Let $x$ be an arbitrary element in $V_0$.
Since $x = x\bigcirc{\bf t}$, the ${\bf t}$-path $\overrightarrow{{\bf t}_x}$ induces a directed circuit $\overrightarrow{{\bf t}_x^\star}$ that starts and ends at $x$. So, since the subgraph ${\cal G}_0$ is a forest, and since the sequence ${\bf t}$ is injective, we can show that the path $\overrightarrow{{\bf t}_x}$ uses up two edges $e_x \not= l_x$ in $C$ that contribute, to $\overrightarrow{{\bf t}_x^\star}$, an arc $\overrightarrow{e_x}$ from $V_0$ to $V_1$ and another arc $\overrightarrow{l_x}$ back from $V_1$ to $V_0$.
Pretend that the arc $\overrightarrow{e_x}$ occurs not only in the path $\overrightarrow{{\bf t}_x}$, but also in the path $\overrightarrow{{\bf t}_{x'}}$ for some $x'\in V_0\setminus\{x\}$. Then Lemma \ref{PartitionArcs} implies
that the set of arcs comprising $\overrightarrow{{\bf t}_{x'}}$ is the same collection of arcs that comprise $\overrightarrow{{\bf t}_x}$. Viewed as a subsequence of ${\bf t}$, the word ${\bf t}_{x'}$ is a cyclic conjugate of the word ${\bf t}_{x}$.
Without loss of generality, take it that $h' \le_{\bf t} h$, where the transpositions $h'$ and $h$ are the first terms in ${\bf t}$, under the ordering $\le_{\bf t}$, to have $x'$ and $x$ in their respective supports.
Assume that $h'=h=(x\,x')$. Then $x\in {\rm supp}(h'_1)$ where $h'_1$ is the term immediately following $h'$ in the subsequence ${\bf t}_{x'}$ of ${\bf t}$. Similarly, $x'\in{\rm supp}(h_1)$, where $h_1$ is the immediate successor of $h$ in the subsequence ${\bf t}_x$. But obviously then $h'_1=h_1=(x\,x')=h$ in violation of the injectivity of the sequence ${\bf t}$. It follows that $h'<_{\bf t} h$. So there is a prefix
${\bf p}:= \langle h', h'_1, h'_2, \ldots, h'_\bullet, h\rangle$ of the word ${\bf t}_{x'}$, which induces a digraph $\overrightarrow{{\bf p}^\star}$ whose vertices are the integer endpoints of the arcs in $\overrightarrow{\bf p}$,
and which extends from $x'\in{\rm supp}(h')$ to $x\in{\rm supp}(h)$.
Of course ${\bf p}$ is a subsequence of ${\bf t}$. Notice that ${\rm supp}(h'_\bullet)\cap{\rm supp}(h)= \{x\}$, and that $t_0 \le_{\bf t} h'\le_{\bf t}h'_\bullet <_{\bf t} h$, where $t_0$ is the first term in the sequence ${\bf t}$. That $x\in{\rm supp}(h'_\bullet)$ violates a manufacturing criterion for the sequence ${\bf t}_x$; to wit: Under the ordering $<_{\bf t}$, the first term in ${\bf t}_x$ was specified to be the first term in the sequence ${\bf t}$, having $x$ in its support. That first term of ${\bf t}_x$ is $h>_{\bf t} h'_\bullet$. So $f\not\in{\rm Prod}({\bf s})$. Having verified that each vertex in $V_0$ uses up (at least) two edges in $C$ if indeed $f=\bigcirc{\bf t}$, we infer that
$|V_0|\le 2|C|$ if ${\bf s}$ is perm-complete. So, since $2|C|<|V_0|$ by hypothesis, we conclude that ${\bf s}$ is not perm-complete. \end{proof}
A modification of the proof of Theorem \ref{NotPermComplete1} will establish
\begin{cor}\label{WeakerNPC1} Let the hypothesis $2|C|<|V_0|$ in Theorem \ref{NotPermComplete1} be replaced by the hypothesis $|C|\le |V_0|$, but let the other hypotheses of the theorem hold. Then ${\cal G}$ fails to be
perm-complete.\end{cor}
\begin{thm}\label{NotPermComplete2} Let ${\cal G}_0$ and ${\cal G}_1$ be connected graphs on the disjoint vertex sets $V_0$ and $V_1$, with $V_0\cup V_1=n$ and $\min\{|V_0|,|V_1|\}\ge2$.
Let $C$ be a nonempty set of edges, each of which has one of its vertices in $V_0$ and the other in $V_1$. Let ${\cal G}:=\langle n;E\rangle={\cal G}_0\,{\cup}\,C\,{\cup}\,{\cal G}_1$.
Let $|C|< \min\{|V_0|,|V_1|\}$. Then ${\cal G}$ is not perm-complete. \end{thm}
\begin{proof} Let $c := |C|<\min\{m,p\}$ where $V_0=\{x_0,x_1,\ldots, x_{m-1}\}$ and where $V_1=\{y_0,y_1,\ldots, y_{p-1}\}$. Assume that ${\cal G}$ is perm-complete. Then
${\cal G = T}({\bf s})$ for some sequence {\bf s} of transpositions in Sym$(n)$.\vspace{.5em}
\underline{Case}: $|E|$ is odd. Let $f:=(x_0\, y_0\, x_1\, y_1\,\ldots\, x_c\, y_c)\in{\rm Blt}(n)$. Choose ${\bf r}\in$ Seq$({\bf s})$ such that $f=\bigcirc{\bf r}$. By Lemma
\ref{PartitionArcs}, each of the $2c+2$ distinct paths ${\bf r}_z$ in $\overrightarrow{\cal G}$, one for each $z\in\{x_0,y_0,\ldots, x_c,y_c\}$, contains an arc in $\overrightarrow{\cal C}$ that is
contained in no ${\bf r}_{z'}$ with $z'\in\{x_0,y_0,\ldots, x_c,y_c\}\setminus\{z\}$. But $\overrightarrow{\cal C}$ has only $2c$ arcs in all. Hence, $f\notin$ Prod$({\bf s})$. Thus we see that
${\cal G}$ fails to be perm-complete in the case that $|E|$ is odd. \vspace{.5em}
\underline{Case}: $|E|$ is even. \vspace{.3em}
{\it Subcase}: $c$ is odd. Let $g := (x_0\, y_0)(x_1\, y_1)\dots(x_c\, y_c)\in{\rm Alt}(n)$. Choose ${\bf w}\in{\rm Seq}({\bf s})$ for which $g=\bigcirc{\bf w}$.
As in the odd $|E|$ case, each of the $2c+2$ paths ${\bf w}_z$ in $\overrightarrow{\cal G}$ for the $z\in\{x_0,y_0,x_1,y_1,\ldots, x_c,y_c\}$ uses an arc in $\overrightarrow{\cal C}$ that is
contained in no path ${\bf w}_{z'}$ with $z'\in\{x_0,y_0,x_1,y_1,\ldots, x_c,y_c\}\setminus\{z\}$ -- an impossibility since $\overrightarrow{\cal C}$ has only $2c$ arcs. So $g\notin$ Prod$({\bf s})$.
We infer that here too ${\cal G}$ is not perm-complete.\vspace{.3em}
{\it Subcase}: $c$ is even. We amalgamate two $2$-cycles of $g$ to create a $4$-cycle, thus producing the even permutation $h := (x_0\, y_0\, x_1\, y_1)(x_2\, y_2)\ldots(x_c\, y_c)$.
Since ${\rm Prod}({\bf s})={\rm Alt}(n)\ni h$ under our assumption, we can choose ${\bf u}\in$ Seq$({\bf s})$ for which $h=\bigcirc{\bf u}$. Once again the set of $2c+2$ paths ${\bf u}_z$ is obliged to
use $2c+2$ arcs in $\overrightarrow{\cal C}$, but cannot do so since $\overrightarrow{\cal C}$ has only $2c$ arcs. Again we get that ${\cal G}$ is not perm-complete. \end{proof}
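For instance, two disjoint triangles joined by a single bridging edge form a connected graph which, by Theorem \ref{NotPermComplete2}, is not perm-complete, since there $|C| = 1 < 3 = \min\{|V_0|,|V_1|\}$.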
\subsection{Criteria ensuring permutational completeness}
When ${\cal G} := \langle n;E\rangle$ is a graph with vertex set $n$ and edge set $E$, and when $W\subseteq n$, then $\langle W\rangle$ denotes the subgraph $\langle W;D\rangle$ of ${\cal G}$
whose vertex set is $W$, and whose edge set $D$ consists of every edge $(x\, y)\in E$ for which $\{x,y\}\subseteq W$. This subgraph $\langle W\rangle$ of ${\cal G}$ is said to be {\em induced} by
$W$ in ${\cal G}$.
We say that a subgraph ${\cal S}$ of a graph ${\cal H}$ {\em spans} ${\cal H}$ iff the vertex set of ${\cal S}$ is that of ${\cal H}$. If a subgraph ${\cal S}$ of ${\cal H}$ spans ${\cal H}$, if
no two distinct edges of ${\cal S}$ share a vertex, and if every vertex of ${\cal H}$ lies on an edge of ${\cal S}$, then we call ${\cal S}$ a {\em perfect matching} for ${\cal H}$.
\begin{thm}\label{Complicated} For {\bf t} an injective perm-complete transpositional sequence in {\rm Sym}$(n)$, let ${\cal G} := \langle n;E\rangle = {\cal T}({\bf t})$. Let $\emptyset \not=
W\subseteq n$, and let $x\notin n$ be a new vertex. Let ${\cal H} :=\langle V_0;E_0\rangle$ be the simple supergraph of ${\cal G}$ for which $V_0 := n\cup\{x\}$ is the vertex set of ${\cal H}$, and
where $E_0 := E\cup\{(x\,w): w\in W\}$ is the edge set of ${\cal H}$. Let {\bf s} be an injective transpositional sequence in {\rm Sym}$(V_0)$ such that $(p\, q)$ is a term of {\bf s} if and only if
$(p\, q)\in E_0$. Let the integer $|E_0|$ be even (odd). Given a permutation $f\in{\rm Sym}(V_0)$ that is, correspondingly, even (odd):
{\bf 2.8.1} \ If $xf\in W$ then $f\in$ {\rm Prod}$({\bf s})$.
{\bf 2.8.2} \ If $w_0f=w_1\not=w_0$ for some $\{w_0,w_1\}\subseteq W$, then $f\in$ {\rm Prod}$({\bf s})$.
{\bf 2.8.3} \ If $\langle W\rangle$ contains a perfect matching, and if $xf=x$ as well, then $f\in$ {\rm Prod}$({\bf s})$. \end{thm}
\begin{proof} We establish the theorem for the case where $|E_0|$ is even, and omit the (identical) proof for the case where $|E_0|$ is odd. So now let $|E_0|$ be even. Since ${\cal G}$
is perm-complete, we have that $|{\rm Prod}({\bf t})|=n!/2$.
Let $W := \{w_0,w_1,\ldots, w_{k-1}\}\subseteq n$ with $|W|=k$. We write $f^+ := f\cup\{\langle x,x\rangle\}\in$ Sym$(V_0)$; \ i.e., $f^+$ is just $f$
augmented by the 1-cycle $(x)$.\vspace{.5em}
To prove 2.8.1, let \[Q := \{h^+\circ(w_0\, x)\circ(w_1\, x)\circ\cdots\circ(w_{k-1}\, x): h\in\,\mbox{Prod}({\bf t})\}.\] Define $\varphi:$ Prod$({\bf t})\rightarrow Q$ by
$\varphi(h) := h^+\circ(w_0\, x)\circ(w_1\, x)\circ\cdots\circ(w_{k-1}\, x)$. Plainly $\varphi$ is a bijection from Prod$({\bf t})$ onto $Q$. It follows that $|Q|=n!/2$. Now let $M := \{g: xg=w_0$ and
$g\in$ Alt$(V_0)\}$. Observe that $Q\subseteq M$.
Given $g\in M$, we have $\{\langle x,w_0\rangle,\langle z_g,x\rangle\}\subseteq g$ for some $z_g\in n$. Let $g^* := (g\setminus\{\langle x,w_0\rangle,\langle z_g,x\rangle\})\cup\{\langle z_g,w_0\rangle\}$.
The function $^*:g\mapsto g^*$ obviously maps $M$ bijectively onto Blt$(n)$. Hence $|M|=n!/2$. Therefore $Q=M$. But $Q\subseteq$ Prod$({\bf s})$. The assertion 2.8.1 follows.\vspace{.5em}
To prove 2.8.2, let $P := \{(w_0\, x)\circ h^+\circ(w_1\, x)\circ(w_2\, x)\circ\cdots\circ(w_{k-1}\, x): h\in{\rm Prod}({\bf t})\}$. Define the function $\psi:{\rm Prod}({\bf t})\rightarrow P$ by $\psi(h) :=
(w_0\, x)\circ h^+\circ(w_1\, x)\circ(w_2\, x)\circ\cdots\circ(w_{k-1}\, x)$. Notice that $\psi$ is a bijection from ${\rm Prod}({\bf t})$ onto $P$. So $|P|=n!/2$. Let $L:=\{g: w_0g=w_1$ and $g\in{\rm Alt}(V_0)\}$.
Then $P\subseteq L$.
For $g\in L$, let $y_g:=w_1g$. Let $g^\dag := (g\setminus\{\langle w_0,w_1\rangle,\langle w_1,y_g\rangle\})\cup\{\langle w_0,y_g\rangle\}$. The function $^\dag:g\mapsto g^\dag$ obviously maps
$L$ bijectively onto Blt$(V_0\setminus\{w_1\})$. However, $|V_0\setminus\{w_1\}| = n$. So $|L|=n!/2$. Thus $P=L$. But $P\subseteq$ Prod$({\bf s})$. The assertion 2.8.2 follows. \vspace{.5em}
To prove 2.8.3, take $|W|=k=2m\ge2$ to be even, and let $A:=\{(x_0\,y_0),(x_1\,y_1),\ldots,(x_{m-1}\,y_{m-1})\}$ be a perfect matching of $\langle W\rangle$. Since ${\cal H}$ has an even number
of edges, ${\cal G}$ also has an even number of edges. Thus Prod$({\bf t}) =$ Alt$(n)$. So it suffices to show for each $h\in{\rm Alt}(n)$ that $h^+=\bigcirc{\bf s}\in{\rm Prod}({\bf s})$ for some sequence
${\bf s}$ such that ${\cal H=T}({\bf s})$.
Let $h\in{\rm Alt}(n)$. Choose ${\bf r}\in{\rm Seq}({\bf t})$ such that $h=\bigcirc{\bf r}$. We expand the length-$|{\bf t}|$ sequence {\bf r} to a sequence {\bf s} of $|{\bf t}|+2m$ distinct transpositions in Sym$(V_0)$, by
replacing each of the $m$ special terms, $(x_i\, y_i)$, in {\bf r} with the corresponding three-term sequence $\langle(x\, x_i),(x_i\, y_i),(y_i\, x)\rangle$; a direct check shows that $(x\, x_i)\circ(x_i\, y_i)\circ(y_i\, x) = (x_i\, y_i)$ in ${\rm Sym}(V_0)$, so each replacement preserves the product while fixing $x$. Plainly, then, $h^+=\bigcirc{\bf s}$. Therefore $h^+\in{\rm Prod}({\bf s})$.
The assertion 2.8.3 follows. \end{proof}
\begin{cor}\label{Rectangle} A rectangle with one of its two diagonals is a minimal perm-complete transpositional graph. \end{cor}
\begin{proof} Let ${\bf t} := \langle(0\, 1),(0\, 2),(0\, 3),(1\, 3),(2\, 3)\rangle$. It is obvious from Theorems \ref{Deg1} and \ref{Deg2} that the removal of any of the five terms of {\bf t} results in a
transpositional sequence in Sym$(4)$ which is not perm-complete. Therefore it suffices to show that {\bf t} itself is perm-complete.
Since the triangle graph is perm-complete, Theorem \ref{Complicated} implies that Prod$({\bf t})$ contains every $h\in$ Blt$(4)$ except possibly for the missing diagonal, $(1\, 2)$. But
$(1\, 2) = (0\, 1)\circ(2\, 3)\circ(0\, 2)\circ(1\, 3)\circ(0\, 3)\in$ Prod$({\bf t})$; indeed, under this product $0\mapsto0$, $1\mapsto2$, $2\mapsto1$, and $3\mapsto3$. \end{proof}
By a {\em bike} on $n+2$ vertices we mean any graph isomorphic to the labeled graph ${\cal B}_n$, whose edge set has these $2n+1$ edges: the ``axle'' $(0\, 1)$ and the $2n$ ``spokes''
$(0\, i)$ and $(1\, i)$ for the $i\in\{2,3,\ldots, n, n+1\}$.
We already observed that the tree with one edge, ${\cal B}_0={\cal K}_2$, and the triangle, ${\cal B}_1={\cal K}_3$, are minimal perm-complete. By Corollary \ref{Rectangle} we have that
the proper subgraph ${\cal B}_2$ of ${\cal K}_4$ is minimal perm-complete.\vspace{.5em}
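(Concretely, ${\cal B}_2$ is ${\cal K}_4$ minus the edge $(2\, 3)$; that is, the rectangle $0$-$2$-$1$-$3$-$0$ together with the diagonal $(0\, 1)$.)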
As usual, $\omega := \{0,1,2,\ldots\}$. Let $\langle x_1,x_2,\ldots\rangle$ be an injective sequence in $\omega\setminus 2 := \{2,3,4,\ldots\}$. We recursively define an infinite sequence
$\langle{\bf c}_{(2t)}\rangle_{t=1}^\infty$ of finite sequences of transpositions in Sym$(\omega)$ thus:
${\bf c}_{(2)} := \langle(1\, x_2),(0\, x_2)\rangle$
${\bf c}_{(2t+2)} := \langle{\bf c}_{(2t)},(0\, x_{2t+1}),(1\, x_{2t+1}),(1\, x_{2t+2}),(0\, x_{2t+2})\rangle$
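For instance, ${\bf c}_{(4)} = \langle(1\, x_2),(0\, x_2),(0\, x_3),(1\, x_3),(1\, x_4),(0\, x_4)\rangle$. Note that ${\bf c}_{(2t)}$ uses both spokes at each of $x_2,\ldots, x_{2t}$, each exactly once; consequently ${\bf r}_{(2t)}$ below uses both spokes at each of $x_1,x_2,\ldots, x_{2t}$, each exactly once, and never the axle $(0\, 1)$.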
\begin{lem}\label{Bike1} Let ${\bf r}_{(2t)} := \langle(0\, x_1),{\bf c}_{(2t)},(1\, x_1)\rangle$ for $t>0$ an integer. Then $\bigcirc{\bf r}_{(2t)} = (0\, 1)(x_1\, x_2\,\cdots x_{2t})$. \end{lem}
\begin{proof} Since $\bigcirc{\bf r}_{(2)}=(0\, x_1)\circ(1\, x_2)\circ(0\, x_2)\circ(1\, x_1)= (0\, 1)(x_1\, x_2)$, the basis holds for an induction on $t$.
Now pick $t\ge1$, and suppose that $\bigcirc{\bf r}_{(2t)}=(0\, 1)(x_1\, x_2\,\ldots\,x_{2t})$. Then
\[ \bigcirc{\bf r}_{(2t+2)} = \bigcirc{\bf r}_{(2t)}\circ(1\, x_1)\circ(0\, x_{2t+1})\circ(1\, x_{2t+1})\circ(1\, x_{2t+2})\circ(0\, x_{2t+2})\circ(1\, x_1)= \]
\[(0\, 1)(x_1\, x_2\, \ldots\, x_{2t})\circ(1\, x_1)\circ(0\, x_{2t+1})\circ(1\, x_{2t+1})\circ(1\, x_{2t+2})\circ(0\, x_{2t+2})\circ(1\, x_1) = (0\, 1)(x_1\, x_2\,\ldots\,x_{2t+1}\, x_{2t+2}). \]
So $\bigcirc{\bf r}_{(2t+2)} = (0\, 1)(x_1\, x_2\,\ldots\,x_{2t+2})$. \end{proof}
\begin{thm}\label{Bike2} ${\cal B}_n$ is a minimal perm-complete graph for every nonnegative integer $n$. \end{thm}
\begin{proof} Recall that the theorem holds for $0\le n\le2$. So we will establish it for $n\ge3$. We show that the removal of an edge from ${\cal B}_n$ results in a subgraph which fails to
be perm-complete. So, if ${\cal B}_n$ is perm-complete then it is minimal as such.
The removal of a spoke from ${\cal B}_n$ results in a subgraph that has a vertex of degree $1$. By Theorem \ref{Deg1}, such a subgraph is not perm-complete. So consider the subgraph
${\cal G}_n := {\cal B}_n-(0\, 1)$ obtained by removing the axle from ${\cal B}_n$. Now ${\cal G}_n = {\cal G}_{n,0}\dot{\cup}{\cal E}\dot{\cup}{\cal G}_{n,1}$ is a disjoint union,
where ${\cal G}_{n,1}$ is the one-edge subgraph $(0\, 2)$, where ${\cal G}_{n,0}$ is the tree on the $n$ vertices $1,3,4,\ldots, n,n+1$, and whose edge set is $\{(1\, j): 3\le j\le n+1\}$,
and where ${\cal E}$ is the subgraph whose vertex set is all of $n+2$ and whose edge set is $C := \{(1\, 2)\}\cup\{(0\, j):3\le j\le n+1\}$. But $C$ is the cut set connecting ${\cal G}_{n,0}$
to ${\cal G}_{n,1}$ to form ${\cal G}_n$. So Corollary \ref{WeakerNPC1} implies ${\cal G}_n $ is not perm-complete.
It remains only to show that ${\cal B}_n$ is perm-complete. The basis of an induction is already established. So pick an integer $n\ge3$, and suppose for any nonnegative $i<n$ that any graph
isomorphic to ${\cal B}_i$ is perm-complete. Let {\bf s} be a transpositional sequence in Sym$(n+2)$ such that ${\cal B}_n={\cal T}({\bf s})$.
Of course Prod$({\bf s})\subseteq$ Blt$(n+2)$. But we do need to show that Blt$(n+2)\subseteq$ Prod$({\bf s})$.\vspace{.3em}
\underline{Claim}: For every even positive integer $2t\le n$, the set Prod$({\bf s})$ contains every $f\in$ Blt$(n+2)$ which has a cyclic component of length $2t$. \vspace{.3em}
To prove this Claim, pick an even integer $2t\in\{2,3,\ldots, n\}$. Let $\langle x_1,x_2,\ldots, x_{2t-1},x_{2t}\rangle$ be any injective sequence in the set $\{2,3,\ldots, n,n+1\}$, and let $X$ be the $(2t)$-membered
set $\{x_1,x_2,\ldots, x_{2t}\}$. Pick a sequence {\bf v} of transpositions such that ${\cal B}_{n\setminus X} = {\cal T}({\bf v})$, where ${\cal B}_{n\setminus X}$ is the graph obtained by
removing the $2t$ vertices in $X$ from ${\cal B}_n$. Since ${\cal B}_{n\setminus X}$ is isomorphic to ${\cal B}_{n-2t}$, we have by the inductive hypothesis that ${\cal B}_{n\setminus X}$ is
perm-complete. It follows that Prod$({\bf v})=$ Blt$((n+2)\setminus X)$.
Let {\bf r} be the transpositional sequence ${\bf r}_{(2t)}$ of Lemma \ref{Bike1}. Define $Q:=\{\bigcirc{\bf r}\circ g: g\in$ Blt$((n+2)\setminus X)\}$. Then, by Lemma \ref{Bike1} we get that
$Q = \{(x_1\, x_2\, \ldots\, x_{2t})(0\,1)\circ g: g\in$ Blt$((n+2)\setminus X)\}$. Furthermore, $Q\subseteq$ Blt$(n+2)$. For each $g\in$ Blt$((n+2)\setminus X)$, the concatenation ${\bf rv}_g$ is
an element in Seq$({\bf s})$, where $g=\bigcirc{\bf v}_g$ for some ${\bf v}_g\in$ Seq$({\bf v})$. Therefore $Q\subseteq$ Prod$({\bf s})$. Thus, when both $f\in$ Blt$(n+2)$, and $f$ has
an even length cycle whose support is a subset of $\{2,3,\ldots, n+1\}$, then $f\in$ Prod$({\bf s})$.
For every $x\in \{2,3,\ldots, n+1\}$, the graph ${\cal T}({\bf a}_x) := {\cal B}_{n\setminus\{x\}}$ is perm-complete by the inductive hypothesis, and hence by Theorem \ref{Complicated}.1 we
have that Prod$({\bf a}_x)$ contains every $f_x\in$ Blt$((n+2)\setminus\{x\})$ such that $xf_x=x$; those $f_x$ include every one with an even-length cyclic component in $(n+2)\setminus\{x\}$.
The claim is established.\vspace{.3em}
The theorem follows from the Claim, since every $f\in$ Blt$(n+2)$ has at least one even-length cycle. \end{proof}
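Minimality, too, can be confirmed computationally for small bikes. Assuming again that Blt$(4)$ is the parity class of the sequence at hand ($12$ permutations either way), the sketch below (ours) checks that the five terms of ${\cal B}_2$ realize all $12$ odd permutations of Sym$(4)$, while each four-term subsequence obtained by deleting one edge realizes strictly fewer than $12$ products.
\begin{verbatim}
from itertools import permutations

def t(i, j, n=4):
    p = list(range(n)); p[i], p[j] = j, i
    return tuple(p)

def prod(seq, n=4):
    p = list(range(n))
    for s in seq:
        p = [s[y] for y in p]
    return tuple(p)

def products(seq):
    return {prod(list(r)) for r in permutations(seq)}

b2 = [t(0,1), t(0,2), t(1,2), t(0,3), t(1,3)]   # axle + four spokes
print(len(products(b2)))                        # 12
for k in range(len(b2)):                        # delete one edge at a time
    print(len(products(b2[:k] + b2[k+1:])))     # each strictly below 12
\end{verbatim}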
We call a vertex $v$ of a graph ${\cal G}$ {\em central} iff $v$ is adjacent to every other vertex of ${\cal G}$.
\begin{cor}\label{Central} If a connected graph ${\cal G}$ has at least two central vertices then ${\cal G}$ is perm-complete. \end{cor}
\begin{proof} The corollary is immediate by Theorems \ref{Bike2} and \ref{SupSeq}. \end{proof}
\begin{cor}\label{Complete} Every finite complete graph is perm-complete. \end{cor}
The following examples provide instances where the converse of Corollary \ref{Central} fails.
\begin{prop}\label{Counterexamples1} Each of the following five transpositional sequences is minimally perm-complete:
\[{\bf a} := \langle(0\, 1),(0\, 2),(0\, 3),(0\, 4),(1\, 2),(2\, 3),(3\, 4)\rangle\qquad {\bf b} := \langle {\bf a},(2\, 5),(3\, 5)\rangle \]
\[{\bf c}:=\langle(0\,1),(0\,2),(0\,3),(1\,2),(1\,4),(2\,3),(3\,4)\rangle\qquad{\bf d} := \langle{\bf b},(4\,6),(5\,6)\rangle\] \end{prop}
\noindent{\bf Partial Proof.} \ We shall establish our claim about {\bf a}, and leave the other four sequences for our reader.
It is easy to see by Corollaries \ref{Deg1} and \ref{Deg2} that the removal of an edge from the graph ${\cal T}({\bf a})$ produces a graph which is not perm-complete. So it remains only to show that
{\bf a} is perm-complete.
${\cal B}_2$ is perm-complete. Referring to Theorem \ref{Complicated}, identify ${\cal G}$ to be the copy of ${\cal B}_2$ whose vertex set is $\{0,1,2,3\}$, whose $W$ is $\{0,3\}$, and whose $x$
is the vertex $4$. By Theorem \ref{Complicated} and symmetry considerations, it is easy to see that Prod$({\bf a})$ contains every element in Blt$(5)$ except maybe $(1\,4)$. But, since
$(1\,4)=(0\,2)\circ(3\,4)\circ(0\,1)\circ(2\,3)\circ(0\,4)\circ(1\,2)\circ(0\,3)$, we have that $(1\,4)\in$ Prod$({\bf a})$. So {\bf a} is minimally perm-complete.\vspace{.5em}
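Both assertions about {\bf a} submit to the same brute-force treatment; the sketch below (ours, composing left-to-right) confirms the displayed factorization of $(1\,4)$ and then checks that the $7!$ rearrangements of {\bf a} produce $60$ distinct products -- the number of odd permutations of Sym$(5)$, i.e.\ all of Blt$(5)$ under the parity reading of Blt.
\begin{verbatim}
from itertools import permutations

def t(i, j, n=5):
    p = list(range(n)); p[i], p[j] = j, i
    return tuple(p)

def prod(seq, n=5):
    p = list(range(n))
    for s in seq:
        p = [s[y] for y in p]
    return tuple(p)

lhs = [t(0,2), t(3,4), t(0,1), t(2,3), t(0,4), t(1,2), t(0,3)]
assert prod(lhs) == t(1,4)                      # the displayed identity

a = [t(0,1), t(0,2), t(0,3), t(0,4), t(1,2), t(2,3), t(3,4)]
print(len({prod(list(r)) for r in permutations(a)}))   # 60
\end{verbatim}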
Lest it be surmised that every graph which is an amalgamation of triangles is perm-complete, we offer
\begin{prop}\label{Counterexample2} Let ${\bf e}:=\langle(0\,1),(0\,3),(1\,2),(1\,3),(2\,3),(2\,4),(3\,4),(3\,7),(4\,5),(4\,7),(5\,6),(5\,7),(6\,7)\rangle$. The transpositional sequence {\bf e} is not
perm-complete.\end{prop}
\begin{proof} ${\cal T}({\bf e})$ consists of two copies of ${\cal B}_2$ conjoined by a three-element cut set. So Theorem \ref{NotPermComplete2} implies that ${\cal T}({\bf e})$ is not
perm-complete. \end{proof}
By an $n$-{\em wheel} we mean any graph isomorphic to ${\cal W}_n:=\langle n+1;E\rangle$, where $E$ contains the following $2n$ edges: $(0\,i)$ for every $i\in\{1,2,\ldots, n\}$ and $(i\,i+1)$ for
every $i\in\{1,2,\ldots, n-1\}$ and finally also $(1\,n)$.
By Corollary \ref{WeakerNPC1} and Theorem \ref{NotPermComplete2}, if ${\cal W}_n$ is perm-complete then ${\cal W}_n$ is minimally perm-complete. \vspace{.5em}
\noindent{\bf Conjecture.} \ ${\cal W}_n$ is perm-complete for every $n\ge3$.
\section{Conjugacy invariance}
The present section will lay the groundwork for, and thereafter establish, the following characterization of the conjugacy invariant transpositional sequences having multigraphs on the vertex set $n$ that are connected.
\begin{thm}\label{CCIT} Let {\bf u} be a transpositional sequence in {\rm Sym}$(n)$ with $2\le n\in{\mathbb N}$ whose multigraph ${\cal T}({\bf u})$ is connected on the vertex set $n$.
If $n=2$ then {\bf u} is both perm-complete and {\rm CI}. If $n=3$ then {\bf u} is {\rm CI} if and only if either $|{\bf u}|$ is odd or ${\cal T}({\bf u})$ is a multitree with at least one simple multiedge.
For $n\ge4$, the sequence {\bf u} is {\rm CI} if and only if ${\cal T}({\bf u})$ is a multitree in which no vertex is an endpoint of more than one non-simple multiedge, and in which each even-multiplicity
multiedge is a multitwig whose non-leaf vertex has only one non-leaf neighbor.\end{thm}
\subsection{Constant-product sequences}
We say that a permutational sequence {\bf s} is {\em constant-product} iff $|$Prod$({\bf s})|=1$. The class of constant-product {\bf s} is antipodal to the class of perm-complete {\bf s}.
It is clear that ${\bf s} := \langle s_0,s_1,\ldots,s_k\rangle$ is constant-product if $s_i\circ s_j=s_j\circ s_i$ whenever $0\le i<j\le k$. Moreover, $s_i\circ s_j=s_j\circ s_i$ if either
supp$(s_i)\,\cap\,$supp$(s_j)=\emptyset$ or $s_i$ and $s_j$ are powers $s_i=f^p$ and $s_j=f^q$ of a common permutation $f$; that is to say, {\bf s} is constant-product if {\bf s} is boring.
Do there exist non-boring constant-product permutational sequences?
We paraphrase a theorem of Eden and Sch\"{u}tzenberger (Page 144 of \cite{Eden}), which remarks upon certain injective transpositional sequences {\bf u}, and which touches on this question.\vspace{.5em}
For each $v\in n$, let ${\bf u}_{(v)}$ be the subsequence of {\bf u} consisting of exactly those terms $t$ of {\bf u} for which $v\in$ supp$(t)$. \vspace{.5em}
{\bf Eden-Sch\"{u}tzenberger Theorem.} \ {\sl When the transpositional multigraph ${\cal T}({\bf u})$ is a simple tree and also ${\bf s}\in$ {\rm Seq}$({\bf u})$, then $\bigcirc{\bf s}=\bigcirc{\bf u}$ if and
only if ${\bf s}_{(v)}={\bf u}_{(v)}$ for every $v\in n$. }\vspace{.5em}
The paucity of non-boring constant-product permutational sequences raises our interest in its superclass ${\cal O}(n)$ of permutational sequences {\bf s} for which the order of the permutation
$\bigcirc{\bf x}$ is constant over all ${\bf x}\in$ Seq$({\bf s})$. The class of conjugacy invariant sequences is a natural proper subclass of ${\cal O}(n)$.
\subsection{Preliminaries}
We call a binary relation $a\subseteq X\times X$ {\em conjugate} to $b\subseteq X\times X$, and write $a\simeq b$, iff $b=\{\langle xf,yf\rangle: \langle x,y\rangle\in a\}$ for some permutation
$f\in$ Sym$(X)$. Equivalently, $a\simeq b$ iff $g^-\circ a\circ g = b$ for some $g\in$ Sym$(X)$. Plainly $\simeq$ is an equivalence relation on the family ${\cal P}(X\times X) := \{r: r\subseteq X\times X\}$ of
all binary relations on the set $X$.
We define the {\em world} of $c\subseteq X\times X$ to be $\$(c):=$ Dom$(c)\,\cup\,$Rng$(c)$. It is commonplace that $a\simeq b$ if and only if $b=g^-\circ a\circ g$ for some $g\in$ Sym$(\$(a)\cup\$(b))$.
Of course $b = g^-\circ a\circ g$ if and only if $g\circ b=a\circ g$.\vspace{.5em}
In this paper we restrict our attention to those binary relations which are permutations on the set $n$. Whenever $\{a,b\}\subseteq$ Sym$(n)$, we have not only that $a\circ b\simeq b\circ a$ but also
that $a\simeq a^-$.
For $n>0$ an integer, $[n]$ denotes the set $\{1,2,\ldots,n\}$. (But remember that $n$ denotes $\{0,1,\ldots,n-1\}$.)
Type$(a)\subseteq{\cal P}(X\times X)$ denotes the conjugacy class of the binary relation $a\subseteq X\times X$. When $a\in$ Sym$(n)$ then Type$(a)$ acquires a more informative moniker;
namely, Type$(a) := 1^{e(1)}2^{e(2)}\cdots n^{e(n)}$, where for each $j\in[n]$ the integer $e(j)\ge0$ denotes the number of $j$-cycles in the permutation $a$. Obviously $n=\sum_{j=1}^nje(j)$.
We sometimes save space by omitting to write both $1^{e(1)}$ and also those $j^{e(j)}$ for which $e(j)=0$.\vspace{.5em}
\noindent{\bf Example.} If $a:=(0\, 1)(2\, 3)(4\, 5)(6\,7\,8\,9)\in$ Sym$(12)$ then Type$(a)=1^22^34^1$, which is to say $a\in 1^22^34^1$. But if we had prior knowledge that $a \in$ Sym$(12)$ then we
might have written more tersely instead that $a\in 2^34^1$.\vspace{.5em}
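Cycle types are computed mechanically from orbit lengths; for readers experimenting alongside the text, here is a small sketch of ours that recovers the type of the example above.
\begin{verbatim}
def cycle_type(p):
    # returns {j: e(j)}, the number e(j) of j-cycles of p
    seen, count = set(), {}
    for x in range(len(p)):
        ln, y = 0, x
        while y not in seen:
            seen.add(y); y = p[y]; ln += 1
        if ln:
            count[ln] = count.get(ln, 0) + 1
    return count

a = [1,0, 3,2, 5,4, 7,8,9,6, 10,11]   # (0 1)(2 3)(4 5)(6 7 8 9) in Sym(12)
print(cycle_type(a))                  # {2: 3, 4: 1, 1: 2}, i.e. 1^2 2^3 4^1
\end{verbatim}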
\noindent{\bf Definition.} A sequence ${\bf s}:=\langle s_0,\ldots, s_{k-1}\rangle$ in Sym$(n)$ is {\em conjugacy invariant} (CI) iff Prod$({\bf s})\subseteq$ Type$(\bigcirc{\bf s})$.
\begin{prop}\label{CyclicConj} Let ${\bf b}:=\langle b_0,b_1,\ldots, b_{k-1}\rangle$ be a sequence in {\rm Sym}$(n)$. For each $i\in k$, the expression
${\bf b}^{(i)} :=\langle b_i,b_{i+1},\ldots, b_{k-2},b_{k-1},b_0,b_1,\ldots, b_{i-1}\rangle$, known as a ``cyclic conjugate'' of ${\bf b}$, satisfies $\bigcirc{\bf b}^{(i)}\simeq\bigcirc{\bf b}$. \end{prop}
\begin{proof} Let ${\bf p}:=\langle b_0,b_1,\ldots, b_{i-1}\rangle$ and ${\bf s}:=\langle b_i,b_{i+1},\ldots, b_{k-1}\rangle$. Then $\bigcirc{\bf b}^{(i)} = \bigcirc{\bf sp} =
\bigcirc{\bf s}\circ\bigcirc{\bf p} \simeq \bigcirc{\bf p}\circ\bigcirc{\bf s} = \bigcirc{\bf ps} = \bigcirc{\bf b}$. \end{proof}
\noindent{\bf Definition.} For ${\bf s} := \langle s_0,s_1,\ldots, s_{k-1}\rangle$ a sequence, ${\bf s}^{\rm R} :=\langle s_{k-1},\ldots,s_1,s_0\rangle$ is called the {\em reverse} of {\bf s}.\vspace{.5em}
\begin{prop}\label{Inverse} Let ${\bf t}:=\langle t_0,t_1,\ldots,t_{k-1}\rangle$ be a transpositional sequence in {\rm Sym}$(n)$. Then $\bigcirc({\bf t}^{\rm R})\simeq\bigcirc{\bf t}$. \end{prop}
\begin{proof} $\bigcirc({\bf t}^{\rm R}) = t_{k-1}\circ t_{k-2}\circ\cdots\circ t_1\circ t_0 = t_{k-1}^-\circ t_{k-2}^-\circ\cdots\circ t_1^-\circ t_0^- = (t_0\circ t_1\circ\cdots\circ t_{k-1})^- =
(\bigcirc{\bf t})^- \simeq\bigcirc{\bf t}$ since $t_i^- = t_i$ when $t_i$ is a transposition. \end{proof}
By Proposition \ref{Inverse}, if {\bf s} is a transpositional sequence and $f\in$ Prod$({\bf s})$, then $f^-\in$ Prod$({\bf s})$. \vspace{.5em}
\noindent{\bf Terminology.} When ${\bf w}:=\langle x\rangle$ is a length-one sequence, then $x$ may serve as a nickname for {\bf w}. If a sequence {\bf w} is of length $|{\bf w}|=k$ in $X$, and if {\bf w}
occurs exactly $m$ times as a term in ${\bf r}=\langle {\bf w},{\bf w},\ldots, {\bf w}\rangle$, then we write ${\bf r := w}^{\beta(m)}$. That is, ${\bf w}^{\beta(m)}$ is the ``block'' consisting of exactly
$m$ adjacent occurrences of {\bf w}. Thus {\bf r} has length $m$ when seen as a sequence in the set $\{{\bf w}\}$, but $|{\bf r}|=mk$ when {\bf r} is viewed as a sequence in $X$.
Whereas ${\bf w}^{\beta(m)}$ denotes a sequence comprised of $m$ adjacent occurrences of the subsequence {\bf w}, the expression
$(\bigcirc{\bf w})^m$ denotes the compositional product $\bigcirc{\bf w}\circ\bigcirc{\bf w}\circ\cdots\circ\bigcirc{\bf w}$ of $m$ adjacent occurrences of the permutation $\bigcirc{\bf w}$.
That is to say, if ${\bf r}:={\bf w}^{\beta(m)}$ is a sequence in Sym$(n)$ then $\bigcirc{\bf r} = \bigcirc({\bf w}^{\beta(m)}) = (\bigcirc{\bf w})^m$. \vspace{.5em}
Each sequence in Sym$(2)$ is both perm-complete and CI. If $|{\bf s}|<3$ for {\bf s} a sequence in Sym$(n)$ then {\bf s} is CI. However, for ${\bf s}$ in Sym$(n)$ with $n\ge3$ and with $|{\bf s}|\ge3$, the plot
thickens.\vspace{.5em}
When $a\not=b$ are vertices in a multigraph ${\cal G}$, the {\em multiplicity} in ${\cal G}$ of its multiedge $(a\,b)$ is the number $\mu_{\cal G}(a\,b)\ge0$ of simple edges in the bundle comprising that multiedge.
Thus, when $\mu_{\cal G}(a\,b)=0$, there is no simple edge in ${\cal G}$ connecting $a$ with $b$. But, when $\mu_{\cal G}(a\,b)=1$, then the multiedge $(a\,b)$ is itself simple in ${\cal G}$. For
${\bf u}\in1^{n-2}2^1$, the multiplicity $\mu_{\bf u}(a\,b)$ in {\bf u} of the transposition $(a\,b)$ as a term in {\bf u} equals $\mu_{{\cal T}({\bf u})}(a\,b)$.
\vspace{.5em}
\underline{Reminder}: $f\in1^{n-2}2^1$ says merely that $f$ is a transposition in Sym$(n)$. A multigraph ${\cal G}$ we call CI iff ${\cal G}$ is isomorphic to ${\cal T}({\bf t})$ for a CI sequence {\bf t}
in $1^{n-2}2^1$. Without ado we will apply obviously corresponding terminology interchangeably to transpositional sequences and to isomorphs of transpositional multigraphs.
\subsection{Conjugacy invariant transpositional sequences}
We proceed to identify the CI transpositional sequences {\bf u} in Sym$(n)$. It suffices to treat those {\bf u} for which ${\cal T}({\bf u})$ is a connected multigraph on the vertex set $n$; this narrow focus is
embodied in Theorem \ref{CCIT}.
\begin{thm}\label{OddTree} Let ${\cal T}({\bf u})$ be a multitree with no even-multiplicity multiedges, and none of whose vertices lie on more than one non-simple multiedge. Then
{\rm Prod}$({\bf u})\subseteq n^1$. \end{thm}
\begin{proof} We induce on $|{\bf u}|\ge n-1$.
{\sf Basis Step}: The theorem is easily seen to hold when ${\cal T}({\bf u})$ is simple. Proofs occur in \cite{Denes} and in \cite{Silberger}.
{\sf Inductive Step}: Pick $k>n$. Suppose the theorem holds for all {\bf u} for which $|{\bf u}|\in\{n-1,n,\ldots,k-1\}$. Let $|{\bf u}|\in\{k-1,k\}$, and let {\bf u} satisfy the hypotheses of the theorem.
Let $(x\,y)$ be a multiedge of ${\cal T}({\bf u})$ such that neither $x$ nor $y$ is an endpoint of any non-simple multiedge $(x'\,y')\not=(x\,y)$. Let {\bf v} be a sequence created by inserting into {\bf u} two
additional occurrences, $(x\,y)_1$ and $(x\,y)_2$, of the transposition $(x\,y)$. Thus ${\bf v} = \langle{\bf a},(x\,y)_1,{\bf b},(x\,y)_2,{\bf c}\rangle$ for some subsequences {\bf a}, {\bf b}, and {\bf c} of {\bf u}
for which
${\bf u = abc}$. If $|{\bf b}|=0$ then obviously $\bigcirc{\bf v}=\bigcirc{\bf u}\in n^1$. So suppose that $|{\bf b}|>0$.
Let the first term of {\bf b} be $(t\,z)$. If $\{x,y\}=\{t,z\}$ then $(x\,y)_1\circ(t\,z)=\iota$, and so again $\bigcirc{\bf v}=\bigcirc{\bf u}$. But, if $\{x,y\}\cap\{t,z\}=\emptyset$, then
$(x\,y)_1\circ(t\,z)=(t\,z)\circ(x\,y)_1$, and $(x\,y)_1$ will have migrated one space to the right in {\bf v} towards $(x\,y)_2$. So take it that $y=t$ and that $|\{x,y\}\cap\{y,z\}|=1$.
Now, $(x\,y)_1\circ(y\,z)=(x\,z)\circ(x\,y)_1$. The tree ${\cal T}({\bf v})$ does not have the triangle $\{(x\,y)_1,(y\,z),(x\,z)\}$ as a subgraph. So the transposition $(x\,z)$ does not occur as a term in {\bf u}.
Indeed, if {\bf v} satisfies the hypotheses of the theorem, then the multiplicity in {\bf v} of $(y\,z)$ is $1$, since the multiplicity of $(x\,y)$ in {\bf v} is greater than one. Thus the tree ${\cal T}({\bf w})$ is just the
modification of ${\cal T}({\bf v})$ obtained by the replacement of the simple multiedge $(y\,z)$ of ${\cal T}({\bf v})$ by the simple multiedge $(x\,z)$. That is, {\bf w} has a single occurrence of the transposition
$(x\,z)$ as a term but has no $(y\,z)$ terms, whereas {\bf v} has a single occurrence of $(y\,z)$ but has no occurrences of $(x\,z)$. Clearly {\bf w} also satisfies the hypotheses of the theorem, and $|{\bf w}|=
|{\bf v}|\in\{k+1,k+2\}$, since ${\bf w} = \langle{\bf a'},(x\,y)_1,{\bf b'},(x\,y)_2,{\bf c}\rangle$ where ${\bf a'}:=\langle{\bf a},(x\,z)\rangle$ and where ${\bf b'}$ is the sequence created by removing the
leftmost term $(y\,z)$ of {\bf b}. So in this fashion too, $(x\,y)_1$ migrates one space rightward towards $(x\,y)_2$. The rightward migrations of $(x\,y)_1$ continue until $(x\,y)_1$ either abuts on $(x\,y)_2$ or
on some occurrence of $(x\,y)$ to the left of $(x\,y)_2$. Thus the rightward migrations of $(x\,y)_1$ ultimately result in a sequence ${\bf w'}$ with $|{\bf w'}|\le k$ and for which $\bigcirc{\bf w'}\simeq\bigcirc{\bf u}$.
Thus the inductive step is successful, and the theorem follows. \end{proof}
\begin{thm}\label{n=3} Let {\bf u} be a sequence in $1^12^1$ with $\bigcup${\rm Supp}$({\bf u})=3$. Then {\bf u} is {\rm CI} if and only if either $|{\bf u}|$ is odd or ${\cal T}({\bf u})$ is a
multitree with at least one simple multiedge. \end{thm}
\begin{proof} If $|{\bf u}|$ is odd then Prod$({\bf u})\subseteq 1^12^1$, and so {\bf u} is CI. For the rest of the proof we take $|{\bf u}|$ to be even.
Let ${\cal T}({\bf u})$ be a tree with a simple multiedge $(0\, 1)$. If the multiedge $(1\,2)$ is simple too, then {\bf u} is CI. So take it that ${\bf u} := \langle(0\,1),(1\,2)^{\beta(2i+1)}\rangle$ for some $i\ge1$.
Let ${\bf r}\in$ Seq$({\bf u})$. Then ${\bf r} = \langle(1\,2)^{\beta(j)},(0\,1),(1\,2)^{\beta(2i+1-j)}\rangle$ for some $j\in 2i+2$. So $\bigcirc{\bf r}=(1\,2)^j\circ(0\,1)\circ(1\,2)^{2i+1-j}$. If $j$ is even then
$2i+1-j$ is odd, whence $\bigcirc{\bf r}= (0\,1)\circ(1\,2)=(0\,2\,1)\in3^1$, and if $j$ is odd then $2i+1-j$ is even, and so $\bigcirc{\bf r}=(1\,2)\circ(0\,1)=(0\,1\,2)\in3^1$. Therefore {\bf u} is CI in the event
that ${\cal T}({\bf u})$ is a multitree, one of whose multiedges has multiplicity one.\vspace{.3em}
To establish the converse, we first consider the case where ${\cal T}({\bf u})$ is a multitree, and assume it has no simple multiedge. We can take it that ${\bf u}:=\langle(0\,1)^{\beta(i)},(1\,2)^{\beta(j)}\rangle$,
where $i\ge2$ and $j\ge2$ and $i+j$ is even. The argument about this multitree obviously reduces to only two cases.
\underline{Case}. $i=j=2$. Then $\bigcirc{\bf u}=\iota\restrict3\not\simeq(0\,1\,2)=((0\,1)\circ(1\,2))^2$.
\underline{Case}. $i=j=3$. Then $\bigcirc{\bf u}=(0\,2\,1)\not\simeq\iota\restrict3=(0\,2\,1)^3=((0\,1)\circ(1\,2))^3$.\vspace{.3em}
Now suppose that ${\cal T}({\bf u})$ is a multitriangle with \ ${\bf u}:=\langle(0\,1)^{\beta(a)},(1\,2)^{\beta(b)},(2\,0)^{\beta(c)}\rangle$, where $1\le\min\{a,b,c\}$ and where $a+b+c$ is even. The argument
again reduces to two cases.
\underline{Case}. $a=b=1$ and $c=2$. Then $\bigcirc{\bf u} = (0\,2\,1)\not\simeq\iota\restrict3=(0\,1)\circ(2\,0)\circ(1\,2)\circ(2\,0)$.
\underline{Case}. $a=b=c=2$. Then $\bigcirc{\bf u}=\iota\restrict3\not\simeq(0\,1\,2)=(0\,1)\circ(1\,2)\circ(2\,0)\circ(1\,2)\circ(0\,1)\circ(2\,0)$.\vspace{.3em}
In all four cases we found an ${\bf r}\in$ Seq$({\bf u})$ with $\bigcirc{\bf r}\not\simeq\bigcirc{\bf u}$. So {\bf u} is not CI. \end{proof}
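For small $n$ the CI property itself is decidable by exhaustion: a sequence is CI exactly when all of its rearrangements yield products of a single cycle type. The sketch below (ours, with composition left-to-right) confirms one failing case and one passing case from the proof just given.
\begin{verbatim}
from itertools import permutations

def t(i, j, n):
    p = list(range(n)); p[i], p[j] = j, i
    return tuple(p)

def prod(seq, n):
    p = list(range(n))
    for s in seq:
        p = [s[y] for y in p]
    return tuple(p)

def ctype(p):
    seen, tys = set(), []
    for x in range(len(p)):
        ln, y = 0, x
        while y not in seen:
            seen.add(y); y = p[y]; ln += 1
        if ln: tys.append(ln)
    return tuple(sorted(tys))

def is_CI(seq, n):
    return len({ctype(prod(list(r), n)) for r in permutations(seq)}) == 1

u = [t(0,1,3)]*2 + [t(1,2,3)]*2 + [t(2,0,3)]*2   # multitriangle, a=b=c=2
print(is_CI(u, 3))                               # False
v = [t(0,1,3)] + [t(1,2,3)]*3                    # simple (0 1), |v| even
print(is_CI(v, 3))                               # True
\end{verbatim}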
Henceforth {\bf u} is a sequence in $1^{n-2}2^1$ for which ${\cal T}({\bf u})$ is a connected multigraph whose vertex set is $n$. We have characterized the CI sequences for $n<4$. From now on, $n\ge4$.
The {\bf u} we will be treating are of two sorts: \ {\sf One:} ${\cal T}({\bf u})$ is a multitree. \ {\sf Two:} ${\cal T}({\bf u})$ has a circuit subgraph. First we treat Sort One. \vspace{.5em}
By an $m$-{\em twig} of a multigraph ${\cal G}$ we mean any multiplicity-$m$ multiedge $(v\,w)$, one of whose vertices has exactly one neighbor in ${\cal G}$. If $w$ is the only neighbor of the vertex $v$,
then $v$ is the {\em leaf} of the multitwig.
\begin{thm}\label{SufficiencyCI} Let the transpositional multitree ${\cal T}({\bf u})$ have exactly $b$ multiedges of even multiplicity, where ${\bf u}:=\langle u_0,u_1,\ldots, u_{k-1}\rangle$ is of length
$|{\bf u}|:=k\ge3$ in $1^{n-2}2^1$ with $n\ge4$. Let the following two conditions hold:
{\bf 3.6.1} \ No vertex lies on more than one non-simple multiedge.
{\bf 3.6.2} \ Each even-multiplicity multiedge is a multitwig whose non-leaf vertex has exactly two neighbors.
\noindent Then {\rm Prod}$({\bf u})\subseteq 1^b(n-b)^1$, and therefore {\bf u} is {\rm CI}. \end{thm}
\begin{proof} Given $n\ge4$, we induce on $b\in\{0,1,\ldots,n-1\}$.
{\sf Basis Step}: $b=0$. This is just Theorem \ref{OddTree}.
{\sf Inductive Step}: Suppose, for each $m\in\{4,5,\ldots, n-1\}$ and each $X\subseteq n$ with $|X|=m$, that the theorem holds for every transpositional sequence {\bf t} in Sym$(X)$ for which ${\cal T}({\bf t})$
is a multitree with vertex set $X$. By hypothesis, {\bf u} is a sequence in $1^{n-2}2^1$ that satisfies 3.6.1 and 3.6.2, where ${\cal T}({\bf u})$ has exactly $b$ even-multiplicity multitwigs, and where all of the
non-multitwig multiedges of ${\cal T}({\bf u})$ are of odd multiplicity. Suppose $b\ge1$.
Let $(0\,1)$ be an even-multiplicity multitwig of ${\cal T}({\bf u})$ with leaf $0$. Let {\bf v} be the subsequence of {\bf u} obtained by removing all occurrences of $(0\,1)$ as terms in {\bf u}. Then ${\cal T}({\bf v})$
is a multitree on the set $X := n\setminus\{0\}$. Obviously ${\cal T}({\bf v})$ is a multitree that satisfies 3.6.1 and 3.6.2 and that has exactly $b-1$ even-multiplicity multitwigs. Since $|X|=n-1$, the inductive hypothesis
implies that Prod$({\bf v})\subseteq 1^{b-1}((n-1)-(b-1))^1 = 1^{b-1}(n-b)^1$ and that {\bf v} is CI. By 3.6.2, the only multiedge of ${\cal T}({\bf u})$, other than $(0\,1)$, to share the vertex $1$ is a simple multiedge
$(1\,x)$ of ${\cal T}({\bf v})$, and $(1\,x)$ is the only term of {\bf u} that fails to commute with $(0\,1)$. So $f\leftrightarrow f\cup(0)$ is a one-to-one matching Prod$({\bf v})\leftrightarrow$ Prod$({\bf u})$. It follows
that Prod$({\bf u})\subseteq 1^b(n-b)^1$ since Prod$({\bf v})\subseteq 1^{b-1}(n-b)^1$ by the inductive hypothesis. \end{proof}
Lemma \ref{n=3} gives necessary and sufficient conditions for {\bf u} to be CI when $n\le3$. Theorems \ref{OddTree} and \ref{SufficiencyCI} give sufficient conditions for {\bf u} to be CI when $n\ge4$. We will show
that those conditions are also necessary for $n\ge4$. The crux is to establish that, if the connected multigraph ${\cal T}({\bf u})$ on the vertex set $n\ge4$ fails to be a multitree satisfying both 3.6.1 and 3.6.2, then
{\bf u} is not CI. This project involves two subprojects:
The first such subproject will show that, if ${\cal T}({\bf u})$ is a ``pathological'' multitree -- which is to say, one for which either 3.6.1 or 3.6.2 fails -- then {\bf u} cannot be CI.
The second will show that, if $n\ge4$ and ${\cal T}({\bf u})$ has a circuit submultigraph, then again {\bf u} cannot be CI.\vspace{.5em}
For the balance of \S3, the expression {\bf u} will denote a sequence in $1^{n-2}2^1$ with $|{\bf u}|\ge n-1\ge3$, and such that the transpositional multigraph ${\cal T}({\bf u})$ is connected on the vertex set
$n$. \vspace{1em}
\centerline{\sf Subproject One: \ To prove that, if ${\cal T}({\bf u})$ is a pathological multitree, then {\bf u} fails to be CI}\vspace{.5em}
We call a sequence ${\bf s}$ {\sl reduced} iff no entity occurs more than three times as a term in ${\bf s}$.
When at least one entity occurs as a term in a sequence ${\bf a}$ more than $3$ times, we may produce a reduced subsequence ${\bf c}$ of ${\bf a}$ by means of a string of ``elementary reductions'':
If $x$ occurs as a term more than $3$ times in {\bf a}, then a subsequence {\bf b} of {\bf a} is an {\em elementary reduction} of {\bf a} if {\bf b} is obtained by removing from {\bf a} two occurrences of $x$. The resulting
such ${\bf b}$ is of length $|{\bf a}|-2$.
A {\em reduction} of ${\bf a}$ is any reduced subsequence of ${\bf a}$ that results from a sequence of elementary reductions.
Clearly each sequence ${\bf u}$ in $1^{n-2}2^1$ has a unique reduced subsequence. If ${\bf r}$ is a reduced subsequence of a transpositional sequence ${\bf s}$ then of course ${\rm Seq}({\bf r})$ is the set of all reduced subsequences of elements in ${\rm Seq}({\bf s})$.
We will employ the contrapositive version of the following obvious fact:
\begin{lem}\label{Reduced} A reduced subsequence of a {\rm CI} transpositional sequence is {\rm CI}. \end{lem}
We henceforth take it that all of our transpositional sequences are reduced, unless specified otherwise.
\begin{lem}\label{Untwig} If $\mu_{\bf u}(0\,1)=2$, but if $(0\,1)$ is not a multitwig of the multitree ${\cal T}({\bf u})$, then {\bf u} fails to be {\rm CI}. \end{lem}
\begin{proof} In the spirit developed earlier, ``$\mu_{\bf u}$'' is an abbreviation for ``$\mu_{{\cal T}({\bf u})}$''.
Let $\mu_{\bf u}(0\,1)=2$ and the multiedge $(0\,1)$ of ${\cal T}({\bf u})$ not be a multitwig. Let {\bf v} be the subsequence of {\bf u} resulting from the removal from {\bf u} of its two occurrences
of $(0\,1)$ as terms. \ ${\cal T}({\bf v})$ is the disjoint union ${\cal G}_0\,\dot{\cup}\,{\cal G}_1$ of two multitrees, each of which has a vertex set containing more than one vertex since the excised multiedge $(0\,1)$
of ${\cal T}({\bf u})$ was not a multitwig. So {\bf v} consists of two nonempty complementary subsequences ${\bf v}_0$ and ${\bf v}_1$, with $|{\bf v}_0|+|{\bf v}_1|=|{\bf v}|=|{\bf u}|-2\ge n-3\ge1$, and
for which ${\cal G}_0={\cal T}({\bf v}_0)$ and ${\cal G}_1 = {\cal T}({\bf v}_1)$. That is to say, the terms of ${\bf v}_i$ are the simple edges of ${\cal G}_i$ for each $i\in2$.
Let $f_0$ be the component of $\bigcirc{\bf v}_0$ such that $0\in$ supp$(f_0)$, and let $f_1$ be the component of $\bigcirc{\bf v}_1$ such that $1\in$ supp$(f_1)$, observing that neither $f_0$ nor $f_1$ is a $1$-cycle.
Since our real concern is Seq$({\bf u})$, we can take it that ${\bf u}=\langle{\bf v}_0,(0\,1)^{\beta(2)},{\bf v}_1\rangle$ and that ${\bf v}={\bf v}_0{\bf v}_1$. Of course, $0$ is a vertex in ${\cal G}_0$ and $1$ is a vertex in
${\cal G}_1$. Then $f_0$ and $f_1$ are disjoint nontrivial cyclic components of the permutation $\bigcirc{\bf u}=\bigcirc{\bf v}=\bigcirc{\bf v}_0\bigcirc{\bf v}_1$.
Define ${\bf u'} := \langle(0\,1),{\bf v}_0,(0\,1),{\bf v}_1\rangle \in$ Seq$({\bf u})$. All of the components of $\bigcirc{\bf u}$ other than $f_0$ and $f_1$ are components also of $\bigcirc{\bf u'}$.
So the only change made to $\bigcirc{\bf u}$ that creates $\bigcirc{\bf u'}$ is the replacement of the two components $f_0$ and $f_1$ with a new pair $(0)$ and $h$, where $h$ is a cycle with $1\in$ supp$(h)$,
and with $|h|=|f_0|+|f_1|-1$. So $\bigcirc{\bf u'} \not\simeq \bigcirc{\bf u}$, and hence {\bf u} is not CI. \end{proof}
Lemma \ref{Untwig} shows, the labeling of vertices being immaterial, that for $n\ge4$, if the transpositional multitree ${\cal T}({\bf u})$ has an even-multiplicity multiedge which is not a multitwig, then {\bf u} cannot be CI.
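A concrete instance for $n=4$: on the multipath $2-0=1-3$, whose doubled multiedge $(0\,1)$ is interior rather than a multitwig, rearrangement already changes the cycle type, exactly as the lemma predicts. (The sketch is ours; composition is left-to-right as before.)
\begin{verbatim}
from itertools import permutations

def t(i, j, n=4):
    p = list(range(n)); p[i], p[j] = j, i
    return tuple(p)

def prod(seq):
    p = list(range(4))
    for s in seq:
        p = [s[y] for y in p]
    return tuple(p)

def ctype(p):
    seen, tys = set(), []
    for x in range(len(p)):
        ln, y = 0, x
        while y not in seen:
            seen.add(y); y = p[y]; ln += 1
        if ln: tys.append(ln)
    return tuple(sorted(tys))

u = [t(2,0), t(0,1), t(0,1), t(1,3)]
print({ctype(prod(list(r))) for r in permutations(u)})
# two distinct types, (2,2) and (1,3): so u is not CI
\end{verbatim}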
\begin{cor}\label{TwoTwo} Let $\mu_{\bf u}(0\,1) =\mu_{\bf u}(1\,2)=2$. Then {\bf u} is not {\rm CI}. \end{cor}
\begin{proof} Pretend that {\bf u} is CI. It follows by Lemma \ref{Untwig} that both of the multiedges $(0\,1)$ and $(1\,2)$ of the multitree ${\cal T}({\bf u})$ are multitwigs. Therefore, since $n\ge4$, there exists
$x\in n\setminus 3$ for which $(1\,x)$ is a multiedge of ${\cal T}({\bf u})$. Let {\bf v} be the subsequence of {\bf u} that is produced by the removal from {\bf u} of both of the terms that are occurrences of the
transposition $(0\,1)$ and both of the terms that are occurrences of $(1\,2)$. Then $|{\bf v}| = |{\bf u}|-4 \ge 5-4 = 1$. Let $f$ be the component of $\bigcirc{\bf v}$ with either $f = (1)$ or $1\in$ supp$(f)$.
Observe that $\{0,2\}\cap\,$supp$(f) = \emptyset$. Since our interest lies in the sets Seq$({\bf u})$ and Seq$({\bf v})$, we can suppose that ${\bf u}=(0\,1)^{\beta(2)}(1\,2)^{\beta(2)}{\bf v}$. Of course $f$ is a
component of $\bigcirc{\bf v}$ = $\bigcirc{\bf u}$. Defining ${\bf u'} := \langle\big((0\,1),(1\,2)\big)^{\beta(2)},{\bf v}\rangle \in$ Seq$({\bf u})$, we see that $\bigcirc{\bf u'} = (0\,1\,2)\circ\bigcirc{\bf v}$, a
permutation which is identical to the permutation $\bigcirc{\bf u}$ in all component cycles that are disjoint from $3\,\cup\,$supp$(f)$. Where $(0), (2)$, and $f$ are components of $\bigcirc{\bf u}$, the permutation
$\bigcirc{\bf u'}$ instead has the cycle $(0\,1\,2)\circ f$ of length $|f|+2$. Thus $\bigcirc{\bf u'}\not\simeq\bigcirc{\bf u}$, and so {\bf u} is not CI. \end{proof}
\begin{cor}\label{TwoThree} Let ${\bf u} := (0\,1)^{\beta(2)}(1\,2)^{\beta(3)}{\bf v}$ and {\bf v} be sequences in $1^{n-2}2^1$, where neither $(0\,1)$ nor $(1\,2)$ is a term in {\bf v}. Then {\bf u} is not {\rm CI}.
\end{cor}
\begin{proof} Assume that {\bf u} is CI. By Lemma \ref{Untwig}, the multiedge $(0\,1)$ of the multitree ${\cal T}({\bf u})$ is a multitwig of ${\cal T}({\bf u})$. So there is a component $f$ of the permutation
$\bigcirc{\bf v}$ to which exactly one of the following two cases applies.
\underline{Case}: Either $2\in$ supp$(f)$ or $f = (2)$, and $\bigcirc{\bf u} = (0\,1)^2\circ(1\,2)^3\circ\bigcirc{\bf v} = (1\,2)\circ\bigcirc{\bf v}$. So $f_2 := (1\,2)\circ f$ is a cyclic component of $\bigcirc{\bf u}$.
Note: \ $|f_2|=|f|+1$, since the point $1$ is incorporated into the cycle $f$ in order to create $f_2$. [Paradigm example: When $f := (2\,3\,4)$ then $f_2 = (1\,2)\circ f = (1\,2)\circ(2\,3\,4)=(1\,3\,4\,2)$.] Define
${\bf u}_2 := \langle(1\,2),(0\,1),(1\,2),(0\,1),(1\,2),{\bf v}\rangle \in$ Seq$({\bf u})$. Then $\bigcirc{\bf u}_2 = (0\,1)\circ\bigcirc{\bf v} = (0\,1)\bigcirc{\bf v}$, and $f$ is a cyclic component of $\bigcirc{\bf u}_2$.
\underline{Case}: Either $1\in{\rm supp}(f)$ or $f=(1)$, and $\bigcirc{\bf u} = (1\,2)\circ\bigcirc{\bf v}$. So $f_1 = (1\,2)\circ f$ is a cyclic component of $\bigcirc{\bf u}$. Let
${\bf u}_1 := \langle(0\,1),(1\,2)^{\beta(3)},(0\,1),{\bf v}\rangle$. Then $\bigcirc{\bf u}_1 = (0\,2)\circ\bigcirc{\bf v} = (0\,2)\bigcirc{\bf v}$, and $f$ is a component of $\bigcirc{\bf u}_1$. But $|f_1|=|f|+1$.
We showed, for each $i\in\{1,2\}$, that $|f_i|=|f|+1$. Moreover, $\bigcirc{\bf u}$ has one more $1$-cycle and one fewer $2$-cycle than $\bigcirc{\bf u}_i$ has, while all other cyclic components of $\bigcirc{\bf u}_i$
are the same as those of $\bigcirc{\bf u}$. Hence $\bigcirc{\bf u}_i\not\simeq\bigcirc{\bf u}$ in both Cases. Thus our assumption fails. Therefore {\bf u} is not CI.\end{proof}
\begin{lem}\label{ThreeThree} Let ${\bf u} := (0\,1)^{\beta(3)}(1\,2)^{\beta(3)}{\bf v}_0{\bf v}_1{\bf v}_2$ where neither $(0\,1)$ nor $(1\,2)$ is a term in the sequence ${\bf v}_0{\bf v}_1{\bf v}_2$, and where
no vertex of ${\cal T}({\bf v}_i)$ is a vertex in ${\cal T}({\bf v}_j)$ if $i\not=j$. Then {\bf u} is not {\rm CI}. \end{lem}
\begin{proof} We can suppose at least one of the three subsequences ${\bf v}_i$ to be nonvacuous since $n\ge4$. Each ${\cal T}({\bf v}_i)$ is a (possibly one-vertex) submultigraph of
the transpositional multitree ${\cal T}({\bf u})$, where for each $i\in3$ we are given that $i$ is a vertex in ${\cal T}({\bf v}_i)$. Now, $\bigcirc{\bf u} = (0\,2\,1)\circ\bigcirc{\bf v}_0\bigcirc{\bf v}_1\bigcirc{\bf v}_2$.
For each $i\in3$, let $f_i$ be the component of $\bigcirc{\bf v}_i$ for which either $i\in$ supp$(f_i)$ or $f_i=(i)$. Then $\bigcirc{\bf u}$ has a cyclic component $f$ of length $|f|=|f_0|+|f_1|+|f_2|$ with
$3\subseteq$ supp$(f)$.
Let ${\bf u'} := \langle (1\,2),(0\,1),{\bf v}_0,(1\,2),(0\,1),{\bf v}_1,(1\,2),(0\,1),{\bf v}_2 \rangle$. So $\bigcirc{\bf u'} = (0\,1\,2)\circ\bigcirc{\bf v}_0\circ(0\,1\,2)\circ\bigcirc{\bf v}_1\circ(0\,1\,2)\circ\bigcirc{\bf v}_2$ lacks the cyclic component $f$ of $\bigcirc{\bf u}$, but in place of $f$ it has the three cycles $f_0, f_1$ and $ f_2$, and otherwise the cycles of $\bigcirc{\bf u'}$ are identical to those of $\bigcirc{\bf u}$. So $\bigcirc{\bf u'}\not\simeq\bigcirc{\bf u}$ although ${\bf u'}\in$ Seq$({\bf u})$. Therefore {\bf u} is not CI. \end{proof}
\begin{cor}\label{Fork} Let $\mu_{\bf u}(0\,1)=2$, let $\mu_{\bf u}(1\,2) = 1 = \mu_{\bf u}(1\,3)$, and let ${\bf u} := \langle(0\,1)^{\beta(2)},(1\,2),(1\,3),{\bf v}_2{\bf v}_3\rangle$, where the two submultigraphs
${\cal T}({\bf v}_2)$ and ${\cal T}({\bf v}_3)$ of ${\cal T}({\bf u})$ are disjoint. Then {\bf u} is not {\rm CI}. \end{cor}
\begin{proof} Assume that {\bf u} is CI. By Lemma \ref{Untwig}, the multiedge $(0\,1)$ of ${\cal T}({\bf u})$ is a multitwig with leaf $0$. We take it that $f_2$ is a cyclic component of
$\bigcirc{\bf v}_2$ for which either $2\in$ supp$(f_2)$ or $f_2=(2)$, and likewise that $f_3$ is a cyclic component of $\bigcirc{\bf v}_3$ for which either $3\in$ supp$(f_3)$ or $f_3=(3)$. Now
$\bigcirc{\bf u} = (0)fg$, where $f$ is a cycle incorporating the point $1$ together with the points in $f_2$ and $f_3$ into a single cycle of consequent length $|f|=1+|f_2|+|f_3|$, where $g$ is a permutation
that involves the points in $\{4,5,\ldots,n-1\}$ which occur neither in $f_2$ nor in $f_3$. On the other hand, defining ${\bf u'} := \langle(0\,1),(1\,2),(0\,1),(1\,3),{\bf v}_2{\bf v}_3\rangle \in$ Seq$({\bf u})$, we
find that $\bigcirc{\bf u'} = f'_2f'_3g$, where $f'_2$ is a cycle of length $|f'_2|=1+|f_2|$ that incorporates together the point $0$ and the points in the cycle $f_2$, and where $f'_3$ is a cycle of length
$|f'_3|=1+|f_3|$ that incorporates together the point $1$ and the points in the cycle $f_3$. So $\bigcirc{\bf u'}\not\simeq\bigcirc{\bf u}$, violating our assumption that {\bf u} is CI. \end{proof}
Subproject One is completed. We summarize it in the following immediate conjunction of Lemma \ref{Untwig}, Corollaries \ref{TwoTwo} and \ref{TwoThree}, Lemma \ref{ThreeThree}, and Corollary \ref{Fork}:
\begin{thm}\label{NecessaryCI} Let ${\cal T}({\bf u})$ be a multitree with $n\ge4$. Then ${\bf u}$ is {\rm CI} if and only if it satisfies 3.6.1 and 3.6.2. \end{thm}
\vspace{1em}
\centerline{\sf Subproject Two: \ Proving for $n\ge4$ that, if {\bf u} is CI, then ${\cal T}({\bf u})$ has no circuits} \vspace{.5em}
For $n\ge4$, our focus now is upon those sequences {\bf u} in $1^{n-2}2^1$ for which the transpositional multigraph ${\cal T}({\bf u})$ is connected on the vertex set $n$, but is not a multitree; instead,
${\cal T}({\bf u})$ has at least one circuit subgraph. We will now provide some convenient additional terminological background.\vspace{.5em}
Although we write a sequence usually between pointy brackets -- e.g., $\langle x_0,x_1,\ldots,x_{k-1}\rangle$ -- with its terms separated by commas, when ambiguity is not at issue, we may write it with (some
or all of) its terms concatenated (i.e., without commas). However, when $f$ and $g$ are permutations whose supports are disjoint, we have been writing $f\circ g$ as $fg$ in order to indicate this disjointness.
Context will make it clear whether an expression denotes disjoint permutations instead of concatenated sequences.
When a sequence is of length one, we call its single term {\em primitive}.
A few specific sequences, to which we frequently refer, we will honor with the adjective {\em basic}.
Thus far, all of the sequences we have treated in detail are {\em permutational} sequences; their terms either are permutations or are characters denoting sequences of permutations. Indeed, almost all of our
permutational sequences are {\em transpositional}: Their terms are either transpositions or characters denoting sequences of transpositions. Non-basic permutational sequences get lower-case bold-face
Latin-letter names.
For the present subproject, when $n\ge4$, we shall have recourse to two basic transpositional sequences, $\sigma(n)$ and $\tau(n)$. But we shall use number (integer) sequences as well; our basic number sequence
is written $\nu(n)$. Number sequences other than $\nu(n)$ will usually receive lower-case Latin letter designations.
\begin{defn}\label{Sequences} \ $\tau(n) := \langle(0\,1),(1\,2),\ldots,(n-2\,\,n-1)\rangle$ \ and \ $\sigma(n) := \langle\tau(n),(n-1\,\,0)\rangle$. Also, \ $\nu(n) := \langle 0,1,\ldots,n-1\rangle$. \end{defn}
Of course ${\cal T}(\sigma(n))$ is a simple circuit multigraph on $n$ vertices, with $n\ge4$ understood, and ${\cal T}(\tau(n))$ is the simple branchless multitree resulting from the removal of the simple multiedge
$(n-1\,\,0)$ from ${\cal T}(\sigma(n))$.\vspace{.5em}
Before we treat circuit-containing connected multigraphs with $n\ge4$, we recall that Theorem \ref{n=3} settles the case for $n\le3$. Now, for $n\ge4$, we show that, if the transpositional multigraph
${\cal T}({\bf u})$ contains a 4-vertex simple subgraph which is a triangle sprouting a twig, then {\bf u} is not CI. Remember: \ $4 := \{0,1,2,3\}$.
\begin{thm}\label{Four} Let $n\ge4$, and let ${\bf u}$ be a sequence in $1^{n-2}2^1$ which has ${\bf h} := \langle(0\,1),(1\,2),(0\,2),(0\,3)\rangle$ as a subsequence.\footnote{We write ${\bf u\setminus h}$ to designate the subsequence of ${\bf u}$ obtained by removing from ${\bf u}$ its subsequence ${\bf h}$.} Then {\bf u} is not {\rm CI}. \end{thm}
\begin{proof} Let $W := \{c: c$ is a cyclic component of $\bigcirc({\bf u \setminus h})$ with $4\,\cap\,$supp$(c)\not=\emptyset\}$. Let $w\in{\rm Sym}(n)$ be the permutation having $W$ as its set of cyclic components.
It suffices to show that $\bigcirc{\bf p}\circ w\not\simeq\bigcirc{\bf h}\circ w$ for some ${\bf p}\in{\rm Seq}({\bf h})$. There are five cases to treat.\vspace{.5em}
{\sf Case 1:} \ $|4\,\cap\,$supp$(c)|=1$ for every $c\in W$. Then $W = \{(0\,{\rm s}_0),(1\,{\rm s}_1),(2\,{\rm s}_2),(3\,{\rm s}_3)\}$ for some sequences ${\rm s}_i$ in $\{4,5,\ldots,n-1\}$. Consider the following three
rearrangements ${\bf p}_i\in$ Seq$({\bf h})$:\vspace{.3em}
\centerline{${\bf p}_1 := \langle(0\,2),(0\,3),(1\,2),(0\,1)\rangle$ \ \ and \ \ ${\bf p}_2 := \langle(0\,2),(0\,1),(0\,3),(1\,2)\rangle$ \ \ and \ \ ${\bf p}_3 := \langle(0\,2),(0\,3),(0\,1),(1\,2)\rangle$}\vspace{.3em}
\noindent Then $\bigcirc{\bf p}_1 = (0)(1\,2\,3)$ \ and \ $\bigcirc{\bf p}_2 = (0\,1\,3)(2)$, \ and \ $\bigcirc{\bf p}_3 = (0\,1)(2\,3)$. Consequently $\bigcirc{\bf p}_1\circ w = (1\,2\,3)\circ w =
(1\,2\,3)\circ(0\,{\rm s}_0)(1\,{\rm s}_1)(2\,{\rm s}_2)(3\,{\rm s}_3) = (0\,\,{\rm s}_0)(1\,\,{\rm s}_2\,2\,\,{\rm s}_3\,3\,\,{\rm s}_1)$. Similarly, $\bigcirc{\bf p}_2\circ w = (0\,1\,3)\circ w =
(0\,\,{\rm s}_1\,1\,\,{\rm s}_3\,3\,\,{\rm s}_0)(2\,\,{\rm s}_2)$ \ and \ $\bigcirc{\bf p}_3\circ w = (0\,1)(2\,3)\circ w = (0\,\,{\rm s}_1\,1\,\,{\rm s}_0)(2\,\,{\rm s}_3\,3\,\,{\rm s}_2)$. Summarizing, we have
that\vspace{.3em}
\centerline{$\bigcirc{\bf p}_1\circ w = (0\,\,{\rm s}_0)(1\,\,{\rm s}_2 \, 2\,\,{\rm s}_3\,3\,\,{\rm s}_1)$ and $\bigcirc{\bf p}_2\circ w =(0\,\,{\rm s}_1\,1\,\,{\rm s}_3\,3\,\,{\rm s}_0)(2\,\,{\rm s}_2) $ and
$\bigcirc{\bf p}_3\circ w = (0\,\,{\rm s}_1\,1\,\,{\rm s}_0)(2\,\,{\rm s}_3\,3\,\,{\rm s}_2)$.}\vspace{.3em}
In order to establish that {\bf u} is not CI, it suffices to show that these three permutations $\bigcirc{\bf p}_i\circ w$ are not members of the same one conjugacy class. Observe that, for each $i\in\{1,2,3\}$, the
permutation $\bigcirc{\bf p}_i\circ w$ has exactly two cyclic components, $a_i$ and $b_i$. To argue by contradiction, we assume the multiset equalities $\{|a_1|,|b_1|\}=\{|a_2|,|b_2|\}=\{|a_3|,|b_3|\}$.
Spelled out, these multiset equalities are\vspace{.3em}
\centerline{$\{|(0\,\,{\rm s}_0)|,|(1\,\,{\rm s}_2\,2\,\,{\rm s}_3\,3\,\,{\rm s}_1)|\} = \{|(0\,\,{\rm s}_1\,1\,\,{\rm s}_3\,3\,\,{\rm s}_0)|,|(2\,\,{\rm s}_2)|\} =
\{|(0\,\,{\rm s}_1\,1\,\,{\rm s}_0)|,|(2\,\,{\rm s}_3\,3\,\,{\rm s}_2)|\}$, \ whence }\vspace{.3em}
\centerline{$\{1+|{\rm s}_0|,3+|{\rm s}_2|+|{\rm s}_3|+|{\rm s}_1|\} = \{3+|{\rm s}_1|+|{\rm s}_3|+|{\rm s}_0|,1+|{\rm s}_2|\} = \{2+|{\rm s}_1|+|{\rm s}_0|,2+|{\rm s}_3|+|{\rm s}_2|\}$. }\vspace{.3em}
\noindent Since $1+|{\rm s}_0| < 3+|{\rm s}_1|+|{\rm s}_3|+|{\rm s}_0|$, the equality $\{1+|{\rm s}_0|,3+|{\rm s}_2|+|{\rm s}_3|+|{\rm s}_1|\} = \{3+|{\rm s}_1|+|{\rm s}_3|+|{\rm s}_0|,1+|{\rm s}_2|\}$
implies that $1+|{\rm s}_0| = 1+|{\rm s}_2|$; so $|{\rm s}_0|=|{\rm s}_2|$. Therefore, $\{1+|{\rm s}_0|,3+|{\rm s}_2|+|{\rm s}_3|+|{\rm s}_1|\} = \{2+|{\rm s}_1|+|{\rm s}_0|,2+|{\rm s}_3|+|{\rm s}_2|\}$
implies that $1+|{\rm s}_0| = 2+|{\rm s}_3|+|{\rm s}_2|$ since $1+|{\rm s}_0| < 2+|{\rm s}_1|+|{\rm s}_0|$. Hence, $1+|{\rm s}_0| = 2+|{\rm s}_3|+|{\rm s}_0|$, forcing us to the impossibility $|{\rm s}_3|=-1$.
So the assumed three multiset equalities cannot hold simultaneously. Therefore $\bigcirc{\bf p}_i\circ w\not\simeq\bigcirc{\bf h}\circ w$ for at least one $i\in\{1,2,3\}$. We infer that {\bf u} is not CI in the Case 1
situation.\vspace{.5em}
In the remaining four cases, $\psi$ denotes an arbitrary element in Sym$(4)$.\vspace{.5em}
{\sf Case 2:} \ $W = \{w\}$, and $4\subseteq$ supp$(w)$. That is, $w = \big(\psi(0)\,\,{\rm s}_{\psi(0)}\,\,\psi(1)\,\,{\rm s}_{\psi(1)}\,\,\psi(2)\,\,{\rm s}_{\psi(2)}\,\,\psi(3)\,\,{\rm s}_{\psi(3)}\big)\in n^1$, where the
four s$_{\psi(i)}$ are number sequences, the family of whose nonempty term sets is a partition of the set $n\setminus 4 := \{4,5,\ldots,n-1\}$. Consider the subset $\{{\bf p}_4,{\bf p}_5\}\subseteq$ Seq$({\bf h})$ given by \vspace{.3em}
\centerline{${\bf p}_4 := \langle(0\,1),(0\,3),(0\,2),(1\,2)\rangle$ \ \ and \ \ ${\bf p}_5 := \langle(1\,2),(0\,1),(0\,3),(0\,2)\rangle$. }\vspace{.3em}
\noindent Then $\bigcirc{\bf p}_4 = (0\,2)(1\,3)$ and $\bigcirc{\bf p}_5 = (0\,1)(2\,3)$ and $\bigcirc{\bf h} = (0\,1)\circ(1\,2)\circ(0\,2)\circ(0\,3) = (0\,3)(1\,2)$. Observe that
$\{\bigcirc{\bf p}_4,\bigcirc{\bf p}_5,\bigcirc{\bf h}\} = 2^2\subset$ Alt$(4)$. Hence there exists $\{{\bf p},{\bf q}\}\subseteq$ Seq$({\bf h})$ for which $\bigcirc{\bf p} = \big(\psi(0)\,\,\psi(1)\big)\big(\psi(2)\,\,\psi(3)\big)$
and for which $\bigcirc{\bf q} = \big(\psi(0)\,\,\psi(2)\big)\big(\psi(1)\,\,\psi(3)\big)$. By straightforward computation we now obtain that
\[\bigcirc{\bf p}\circ w = \big(\psi(0)\,\,{\rm s}_{\psi(1)}\,\,\psi(2)\,\,{\rm s}_{\psi(3)}\big)\big(\psi(1)\,\,{\rm s}_{\psi(0)}\big)\big(\psi(3)\,\,{\rm s}_{\psi(2)}\big)\, \not\simeq\,\]
\[ \big(\psi(0)\,\,{\rm s}_{\psi(2)}\,\,\psi(3)\,\,{\rm s}_{\psi(1)}\,\,\psi(2)\,\,{\rm s}_{\psi(0)}\,\,\psi(1)\,\,{\rm s}_{\psi(3)}\big) = \bigcirc{\bf q}\circ w.\] Thus we infer that {\bf u} fails to be CI in the Case 2
situation.\vspace{.8em}
{\sf Case 3:} \ $W = \{c_1,c_2\}$ where $|4\,\cap\,$supp$(c_i)|=2$ for each $i\in\{1,2\}$. So this time we can write $w = c_1c_2 =
\big(\psi(0)\,\,{\rm s}_{\psi(0)}\,\,\psi(1)\,\,{\rm s}_{\psi(1)}\big)\big(\psi(2)\,\,{\rm s}_{\psi(2)}\,\,\psi(3)\,\,{\rm s}_{\psi(3)}\big)$. As in Case 2, here too we can provide $\{{\bf p,q}\}\subseteq$ Seq$({\bf h})$, for which
$\bigcirc{\bf p} = \big(\psi(0)\,\,\psi(1)\big)\big(\psi(2)\,\,\psi(3)\big)$ and for which $\bigcirc{\bf q} = \big(\psi(0)\,\,\psi(2)\big)\big(\psi(1)\,\,\psi(3)\big)$. We compute that
\[\bigcirc{\bf p}\circ w = \big(\psi(0)\,\,{\rm s}_{\psi(1)}\big)\big(\psi(1)\,\,{\rm s}_{\psi(0)}\big)\big(\psi(2)\,\,{\rm s}_{\psi(3)}\big)\big(\psi(3)\,\,{\rm s}_{\psi(2)}\big)\,\not\simeq\]
\[\big(\psi(0)\,\,{\rm s}_{\psi(2)}\,\,\psi(3)\,\,{\rm s}_{\psi(1)}\big)\big(\psi(1)\,\,{\rm s}_{\psi(3)}\,\,\psi(2)\,\,{\rm s}_{\psi(0)}\big) = \bigcirc{\bf q}\circ w.\]
\noindent Thus in the situation of Case 3 we again find that {\bf u} is not CI.\vspace{.5em}
{\sf Case 4:} \ $W = \{c_1,c_2,c_3\}$ with $c_1 := \big(\psi(0)\,\,{\rm s}_{\psi(0)}\,\,\psi(1)\,\,{\rm s}_{\psi(1)}\big)$ and $c_2 :=\big(\psi(2)\,\,{\rm s}_{\psi(2)}\big)$ and $c_3 = \big(\psi(3)\,\,{\rm s}_{\psi(3)}\big)$.
So $w = c_1c_2c_3 = \big(\psi(0)\,\,{\rm s}_{\psi(0)}\,\,\psi(1)\,\,{\rm s}_{\psi(1)}\big)\big(\psi(2)\,\,{\rm s}_{\psi(2)}\big)\big(\psi(3)\,\,{\rm s}_{\psi(3)}\big)$. Let {\bf p} and {\bf q} be as in Cases 2 and 3. Then
\[\bigcirc{\bf p}\circ w = \big(\psi(0)\,\,{\rm s}_{\psi(1)}\big)\big(\psi(1)\,\,{\rm s}_{\psi(0)}\big)\big(\psi(2)\,\,{\rm s}_{\psi(3)}\,\,\psi(3)\,\,{\rm s}_{\psi(2)}\big)\,\not\simeq\]
\[\big(\psi(0)\,\,{\rm s}_{\psi(2)}\,\,\psi(2)\,\,{\rm s}_{\psi(0)}\,\,\psi(1)\,\,{\rm s}_{\psi(3)}\,\,\psi(3)\,\,{\rm s}_{\psi(1)}\big) = \bigcirc{\bf q}\circ w.\] Thus {\bf u} fails to be CI in the Case 4 situation as well.
\vspace{.5em}
{\sf Case 5:} \ $W = \{c_1,c_2\}$ with $c_1 = \big(\psi(0)\,\,{\rm s}_{\psi(0)}\big)$ and $c_2 = \big(\psi(1)\,\,{\rm s}_{\psi(1)}\,\,\psi(2)\,\,{\rm s}_{\psi(2)}\,\,\psi(3)\,\,{\rm s}_{\psi(3)}\big)$. That is to say,
$w = \big(\psi(0)\,\,{\rm s}_{\psi(0)}\big) \big(\psi(1)\,\,{\rm s}_{\psi(1)}\,\,\psi(2)\,\,{\rm s}_{\psi(2)}\,\,\psi(3)\,\,{\rm s}_{\psi(3)}\big)$. The $\bigcirc{\bf r}_i$ of the following six ${\bf r}_i\in{\rm Seq}({\bf h})$ comprise the conjugacy class $1^13^1 \subset{\rm Alt}(4)$ of the six possible $3$-cycles:
\[{\bf r}_1 := \langle(0\,\,1),(1\,\,2),(0\,\,3),(0\,\,2)\rangle\quad\mbox{for which we compute that}\quad \bigcirc{\bf r}_1 = (0)(1\,\,3\,\,2)\]
\[{\bf r}_2 := \langle(0\,\,2),(1\,\,2),(0\,\,3),(0\,\,1)\rangle\quad\mbox{for which we compute that}\quad \bigcirc{\bf r}_2 = (0)(1\,\,2\,\,3)\]
\[{\bf r}_3 := \langle(1\,\,2),(0\,\,3),(0\,\,2),(0\,\,1)\rangle\quad\mbox{for which we compute that}\quad \bigcirc{\bf r_3} = (0\,\,3\,\,2)(1)\]
\[{\bf r}_4 := \langle(0\,\,1),(0\,\,2),(1\,\,2),(0\,\,3)\rangle\quad\mbox{for which we compute that}\quad \bigcirc{\bf r}_4 = (0\,\,2\,\,3)(1)\]
\[{\bf r}_5 := \langle(1\,\,2),(0\,\,3),(0\,\,1),(0\,\,2)\rangle\quad\mbox{for which we compute that}\quad \bigcirc{\bf r}_5 = (0\,\,3\,\,1)(2)\]
\[{\bf r}_6 := \langle(1\,\,2),(0\,\,1),(0\,\,2),(0\,\,3)\rangle\quad\mbox{for which we compute that}\quad \bigcirc{\bf r}_6 = (0\,\,1\,\,3)(2)\]
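All six of these products (and those exhibited in Cases 1--4) can be confirmed mechanically; the following sketch of ours checks the six at once, again composing left-to-right.
\begin{verbatim}
def t(i, j):
    p = list(range(4)); p[i], p[j] = j, i
    return tuple(p)

def prod(seq):
    p = list(range(4))
    for s in seq:
        p = [s[y] for y in p]
    return tuple(p)

rs = [[t(0,1), t(1,2), t(0,3), t(0,2)],   # r_1
      [t(0,2), t(1,2), t(0,3), t(0,1)],   # r_2
      [t(1,2), t(0,3), t(0,2), t(0,1)],   # r_3
      [t(0,1), t(0,2), t(1,2), t(0,3)],   # r_4
      [t(1,2), t(0,3), t(0,1), t(0,2)],   # r_5
      [t(1,2), t(0,1), t(0,2), t(0,3)]]   # r_6
exp = [(0,3,1,2),   # (0)(1 3 2)
       (0,2,3,1),   # (0)(1 2 3)
       (3,1,0,2),   # (0 3 2)(1)
       (2,1,3,0),   # (0 2 3)(1)
       (3,0,2,1),   # (0 3 1)(2)
       (1,3,2,0)]   # (0 1 3)(2)
assert all(prod(r) == e for r, e in zip(rs, exp))
\end{verbatim}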
\underline{Subcase}: \ $\psi(0) \not= 3$. Then $\bigcirc{\bf p} = \big(\psi(0)\big)\big(\psi(1)\,\,\psi(2)\,\,\psi(3)\big)$ and $\bigcirc{\bf q} = \big(\psi(0)\big)\big(\psi(1)\,\,\psi(3)\,\,\psi(2)\big)$
for some $\{{\bf p},{\bf q}\}\subseteq\{{\bf r}_i: 1\le i\le6\}$.
We compute that $\bigcirc{\bf p}\circ w =
\big(\psi(0)\big)\big(\psi(1)\,\,\psi(2)\,\,\psi(3)\big)\circ\big(\psi(0)\,\,{\rm s}_{\psi(0)}\big) \big(\psi(1)\,\,{\rm s}_{\psi(1)}\,\,\psi(2)\,\,{\rm s}_{\psi(2)}\,\,\psi(3)\,\,{\rm s}_{\psi(3)}\big) =
\big(\psi(0)\,\,{\rm s}_{\psi(0)}\big)\big(\psi(1)\,\,{\rm s}_{\psi(2)}\,\,\psi(3)\,\,{\rm s}_{\psi(1)}\,\,\psi(2)\,\,{\rm s}_{\psi(3)}\big) \not\simeq
\big(\psi(0)\,\,{\rm s}_{\psi(0)}\big)\big(\psi(1)\,\,{\rm s}_{\psi(3)}\big)\big(\psi(2)\,\,{\rm s}_{\psi(1)}\big)\big(\psi(3)\,\,{\rm s}_{\psi(2)}\big) = \bigcirc{\bf q}\circ w$. Thus we see that {\bf u} is not CI
in the situation of Case 5 where also $\psi(0) \not= 3$. \vspace{.5em}
\underline{Subcase}: \ $\psi(0) = 3$. There are two subsubcases, which are:
\ \ \ \ \ \underline{i}: \ \ $w := (0\,\,{\rm s}_0\,\,1\,\,{\rm s}_1\,\,2\,\,{\rm s}_2)(3\,\,{\rm s}_3)$.
\ \ \ \ \ \underline{ii}: \ $w := (0\,\,{\rm s}_0\,\,2\,\,{\rm s}_2\,\,1\,\,{\rm s}_1)(3\,\,{\rm s}_3)$.
\noindent We will show that the theorem holds for Subsubcase \underline{i}, but omit the similar proof for Subsubcase \underline{ii}.\vspace{.3em}
We use rearrangements ${\bf v}_1 := {\bf h}$ and ${\bf v}_2 := \langle(0\,1),(0\,3),(0\,2),(1\,2)\rangle$ and ${\bf v}_3 := \langle(0\,2),(0\,3),(1\,2),(0\,1)\rangle$ of {\bf h}, noting first that
$\bigcirc{\bf v}_1 = (0\,3)(1\,2)$, that $\bigcirc{\bf v}_2 = (0\,2)(1\,3)$, and that $\bigcirc{\bf v}_3 = (0)(1\,2\,3)$, and hence that
$\bigcirc{\bf v}_1\circ w = (0\,\,{\rm s}_3\,\,3\,\,{\rm s}_0\,\,1\,\,{\rm s}_2)(2\,\,{\rm s}_1)$ and $\bigcirc{\bf v}_2\circ w = (0\,\,{\rm s}_2)(1\,\,{\rm s}_3\,\,3\,\,{\rm s}_1\,\,2\,\,{\rm s}_0)$ and
$\bigcirc{\bf v}_3\circ w = (0\,\,{\rm s}_0\,\,1\,\,{\rm s}_2)(2\,\,{\rm s}_3\,\,3\,\,{\rm s}_1)$.
Assume that $\bigcirc{\bf v}_1\circ w \simeq \bigcirc{\bf v}_2\circ w \simeq \bigcirc{\bf v}_3\circ w$. Then the following three multiset equalities must hold:
\[ \{3+|{\rm s}_3|+|{\rm s}_0|+|{\rm s}_2|,1+|{\rm s}_1|\} = \{1+|{\rm s}_2|,3+|{\rm s}_3|+|{\rm s}_1|+|{\rm s}_0|\} = \{2+|{\rm s}_0|+|{\rm s}_2|,2+|{\rm s}_3|+|{\rm s}_1|\}\]
\noindent Since $1+|{\rm s}_2| < 3+|{\rm s}_3|+|{\rm s}_0|+|{\rm s}_2|$, the equality of the first two multisets implies that $1+|{\rm s}_1| = 1+|{\rm s}_2|$, whence $|{\rm s}_1| = |{\rm s}_2|$.
Since $1+|{\rm s}_1| < 2+|{\rm s}_3|+|{\rm s}_1|$, the equality of the first and third multisets therefore implies that $1+|{\rm s}_1| = 2+|{\rm s}_0|+|{\rm s}_2| = 2+|{\rm s}_0|+|{\rm s}_1|$,
whence $0 = 1+|{\rm s}_0|$, which entails the impossibility $|{\rm s}_0| = -1$.
So {\bf u} fails to be CI in Case 5 as well. Since the five Cases are exhaustive, the theorem is proved. \end{proof}
Theorem \ref{Four} gives us that, if $n\ge4$ and if ${\cal T}({\bf u})$ is a transpositional multigraph containing a triangular subgraph, then {\bf u} is not CI. The remainder of \S3 is devoted mainly to
generalizing the proof of Theorem \ref{Four} in order to establish, for $n\ge4$, that no connected transpositional multigraph on $n$ vertices is CI if it contains a circuit subgraph on more than three vertices.
To this purpose it is useful to describe those sequences {\bf g} in $1^{n-2}2^1$ for which ${\cal T}({\bf g})$ is itself a circuit. The following three lemmas do so.
Recall the basic transpositional sequence $\tau(n) := \langle(0\,1),(1\,2),\ldots,(n-3\,\,n-2),(n-2\,\,n-1)\rangle$; that ${\rm s^R}$ is the backward spelling of the sequence ${\rm s}$; and that, when ${\rm a}$ is a subsequence
of a sequence ${\rm b}$, then ${\rm b}\setminus{\rm a}$ denotes the subsequence of ${\rm b}$ that is complementary to ${\rm a}$, as per Footnote 7.
When ${\bf s}$ is a sequence, we write $x<_{\bf s} y$ to indicate that $x$ precedes $y$ as a term in ${\bf s}$.
\begin{lem}\label{Numbers1} For $n\ge3$, let {\bf g} be any rearrangement of $\tau(n)$. Then $\bigcirc{\bf g} = (0\,\,{\rm p}\,\,n-1\,\,{\rm q}) \in n^1$ for some subsequence {\rm p} of
$\langle1,2,\ldots,n-2\rangle$ and with ${\rm q} := (\langle 1,2,\ldots,n-2\rangle\setminus{\rm p})^{\rm R}$. \end{lem}
\begin{proof} We induce on $n$. Note that $\langle(0\,1),(1\,2)\rangle$ and $\langle(1\,2),(0\,1)\rangle$ are the only rearrangements of $\tau(3)$, that $(0\,1)\circ(1\,2)=(0\,2\,1) = (0\,{\rm p}\,2\,{\rm q})$
with \ p \ the empty sequence and reverse-complementary to \ q $=\langle 1\rangle$ \ in the number sequence $\langle 1\rangle$, and similarly that $(1\,2)\circ(0\,1) = (0\,1\,2) = (0\,{\rm p}\, 2\,{\rm q})$
where \ p $=\langle1\rangle$ and \ q $=\emptyset$.
Choose an integer $k\ge3$. Suppose the lemma holds for $n=k$. Let \ {\bf g} \ be a rearrangement of $\tau(k+1)$. Let ${\bf g'} := {\bf g}\setminus\langle(k-1\,\,k)\rangle$. Now,
supp$(t)\,\cap\,$supp$ \big((k-1\,\,k)\big)=\emptyset$ for every term $t$ in ${\bf g'}$ except for $t=(k-2\,\,k-1)$. Hence, one of the following two equalities must hold:
\[{\sf 1.}\qquad\qquad \bigcirc{\bf g} = (k-1\,\,k)\circ\bigcirc{\bf g'}\] \[{\sf 2.}\qquad\qquad \bigcirc{\bf g} = \bigcirc{\bf g'}\circ(k-1\,\,k)\] Equality {\sf 1} holds when $(k-2\,\,k-1)>_{\bf g}(k-1\,\,k)$. Equality {\sf 2}
holds when $(k-2\,\,k-1)<_{\bf g}(k-1\,\,k)$.
Since ${\bf g'}$ is a rearrangement of $\tau(k)$, we have by the inductive hypothesis that $\bigcirc{\bf g'} = (0\,\,{\rm p'}\,\,k-1\,\,{\rm q'})$ for some subsequence ${\rm p'}$ of $\langle 1,2,\ldots,k-2\rangle$, where
${\rm q'} = \big(\langle 1,2,\ldots,k-2\rangle\setminus{\rm p'}\big)^{\rm R}$. So, if $(k-2\,\,k-1)>_{\bf g}(k-1\,\,k)$, then \[\bigcirc{\bf g} = (k-1\,\,k)\circ\bigcirc{\bf g'} =
(k-1\,\,k)\circ(0\,\,{\rm p'}\,\,k-1\,\,{\rm q'}) = (0\,\,{\rm p}\,\,k\,\,{\rm q})\quad\mbox{where}\quad {\rm p} := \langle {\rm p'},\,k-1\rangle\quad\mbox{and}\quad {\rm q := q'}. \]
Similarly, in the event that $(k-2\,\,k-1)<_{\bf g}(k-1\,\,k)$, we have instead that \[\bigcirc{\bf g} = \bigcirc{\bf g'}\circ(k-1\,\,k) = (0\,\,{\rm p'}\,\,k-1\,\,{\rm q'})\circ(k-1\,\,k) =
(0\,\,{\rm p}\,\,k\,\,{\rm q})\quad\mbox{where}\quad {\rm p:=p'}\quad\mbox{and}\quad {\rm q} := \langle k-1,{\rm q'}\rangle. \]
\noindent These equalities are exactly what the lemma claims. \end{proof}
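For small $n$, Lemma \ref{Numbers1} can also be confirmed exhaustively: every rearrangement of $\tau(n)$ multiplies out to an $n$-cycle whose spelling from $0$ splits at $n-1$ into an increasing block p and a decreasing block q that together exhaust $\{1,\ldots,n-2\}$. The sketch below (ours, composing left-to-right) runs this check for $n=5$.
\begin{verbatim}
from itertools import permutations

n = 5
def t(i, j):
    p = list(range(n)); p[i], p[j] = j, i
    return tuple(p)

def prod(seq):
    p = list(range(n))
    for s in seq:
        p = [s[y] for y in p]
    return tuple(p)

tau = [t(i, i+1) for i in range(n-1)]
for g in permutations(tau):
    f = prod(list(g))
    cyc, y = [0], f[0]
    while y != 0:
        cyc.append(y); y = f[y]
    assert len(cyc) == n                 # f is an n-cycle
    k = cyc.index(n-1)
    p, q = cyc[1:k], cyc[k+1:]
    assert p == sorted(p)                # p increases
    assert q == sorted(q, reverse=True)  # q is the reversed complement
    assert sorted(p + q) == list(range(1, n-1))
\end{verbatim}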
\vspace{.5em}
Recall our basic sequence $\sigma(n) := \langle\tau(n),(n-1\,\,0)\rangle = \langle(0\,1),(1\,2),\ldots,(n-2\,\,n-1),(n-1\,\,0)\rangle$ for $n\ge3$.
\begin{lem} \label{Numbers2}Let ${\bf f}\in{\rm Seq}(\sigma(n))$ with $n\ge3$. Then $\bigcirc{\bf f} = ({\rm h})(\nu(n)\setminus {\rm h})^-$ for a subsequence ${\rm h}\not=\emptyset$ of $\nu(n)$.
\end{lem}
\begin{proof} \underline{Case 1}: \ $(0\,1)<_{\bf f}(n-1\,\,0)$. Let $m$ be the smallest integer such that
\[(n-1\,\,0) <_{\bf f} (n-1\,\,n-2) <_{\bf f} (n-2\,\,n-3) <_{\bf f}\cdots <_{\bf f} (m+1\,\,m). \quad\mbox{We can decompose}\quad {\bf f}\quad\mbox{as follows:} \]
\[{\bf f} = \langle {\bf b}_0,(n-1\,\,0), {\bf b}_1,(n-1\,\,n-2), {\bf b}_2, (n-2\,\,n-3),\ldots, {\bf b}_{n-m-1}, (m+1\,\,m), {\bf b}_{n-m}\rangle.\]
\[\mbox{Then}\quad \bigcirc{\bf f} = \bigcirc{\bf b}_0\circ(n-1\,\,0)\circ\bigcirc{\bf b}_1\circ (n-1\,\,n-2)\circ\bigcirc{\bf b}_2\circ\cdots\circ\bigcirc{\bf b}_{n-m-1}\circ(m+1\,\,m)\circ\bigcirc{\bf b}_{n-m}.\]
Since ${\bf f}\in{\rm Seq}(\sigma(n))$, there are exactly two terms in ${\bf f}$ whose supports contain $n-1$; those two terms are $(n-1\,\,0)$ and $(n-1\,\,n-2)$. Since those terms border the transpositional
sequence ${\bf b}_1$, they do not occur as terms in ${\bf b}_1$. Consequently $n-1\not\in{\rm supp}(\bigcirc{\bf b}_1)$. By hypothesis $(0\,1) <_{\bf f} (n-1\,\,0)$, and hence $(0\,1) <_{\bf f} {\bf b}_1$. Thus
neither of the two terms of ${\bf f}$ which have $0$ in their supports is a term in ${\bf b}_1$. Therefore ${\rm supp}(\bigcirc{\bf b}_1)\cap{\rm supp}\big((n-1\,\,0)\big) = \emptyset$. So
$(n-1\,\,0)\circ\bigcirc{\bf b}_1 = \bigcirc{\bf b}_1\circ(n-1\,\,0)$. Thus we infer that
\[\bigcirc{\bf f} = \bigcirc{\bf b}_0\circ\bigcirc{\bf b}_1\circ (n-1\,\,0)\circ(n-2\,\,n-1)\circ \bigcirc{\bf b}_2\circ\cdots\circ\bigcirc{\bf b}_{n-m-1}\circ(m+1\,\,m)\circ\bigcirc{\bf b}_{n-m}.\] Similarly we see
that ${\rm supp}(\bigcirc{\bf b}_2)\cap{\rm supp}\big((n-1\,\,0)\circ(n-1\,\,n-2)\big) = \emptyset$, and thus that
\[\bigcirc{\bf f} = \bigcirc{\bf b}_0\circ\bigcirc{\bf b}_1\circ\bigcirc{\bf b}_2\circ(n-1\,\,0)\circ(n-1\,\,n-2)\circ(n-2\,\,n-3)\circ\cdots\circ\bigcirc{\bf b}_{n-m-1}\circ(m+1\,\,m)\circ\bigcirc{\bf b}_{n-m}.\]
Continuing in this fashion, we eventually obtain that
\[\bigcirc{\bf f} = \bigcirc{\bf b}_0\circ\bigcirc{\bf b}_1\circ\cdots\circ\bigcirc{\bf b}_{n-m}\circ(n-1\,\,0)\circ(n-1\,\,n-2)\circ(n-2\,\,n-3)\circ\cdots\circ(m+1\,\,m).\]
Define ${\bf g} := \langle{\bf b}_0,{\bf b}_1,\ldots,{\bf b}_{n-m-1},{\bf b}_{n-m}\rangle$. Note that ${\bf g} = {\bf f}\setminus\langle(n-1\,\,0),(n-1\,\,n-2),\ldots,(m+1\,\,m)\rangle$. Since
${\bf f}\in{\rm Seq}(\sigma(n))$, we see that ${\bf g}\in{\rm Seq}(\tau(m+1))$, recalling that $\tau(m+1) = \langle(0\,1),(1\,2),\ldots(m-1\,\,m) \rangle$. So by Lemma \ref{Numbers1}, we have that
$\bigcirc{\bf g} = (0\,\,{\rm p}\,\,m\,\,{\rm q})$, where ${\rm p}$ is a subsequence of $\langle 1,2,\ldots,m-1\rangle$ and where ${\rm q} = (\langle 1,2,\ldots,m-1\rangle\setminus{\rm p})^{\rm R}$.
Thus $\bigcirc{\bf f} = \bigcirc{\bf g}\circ(n-1\,\,0)\circ(n-1\,\,n-2)\circ(n-2\,\,n-3)\circ\cdots\circ(m+1\,\,m) = (0\,\,{\rm p}\,\,m\,\,{\rm q})\circ(n-1\,\,0)\circ(n-1\,\,n-2)\circ\cdots\circ(m+1\,\,m) =
(0\,\,{\rm p}\,\,m+1\,\,m+2\,\,\ldots\,n-2\,\,n-1)({\rm q}\,\,m)$. Setting ${\rm h} := \langle 0\,\,{\rm p}\,\,m+1\,\,m+2\,\,\ldots\,n-2\,\,n-1\rangle$, we see that ${\rm h}$ is a nonempty subsequence of
$\nu(n)$ and observe that $\langle{\rm q},m\rangle = (\nu(n)\setminus{\rm h})^{\rm R}$, and that therefore $({\rm q}\,\,m) = (\nu(n)\setminus{\rm h})^-$. So
$\bigcirc{\bf f} = ({\rm h})(\nu(n)\setminus{\rm h})^-$ as alleged.\vspace{.5em}
\underline{Case 2}: \ $(0\,1)>_{\bf f}(n-1\,\,0)$. Since the argument parallels that for Case 1, we omit it. \end{proof}
\begin{lem}\label{Numbers3} Let $n\ge3$. Let ${\rm h}$ be a proper nonempty subsequence of $\nu(n)$. Then $\bigcirc{\bf f}=({\rm h})(\nu(n)\setminus{\rm h})^-$ for some
${\bf f}\in{\rm Seq}(\sigma(n))$ . \end{lem}
\begin{proof} If the lemma holds for ${\rm h}$ then it holds also for its complement $\nu(n)\setminus{\rm h}$ in $\nu(n)$. For, if $\bigcirc{\bf f} = ({\rm h})(\nu(n)\setminus{\rm h})^-$,
then $\bigcirc({\bf f}^{\rm R}) = (\bigcirc{\bf f})^- = (\nu(n)\setminus{\rm h})({\rm h})^- = (\nu(n)\setminus{\rm h})(\nu(n)\setminus(\nu(n)\setminus{\rm h}))^-$. We induce on $n\ge3$.\vspace{.5em}
{\sf Basis Step.} There are exactly six nonempty proper subsequences of $\nu(3)$; they are \[{\rm u}:=\langle0\rangle,\quad {\rm v}:=\langle1\rangle,\quad {\rm w}:=\langle2\rangle,
\quad {\rm x}:=\langle0,1\rangle,\quad {\rm y}:=\langle0,2\rangle,\quad {\rm z}:=\langle1,2\rangle. \] We use the fact noted in the preceding paragraph. If ${\bf f}_{\rm u} :=
\langle(0\,1),(1\,2),(2\,0)\rangle$ then $\bigcirc{\bf f}_{\rm u} = (0)(2\,1) = ({\rm u})(\nu(3)\setminus{\rm u})^- = ({\rm z})(\nu(3)\setminus{\rm z})^-$. If ${\bf f}_{\rm v} :=
\langle(0\,1),(2\,0),(1\,2)\rangle$ then $\bigcirc{\bf f}_{\rm v} = (0\,2)(1) = ({\rm v})(\nu(3)\setminus{\rm v})^- = ({\rm y})(\nu(3)\setminus{\rm y})^-$. If ${\bf f}_{\rm w} :=
\langle(1\,2),(0\,1),(2\,0)\rangle$ then $\bigcirc{\bf f}_{\rm w} = (0\,1)(2) = ({\rm w})(\nu(3)\setminus{\rm w})^- = ({\rm x})(\nu(3)\setminus{\rm x})^-$.\vspace{.5em}
{\sf Inductive Step.} Suppose the lemma holds for some $k\ge3$; we show that it holds for $k+1$. The arbitrary length-one sequence $\langle x\rangle$ in $\nu(k+1)$ is a special case. Choose the transpositional sequence ${\bf f}\in{\rm Seq}(\sigma(k+1))$ to be
\[{\bf f} := \langle(x\,\,x+1),(x+1\,\,x+2),\ldots,(k-1\,\,k),(k\,0),(0\,1),(1\,2),\ldots,(x-2\,\,x-1),(x-1\,\,x)\rangle.\] Then\footnote{modulo $k+1$ of course}
$\bigcirc{\bf f} = (x)(k\,\,k-1\,\,k-2\,\ldots\,x+1\,\,x-1\,\,x-2\,\dots\,2\,\,1\,\,0) = (x)(\nu(k+1)\setminus\langle x\rangle)^-$, as desired.\vspace{.5em}
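For instance, with $k+1=5$ and $x=2$ this prescription gives ${\bf f} = \langle(2\,\,3),(3\,\,4),(4\,\,0),(0\,\,1),(1\,\,2)\rangle$, and a direct check confirms that
\[\bigcirc{\bf f} = (2)(4\,\,3\,\,1\,\,0) = (2)(\nu(5)\setminus\langle2\rangle)^-.\]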
Let ${\rm h} := \langle x_1,x_2,\cdots,x_s\rangle$ be a subsequence of $\nu(k+1)$, and let ${\rm h'} := \nu(k+1)\setminus{\rm h} = \langle y_1,y_2,\ldots,y_t\rangle$ be the complement in $\nu(k+1)$ of ${\rm h}$.
Of course $s+t=k+1$.
By the first paragraph in this proof, we can take it both that $x_s=k$ and also that there exist $s$ disjoint subsequences\footnote{some of which may be vacuous}
${\rm a}_i$ of $\nu(k+1) = \langle{\rm a}_1,x_1,{\rm a}_2,x_2,\ldots,{\rm a}_s,x_s\rangle$. Indeed, ${\rm h'} = {\rm a}_1{\rm a}_2\ldots{\rm a}_s$, where ${\rm h'}$ is expressed here as the
concatenation of the subsequences ${\rm a}_i$. The ${\bf f}\in{\rm Seq}(\sigma(k+1))$, whose existence this lemma alleges, must satisfy $\bigcirc{\bf f} =
(x_1\,x_2\,\ldots\,x_s)(y_{k+1-s}\,y_{k-s}\,\ldots\,y_2\,y_1) = ({\rm h})({\rm a_s^Ra_{s-1}^R\cdots a_2^Ra_1^R})$.\vspace{.5em}
Since we have already dealt with the length-one case ${\rm h} = \langle x\rangle$, we now take it that $2\le|{\rm h}|= s \le k-1$. Recall that $x_s=k$ is the right-most term in the subsequence ${\rm h}$. Let
${\rm h''} := {\rm h}\setminus\langle x_s\rangle = \langle x_1,x_2,\ldots,x_{s-1}\rangle$. Since therefore $y_t<k$, we have that ${\rm h'} = \nu(k)\setminus{\rm h''} = \langle y_1,y_2,\ldots,y_t\rangle$ is the
complement\footnote{as well as remaining the complement in $\nu(k+1)$ of ${\rm h}$} in $\nu(k)$ of the sequence ${\rm h''}$. Hence, by the inductive hypothesis, there exists
${\bf g} \in {\rm Seq}(\sigma(k))$ for which $\bigcirc{\bf g} = ({\rm h''})({\rm h'})^- = (x_1\,\ldots\,x_{s-1})(y_t\,\,y_{t-1}\,\ldots\,y_2\,\,y_1)$.
We create ${\bf f}\in{\rm Seq}(\sigma(k+1))$ from ${\bf g}\in{\rm Seq}(\sigma(k))$ by replacing the term $(k-1\,\,0)$ of ${\bf g}$ with the sequence $\langle(k\,\,0),(k-1\,\,k)\rangle$. Notice that, whereas
$(k-1)\bigcirc{\bf g}=0$, we have instead $(k-1)\bigcirc{\bf f} = k = x_s$ and $(k)\bigcirc{\bf f} = 0$. But for all $z\in (k+1)\setminus\{k-1,k\}$ we have $(z)\bigcirc{\bf g} = (z)\bigcirc{\bf f}$. Obviously
${\rm h} = \langle{\rm h''},x_s\rangle$, and $\bigcirc{\bf f} = ({\rm h})({\rm h'})^-$. \end{proof}
\begin{thm}\label{NixCircuits} Let ${\cal T}({\bf f})$ have a circuit, where ${\bf f}$ is a sequence in $1^{k-2}2^1$ with $k\ge4$. Then ${\bf f}$ is not {\rm CI}. \end{thm}
\begin{proof} Theorem \ref{Four} establishes this theorem where ${\cal T}({\bf f})$ contains a triangular subgraph. So for $k\ge n\ge4$, let ${\cal T}(\sigma(n))$ be a subgraph of ${\cal T}(k)$. Then some
${\bf g}\in{\rm Seq}(\sigma(n))$ is a subsequence of ${\bf f}$. Let ${\sf D}$ be the family of cyclic components of the permutation $\bigcirc({\bf f\setminus g})$, let ${\sf U} := \{C\in{\sf D}: n\,\cap\,{\rm supp}(C) =
\emptyset\}$, and let ${\bf W} := {\sf D}\setminus{\sf U}$. Let ${\sf u}$ be the permutation whose family of components is ${\sf U}$, and let ${\sf w}$ be the permutation whose family of components is ${\bf W}$. Then
$\bigcirc({\bf f\setminus g}) = {\sf uw}$.
The theorem will be proved when we exhibit rearrangements $\{{\bf p},{\bf q}\}\subseteq{\rm Seq}({\bf g})$ such that $\bigcirc\langle{\bf p,f\setminus g}\rangle = \bigcirc{\bf p}\circ{\sf uw}\not\simeq
\bigcirc{\bf q}\circ{\sf uw} = \bigcirc\langle{\bf q,f\setminus g}\rangle$. Moreover, since ${\rm supp}({\sf u})\,\cap\,{\rm supp}(\bigcirc{\bf p}\circ{\sf w}) = \emptyset =
{\rm supp}({\sf u})\,\cap\,{\rm supp}(\bigcirc{\bf q}\circ{\sf w})$, it will suffice to insist only that $\bigcirc{\bf p}\circ{\sf w} \not\simeq \bigcirc{\bf q}\circ{\sf w}$. There are three cases. \vspace{.7em}
{\sf Case One:} \ $|n\,\cap\,{\rm supp}(C)| = 1$ for each cycle $C\in{\bf W}$.
We write ${\sf w} = (0\,\,{\rm s}_0)(1\,\,{\rm s}_1)\ldots(n-1\,\,{\rm s}_{n-1})$, where the ${\rm s}_i$ are finite sequences\footnote{which are not required to be nonempty} in $k\setminus n$. By Lemma
\ref{Numbers3}, for each $i\in n$ there exists ${\bf p}^{(i)}\in{\rm Seq}({\bf g})$ such that $\bigcirc{\bf p}^{(i)} = (i)(0\,\,1\,\,2\ldots\,\,i-2\,\,i-1\,\,i+1\,\,i+2\ldots n-2\,\, n-1)$. Hence
$\bigcirc{\bf p}^{(i)}\circ{\sf w} =
(i\,\,{\rm s}_i)(0\,\,{\rm s}_1\,\,1\,\,{\rm s}_2\,\,2\,\,s_3\ldots\,{\rm s}_{i-2}\,\,i-2\,\,{\rm s}_{i-1}\,\,i-1\,\,{\rm s}_{i+1}\,\,i+1\,\,{\rm s}_{i+2}\,\, i+2\ldots n-2\,\,{\rm s}_{n-1}\,\,n-1\,\,{\rm s}_0)$
for each $i\in n$. Now pretend that $\bigcirc{\bf p}^{(i)}\circ{\sf w} \simeq \bigcirc{\bf p}^{(0)}\circ{\sf w}$ for all $i\in n$. Then all $n$ of the cycle-length multisets $K[{\bf p}^{(i)}]$ of these
permutations $\bigcirc{\bf p}^{(i)}\circ{\sf w}$ must be identical. Specifying the $K[{\bf p}^{(i)}]$ for each $i\in n$, we see that \[K[{\bf p}^{(i)}] := \{1+|{\rm s}_i|,\,n-1+\sum\{|{\rm s}_j|: i\not=j\in n\}\}.\]
Obviously $1+|{\rm s}_i| < n-1+\sum\{|{\rm s}_j|: t\not= j\in n\}$ whenever $i\not= t$. So our assumption that all of the $K[{\bf p}^{(i)}]$ are identical implies that $|{\rm s}_i| = |{\rm s}_0|$ for all $i\in n$.
Again invoking Lemma \ref{Numbers3}, we can find ${\bf q}\in{\rm Seq}({\bf g})$ for which $\bigcirc{\bf q} = (1\,\,0)(2\,\,3\ldots n-2\,\,n-1)$. Then $\bigcirc{\bf q}\circ{\sf w} =
(0\,\,{\rm s}_1\,\,1\,\,{\rm s}_0)(2\,\,{\rm s}_3\,\,3\,\,{\rm s}_4\,\,4\,\,{\rm s}_5\ldots n-2\,\,{\rm s}_{n-1}\,\,n-1\,\,{\rm s}_2)$, and so $K[{\bf q}] = \{2+|{\rm s}_1|+|{\rm s}_0|,n-2+\sum_{j=2}^{n-1}|{\rm s}_j|\}$.
Under the assumption that ${\bf f}$ is CI, we must have that $\bigcirc{\bf p}^{(i)}\circ{\sf w} \simeq \bigcirc{\bf q}\circ{\sf w}$ for all $i\in n$, whereupon $K[{\bf p}^{(i)}] = K[{\bf q}]$. Since all of the integers
$|{\rm s}_i|$ were found to be equal, for $i=0$ we must infer that
$\{1+|{\rm s}_0|,(n-1)+(n-1)\cdot|{\rm s}_0|\} = \{2+2\cdot|{\rm s}_0|,n-2+(n-2)\cdot|{\rm s}_0|\}$. Factoring $1+|{\rm s}_0|$ out of every entry reduces this equality to $\{1,n-1\} = \{2,n-2\}$, an impossibility since $\{1,n-1\}\cap\{2,n-2\} = \emptyset$ when $n\ge 4$. So the Theorem holds
in the Case-One situation.\vspace{.5em}
The next Case requires an ancillary fact.
\begin{claim}\label{1stClaim} If $n\ge4$ and if ${\bf W}$ contains a cycle $C$ with $|n\,\cap\,{\rm supp}(C)|\ge2$ then there exists ${\bf p}\in{\rm Seq}(\sigma(n))$ such that the permutation
$(\bigcirc{\bf p}\circ{\sf w})\restrict(n\,\cup\,{\rm supp}({\sf w}))$ has at least three cycles. \end{claim}
\noindent{\it Proof of Claim.} We first suppose that there exists $C := (x\,\,{\rm s}_x\,\,y\,\,{\rm s}_y)\in{\bf W}$ with $n\cap{\rm supp}(C) = \{x,y\}$. Without loss of generality, we take it that
$\langle x,y\rangle$ is a subsequence of $\nu(n)$, and we invoke Lemma \ref{Numbers3} to find some ${\bf p}\in{\rm Seq}(\sigma(n))$ for which $\bigcirc{\bf p} = (x\,\,y)(\nu(n)\setminus\langle x,y\rangle)^-$.
Let ${\sf w'}$ be the permutation whose family of cyclic components is ${\bf W}\setminus\{C\}$. Since two permutations commute if their supports are disjoint,\footnote{We may write ${\sf a\circ b}$ as ${\sf ab}$
in order to emphasize that ${\rm supp}({\sf a})\cap{\rm supp}({\sf b}) = \emptyset$.} we compute: $\bigcirc{\bf p}\circ{\sf w} = \bigcirc{\bf p}\circ{\sf w'}C =
(x\,\,y)(\nu(n)\setminus\langle x,y\rangle)^-\circ{\sf w'}C = (\nu(n)\setminus\langle x,y\rangle)^-\circ{\sf w'} (x\,\,y)\circ C =
[(\nu(n)\setminus\langle x,y\rangle)^-\circ{\sf w'}] [(x\,\,y)\circ(x\,\,{\rm s}_x\,\,y\,\,{\rm s}_y)] = [(\nu(n)\setminus\langle x,y\rangle)^-\circ{\sf w'}]\,(x\,\,{\rm s}_y)(y\,\,{\rm s}_x)$. It is thus clear that
here the permutation $(\bigcirc{\bf p}\circ{\sf w})\restrict(n\cup{\rm supp}({\sf w}))$ has at least three cyclic components.
More generally, now, suppose there exists $C := (x\,\,{\rm s}_x\,y\,\,{\rm s}_y\,z\,\,{\rm s}_z) \in {\bf W}$ where ${\rm s}_x{\rm s}_y{\rm s}_z$ is an injective sequence in the set $k\setminus n$. Let
${\rm q} = \langle m_1,m_2,m_3\rangle$ be a rearrangement of the number sequence $\langle x,y,z\rangle$ for which $m_1<m_2<m_3$. Surely either $({\rm q}) = (z\,\,y\,\,x)$ or $({\rm q})^- = (z\,\,y\,\,x)$.
For $({\rm q}) := (z\,\,y\,\,x)$, by Lemma \ref{Numbers3} there exists ${\bf p}\in{\rm Seq}(\sigma(n))$ with $\bigcirc{\bf p} = ({\rm q})(\nu(n)\setminus{\rm q})^-$. Again, let ${\sf w'}$ be the permutation
whose family of components is ${\bf W}\setminus\{C\}$. Then $\bigcirc{\bf p}\circ{\sf w} = \bigcirc{\bf p}\circ{\sf w'}C = ({\rm q})({\rm q'})^-{\sf w'}C$, where ${\rm q'} := \nu(n)\setminus{\rm q}$. Thus
$\bigcirc{\bf p}\circ{\sf w} = [({\rm q'})^-\circ{\sf w'}][(z\,\,y\,\,x)\circ C] = [({\rm q'})^-\circ{\sf w'}][(z\,\,y\,\,x)\circ(x\,\,{\rm s}_x\,y\,\,{\rm s}_y\,z\,\,{\rm s}_z)] =
[({\rm q'})^-\circ{\sf w'}](x\,{\rm s}_z)(y\,{\rm s}_x)(z\,{\rm s}_y)$ for the three number sequences ${\rm s}_t$. So $\bigcirc{\bf p}\circ{\sf w}$ has at least three cycles.
Thus the claim holds for $({\rm q}) := (z\,\,y\,\,x)$. On the other hand, if $({\rm q})^- := (z\,\,y\,\,x)$, then Lemma \ref{Numbers3} provides a ${\bf p}_1\in{\rm Seq}(\sigma(n))$ for which
$\bigcirc{\bf p}_1 = (\nu(n)\setminus{\rm q})({\rm q})^-$, and we omit the repetitive rest of the argument. Claim 1 follows.\vspace{.7em}
{\sf Case Two:} \ The family ${\bf W}$ of cycles contains exactly one element $C$, and $n\subseteq{\rm supp}(C)$.
Pick an integer $i$ with $0\le i$ and $i+2 < n$. The cycle $C$ is expressible in one of these two ways:
\ \ \ \ \ \underline{Order 1}. \ \ $C = (i\,\,\,{\rm s}_i\,\,\,i+1\,\,\,{\rm s}_{i+1}\,\,\,i+2\,\,\,{\rm s}_{i+2})$
\ \ \ \ \ \underline{Order 2}. \ \ $C = (i\,\,\,{\rm s}_i\,\,\,i+2\,\,\,{\rm s}_{i+2}\,\,\,i+1\,\,\,{\rm s}_{i+1})$
\noindent For the subsequence ${\bf a}_i := \langle(i+1\,\,i+2),(i\,\,i+1)\rangle$ of $\sigma(n)^{\rm R}\in{\rm Seq}(\sigma(n))$, if Order 1 prevails then \[\bigcirc{\bf a}_i\circ C =
(i\,\,i+1\,\,i+2)\circ(i\,\,\,{\rm s}_i\,\,\,i+1\,\,\,{\rm s}_{i+1}\,\,\,i+2\,\,\,{\rm s}_{i+2}) = (i\,\,\,{\rm s}_{i+1}\,\,\,i+2\,\,\,{\rm s}_i\,\,\,i+1\,\,\,{\rm s}_{i+2}).\] Thus, if $C$ is of the form in Order 1, then
$\bigcirc{\bf a}_i\circ C$ is a single cycle of the same length as that of $C$. But if, instead, Order 2 prevails, then
\[\bigcirc{\bf a}_i^{\rm R}\circ C = (i\,\,i+2\,\,i+1)\circ(i\,\,\,{\rm s}_i\,\,\,i+2\,\,\,{\rm s}_{i+2}\,\,\,i+1\,\,\,{\rm s}_{i+1}) = (i\,\,\,{\rm s}_{i+2}\,\,\,i+1\,\,\,{\rm s}_i\,\,\,i+2\,\,\,{\rm s}_{i+1}). \]
So here too, when $C$ is of the form Order 2, then $\bigcirc{\bf a}_i^{\rm R}\circ C$ is a single cycle whose length is $|C|$.\vspace{.5em}
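For a concrete (hypothetical) Order 1 instance, take $i=0$ and $C = (0\,\,5\,\,1\,\,6\,\,2\,\,7)$, so that ${\rm s}_0 = \langle5\rangle$, ${\rm s}_1 = \langle6\rangle$, and ${\rm s}_2 = \langle7\rangle$; then
\[\bigcirc{\bf a}_0\circ C = (0\,\,1\,\,2)\circ(0\,\,5\,\,1\,\,6\,\,2\,\,7) = (0\,\,6\,\,2\,\,5\,\,1\,\,7),\]
a single cycle of length $|C|=6$, exactly as the displayed formula predicts.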
{\sf Subcase}: $n$ is even. Then the transpositional sequence $\sigma(n)$ has an even number of terms. So we may write $\sigma(n)$ as a sequence ${\bf v}_1{\bf v}_2\ldots{\bf v}_t$ of $t := n/2$ pairs
${\bf v}_i:= \langle(2i-2\,\,\,2i-1),(2i-1\,\,\,2i)\rangle$ of transpositions that are adjacent and consecutive in $\sigma(n)$. Define ${\bf v}'_t := {\bf v}_t^{\rm R}$ if $\langle 2t-2,2t-1,0\rangle = \langle n-2,n-1,0\rangle$
occurs in Order 1 in $C$. For each $i\in[t-1]$ we write ${\bf v}'_i$ as ${\bf v}_i^{\rm R}$ if $\langle 2i-2,2i-1,2i\rangle$ occurs in Order 1 in the cycle
$\bigcirc{\bf v}'_{i+1}\circ\bigcirc{\bf v}'_{i+2}\circ\cdots\circ\bigcirc{\bf v}'_{t-1}\circ\bigcirc{\bf v}'_t\circ C$.
In the corresponding Order 2 situation we make the opposite definitions for the ${\bf v}'_i$; that is, ${\bf v}'_i := {\bf v}_i$ for each $i\in[t]$. As a consequence of our observations prior to the present Subcase, the
permutation $\bigcirc{\bf v}'_1\circ\bigcirc{\bf v}'_2\circ\cdots\circ\bigcirc{\bf v}'_t\circ C$ is a single cycle whose length is $|C|$.
Of course ${\bf p} := {\bf v}'_1{\bf v}'_2\ldots{\bf v}'_t \in {\rm Seq}(\sigma(n))$. Claim 1 tells us that there exists ${\bf q}\in{\rm Seq}(\sigma(n))$ for which $\bigcirc{\bf q}\circ C$ has at least three
component cycles. So $\bigcirc{\bf p}\circ C \not\simeq \bigcirc{\bf q}\circ C$. \vspace{.3em}
{\sf Subcase}: $n$ is odd. This time $t := (n-1)/2$ and, for $1\le i\le t$, we define the ${\bf v_i}$ and the ${\bf v}'_i$ as above. Here, $\sigma(n) = \langle{\bf v}_1,{\bf v}_2,\ldots,{\bf v}_t,(n-1\,\,0)\rangle$.
Now let ${\bf p} := \langle{\bf p'},(n-1\,\,\,0)\rangle := \langle{\bf v}'_1,{\bf v}'_2,\ldots,{\bf v}'_t,(n-1\,\,\,0)\rangle$. Then ${\bf p} \in {\rm Seq}(\sigma(n))$. So $\bigcirc{\bf p}\circ C =
[\bigcirc{\bf p'}\circ C]\circ(n-1\,\,\,0)$ is the product of the cycles $\bigcirc{\bf p'}\circ C$ and $(n-1\,\,\,0)$. Moreover, $\{n-1,0\}\subset{\rm supp}(\bigcirc{\bf p}\circ C)$. Therefore $\bigcirc{\bf p}\circ C$
has exactly two cyclic components. Claim 1 promises us a ${\bf q}\in{\rm Seq}(\sigma(n))$ for which $\bigcirc{\bf q}\circ C$ has more than two cyclic components, whence
$\bigcirc{\bf q}\circ C \not\simeq \bigcirc{\bf p}\circ C$. So the theorem holds under Case Two circumstances.\vspace{.7em}
{\sf Case Three}: \ ${\bf W}$ contains a cycle $C$ for which $1 < |n\cap{\rm supp}(C)| < n$. As before, let ${\sf w}\in{\rm Sym}(k)$ be the permutation whose family of cyclic components is ${\bf W}$,
and let ${\sf w'}$ be the permutation whose family of (nontrivial) cyclic components is ${\bf W}\setminus\{C\}$. Since $|n\cap{\rm supp}(C)|<n$, there exists $m\in n\setminus{\rm supp}(C)$. If
$m\not\in{\rm supp}({\sf w'})$, then let $Q$ be the trivial cycle. But if $m\in{\rm supp}({\sf w'})$ then let $Q$ be the unique cycle in ${\bf W}\setminus\{C\}$ such that $m\in{\rm supp}(Q)$, and let
${\sf w''}$ be the permutation whose family of nontrivial cyclic components is ${\bf W}\setminus\{C,Q\}$. Let ${\rm h}$ be the subsequence of $\nu(n)$ with ${\rm supp}({\rm h}) = n\cap{\rm supp}(Q)$. [If
$Q=(m)$, we let ${\rm h}:=\langle m\rangle$.]
Since ${\rm h}$ is a nonempty proper subsequence of $\nu(n)$, by Lemma \ref{Numbers3} there exists ${\bf p}\in{\rm Seq}(\sigma(n))$ such that $\bigcirc{\bf p} = ({\rm h})(\nu(n)\setminus{\rm h})^-$.
Thus $\bigcirc{\bf p}\circ{\sf w} = \bigcirc{\bf p}\circ CQ{\sf w''} = ({\rm h})(\nu(n)\setminus{\rm h})^-\circ CQ{\sf w''} = \big((\nu(n)\setminus{\rm h})^-\circ C{\sf w''}\big)\big(({\rm h})\circ Q\big)$, which is the
product of two permutations, $(\nu(n)\setminus{\rm h})^-\circ C{\sf w''}$ and $({\rm h})\circ Q$, whose supports are disjoint.
Now, if $\bigcirc{\bf p}\circ{\sf w}\restrict\big(n\cup{\rm supp}({\sf w})\big)$ has fewer than three cyclic components, then we employ Claim 1 to obtain some ${\bf q}\in{\rm Seq}(\sigma(n))$ such that
$\bigcirc{\bf q}\circ{\sf w}\restrict\big(n\cup{\rm supp}({\sf w})\big)$ has at least three cycles, whence $\bigcirc{\bf q}\circ{\sf w} \not\simeq \bigcirc{\bf p}\circ{\sf w}$. So it remains only to deal with the situation
where $\bigcirc{\bf p}\circ{\sf w}\restrict\big(n\cup{\rm supp}({\sf w})\big)$ has at least three cycles.
Suppose $\bigcirc{\bf p}\circ{\sf w}\restrict\big(n\cup{\rm supp}({\sf w})\big)$ has at least three cycles. Then there are two possibilities to treat; to wit:
(1). \ $(\nu(n)\setminus{\rm h})^-\circ C{\sf w''}\restrict Y$ has more than one cycle, where $Y := {\rm supp}(C{\sf w''})\cup{\rm supp}\big((\nu(n)\setminus{\rm h})^-\big)$.
(2). \ $({\rm h})\circ Q\restrict X$ has more than one cycle, where $X := {\rm supp}(Q)\cup\big(n\setminus{\rm supp}((\nu(n)\setminus{\rm h})^-)\big)$. \vspace{.7em}
\noindent FIRST POSSIBILITY: The permutation $(\nu(n)\setminus{\rm h})^-\circ C{\sf w''}\restrict Y$ has more than one cycle. Here we need
\begin{claim} Each orbit of $(\nu(n)\setminus{\rm h})^-\circ C{\sf w''}\restrict Y$ contains at least one element in ${\rm supp}((\nu(n)\setminus{\rm h})^-)$. \end{claim}
\noindent{\it Proof of Claim.} Let $E$ be an orbit of $(\nu(n)\setminus{\rm h})^-\circ C{\sf w''}\restrict Y$. Let $e\in E$. Then $E = \{e\big((\nu(n)\setminus{\rm h})^-\circ C{\sf w''}\big)^i:i\in{\mathbb Z}\}$.
We are done if $e\in{\rm supp}\big((\nu(n)\setminus{\rm h})^-\big)$. So suppose $e \not\in {\rm supp}\big((\nu(n)\setminus{\rm h})^-\big)$. Then, since $e\in Y$, it follows that $e\in{\rm supp}(C{\sf w''})$, and hence
that either $e\in{\rm supp}(C)$ or $e\in{\rm supp}({\sf w''})$.
First, suppose that $e\in{\rm supp}(C)$. Then $eC^i\in{\rm supp}(C)\subseteq{\rm supp}(C{\sf w''})$ for all $i\in{\mathbb Z}$. Also, there is a least positive integer $l$ with $eC^l\in n\cap{\rm supp}(C)$. Now,
$n\cap{\rm supp}(C) \subseteq {\rm supp}\big((\nu(n)\setminus{\rm h})^-\big)$. Hence $eC^l \in {\rm supp}((\nu(n)\setminus{\rm h})^-)$. Since $eC^i \not\in {\rm supp}((\nu(n)\setminus{\rm h})^-)$ for all
$i \in \{0,1,\ldots,l-1\}$, we have that $eC^l = e\big((\nu(n)\setminus{\rm h})^-\circ C{\sf w''}\big)^l \in E$. So $eC^l \in E\,\cap\,{\rm supp}\big((\nu(n)\setminus{\rm h})^-\big)$. Thus
$E\cap{\rm supp}\big((\nu(n)\setminus{\rm h})^-\big) \not= \emptyset$, as claimed.
Next, suppose instead that $e \in {\rm supp}({\sf w''})$. Then $e \in {\rm supp}(F)$ for some cycle $F \in {\bf W}\setminus\{C,Q\}$. Since $F \in {\bf W}$, we have that $n\cap{\rm supp}(F) =
\big({\rm supp}({\rm h})\cup{\rm supp}((\nu(n)\setminus{\rm h})^-)\big)\cap{\rm supp}(F) \not= \emptyset$. Also, since $F \in {\bf W}\setminus\{Q\}$, we have that ${\rm supp}(F)\cap{\rm supp}(Q) = \emptyset$,
and hence that ${\rm supp}(F)\cap{\rm supp}\big(({\rm h})\big) = \emptyset$ since ${\rm supp}\big(({\rm h})\big) \subseteq {\rm supp}(Q)$. So ${\rm supp}(F)\cap{\rm supp}\big((\nu(n)\setminus{\rm h})^-\big) \not= \emptyset$. This time let $l$ denote the least positive integer such that $eF^l \in {\rm supp}(F)\cap{\rm supp}\big((\nu(n)\setminus{\rm h})^-\big)$. Let ${\sf w'''}$ be the permutation whose family of component cycles is
${\bf W}\setminus\{C,Q,F\}$. Since $eF^i \not\in {\rm supp}\big((\nu(n)\setminus{\rm h})^-\circ C{\sf w'''}\big)$ when $i \in \{0,1,\ldots,l-1\}$, it follows that $eF^l = e\big((\nu(n)\setminus{\rm h})^-\circ CF{\sf w'''}\big)^l =
e\big((\nu(n)\setminus{\rm h})^-\circ C{\sf w''}\big)^l$, whence $eF^l \in E$. But then $eF^l \in E\cap{\rm supp}((\nu(n)\setminus{\rm h})^-)$, and again we have that
$E\cap{\rm supp}((\nu(n)\setminus{\rm h})^-) \not= \emptyset$. The proof of Claim 2 is complete.\vspace{.7em}
\begin{claim} There exist orbits $A \not= B$ of $(\nu(n)\setminus{\rm h})^-\circ C{\sf w''}\restrict Y$ and elements $x$ and $y$ in ${\rm supp}((\nu(n)\setminus{\rm h})^-)$ with $x \in A$ and $y \in B$, and such that $y(\nu(n)\setminus{\rm h})^- = x$. \end{claim}
\noindent{\it Proof of Claim.} Let $B$ be an orbit of $G\circ C{\sf w''}\restrict Y$, where $G := (\nu(n)\setminus{\rm h})^-$. By Claim 2, there exists $b\in B\cap{\rm supp}(G)$. If $bG^i \in B$ for every $i\in{\mathbb N}$, then
${\rm supp}(G)\subseteq B$, contrary to Claim 2, since by hypothesis $G\circ C{\sf w''}\restrict Y$ has at least two orbits. So $bG^j \in B$ while $bG^{j+1} \not\in B$ for some $j\in{\mathbb N}$. Let $y := bG^j$, let $x := yG$, and let $A$ be the orbit of $G\circ C{\sf w''}\restrict Y$ for which $x\in A$. Claim 3 follows. \vspace{.7em}
Let $x, y, A, B,$ and $G := (\nu(n)\setminus{\rm h})^-$ be as in Claim 3, and let ${\rm d}$ be the subsequence of $\nu(n)$ such that the term set of ${\rm d}$ is $\{x\}\cup{\rm supp}(({\rm h}))$. [If
${\rm h} = \langle m\rangle$, let the term set of ${\rm d}$ be $\{m,x\}$.] Since $n\cap{\rm supp}(C) \subseteq {\rm supp}(G)$, and since $|n\cap{\rm supp}(C)| > 1$, it follows that $|{\rm supp}(G)| > 1$.
Since $x$ is the only term of ${\rm d}$ which belongs to ${\rm supp}(G)$, it follows that there is an element in ${\rm supp}(G)$, and hence in $n$, which is not a term in ${\rm d}$. So ${\rm d}$ is a
proper subsequence of $\nu(n)$. The number sequence ${\rm d}$ was produced by inserting $x$ as a term into the sequence ${\rm h}$, and so the sequence $\nu(n)\setminus{\rm d}$ is obtained by deleting the term $x$
from the sequence $\nu(n)\setminus{\rm h}$. Since $|{\rm d}|\ge2$, there exists $z\in{\rm supp}\big(({\rm d})\big)$ such that $z({\rm d}) = x$. But $z\not= x$, and so $z \in {\rm supp}\big(({\rm h})\big)$; we can write
$({\rm h}) = (z\,\,{\rm s}_z)$, and so $({\rm d}) = (z\,\,x)\circ ({\rm h}) = (z\,\,x)\circ(z\,\,{\rm s}_z) = (z\,\,x\,\,{\rm s}_z)$. Similarly, since $yG = x$, we may write $G = (y\,\,x\,\,{\rm s}_x)$. Delete the term $x$ from $G$,
and obtain that $(\nu(n)\setminus{\rm d})^- = (y\,\,{\rm s}_x)$. Thus, $(y\,\,x)\circ G = (y\,\,x)\circ(\nu(n)\setminus{\rm h})^- = (y\,\,x)\circ(y\,\,x\,\,{\rm s}_x) = (x)(y\,\,{\rm s}_x) = (\nu(n)\setminus{\rm d})^-$. By Lemma \ref{Numbers3}, there exists ${\bf q}\in{\rm Seq}(\sigma(n))$ for which $\bigcirc{\bf q} = ({\rm d})(\nu(n)\setminus{\rm d})^-$. These equalities enable us to expand the product: \ $\bigcirc{\bf q}\circ{\sf w} = \bigcirc{\bf q}\circ CQ{\sf w''} = ({\rm d})(\nu(n)\setminus{\rm d})^-\circ CQ{\sf w''} = (z\,\,x)\circ({\rm h})(\nu(n)\setminus{\rm d})^-\circ CQ{\sf w''} = (z\,\,x)\circ({\rm h})(y\,\,x)\circ(\nu(n)\setminus{\rm h})^-\circ CQ{\sf w''} = (z\,\,x)\circ(y\,\,x)\circ({\rm h})(\nu(n)\setminus{\rm h})^-\circ CQ{\sf w''} = (z\,\,x)\circ(y\,\,x)\circ\bigcirc{\bf p}\circ {\sf w}$.
Recall that $x$ and $y$ are elements in distinct orbits $A$ and $B$ of $G\circ C{\sf w''}\restrict Y$. Recall also that $\bigcirc{\bf p}\circ{\sf w} = [({\rm h})\circ Q][G\circ C{\sf w''}]$. But
$[{\rm supp}(({\rm h})\circ Q)] \cap [{\rm supp}(G\circ C{\sf w''})] = \emptyset$, and $A$ and $B$ are distinct orbits of $\bigcirc{\bf p}\circ{\sf w}$ as well. Also, since $z \in {\rm supp}(({\rm h})) \subseteq
{\rm supp}(({\rm h})\circ Q)$, we have that $z$ must belong to a third orbit $D$ of $\bigcirc{\bf p}\circ{\sf w}\restrict(n\cup{\rm supp}({\sf w}))$. Consequently the sets $A, B, D$ will amalgamate to form a
single orbit $A\cup B\cup D$ of $\bigcirc{\bf q}\circ{\sf w}\restrict(n\cup{\rm supp}({\sf w}))$. Thus the permutation $\bigcirc{\bf q}\circ{\sf w}\restrict(n\cup{\rm supp}({\sf w}))$ will possess exactly two
fewer orbits than the permutation $\bigcirc{\bf p}\circ{\sf w}\restrict(n\cup{\rm supp}({\sf w}))$. Hence $\bigcirc{\bf q}\circ{\sf w} \not\simeq \bigcirc{\bf p}\circ{\sf w}$. \vspace{.8em}
\noindent SECOND POSSIBILITY: \ $({\rm h})\circ Q$ has more than one orbit. \vspace{.3em}
If $|n\cap{\rm supp}(Q)| = 1$, then $({\rm h})\circ Q$ is a single cycle, contrary to the present hypothesis. Thus $|{\rm supp}\big(({\rm h})\big)| = |n\cap{\rm supp}(Q)| > 1$. So, arguing as in the First Possibility,
we can find $\{x,y\}\subseteq{\rm supp}\big(({\rm h})\big)$ and distinct orbits $A$ and $B$ of $({\rm h})\circ Q$ with $\langle x,y\rangle \in A\times B$ and such that $x({\rm h}) = y$. Let ${\rm d}$ be the
sequence obtained by deleting the term $x$ from the sequence ${\rm h}$. By Lemma \ref{Numbers3}, there exists ${\bf q}\in{\rm Seq}(\sigma(n))$ such that $\bigcirc{\bf q} = ({\rm d})(\nu(n)\setminus{\rm d})^-$.
Observe that $({\rm d}) = ({\rm h})\circ(x\,\,y)$, and that if an element $z \in {\rm supp}\big((\nu(n)\setminus{\rm d})^-\big)$ satisfies $z(\nu(n)\setminus{\rm d})^- = x$ then $(\nu(n)\setminus{\rm d})^- =
(z\,\,x)\circ(\nu(n)\setminus{\rm h})^- = (\nu(n)\setminus{\rm h})^-\circ(x\,\,t)$ where $t := x(\nu(n)\setminus{\rm d})^-$. So $\bigcirc{\bf q}\circ{\sf w} = \bigcirc{\bf q}\circ CQ{\sf w''} =
({\rm d})(\nu(n)\setminus{\rm d})^-\circ CQ{\sf w''} = ({\rm h})\circ(x\,\,y)(\nu(n)\setminus{\rm h})^-\circ(x\,\,t)\circ CQ{\sf w''} = ({\rm h})(\nu(n)\setminus{\rm h})^-\circ CQ{\sf w''}(x\,\,y)\circ(x\,\,t) =
\bigcirc{\bf p}\circ{\sf w}\circ(x\,\,y)(t\,\,x)$. As in the First Possibility, we encounter $x, y$, and $t$ as elements in distinct orbits $A$, $B$, and $D$ of the permutation
$\bigcirc{\bf p}\circ{\sf w}\restrict(n\cup{\rm supp}({\sf w}))$. By an argument similar to that in the First Possibility, we infer that $\bigcirc{\bf q}\circ{\sf w} \not\simeq \bigcirc{\bf p}\circ{\sf w}$. So
${\bf f}$ is not CI in Case Three too, and thus Theorem \ref{NixCircuits} is proved. \end{proof}
We have completed the proof of Theorem \ref{CCIT}, which tells us exactly which connected transpositional multigraphs are CI. This renders it easy to specify the class of all CI transpositional multigraphs on the
vertex set $n$. Recall that, where $n\in \{1,2\}$, every transpositional sequence is both permutationally complete and conjugacy invariant. The following summarizes the main results in \S3.
\begin{thm}\label{SummaryCI} For $n\ge 3$, let ${\bf u}$ be a sequence in $1^{n-2}2^1$. Let $\{{\cal T}({\bf u}_i): i\in m\}$ be the set of components of the transpositional multigraph ${\cal T}({\bf u})$,
where each ${\bf u}_i$ is the subsequence of ${\bf u}$ for which the vertex set of ${\cal T}({\bf u}_i)$ is $V_i = \bigcup{\rm Supp}({\bf u}_i)$, and where of course $\{V_i:i\in m\}$ is a partition of the set $n$. Then:
${\bf u}$ is conjugacy invariant if and only if ${\bf u}_i$ is conjugacy invariant for every $i\in m$.
${\bf u}_i$ is {\rm CI} for $|V_i|=3$ if and only if either $|{\bf u}_i|$ is odd or ${\cal T}({\bf u}_i)$ is a multitree with a simple multitwig.
${\bf u}_i$ is {\rm CI} for $|V_i|\ge4$ if and only if ${\cal T}({\bf u}_i)$ is a multitree, no vertex of which is on more than one nonsimple multiedge, and each even-multiplicity multiedge of which is a multitwig
whose non-leaf vertex has exactly two neighbors. \end{thm}
\begin{proof} The theorem's first claim is obvious. Its second claim is immediate from Theorem \ref{n=3}. Its third claim merely combines Theorems \ref{SufficiencyCI}, \ref{NecessaryCI}, \ref{Four} and
\ref{NixCircuits}. \end{proof}
\subsection{Unfinished work}
If a sequence ${\bf s}$ in ${\rm Sym}(n)$ is perm-complete then of course ${\rm Prod}({\bf s})$ is a coset of the subgroup ${\rm Alt}(n)$ of ${\rm Sym}(n)$. Ross Willard asks for what other ${\bf s}$ are there
subgroups $H_{\bf s} < {\rm Sym}(n)$ for which ${\rm Prod}({\bf s})\in{\rm Sym}(n)/H_{\bf s}$.
An obvious task ahead pertaining to conjugacy invariance is the formidable one of providing necessary and sufficient criteria for deciding conjugacy invariance of every permutational sequence ${\bf s}$ in
${\rm Sym}(n)$. The ultimate goal is criteria enabling one to recognize the family ${\cal C}_{\bf s}$ of conjugacy classes ${\sf C}$ of ${\rm Sym}(n)$ for which ${\sf C}\cap{\rm Prod}({\bf s})\not=\emptyset$.
Every $f\in{\rm Sym}(n)$ has an infinite number of factorizations into products of transpositions. But if the lengths of the nontrivial cyclic components of $f$ are $\ell_1,\ell_2,\ldots,\ell_d$, then every minimal
transpositional factorization ${\bf t}$ of $f$ has length $\big(\sum_{i=1}^d \ell_i\big)-d$; writing $\Phi({\rm Type}(f))$ for the number of distinct minimal-length transpositional factorizations of $f$, we have $\Phi({\rm Type}(f)) > \sum_{i=1}^d\ell_i^{\ell_i-2}$ whenever $|{\rm supp}(f)|\ge5$.
\vspace{.5em}
\noindent{\bf Problem.} \ Specify exact values for $\Phi({\rm Type}(f))$. The enumeration gets nontrivial when $f$ is not single-cycled.\vspace{.5em}
Clearly, if every term $s_i$ of the sequence ${\bf s} := \langle s_0,s_1,\ldots,s_m\rangle$ in ${\rm Sym}(n)$
has a factorization, $s_i = \bigcirc{\bf t}_i = t_{i,0}\circ t_{i,1}\circ\cdots\circ t_{i,l_i}$ into a product of transpositions such that the conglomerate transpositional sequence ${\bf t} := {\bf t}_0{\bf t}_1\cdots{\bf t}_m$
is conjugacy invariant, then the permutational sequence ${\bf s}$ itself is conjugacy invariant. Thus we quickly get a sufficient condition for ${\bf s}$ to be conjugacy invariant. However, that condition is not
necessary to assure the conjugacy invariance of a permutational sequence.\vspace{.7em}
\noindent{\bf Counterexample.} Let ${\bf s} := \langle(0\,\,1\,\,2),(0\,\,2\,\,1)^{\beta(2)}\rangle$. We omit the easy verification that ${\rm Prod}({\bf s})\subseteq 3^1$, whence ${\bf s}$ is conjugacy invariant.
However, $(0\,\,1\,\,2)$ has exactly three distinct factorizations as a product of two transpositions; these are:
\[(0\,\,1\,\,2) = (0\,\,1)\circ(0\,\,2)\qquad\qquad (0\,\,1\,\,2) = (0\,\,2)\circ(1\,\,2)\qquad\qquad (0\,\,1\,\,2) = (1\,\,2)\circ(0\,\,1)\] Of course $(0\,\,2\,\,1)$ likewise has exactly three such factorizations, and since
the permutation $(0\,\,2\,\,1)$ occurs exactly twice as a term in ${\bf s}$, we infer that each sequence ${\bf t}$ in $1^12^1$ that results from factoring each term of ${\bf s}$ into a product of two transpositions is six terms long.
The reader can check that there are exactly three distinct ${\rm Seq}({\bf t}_i)$ that result from the possible length-$6$ conglomerate transpositional sequences. As usual, each such
${\rm Seq}({\bf t}_i)$, for $i\in3$, determines a transpositional multigraph ${\cal T}({\bf t}_i)$ on the vertex set $3$. We list the multiedge sets of these three multigraphs; they are:
\[E_0 := \{(0\,\,1)^{\beta(2)},(1\,\,2)^{\beta(2)},(0\,\,2)^{\beta(2)}\}\qquad E_1 =\{(0\,\,1)^{\beta(2)},(1\,\,2),(0\,\,2)^{\beta(3)}\}\qquad E_2 = \{(0\,\,1)^{\beta(3)},(1\,\,2)^{\beta(3)}\}\] By Theorem
\ref{SummaryCI}, none of these three transpositional multigraphs ${\cal T}({\bf t}_i)$ is conjugacy invariant.\vspace{.5em}
This counterexample exhibits a conjugacy invariant permutational sequence ${\bf s}$ which lacks a conjugacy invariant conglomerate transpositional sequence that results from transpositional factorizations
of the terms in ${\bf s}$. We leave it to the reader to corroborate that the permutational sequence $\langle(0\,\,1\,\,2),(0\,\,3\,\,2),(0\,\,3\,\,1)\rangle$ in ${\rm Sym}(4)$ is a second, perhaps more interesting,
such counterexample.\vspace{.8em}
A transposition is a special sort of ``single-cycled'' permutation; i.e., an $f \in \bigcup\{1^{n-c}c^1:2\le c\le n\}$. Arthur Tuminaro \cite{Tuminaro} kicked off the study of conjugacy invariance of sequences of single-cycled
permutations.
\section{Introduction}
Key exchange mechanisms (KEMs) play a critical role in the security of the internet and other communication systems. They allow two remote entities to securely agree on a symmetric key without explicitly sharing it, and the key can subsequently be used to establish an encrypted session. The currently used KEMs are mostly based on RSA \cite{rivest1978method} or ECC \cite{koblitz1987elliptic}. RSA and ECC are public-key encryption (PKE) schemes whose underlying security relies on the difficulty of the integer factorization and discrete logarithm problems, respectively. However, these problems can be solved in polynomial time by quantum computation models \cite{shor1994algorithms} on the so-called Cryptographically Relevant Quantum Computers (CRQC) \cite{CRQC}. Thus, it is believed that the current PKE schemes (and KEMs, consequently) will be insecure in the post-quantum era \cite{mosca2018cybersecurity}, \cite{barker2020getting}.\par
To secure KEMs, tremendous research efforts have been made to design quantum-safe PKE schemes. Currently, NIST is also undertaking a standardization project for quantum-safe KEMs. Of the numerous approaches, code-based cryptosystems (CBC) are considered a promising alternative to the existing PKE schemes \cite{daniel2015initial}, \cite{fernandez2019pre}. Based on the theory of error-correcting codes, their underlying security relies on the fact that decoding a codeword without knowledge of the encoding scheme is an $\mathcal{NP}$-complete problem \cite{8012331}. The idea of CBC, incepted by McEliece in 1978 \cite{mceliece1978public}, has remained secure against classical and quantum attacks, but at the cost of a larger key size.
To circumvent this issue, Misoczki \emph{et al}.~designed Quasi-Cyclic Moderate Density Parity-Check (QC-MDPC) codes to develop the QC-MDPC variant of the McEliece scheme \cite{misoczki2013mdpc}. This variant has received much attention because of its comparable security with significantly smaller key sizes. The Bit Flipping Key Encapsulation (BIKE) mechanism \cite{BIKE}, which was submitted to NIST for standardization as a quantum-safe KEM, is built on top of the QC-MDPC variant. Due to its promising security and performance features, BIKE has been selected in the final round of the NIST standardization competition as an alternate candidate \cite{NISTReport}. In addition, BIKE has recently been added to the list of supported KEM schemes for the post-quantum TLS protocol used in the AWS Key Management Service (KMS) offered by Amazon \cite{AWS}. \par
The QC-MDPC variant and BIKE leverage a probabilistic and iterative decoder in their decapsulation modules. The original design of the QC-MDPC variant employed the orthodox Bit-Flipping (BF) decoder \cite{gallager1962low} with slight modifications \cite{misoczki2013mdpc}. However, the BF decoder (as a probabilistic and iterative decoder) suffers from a higher Decoding Failure Rate (DFR). Specifically, the decoder shows poor decoding performance when the number of iterations is restricted for performance considerations such as accelerating the decoder. When it fails in the decryption/decapsulation process, performance degrades, and the failures can also facilitate side-channel/reaction attacks. For example, Guo \emph{et al}.~\cite{guo2016key} introduced an efficient reaction attack on the QC-MDPC variant known as the GJS attack. In a GJS attack, the attacker first sends crafted ciphertexts to the victim Alice while observing the reaction of her decoder to every ciphertext (i.e., \emph{success} or \emph{failure}). Then, utilizing the correlation between the faulty ciphertext patterns and Alice's private key, the attacker can fully recover her private key. Further, Nilsson \emph{et al}.~\cite{nilsson2018error} proposed a novel technique for fast generation of the crafted ciphertexts to improve the efficiency of the GJS attack. This attack can be regarded as a weaker version of a Chosen-Ciphertext Attack (CCA), since the adversary only needs to observe the decoder's reaction without access to the full decryption oracle (i.e., it does not analyze any decrypted plaintext). \par
To tackle these attacks, the Fujisaki-Okamoto (FO) CCA transformation model has been adopted in BIKE \cite{hofheinz2017modular}, \cite{drucker2021applicability}. The FO model uses a ciphertext protection mechanism so that the receiver can check the integrity of a received ciphertext. Thus, the ability of a GJS attacker to craft ciphertexts is limited. Although the FO model significantly mitigates the threat of reaction attacks, the scheme must still deploy a decoder with a negligible DFR to provide Indistinguishability under Chosen-Ciphertext Attack (IND-CCA) security. Sendrier \emph{et al}.~\cite{sendrier2020low} argued that to provide $\lambda$ bits of IND-CCA security, the \emph{average} DFR (taken over the whole keyspace) must be upper bounded by $2^{-\lambda}$. For this reason, several modifications of the BF decoding algorithm have been proposed to offer negligible DFR \cite{sendrier2020low, sendrier2019decoding, drucker2019toolbox, drucker2019constant, drucker2020qc, nilsson2021weighted}. For example, the latest version of BIKE deploys the Black-Grey-Flip (BGF) decoder \cite{drucker2020qc}, the state-of-the-art variant of the BF algorithm. BGF uses only five iterations for decoding while offering a negligible DFR.\par
However, it is shown that there are some (private) key structures for which the probabilistic decoders show poor performance in terms of DFR \cite{drucker2019constant}, \cite{sendrierexistence}. They are referred to as \emph{weak keys}, since they are potentially at risk of disclosure through side-channel/reaction attacks (such as the GJS attack) \cite{guo2016key}, \cite{nilsson2018error}. Although the number of weak keys is much smaller than the size of the entire keyspace, their effect on the average DFR must be analyzed to ensure they do not endanger the IND-CCA security of the scheme. For example, Drucker \emph{et al.}~\cite{drucker2019constant} and Sendrier \emph{et al.}~\cite{sendrierexistence} have recently conducted weak-key analyses for QC-MDPC-based schemes and showed that the average DFR is not notably affected by the weak keys identified so far, implying that the current IND-CCA claims hold. However, the state-of-the-art BGF decoder has not been investigated in their analyses. For example, Sendrier \emph{et al.}~\cite{sendrierexistence} considered the previous version of the BIKE scheme that was submitted to the second round of the NIST competition (i.e., BIKE-1). In BIKE-1, the BackFlip decoder \cite{aragon2017bike} was deployed, which enables the scheme to provide 128-bit security in 100 iterations (i.e., BackFlip-100). Moreover, in the existing experiments the number of iterations was limited to 20 for time-saving (still significantly larger than in the BGF decoder). To compensate for the few iterations, their results are compared against 97-bit security, i.e., the estimated security of BackFlip-20.\par
In view of the aforementioned discussion, the existing analyses of weak keys do not extend to the latest version of BIKE, which adopts the contemporary BGF decoder. Therefore, it is important to investigate the impact of weak keys on the latest version of BIKE. To the best of our knowledge, the effect of weak keys on the average DFR of the BGF decoder has not been investigated; thus, the IND-CCA security claims of the latest version of BIKE remain unsettled. Motivated by this, we first implement the BIKE scheme in Matlab. Then, through extensive experiments and based on the model for IND-CCA security presented in \cite{hofheinz2017modular}, we show that the contribution of weak keys to the average DFR of BIKE's BGF decoder is greater than the maximum level allowed for achieving IND-CCA security. As a result, the negative effect of weak keys on the average DFR cannot be ignored and must be addressed before claiming the IND-CCA security of BIKE.
To address the weak-key issue, we also propose a key-check mechanism that can be integrated into the key generation module of the BIKE scheme to ensure that the private keys generated by users are not weak. The main contributions of this paper are summarized as follows:
\begin{itemize}
\item We perform an implementation of the BIKE scheme with the state-of-the-art BGF decoder and provide some technical key points required to implement the BIKE scheme.
\item Through extensive experiments and using the formal model for proving IND-CCA security, we show that the negative effect of weak keys on the average DFR is greater than the maximum allowed level, which may put the IND-CCA security of the BIKE mechanism at risk.
\item We propose a key-check algorithm that can be integrated into the key generation subroutine of BIKE to ensure that users do not generate and adopt weak (private) keys.
\end{itemize}
The paper is organized as follows. In Section \ref{Section.2}, we provide the preliminaries required for understanding the working principles of the BIKE scheme. Section \ref{Section.3} presents the structure of weak-keys in the BIKE scheme and an intuitive understanding of their effect on IND-CCA security. In Section \ref{Section.4}, we present the results of our experimental evaluation. Finally, after introducing the key-check mechanism in Section \ref{Section.5}, we make concluding remarks in Section \ref{Section.6}.
\section{Preliminaries}
\label{Section.2}
In this section, we present the basic concepts that will help the readers to understand the other sections of the paper. We first briefly review the QC-MDPC variant of the McEliece scheme (we refer the readers to \cite{zajac2014overview, misoczki2013mdpc} for more information about the McEliece scheme and its QC-MDPC variant). Then, we review the BF decoding algorithm and the state-of-the-art BGF decoder. Finally, we describe the latest version of the BIKE scheme.\par
\subsection{QC-MDPC Codes}
Before we review QC-MDPC codes, we present some key concepts and definitions in the field of error correction codes. Error correction codes are widely used in communication protocols and recording systems to ensure reliable data transmission and storage. Given a block of $k$ information bits, the error correction code $\mathbb{C}(n,k)$ computes $r=n-k$ redundancy bits (based on some encoding equations) and creates an $n$-bit block of data (called a codeword) consisting of the $k$ information bits and the $r$ redundancy bits. The codeword is subsequently sent to the relevant destination, which exploits the redundant bits (based on some decoding rules) to detect and correct any errors in the received message and thereby successfully retrieve the actual information.\par
\begin{definition} [Linear Block Code \cite{costello2007channel}] $\mathbb{C}(n,k)$ is a linear error correction code if the modulo-2 sum (i.e., XOR binary operation) of any two or multiple codewords is a valid codeword.
\end{definition}
\begin{definition} [Hamming weight \cite{bonello2010low}] The Hamming weight of a codeword is defined as the number of non-zero bits in the codeword.
\end{definition}
\begin{definition} [Generator Matrix \cite{costello2007channel}] The linear block code $\mathbb{C}(n,k)$ has a generator matrix $\textbf{G}\in \mathbb{F}_{2}^{k\times n}$ which defines the one-to-one mapping between the $k$-bit message block $\textbf{m}\in \mathbb{F}_{2}^{k}$ and the corresponding $n$-bit codeword $\textbf{c} \in \mathbb{F}_{2}^{n}$, i.e., $\textbf{c}_{1\times n} = \textbf{m}_{1\times k}.\textbf{G}_{k\times n}$.
\end{definition}
Thus, $\textbf{G}$ is used by the encoder to generate the distinct codeword $\textbf{c}$ associated with the message block $\textbf{m}$. Note that the number of valid codewords for $\mathbb{C}(n,k)$ is $2^k$, which can be much smaller than $2^n$ (since $n>k$); i.e., a binary vector over $\mathbb{F}_{2}^{n}$ is not necessarily a valid codeword of $\mathbb{C}$. Thus, $\mathbb{C}$ can be considered a $k$-dimensional subspace of $\mathbb{F}_{2}^{n}$.
\begin{definition} [Systematic Code \cite{ fang2015survey}] $\mathbb{C}(n,k)$ is called a systematic code if its generator matrix is written in the form of $\textbf{G} = [\textbf{I}_k |\textbf{A}_{k\times r}]$ in which $\textbf{I}_k$ is a $k\times k$ identity matrix and $\textbf{A}$ is a $k\times r$ coefficient matrix.
\end{definition}
If $\mathbb{C}$ is systematic, in each $n$-bit codeword $\textbf{c}$, the first $k$ bits are equal to the corresponding message block $\textbf{m}$, and the rest of the block is the $r=n-k$ parity-check (redundant) bits.
\begin{definition} [Parity-check Matrix \cite{costello2007channel}] The parity-check matrix $\textbf{H}\in \mathbb{F}_{2}^{r\times n}$ of a linear code $\mathbb{C}(n,k)$ is an $r\times n$ matrix that is orthogonal to all the codewords of $\mathbb{C}(n,k)$, i.e., $\textbf{c}$ is a valid codeword of $\mathbb{C}(n,k)$ if and only if $\textbf{c}.\textbf{H}^T = \textbf{0}$, where $T$ denotes the matrix transpose operation.
\end{definition}
If $\textbf{G}$ is written in the systematic form (i.e., $\textbf{G} = [\textbf{I}_k |\textbf{A}_{k\times r}]$), it is shown that $\textbf{H}$ can be computed through $\textbf{H} = [\textbf{A}^T|\textbf{I}_r]$. The decoder of $\mathbb{C}$ uses $\textbf{H}$ to decode the received vector.
\begin{definition} [Syndrome of a received vector \cite{costello2007channel}] Consider $\textbf{x}=(\textbf{c}\oplus \textbf{e})\in \mathbb{F}_{2}^{n}$ as a vector received by the decoder, where $\textbf{e}\in \mathbb{F}_{2}^{n}$ is the error vector with Hamming weight at most $t$ that represents the flipped bits of $\textbf{c}$ due to the noisy channel. The syndrome $\textbf{S}\in \mathbb{F}_{2}^{r}$ of $\textbf{x}$ is computed as $\textbf{S} =\textbf{x}.\textbf{H}^T$.
\end{definition}
For the syndrome vector $\textbf{S}$, we have
\begin{equation*}
\textbf{S} =\textbf{x}.\textbf{H}^T = (\textbf{c}\oplus \textbf{e}).\textbf{H}^T = \textbf{c}.\textbf{H}^T \oplus \textbf{e}.\textbf{H}^T =\textbf{e}.\textbf{H}^T,
\end{equation*}
because $\textbf{c}.\textbf{H}^T=\textbf{0}$. Thus, once $\textbf{x}$ is received by the decoder, its syndrome $\textbf{S}$ is firstly computed through $\textbf{S} =\textbf{x}.\textbf{H}^T$. Then, the decoder needs to obtain $\textbf{e}$ by solving $\textbf{S} =\textbf{e}.\textbf{H}^T$ which is then used to compute the sent codeword using $\textbf{c}=\textbf{x}\oplus \textbf{e}$. Finally, the message block $\textbf{m}$ associated with $\textbf{c}$ is returned as the decoded vector.
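As a quick illustration of these relations, the following Python sketch builds a toy systematic $[7,4]$ code (a hypothetical example for illustration only; these are not QC-MDPC parameters) and verifies that the syndrome of a corrupted codeword depends only on the error vector:
\begin{verbatim}
import numpy as np

# Toy systematic code C(7,4): G = [I_4 | A], H = [A^T | I_3]
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=np.uint8)
G = np.hstack([np.eye(4, dtype=np.uint8), A])    # generator matrix
H = np.hstack([A.T, np.eye(3, dtype=np.uint8)])  # parity-check matrix

m = np.array([1, 0, 1, 1], dtype=np.uint8)       # message block
c = m @ G % 2                                    # codeword c = m.G
assert not (c @ H.T % 2).any()                   # valid codeword: c.H^T = 0

e = np.zeros(7, dtype=np.uint8); e[5] = 1        # single-bit error
x = c ^ e                                        # received vector
S = x @ H.T % 2                                  # syndrome S = x.H^T
assert (S == e @ H.T % 2).all()                  # indeed S = e.H^T
\end{verbatim}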
\begin{definition} [Syndrome Decoding (SD) Problem \cite{berlekamp1978inherent}] Given the parity-check matrix $\textbf{H}\in \mathbb{F}_{2}^{r\times n}$ and the syndrome vector $\textbf{S}\in \mathbb{F}_{2}^{r}$, the SD problem searches for a vector $\textbf{e}\in \mathbb{F}_{2}^{n}$ with the Hamming weight $\leq t$ such that $\textbf{S}=\textbf{e}.\textbf{H}^T$.
\end{definition}
The SD problem was proved to be $\mathcal{NP}$-complete if the parity-check matrix $\textbf{H}$ is random \cite{berlekamp1978inherent}. This establishes the essential security feature required by code-based cryptosystems to be quantum-resistant. This is because quantum computation models are considered to be unable to efficiently solve $\mathcal{NP}$-complete problems \cite{aaronson2008limits}.
\begin{definition} [Quasi-cyclic (QC) code \cite{baldi2014qc}] The binary linear code $\mathbb{C}(n,k)$ is QC if there exists an integer $n_0<n$ such that every cyclic shift of a codeword $\textbf{c}\in \mathbb{C}$ by $n_0$ bits results in another valid codeword of $\mathbb{C}$.
\end{definition}
In a systematic QC code, each codeword $\textbf{c}$ consists of $p$ blocks of $n_0$ bits, i.e., $n= n_0p$. Thus, every block includes $k_0=k/p$ information bits and $r_0=n_0-k_0$ parity bits. In a QC code $\mathbb{C}$ with $r_0=1$, we have
\begin{equation*}
r=(n-k)=(n_0-k_0)p=r_0p=p
\end{equation*}
In this case, it is shown that the parity-check matrix $\textbf{H}$ of $\mathbb{C}$ is composed of $n_0$ circulant blocks of size $p\times p$ (or $r\times r$, equivalently) \cite{baldi2014qc} which is written as
\begin{equation} \label{eq-1}
\textbf{H} = [\textbf{H}_0 \ \textbf{H}_1 \ldots \textbf{H}_{n_{0}-1}],
\end{equation}
where each circulant block $\textbf{H}_i$ has the following format:
\begin{equation} \label{eq-2}
\textbf{H}_i=\begin{bmatrix}
h_0^{(i)}& h_1^{(i)} & h_2^{(i)} & \ldots & h_{r-1}^{(i)}\\
h_{r-1}^{(i)}& h_0^{(i)} & h_1^{(i)} & \ldots & h_{r-2}^{(i)}\\
. & . & . & . & .\\
. & . & . & . & .\\
. & . & . & . & .\\
h_1^{(i)}& h_2^{(i)} & h_3^{(i)} & \ldots & h_0^{(i)}
\end{bmatrix}
\end{equation}
Note that $\textbf{H}_i$ can be described by its first row only, i.e., the other $r-1$ rows are obtained by cyclic shifts of the first row. It is also shown that the generator matrix $\textbf{G}$ of the above QC code $\mathbb{C}$ can be written as
\begin{equation} \label{eq-3}
\textbf{G}=[\textbf{I}_{k}|\textbf{Q}_{k\times r}]=\begin{bmatrix}
& | & (\textbf{H}_{n_{0}-1}^{-1}.\textbf{H}_0)^T\\
& | & (\textbf{H}_{n_{0}-1}^{-1}.\textbf{H}_1)^T\\
& | & (\textbf{H}_{n_{0}-1}^{-1}.\textbf{H}_2)^T\\
\textbf{I}_{k}& | & .\\
& | & .\\
& | & .\\
& | & (\textbf{H}_{n_{0}-1}^{-1}.\textbf{H}_{n_{0}-2})^T\\
\end{bmatrix}
\end{equation}
The above format can be proved using the fact that $\textbf{H}\textbf{G}^T=\textbf{0}$ and by performing some linear algebra operations on it.
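Since each circulant block in equations (\ref{eq-1})--(\ref{eq-3}) is determined by its first row, it can be materialized in a few lines; the following Python fragment (illustrative only, not drawn from any reference implementation) builds a block of the form in equation (\ref{eq-2}):
\begin{verbatim}
import numpy as np

def circulant(first_row):
    # row j is the first row cyclically shifted j positions to the right
    r = len(first_row)
    return np.array([np.roll(first_row, j) for j in range(r)],
                    dtype=np.uint8)

h = np.array([1, 0, 1, 0, 0, 0, 1], dtype=np.uint8)  # first row, weight 3
H0 = circulant(h)                                    # 7 x 7 circulant block
\end{verbatim}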
\begin{definition} [QC-MDPC codes \cite{misoczki2013mdpc}] An $(n,r,w)$-QC-MDPC code is a QC code of length $n=n_0r$ and dimension $k=n-r=k_{0}r$ whose parity-check matrix has a constant row weight of $w=O(\sqrt{n})$.
\end{definition}
\noindent Note that here, we only consider those QC-MDPC codes in which $r_0=1$, i.e., $r=p$. \par
The most important characteristic of QC-MDPC codes is that the circulant blocks in the parity-check matrix can be described by their first rows only: to construct $\textbf{H}$, one needs only the first row of each of the $n_0$ circulant blocks. Moreover, the parity-check matrix has a relatively small Hamming weight (i.e., $w<<n$). Therefore, instead of storing $n=n_{0}r$ bits, the positions (indexes) of the $w$ non-zero bits can be used to store $\textbf{H}$. These are the key features of QC-MDPC codes that enable them to significantly mitigate the key size issue of the original McEliece scheme. As described in the next subsection, the private key of the QC-MDPC variant (and of BIKE, consequently) is the parity-check matrix of the selected code.\par
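As a rough sketch of this index-based representation (a toy Python fragment, again purely illustrative):
\begin{verbatim}
def compress(first_row):
    # keep only the positions of the non-zero bits
    return [i for i, bit in enumerate(first_row) if bit]

def decompress(indexes, r):
    row = [0] * r
    for i in indexes:
        row[i] = 1
    return row

assert decompress(compress([1, 0, 1, 0, 0, 0, 1]), 7) \
       == [1, 0, 1, 0, 0, 0, 1]
\end{verbatim}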
In the next subsection, we briefly review the new variant of the McEliece scheme that works based on QC-MDPC codes.
\subsection{The QC-MDPC PKE Scheme}
This cryptosystem is a variant of the McEliece code-based PKE scheme. It was proposed by Misoczki et al.~\cite{misoczki2013mdpc} to mitigate the key size problem of the original McEliece scheme. It consists of three subroutines, i.e., \emph{Key Generation}, \emph{Encryption}, and \emph{Decryption} (see Fig. \ref{fig:1}).
\begin{figure}[t!]
\centering
\includegraphics[width=3.3in, height=1.1in]{figs/1.jpg}
\caption{Block diagram of the QC-MDPC variant of the McEliece scheme.}
\label{fig:1}
\end{figure}
\subsubsection{Key Generation}
\textbf{Private Key:} In this variant, the parity-check matrix $\textbf{H}$ of the underlying $(r,n,w)$-QC-MDPC code plays the role of private key. It has the format shown in equation (\ref{eq-1}). To generate $\textbf{H}$, $n_0$ circulant blocks of size $r\times r$ must be generated. To do this, for each block $\textbf{H}_i$ ($0\leq i\leq n_0-1$), a random sequence of $r$ bits and Hamming weight $w_i$ is generated such that $\sum_{i=0}^{n_0-1}w_i=w$. This sequence is considered as the first row of $\textbf{H}_i$. Then, the other $r-1$ rows are computed through cyclic shifts of the first row (i.e., $j$ cyclic shifts to generate row $j$, $1\leq j\leq r-1$).\par
To store the private key, $w\log_2(r)$ bits are needed. This is because each circulant block $\textbf{H}_i$ is represented by its first row only, which can be stored using the indexes of its $w_i$ non-zero bits. This is much less than the $n^2+k^2+nk$ bits needed to store the private key of the original McEliece scheme. \par
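For concreteness, with the Level 1 parameters listed later in Table \ref{BIKE_Par} ($r = 12323$ and $w = 142$), index-based storage costs about $w\lceil\log_2(r)\rceil = 142 \times 14 = 1988$ bits (roughly $249$ bytes), whereas storing the two first rows in full would take $n = 2r = 24646$ bits.\par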
\noindent
\textbf{Public Key:} The generator matrix $\textbf{G}$ of the underlying $(r,n,w)$-QC-MDPC code is the public key of this cryptosystem. It can be computed from the private key using equation (\ref{eq-3}). Since $\textbf{G}$ is quasi-cyclic (similar to the circulant blocks in $\textbf{H}$), it can be represented by its first row only, which has $n=k+r$ bits. Note that the first $k$ bits belong to the identity matrix $\textbf{I}_k$ and do not need to be stored (they always have the same fixed format). Thus, $r$ bits are required to store the public key, which shows a significant reduction in key size compared with the $nk$ bits of the original McEliece scheme. Unlike $\textbf{H}$, the first row of $\textbf{G}$ does not necessarily have a small (and fixed) Hamming weight. Thus, the idea of storing the indexes of non-zero bits cannot be used for the storage of the public key.
\subsubsection{Encryption}
The encryption of a plaintext message $\textbf{m}\in \mathbb{F}_{2}^{k}$ is performed using the following equation:
\begin{equation*}
\textbf{x}=\textbf{m}.\textbf{G}\oplus \textbf{e}=\textbf{c}\oplus \textbf{e},
\end{equation*}
where $\textbf{e}\in \mathbb{F}_{2}^{n}$ is a random vector of weight $t$ that is determined based on the error correcting capability of
the corresponding decoder.
\begin{table}[tb!]
\caption{System parameters of BIKE for different security levels.}
\label{BIKE_Par}
\begin{center}
\resizebox{0.5\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{\bfseries Parameter} & \multirow{2}{*}{\bfseries Description} & \multicolumn{3}{c|}{\bfseries Value} \\
\cline{3-5}
& & Level 1 ($\lambda=128$) & Level 2 ($\lambda=192$)& Level 3 ($\lambda=256$)\\
\hline
\multirow{2}{*}{$r$} & Size of circulant blocks in& \multirow{2}{*}{12,323} & \multirow{2}{*}{24,659} & \multirow{2}{*}{40,973}\\
& the parity-check matrix $\textbf{H}$ & & & \\
\hline
\multirow{2}{*}{$w$}& Row weight of the & \multirow{2}{*}{142} & \multirow{2}{*}{206} & \multirow{2}{*}{274}\\
& parity-check matrix $\textbf{H}$& & & \\
\hline
\multirow{2}{*}{$t$} & Hamming weight of & \multirow{2}{*}{134} & \multirow{2}{*}{199} & \multirow{2}{*}{264}\\
& the error vector & & & \\
\hline
\multirow{2}{*}{$l$} & Size of the generated& \multirow{2}{*}{256} & \multirow{2}{*}{256} & \multirow{2}{*}{256}\\
& symmetric key $\textbf{K}_s$& & & \\
\hline
\end{tabular}%
}
\end{center}
\end{table}
\subsubsection{Decryption}
The receiver performs the following procedure to decrypt the received ciphertext $\textbf{x}\in \mathbb{F}_{2}^{n}$.
\begin{itemize}
\item Apply $\textbf{x}$ to the corresponding $t$-error correcting decoder $\psi_{\textbf{H}}$ that leverages the knowledge of $\textbf{H}$ for efficient decoding. The decoder finds the error vector $\textbf{e}$ and returns the corresponding codeword $\textbf{c}=\textbf{m}.\textbf{G}$.
\item Return the first $k$ bits of $\textbf{c}$ as the decoded plaintext message $\textbf{m}$ (because $\textbf{G}$ is in the systematic form).
\end{itemize}
Note that the systematic form of $\textbf{G}$ can make the scheme vulnerable to chosen-ciphertext and message recovery attacks. The reason is that the first $k$ bits of the ciphertext $\textbf{x}=\textbf{m}.\textbf{G}\oplus \textbf{e}$ include a copy of the plaintext $\textbf{m}$ with some possibly flipped bits (since $x_i=m_i\oplus e_i$ for $1\leq i\leq k$). In fact, two ciphertexts $\textbf{x}_1$ and $\textbf{x}_2$ are most likely distinguishable if an attacker knows their corresponding plaintexts $\textbf{m}_1$ and $\textbf{m}_2$. In the worst case, if $e_i=0$ for $1 \leq i\leq k$, the ciphertexts are certainly distinguishable. To address this issue, a CCA transformation model can be used (e.g., \cite{kobara2001semantically}) that converts the plaintext $\textbf{m}$ to a random vector whose observation brings no useful knowledge to a CCA attacker.\par
Regarding the aforementioned decoder $\psi_{\textbf{H}}$, several decoding algorithms have been proposed so far \cite{gallager1962low, sendrier2020low, sendrier2019decoding, drucker2019toolbox, drucker2019constant, drucker2020qc, nilsson2021weighted}. We refer the readers to \cite{drucker2020qc} and \cite{drucker2019constant} for more information about the most efficient QC-MDPC decoders.
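For intuition, the following Python sketch implements the plain BF idea on an arbitrary binary parity-check matrix (a deliberately minimal toy; real QC-MDPC decoders such as BGF use carefully tuned thresholds plus the additional black/grey re-evaluation steps, and exploit the quasi-cyclic structure): in each iteration, the bits involved in the most unsatisfied parity checks are flipped.
\begin{verbatim}
import numpy as np

def bit_flip_decode(H, x, max_iter=10):
    # H, x are 0/1 integer arrays; returns the corrected
    # word, or None to signal a decoding failure
    x = x.copy()
    for _ in range(max_iter):
        s = H.dot(x) % 2        # syndrome of the current word
        if not s.any():
            return x            # every parity check is satisfied
        upc = H.T.dot(s)        # unsatisfied-check count per bit
        flip = (upc == upc.max())
        x = x ^ flip.astype(x.dtype)  # flip the most-suspect bits
    return None                 # failure after max_iter iterations
\end{verbatim}
Note that the loop may exit with \texttt{None}: these are exactly the decoding-failure events whose rate (the DFR) is central to the rest of this paper.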
\begin{figure*}[t!]
\centering
\includegraphics[width=0.9\textwidth]{figs/2.jpg}
\caption{Block diagram of the BIKE scheme. The $\parallel$ and $\nparallel$ symbols represent concatenation and de-concatenation operations, respectively.}
\label{fig:BIKE}
\end{figure*}
\subsection{The BIKE Scheme}
BIKE \cite{BIKE} is a code-based KEM scheme that leverages the QC-MDPC PKE scheme for encryption/decryption. It has recently qualified for the final round of the NIST standardization project as an alternate candidate. In previous rounds of the NIST competition, BIKE was submitted in the form of three different versions (BIKE-1, BIKE-2, and BIKE-3), each of which satisfied the needs of a specific group of cryptographic applications (e.g., bandwidth, latency, security, etc.). However, in the final round, following the recommendation of NIST, it was submitted as a single version that relies heavily on BIKE-2. The final version suggests three sets of system parameters to satisfy the three different security levels defined by NIST, i.e., level 1 (128-bit security), level 2 (192-bit security), and level 3 (256-bit security) (see Table \ref{BIKE_Par}). To address the IND-CCA security issues that exist in the QC-MDPC variant, the Fujisaki-Okamoto (FO) CCA transformation model has been integrated into the BIKE scheme \cite{hofheinz2017modular}, \cite{drucker2021applicability} (see Fig. \ref{fig:BIKE}). In addition, in the final version of BIKE, the state-of-the-art BGF decoder \cite{drucker2020qc}, which provides a negligible DFR in five iterations, has been deployed in the decapsulation subroutine.\par
BIKE includes three subroutines, namely, key generation, encapsulation, and decapsulation (see Fig. \ref{fig:BIKE}). The procedure is started by Bob, who wants to establish an encrypted session with Alice. They need to securely share a symmetric key to start their encrypted session. To do this, Bob first generates his public and private keys by running the key generation subroutine. Then, he sends his public key to Alice, who uses it to generate the ciphertext $\textbf{C}=(\textbf{C}_0, \textbf{C}_1)$ and the symmetric key $\textbf{K}_s$ (using the encapsulation subroutine). The first part of $\textbf{C}$ (i.e., $\textbf{C}_0$) is the main data encrypted by the underlying ($r,n,w$)-QC-MDPC scheme, while the second part (i.e., $\textbf{C}_1$) protects it against malicious manipulations. Then, Alice sends $\textbf{C}$ to Bob through an insecure channel. By running the decapsulation subroutine, Bob applies $\textbf{C}_0$ to the corresponding QC-MDPC decoder (i.e., the BGF decoder) to decrypt the data. He also checks the integrity of $\textbf{C}_0$ using $\textbf{C}_1$ to ensure it has not been changed. In this way, Bob obtains the same symmetric key $\textbf{K}_s$ as Alice computed.
In BIKE, all the circulant matrix blocks (e.g., $\textbf{H}_0$ and $\textbf{H}_1$ of the parity-check matrix) are treated as elements of a polynomial ring, since this increases the efficiency of the computations required in the key generation, encapsulation, and decapsulation subroutines. In this regard, considering $\textbf{a}$ as the first row of the $r\times r$ circulant matrix $\textbf{A}$, the $r$-bit sequence $\textbf{a}$ can be represented by the polynomial $(a_0+a_1x+a_2x^2+\ldots +a_{r-1}x^{r-1}) \in \mathcal{R}=\mathcal{F}_2[x]/(x^r-1)$ (see \cite{aragon2017bike} for more information). In the following, we briefly review the three subroutines of the BIKE scheme and refer readers to \cite{BIKE} for more detailed information.
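To make the correspondence concrete, the following toy Python check (the helper names and parameter sizes are illustrative only, not part of the BIKE reference implementation) verifies that vector-by-circulant-matrix multiplication over $\mathcal{F}_2$ coincides with the product of the corresponding polynomials in $\mathcal{R}$:
\begin{verbatim}
import numpy as np

def circulant(a):
    # r x r circulant matrix whose i-th row is the i-step cyclic shift of a
    return np.array([np.roll(a, i) for i in range(len(a))], dtype=np.uint8)

def poly_mul(a, b):
    # product in F_2[x]/(x^r - 1); bit i of a vector is the coefficient of x^i
    c = np.zeros(len(a), dtype=np.uint8)
    for i in np.flatnonzero(a):
        c ^= np.roll(b, i)
    return c

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 7, dtype=np.uint8)
b = rng.integers(0, 2, 7, dtype=np.uint8)
assert np.array_equal(b @ circulant(a) % 2, poly_mul(a, b))
\end{verbatim}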
\subsubsection{Key Generation} \label{section-key_gen}
\textbf{Private Key:} Since BIKE works based on the QC-MDPC variant, the private key is the parity-check matrix of the underlying ($r,n,w$)-QC-MDPC code with $n_0=2$ circulant blocks. To generate the private key, two random polynomials $\textbf{h}_0$ and $\textbf{h}_1$ are generated with the Hamming weights $|\textbf{h}_0|=|\textbf{h}_1|=w/2$. Then, a random $l$-bit sequence $\sigma$ is generated. Finally, the private key is set as $\textbf{sk}=(\textbf{h}_0, \textbf{h}_1, \sigma$).\par
\noindent
\textbf{Public Key:} The public key is set as $\textbf{pk}=\textbf{h}=\textbf{h}_1.\textbf{h}_{0}^{-1}$.
\subsubsection{Encapsulation}\label{section-Encaps}
The encapsulation subroutine takes the public key $\textbf{h}$ as input and generates the ciphertext $\textbf{C}$ and the symmetric key $\textbf{K}_s$. To do this, three hash functions (modelled as random oracles) $\mathcal{H}:\{0,1\}^l\xrightarrow{}$ $\{0,1\}_{|t|}^{2r}$ (i.e., $2r$-bit vectors of Hamming weight $t$), $\mathcal{L}:\{0,1\}^{2r}\xrightarrow{}$ $\{0,1\}^{l}$, and $\mathcal{K}:\{0,1\}^{2r+l}\xrightarrow{}$ $\{0,1\}^{l}$ are defined and used here. The following procedure is performed in this subroutine (a toy sketch of the full flow is given after the list).
\begin{itemize}
\item Randomly select an $l$-bit vector $\textbf{m}$ from the message space $\mathcal{M}=\{0,1\}^l$.
\item Compute $(\textbf{e}_0, \textbf{e}_1) = \mathcal{H}(\textbf{m})$ where $\textbf{e}_0$ and $\textbf{e}_1$ are error vectors of $r$ bits such that $|\textbf{e}_0|+|\textbf{e}_1|=t$.
\item Compute $\textbf{C} =(\textbf{C}_0, \textbf{C}_1) = (\textbf{e}_0 + \textbf{e}_1.\textbf{h}, \textbf{m} \oplus \mathcal{L}(\textbf{e}_0, \textbf{e}_1))$ and send it to the recipient.
\item Compute $\textbf{K}_s = \mathcal{K}(\textbf{m}, \textbf{C})$ as the secret symmetric key.
\end{itemize}
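As a rough illustration of the steps above, here is a minimal Python sketch. The SHAKE-based oracles and the rejection-sampling expansion of $\mathcal{H}(\textbf{m})$ below are simplified stand-ins of our own choosing; the BIKE specification fixes different instantiations, and all helper names are ours:
\begin{verbatim}
import hashlib
import numpy as np

def poly_mul(a, b, r):
    # product in F_2[x]/(x^r - 1); vectors of r bits, bit i <-> coefficient of x^i
    c = np.zeros(r, dtype=np.uint8)
    for i in np.flatnonzero(a):
        c ^= np.roll(b, i)
    return c

def oracle_H(m, r, t):
    # toy stand-in for H: expand m into t distinct non-zero positions of (e0, e1)
    pos, ctr = [], 0
    while len(pos) < t:
        digest = hashlib.shake_256(m + ctr.to_bytes(4, 'big')).digest(4)
        p = int.from_bytes(digest, 'big') % (2 * r)
        if p not in pos:
            pos.append(p)
        ctr += 1
    e = np.zeros(2 * r, dtype=np.uint8)
    e[pos] = 1
    return e[:r], e[r:]

def oracle_L(e0, e1, l):
    return hashlib.shake_256(b'L' + bytes(e0) + bytes(e1)).digest(l)

def oracle_K(m, c0, c1, l):
    return hashlib.shake_256(b'K' + m + bytes(c0) + c1).digest(l)

def encaps(h, r, t, l=32):
    m = np.random.bytes(l)                              # random message seed m
    e0, e1 = oracle_H(m, r, t)                          # (e0, e1) = H(m)
    c0 = e0 ^ poly_mul(e1, h, r)                        # C0 = e0 + e1.h
    c1 = bytes(x ^ y for x, y in zip(m, oracle_L(e0, e1, l)))  # C1 = m xor L(e0, e1)
    ks = oracle_K(m, c0, c1, l)                         # Ks = K(m, C)
    return (c0, c1), ks
\end{verbatim}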
\subsubsection{Decapsulation}
The decapsulation subroutine takes the private key $\textbf{sk}$ and ciphertext $\textbf{C}$ as input and generates the symmetric key $\textbf{K}_s$ as follows.
\begin{itemize}
\item Decode $\textbf{C}_0$ by computing the syndrome $\textbf{S}=\textbf{C}_0.\textbf{h}_0$ and applying it to the corresponding BGF decoder to obtain the error vectors $\textbf{e}_{0}^{'}$ and $\textbf{e}_{1}^{'}$.
\item Compute $\textbf{m}^{'}=\textbf{C}_1\oplus \mathcal{L}(\textbf{e}_{0}^{'}, \textbf{e}_{1}^{'})$. If $\mathcal{H}(\textbf{m}^{'})\neq (\textbf{e}_{0}^{'}, \textbf{e}_{1}^{'})$, set $\textbf{m}^{'}=\sigma$.
\item Compute $\textbf{K}_s = \mathcal{K}(\textbf{m}^{'}, \textbf{C})$.
\end{itemize}
The deployed CCA transformation model prevents a CCA attacker (e.g., a GJS attacker) from freely choosing any error vector $(\textbf{e}_{0}, \textbf{e}_{1})$ required by the attack procedure and submitting the resulting crafted ciphertext to the receiver (i.e., from crafting a ciphertext based on a malicious plan). This is because the error vector $(\textbf{e}_{0}, \textbf{e}_{1})$ is computed by the one-way hash function $\mathcal{H}$. If the attacker changes the legitimate error vector $\textbf{e}=(\textbf{e}_{0}, \textbf{e}_{1})$, the integrity check at the receiver will fail (i.e., $\mathcal{H}(\textbf{m}^{'})\neq (\textbf{e}_{0}^{'}, \textbf{e}_{1}^{'})$). Therefore, to feed the ciphertext with a desired error vector $\textbf{e}$, the attacker has to find the corresponding vector $\textbf{m}$ such that $\mathcal{H}(\textbf{m})=\textbf{e}$. This imposes a heavy burden on the attacker, since many queries must be submitted to the random oracle $\mathcal{H}$ to identify the corresponding vector $\textbf{m}$. We will discuss this problem in the next section.
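The check that enforces this is only a few lines; the following sketch (the oracle abstraction and names are ours) shows the branch that a crafted ciphertext must survive:
\begin{verbatim}
import numpy as np

def fo_check(oracle_H, m_prime, e_decoded, sigma):
    # FO consistency check: recompute the error vector from m' and compare it
    # with what the decoder returned; on mismatch, fall back to sigma
    # (implicit rejection). A crafted e only survives if H(m') = e.
    if not np.array_equal(oracle_H(m_prime), e_decoded):
        return sigma
    return m_prime
\end{verbatim}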
\section{Weak-Keys in the BIKE Scheme}\label{Section.3}
In this section, we first present a formal definition for the weak-keys that we consider in this work. Then, we discuss the effect of weak-keys on the IND-CCA security of BIKE and review the weak-key structures that have been identified so far.
\subsection{Definition of Weak-Keys}
In this work, we consider a private key of the BIKE scheme as \emph{weak} if decoding of the ciphertexts generated using its corresponding public key results in a much higher DFR than the average DFR of the decoder. Regardless of performance degradation issues, such weak-keys can offer a significant advantage to CCA adversary for conducting a reaction attack (see the next subsection for more details). It is noteworthy that, prior research have indicated the possibilities of recovering weak private keys from the corresponding public key in the QC-MDPC PKE scheme \cite{bardet2016weak}, \cite{9383383}. Specifically, it is shown that there exists weak private keys whose structure can facilitate an adversary in compromising them by applying some linear algebra techniques such as extended Euclidean algorithm on their corresponding public keys. However, those weak structures are not relevant to IND-CCA security (the adversary does not need to conduct a chosen ciphertext attack to compromise those keys) and thus are not consider in this work. Instead, we assume that recovering private key from the corresponding public key is infeasible in the BIKE scheme. Therefore, the attacker needs to conduct a chosen ciphertext attack to recover the private key by leveraging the decoder's reactions.\par
\subsubsection{The Negative Effect of Weak-Keys on DFR}
To gain an intuitive insight into understanding the impact of weak-keys upon decoder's performance, we need an elaboration of the following question.\par
How do the columns of $\textbf{H}$ with a large intersections result in a higher DFR?
To gain an intuitive insight into the impact of weak-keys on the decoder's performance, we need to elaborate on the following question.\par
\emph{How do columns of $\textbf{H}$ with a large number of intersections result in a higher DFR?}\par
To answer this question, we consider $\textbf{H}$ as the parity-check matrix (i.e., private key) in which columns $j$ and $l$ ($j,l\in\{0,1,\ldots,n-1\}, \ j\neq l$) have $m$ non-zero bits at exactly the same positions (i.e., $m$ intersections between the two columns). If $\textbf{H}$ is a normal key (i.e., not a weak-key), it is shown that the largest possible value of $m$ is usually small (e.g., 5) compared with the Hamming weight of each column, i.e., $w/2$ \cite{sendrierexistence}. Now, assume we have a private key $\textbf{H}$ in which $m$ (for the $j$th and $l$th columns) is much larger than that of normal keys. Also, assume that $e_j=0$ and $e_l=1$ in the original error vector $\textbf{e}$. In this case, it is intuitive to imagine that the numbers of unsatisfied parity-check equations (i.e., $upc$) for the $j$th and $l$th bits will be highly correlated, since they have similar connections on the corresponding Tanner graph. In other words, both the $j$ and $l$ bit nodes are involved in almost the same parity-check equations due to the large intersection between them. Thus, in a decoding iteration, if a specific parity-check equation (that involves bit nodes $j$ and $l$) is unsatisfied, it will be counted towards both the $j$th and $l$th bits. Thus, it is highly likely that the decoder returns $e_j=e_l=1$ due to their (correlated) $upc$s being greater than the set threshold. In fact, a real error at the $l$th bit results in a situation that convinces the decoder to incorrectly consider $e_j$ as a set error bit.
Again, in the next iteration, the same procedure is performed; this time the $j$th bit that was mistakenly considered as a set error bit (i.e., the decoder flipped it to $e_j=1$ in the previous iteration while its real value is 0) results in high values for the correlated $upc_j$ and $upc_l$. Thus, the decoder (again) identifies both of them as set error bits and flips their values. Although this corrects $e_j$, it results in an incorrect value for $e_l$. This process is repeated back and forth across the iterations, making the decoder incapable of finding the correct vector $\textbf{e}$ and leading to a decoding failure.\par
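The following toy Python experiment (all parameters and names are ours, chosen only for illustration) makes this correlation explicit: a single real error on bit $l$ produces an almost identical $upc$ count on an overlapping bit $j$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
r, n, w_col, m = 40, 80, 8, 7

# Random sparse parity-check matrix; then force columns j and l to share
# m of their w_col non-zero positions.
H = np.zeros((r, n), dtype=np.uint8)
for col in range(n):
    H[rng.choice(r, w_col, replace=False), col] = 1
j, l = 0, 1
shared = rng.choice(r, m, replace=False)
rest = np.setdiff1d(np.arange(r), shared)
for col in (j, l):
    H[:, col] = 0
    H[shared, col] = 1
    H[rng.choice(rest, w_col - m, replace=False), col] = 1

# A single real error on bit l: the unsatisfied checks it creates are
# counted towards bit j as well, because the two columns overlap.
e = np.zeros(n, dtype=np.uint8); e[l] = 1
s = H @ e % 2                     # syndrome
upc = H.T @ s                     # upc_i = unsatisfied checks touching bit i
print(upc[j], upc[l])             # nearly equal (e.g., 7 vs 8): j looks as guilty as l
\end{verbatim}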
\subsection{Weak-Keys and IND-CCA Security of BIKE}
As stated earlier, BIKE deploys a probabilistic and iterative decoder with a fixed number of iterations to enable the constant-time implementation needed for suppressing any side-channel knowledge available to an adversary \cite{drucker2019constant}. This decoder can fail to successfully decode a ciphertext in the allowed number of iterations. The decoding capability of such probabilistic decoders is generally represented in terms of the (average) DFR. Prior research has shown that a higher DFR of a decoder deployed in QC-MDPC-based schemes (such as BIKE) can facilitate the efficient recovery of the private key through an attack referred to as the GJS attack \cite{guo2016key}. In this attack, some specific formats for the error vector $\textbf{e}$ are used to craft a large group of ciphertexts that are submitted to the decryption oracle (this constitutes the CCA). Then, utilizing the decryption failures, the attacker can recover the private key. As mentioned before, to circumvent this possibility, the BIKE mechanism adopts the FO CCA transformation model \cite{hofheinz2017modular}, which prevents the attacker from cherry-picking the desired error vector $\textbf{e}$ needed for a successful GJS attack (i.e., in BIKE, $\textbf{e}$ is the output of a one-way hash function, thus forbidding attackers from crafting some specifically chosen ciphertexts). However, despite this simple remedy, a lower DFR is still important for ensuring the required level of security (see below for details). \par
For a formal proof of IND-CCA security, we first need to define a $\delta$-correct KEM scheme. Based on the analysis provided in \cite{hofheinz2017modular}, a KEM scheme is $\delta$-correct if
\begin{equation}
\label{eq.delta_cor}
\begin{split}
Pr[\emph{Decaps}(\textbf{C}, \textbf{sk})\neq \textbf{K}_s|(\textbf{sk},\textbf{pk})
\xleftarrow{} \emph{Key\_Gen}, (\textbf{C}, \textbf{K}_s)
\xleftarrow{} \\
\emph{Encaps}(\textbf{pk})
]\leq \delta
\end{split}
\end{equation}
Note that, the term on the left-hand side of the aforementioned inequality is the average DFR (hereinafter denoted as $\overline{DFR}$) taken over the entire key space and all the error vectors. Thus, if $\overline{DFR}\leq \delta$, then the KEM scheme is said to be $\delta$-correct.\par
For a $\delta$-correct KEM scheme, it is shown that the advantage of an IND-CCA adversary $\mathcal{A}$ is upper bounded as,
\begin{equation}\label{eq.IND_CCA}
Adv_{\text{\tiny $KEM$}}^{\text{\tiny $CCA$}}(\mathcal{A})\leq q.\delta + \beta ,
\end{equation}
where $q$ is the number of queries that $\mathcal{A}$ needs to submit to the random oracle model (i.e., the hash function $\mathcal{H}$) to find the valid vector $\textbf{m}$ that yields the desired error vector $\textbf{e}=(\textbf{e}_0,\textbf{e}_1)$ (see Fig.~2). $\beta$ in Eq.~(\ref{eq.IND_CCA}) is a complex term related to IND-CPA security; it is not relevant to the IND-CCA analysis and is thus not considered here.\par
Based on the above definitions, a KEM scheme is IND-CCA secure offering $\lambda$-bit security if $\frac{T(\mathcal{A})}{Adv_{KEM}^{CCA}(\mathcal{A})}\geq 2^{\lambda}$ \cite{hofheinz2017modular}, where $T(\mathcal{A})$ is the running time of $\mathcal{A}$, approximated as $q.t_q$, with $t_q$ representing the running time of a single query. Since typically $t_q<1$, we have $T(\mathcal{A})<q$, so that the security requirement becomes $2^{\lambda}\leq \frac{T(\mathcal{A})}{Adv_{KEM}^{CCA}(\mathcal{A})}\leq \frac{q}{q.\delta}=\frac{1}{\delta}$. Therefore, to provide $\lambda$ bits of IND-CCA security, the KEM scheme must be $\delta$-correct with $\delta\leq 2^{-\lambda}$, or equivalently (using Eq.~(\ref{eq.delta_cor})),
\begin{equation}\label{eq.DFR}
\overline{DFR}\leq 2^{-\lambda}.
\end{equation}
However, for the QC-MDPC codes used in the BIKE scheme, there exists no known mathematical model for an accurate estimation of $\overline{DFR}$ (taken over the whole key space and all error vectors). Instead, the $\overline{DFR}$ corresponding to the needed security level is estimated through experiments with a limited (although sufficiently large) number of ciphertexts and then applying (linear) extrapolation (see \cite{sendrier2020low} for more details). As a result, the claimed DFR may not necessarily be the same as the actual $\overline{DFR}$. In the worst case, we assume that there is a group of keys for which the value of the DFR is high (i.e., the set of weak-keys $\mathcal{K}_w$). If the weak-keys have not been used in the experiments performed for estimating the average DFR, then the actual $\overline{DFR}$ may be larger than the estimated DFR (obtained empirically), such that the condition in Eq.~(\ref{eq.DFR}) is not met. Therefore, the impact of the weak-keys $\mathcal{K}_w$ on $\overline{DFR}$ must be investigated to estimate the actual value of the average DFR and ensure the IND-CCA security of the scheme. To formulate the equation for IND-CCA security in the presence of weak-keys, we consider $|\mathcal{K}_w|$ as the size of $\mathcal{K}_w$ (i.e., the number of weak-keys), $DFR_w$ as the average DFR taken over $\mathcal{K}_w$, $\mathcal{K}_s$ as the set of other keys (i.e., $\mathcal{K}_w\cup \mathcal{K}_s$ is equal to the whole key space $\mathcal{K}$), and $DFR_s$ as the average DFR taken over $\mathcal{K}_s$. In this case, $\overline{DFR}$ becomes
\begin{equation}\label{eq.Ave_DFR}
\overline{DFR}=\eta_s DFR_s + \eta_w DFR_w ,
\end{equation}
where $\eta_s=\frac{|\mathcal{K}_s|}{|\mathcal{K}|}$ and $\eta_w=\frac{|\mathcal{K}_w|}{|\mathcal{K}|}$. By combining Eqs.~(\ref{eq.DFR}) and (\ref{eq.Ave_DFR}), we have,
\begin{equation}\label{eq.condition}
\eta_s DFR_s\leq 2^{-\lambda} - \eta_w DFR_w
\end{equation}
From (\ref{eq.condition}), the modified condition for IND-CCA security is obtained as
\begin{equation}\label{eq.Basic_cond}
\eta_w DFR_w \leq 2^{-\lambda}
\end{equation}
Therefore, to provide $\lambda$ bits of IND-CCA security, the set of weak-keys $\mathcal{K}_w$ must be small enough (compared with $|\mathcal{K}|$) such that $\eta_w DFR_w < 2^{-\lambda}$ (even if $DFR_w$ is significantly larger than $DFR_s$).
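For concreteness, this condition is easy to check in log-scale; the short snippet below (ours, for illustration) evaluates $\eta_w.DFR_w$ for the Type 1, $f=5$ values reported later in Table \ref{Experiments}:
\begin{verbatim}
# Check eta_w * DFR_w <= 2^{-lambda} in log2 units.
def cca_margin(log2_eta_w, log2_dfr_w, lam=128):
    log2_pw = log2_eta_w + log2_dfr_w
    return log2_pw, log2_pw <= -lam

print(cca_margin(-10.225, -96.28))   # (-106.505, False): condition violated
\end{verbatim}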
\subsection{Structure of Weak-Keys in BIKE}
The weak-keys for the BIKE scheme can be determined by adopting the approach proposed in \cite{guo2016key} (i.e., the GJS attack methodology for recovering private keys). In this attack, the attacker primarily targets $\textbf{H}_0$ (i.e., the first block of the private key). If $\textbf{H}_0$ is successfully recovered, then the attacker can easily compute $\textbf{H}_1$ from $\textbf{H}_0$ by performing simple linear algebra operations on $\textbf{G}.\textbf{H}^T=\textbf{0}$. Precisely, using Eq.~(\ref{eq-3}), we can re-write $\textbf{G}.\textbf{H}^T=\textbf{0}$ as $[\textbf{I}_k| \textbf{Q}_{k\times r}].[\textbf{H}_0 \ \textbf{H}_1]^T=\textbf{0}$, which results in $\textbf{H}_{0}^{T}+\textbf{Q}.\textbf{H}_{1}^{T}=\textbf{0}$, since $\textbf{I}_k.\textbf{H}_{0}^{T}=\textbf{H}_{0}^{T}$. Therefore, the attacker can easily obtain $\textbf{H}_1=[\textbf{Q}^{-1}.\textbf{H}_{0}^{T}]^T$ (note that, for $n_0=2$, we have $r=k$).\par
To find $\textbf{H}_0$, the attacker selects the error vector $\textbf{e}$ from a special subset $\boldsymbol{\Psi}_d$ ($d=1,2,\ldots,U$). The parameter $d$ is the distance between two indexes (positions) $i$ and $j$ in the first row of $\textbf{H}_0$ (denoted by $\textbf{h}_0$), which is formally defined as follows:
\begin{eqnarray*}
d(i,j)=min((i-j+r) \ mod \ r, (j-i+r) \ mod \ r) \\
for \ i,j \in \{0,1,2,\ldots,r-1\}
\end{eqnarray*}
For example, considering $r=10$, the distance between the first and last bits is 1 because $d(0,9)=min((19 \ mod \ 10), (1 \ mod \ 10))=min(9,1)=1$. Based on the above definition, $\boldsymbol{\Psi}_d$ is generated as
\begin{equation*}
\begin{split}
\boldsymbol{\Psi}_d=&\{\textbf{e}=(\textbf{e}_0,\textbf{e}_1) \ | \ \textbf{e}_1=\textbf{0}, \exists \ \{p_i\}_{i=1}^{t} \ s.t. \ e_{p_i}=1, \ \ and \\
& \ \ p_{2i}=(p_{2i-1}+d) \ mod \ r \ for \ i=1,2,\ldots, t/2\}
\end{split}
\end{equation*}
Note that the second half of the error vector $\textbf{e}$ selected from $\boldsymbol{\Psi}_d$ is an all-$0$ vector, so the entire Hamming weight $t$ is carried by $\textbf{e}_0$. Each $p_i$ ($i\in\{1,2,\ldots,t\}$) indicates the position of a non-zero bit in $\textbf{e}_0$.\par
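A simplified sampler for $\boldsymbol{\Psi}_d$ can be written as follows (an illustrative sketch with our own names; the GJS attack additionally varies $d$ and collects failure statistics per distance):
\begin{verbatim}
import random

def distance(i, j, r):
    # cyclic distance d(i, j) defined above
    return min((i - j + r) % r, (j - i + r) % r)

def sample_psi_d(r, t, d):
    # e1 = 0; e0 holds t/2 pairs of set bits, each pair at cyclic distance d
    e0 = [0] * r
    pairs = 0
    while pairs < t // 2:
        p = random.randrange(r)
        q = (p + d) % r
        if e0[p] == 0 and e0[q] == 0:   # keep the Hamming weight exactly t
            e0[p] = e0[q] = 1
            pairs += 1
    return e0, [0] * r                   # (e0, e1)
\end{verbatim}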
As demonstrated in prior research \cite{guo2016key}, when the error vectors are selected from $\boldsymbol{\Psi}_d$, there exists a strong correlation between the decoding failure probability and the existence of distance $d$ between the non-zero bits in $\textbf{h}_0$ (i.e., the first row of the first circulant block $\textbf{H}_0$). In other words, if distance $d$ exists between two non-zero bits in $\textbf{h}_0$, then the probability of decoding failure is much smaller in contrast with the case when such a distance $d$ does not exist. Utilizing this important observation, the attacker first (empirically) computes the probability of decoding failure for the various distances $d=1,2,\ldots,U$. This is done by submitting many ciphertexts (generated using the error vectors selected from $\boldsymbol{\Psi}_d$) to the decryption oracle and recording the corresponding result of the decryption (i.e., \emph{successful} or \emph{failed}). Then, each value of $d$ is classified into either the $\{$\emph{existing}$\}$ or $\{$\emph{not existing}$\}$ class based on the obtained failure probability, i.e., a specific value of $d$ with a small failure probability is categorized as \emph{existing}, and vice versa. Based on the obtained categorized distances, the distance spectrum of $\textbf{h}_0$ is defined as follows:
\begin{equation*}
D(\textbf{h}_0)=\{d: \ 1 \leq d \leq U, \ d \in \{existing\} \ in \ \textbf{h}_0\}.
\end{equation*}
Moreover, since a distance $d$ may appear multiple times in $\textbf{h}_0$, the multiplicity of $d$ in $\textbf{h}_0$ is defined as follows:
\begin{eqnarray*}
\mu(d, \textbf{h}_0)=|\{(i,j): \ 0\leq i\leq j\leq r-1, \ h_{0,i}=h_{0,j}=1 \\ and \ d(i,j)=d \}|
\end{eqnarray*}
Finally, based on the obtained distance spectrum $D(\textbf{h}_0)$ and the multiplicity of every distance $d \in D(\textbf{h}_0)$, the attacker can reconstruct $\textbf{h}_0$. To do this, the attacker assigns the first two non-zero bits of $\textbf{h}_0$ to positions 0 and $d_1$, where $d_1$ is the minimum distance in $D(\textbf{h}_0)$. Then, the third non-zero bit is placed (iteratively) at a position such that the two distances between the third position and the previous two positions exist in $D(\textbf{h}_0)$. This iterative procedure continues until all the $w/2$ non-zero bits of $\textbf{h}_0$ are placed at their positions. Note that the attacker needs to perform one or multiple cyclic shifts on the obtained vector to find the actual vector $\textbf{h}_0$. This is because the first non-zero bit was placed at position 0, which is not necessarily the case in $\textbf{h}_0$.\par
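A greedy sketch of this reconstruction is given below (our simplification: the full attack also exploits the multiplicities and backtracks when the greedy choice fails):
\begin{verbatim}
def reconstruct_h0(D, w_half, r):
    # Greedy placement: first two bits at 0 and d_1 = min(D); every further
    # bit must realize only distances already present in the spectrum D.
    positions = [0, min(D)]
    while len(positions) < w_half:
        for cand in range(r):
            if cand in positions:
                continue
            if all(min((cand - p) % r, (p - cand) % r) in D for p in positions):
                positions.append(cand)
                break
        else:
            return None        # greedy dead end; the real attack backtracks here
    return sorted(positions)   # h0's support up to a cyclic shift
\end{verbatim}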
The structure of weak-keys in BIKE is determined based on the concepts introduced in the GJS attack (i.e., distance, distance spectrum, and multiplicity of a distance). In this regard, three types of weak-keys have been specified in \cite{sendrierexistence}, which are detailed below.
\subsubsection{Type 1}
\label{t1}
In the first type, considering the polynomial representation of binary vectors, the weak-key $\textbf{h}=(\textbf{h}_0,\textbf{h}_1)$ with $f$ $d$-consecutive non-zero positions and Hamming weight $w$ is defined as \cite{sendrierexistence},
\begin{equation}\label{eq-Type_1}
\textbf{h}_i =\phi_d(x^l[(1+x+x^2+\ldots+x^{f-1})+\textbf{h}_{i}^{'}]), \ \ i\in\{0,1\},
\end{equation}
where $d \in\{1,2,3,\ldots,\lfloor r/2\rfloor\}$ is the distance between non-zero bits of the $f$-bit pattern in the weak-key, $l\in \{0,1,2,\ldots,r-1\}$ determines the beginning position of the $f$-bit pattern, and $\phi_d()$ is a mapping function that replaces $x$ with $x^d$ and thus results in distance $d$ between any two successive $1'$s in $(1+x+x^2+\ldots+x^{f-1})$.\par
Note that, to construct the weak-key in this format, each block $\textbf{h}_{i}$, $i\in\{0,1\}$ (with a Hamming weight of $w/2$), is first divided into two sections. The first section is an $f$-bit block in which all the $f$ bits are set to $1$ (i.e., $1+x+x^2+\ldots+x^{f-1}$ in polynomial form). The second section is $\textbf{h}_{i}^{'}$, an $(r-f)$-bit block with Hamming weight $(w/2-f)$ and randomly chosen non-zero bits. Then, the two sections are concatenated and an $l$-bit cyclic shift is applied (using the $x^l$ term). Finally, by applying $\phi_d$, the $f$ 1-consecutive non-zero bits of the first section are mapped to a block of $f$ $d$-consecutive non-zero bits. Note that, in this type of weak-keys, considering the distances $jd$ ($j\in\{1,2,3,\ldots, f-1\}$), a lower bound for the multiplicity metric $\mu$ can be obtained as $\mu(jd,\textbf{h}_i)\geq f-j$ for $d\in \{1,2,3,\ldots,\lfloor r/2\rfloor\}$. \par
To compute $|\mathcal{K}_w|$, first consider the second section of a weak-key defined using (\ref{eq-Type_1}), i.e., $\textbf{h}_{i}^{'}$. It is an $(r-f)$-bit vector of Hamming weight $w/2-f$. Thus, we have ${r-f \choose w/2-f}$ options for $\textbf{h}_{i}^{'}$. For the first part of the weak-key (i.e., the $f$-bit pattern), there is only one option, as all the $f$ bits are set to $1$. The entire (concatenated) package is subsequently shifted (cyclically) by $l$ bits, where $l\in \{0,1,2,\ldots,r-1\}$. Thus, we have $r$ different options for the cyclic shifts. Finally, there can be $\lfloor r/2\rfloor$ different mappings for $\phi_d()$, as $d \in\{1,2,3,\ldots,\lfloor r/2\rfloor\}$. Consequently, for $|\mathcal{K}_w|$, we have \cite{sendrierexistence}:
\begin{equation} \label{eq-Num_1}
|\mathcal{K}_w|(f)\leq 2r\lfloor r/2\rfloor{r-f \choose w/2-f}
\end{equation}
Note that the factor of 2 is needed in Eq.~(\ref{eq-Num_1}) because there are two circulant blocks in $\textbf{h}$. Finally, $\eta_w$ for this type is obtained as
\begin{equation}
\eta_w (f)= \frac{|\mathcal{K}_w|(f)}{{r \choose w/2}}=\frac{2r\lfloor r/2\rfloor{r-f \choose w/2-f}}{{r \choose w/2}}
\label{eq:t1}
\end{equation}
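As a numerical cross-check, Eq.~(\ref{eq:t1}) is easy to evaluate in log-scale; the short Python fragment below (ours, for illustration) should reproduce, up to rounding, the $log_2(\eta_w)$ column of Table \ref{Experiments} for the level-1 parameters:
\begin{verbatim}
from math import comb, log2

def log2_eta_w_type1(r, w, f):
    # Eq. (11): eta_w(f) = 2 * r * floor(r/2) * C(r-f, w/2-f) / C(r, w/2)
    num = 2 * r * (r // 2) * comb(r - f, w // 2 - f)
    return log2(num) - log2(comb(r, w // 2))

for f in (5, 10, 15, 20, 25, 30, 35, 40):
    print(f, round(log2_eta_w_type1(12323, 142, f), 3))
\end{verbatim}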
\subsubsection{Type 2}
\label{t2}
In the second type of weak-keys identified in \cite{sendrierexistence}, the focus is on having a single distance with a high multiplicity factor. In this type, the weak-key $\textbf{h}=(\textbf{h}_0,\textbf{h}_1)$ with Hamming weight $w$ and parameter $m$ has the multiplicity $\mu(d,\textbf{h}_i)=m$ for a distance $d \in\{1,2,3,\ldots,\lfloor r/2\rfloor\}$ and $i\in\{0,1\}$.
If $m=w/2-1$, the number of weak-keys $|\mathcal{K}_w|$ is upper bounded by $2r\lfloor r/2\rfloor$. This is because, for $m=w/2-1$, the distance between all the $w/2$ non-zero bits of $\textbf{h}_i$ is $d$. Unlike the first type, the $\textbf{h}_i$ blocks ($i\in\{0,1\}$) do not have a second section $\textbf{h}_{i}^{'}$. Thus, the term ${r-f \choose w/2-f}$ that appeared in Eq. (\ref{eq-Num_1}) is replaced with 1.\par
However, for $m<w/2-1$, the upper bound for $|\mathcal{K}_w|$ is obtained using a more complicated approach. In this regard, consider the general format $(z_1,o_1,z_2,o_2,\ldots, z_s,o_s)$ for $\textbf{h}_i$ ($i\in \{0,1\}$) that starts with $z_1$ $0'$s followed by $o_1$ $1'$s followed by $z_2$ $0'$s, etc., ($z_i, o_i>0$), i.e., $\sum_{i=1}^{s}o_i=w/2$ and $\sum_{i=1}^{s}z_i=r-w/2$. In this case, it is shown that the upper bound for $|\mathcal{K}_w|$ is \cite{sendrierexistence};
\begin{equation}
\begin{split} |\mathcal{K}_w|\leq 2\lfloor r/2\rfloor \sum_{z_1=1}^{(r-w+m+1)}\sum_{o_1=1}^{(m+1)}(o_1+z_1){w/2-o_1-1\choose s-2}\\
{r-w/2-z_1-1\choose s-2}
\label{eq:t2}
\end{split}
\end{equation}
where $s$ is the number of $z_i$ and $o_i$ blocks.\par
\noindent
Eq.~(\ref{eq:t2}) can be proved by applying the \emph{stars and bars} principle \cite{starsbars} on the sets $\{o_i\}_{i=1}^{s}$ and $\{z_i\}_{i=1}^{s}$ separately. According to this principle, the number of $b$-tuples of positive integers ($x_1,x_2,\ldots,x_b$) whose sum is $N$ (i.e., $\sum_{i=1}^{b}x_i=N$) is
\begin{equation} \label{eq-star_bar}
N-1 \choose b-1
\end{equation}
Considering the set $\{o_i\}_{i=1}^{s}$, the value of $o_1$ (the number of bits in the first container) varies from $1$ to $m+1$, and for each value of $o_1$ the principle is applied on the remaining $s-1$ containers, i.e., $b=s-1$ (note that we need the condition $o_1\leq m+1$ to meet $\mu(1,\textbf{h}_i)=m$). Therefore, for every value of $o_1$, we have $b=s-1$ and $N=w/2-o_1$ (because $\sum_{i=1}^{s}o_i=w/2$ and $o_1$ $1$'s are already allocated to the first container). This results in the term $w/2-o_1-1\choose s-2$ in Eq.~(\ref{eq:t2}) (according to (\ref{eq-star_bar})). Similarly, for the set $\{z_i\}_{i=1}^{s}$, for every value of $z_1$ ($1\leq z_1\leq r-w+m+1$), we have $b=s-1$ and $N=r-w/2-z_1$, which results in the term $r-w/2-z_1-1\choose s-2$ in Eq.~(\ref{eq:t2}). The term $(o_1+z_1)$ accounts for the number of different circular shifts that are applicable in each case.\par
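The counting principle itself is easy to verify by brute force for small parameters (an illustrative check, with our own helper names):
\begin{verbatim}
from itertools import product
from math import comb

def count_tuples(N, b):
    # brute-force count of b-tuples of positive integers summing to N
    return sum(1 for xs in product(range(1, N + 1), repeat=b) if sum(xs) == N)

for N, b in [(6, 3), (8, 4), (10, 2)]:
    assert count_tuples(N, b) == comb(N - 1, b - 1)   # stars and bars
\end{verbatim}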
\subsubsection{Type 3}
\label{t3}
Unlike the previous types that consider a single block of the parity-check matrix, this type of weak-keys is defined such that the columns of $\textbf{H}_0$ and $\textbf{H}_1$ (in the parity-check matrix) jointly create an ambiguity for the BF-based decoder, resulting in a high DFR. If column $j$ of $\textbf{H}_0$ and column $l$ of $\textbf{H}_1$ both have $m$ non-zero bits at exactly the same positions (i.e., $m$ intersections between the two columns) and $m$ is large, their numbers of unsatisfied parity-check equations (i.e., $upc_j$ and $upc_l$, counted during the decoding procedure) will be highly correlated. In this case, if $e_j=1$ or $e_l=1$ in the real error vector $\textbf{e}$, the high level of correlation can prevent the decoder from finding $\textbf{e}$ within the allowed number of iterations.
The upper bound for $|\mathcal{K}_w|$ in this type of weak keys is obtained as \cite{sendrierexistence}:
\begin{equation}
|\mathcal{K}_w|\leq r{w/2\choose m}{r-m\choose w/2-m}
\label{eq:t3}
\end{equation}
Firstly, $m$ positions should be chosen from the set of $w/2$ positions of the non-zero bits. Then, once the positions of the $m$ non-zero bits are determined, the remaining $w/2-m$ positions can be chosen from the remaining $r-m$ available positions. Finally, each of the $r$ circular shifts of the obtained vector is also a weak-key, which results in the term $r$ in Eq.~(\ref{eq:t3}). Note that there is no factor of $2$ in Eq.~(\ref{eq:t3}) (compared with Eqs.~(\ref{eq-Num_1}) and (\ref{eq:t2})) because the second block of $\textbf{h}$ follows the same structure as the first block, such that $|\textbf{h}_0\star x^l\textbf{h}_1|=m$ for $l\in\{0,1,2,\ldots, r-1\}$, where $\star$ indicates the component-wise product.
\section{Experimental Setup}\label{Section.4}
In this section, we first provide some key technical details about our BIKE implementation. Then, we present the results of our extensive experiments and provide a relevant discussion.
\subsection{Our Implementation}
\label{impl}
We implemented the key generation, encryption, and decoding modules of the BIKE scheme in MATLAB. We used the BIKE system parameters $(r,w,t)=(12323, 142, 134)$ suggested in the latest technical specification of BIKE \cite{aragon2017bike} for the $\lambda=128$-bit security level. The simulations were performed on eight powerful servers equipped with Intel(R) Xeon(R) 2.5GHz CPUs (6 processors) and 64GB RAM. We used the extrapolation technique proposed in \cite{sendrier2020low} to estimate the DFR of the BGF decoder for the weak-keys. In the following, we provide some technical details on our implementation of the key generation, encryption, and decoding procedures.\par
\emph{Key Generation}:
The most difficult challenge in the implementation of the key generation module is to perform the polynomial inversion operation (over $\mathbb{F}_2$), which is needed to compute the public key from the private key (see Eq.~(\ref{eq-3}) and Section \ref{section-key_gen}). Note that, due to the large value of $r$, computing the inverse in the matrix domain is not an efficient approach and would contribute a large computational overhead to our analysis. To have a lightweight inverse operation, we adopted the latest extension of the Itoh-Tsujii Inversion (ITI) algorithm \cite{itoh1988fast,drucker2020fast} for polynomial inversion. This algorithm is based on Fermat's little theorem, which gives the inverse of the polynomial $\textbf{a}=(a_0+a_1x+a_2x^2+\ldots+a_{r-1}x^{r-1})$ as $\textbf{a}^{-1}= \textbf{a}^{2^{r-1}-2}$. The ITI algorithm provides an efficient calculation of $\textbf{a}^{2^i}$ through $i$ cyclic shifts of $\textbf{a}$'s binary vector. To utilize this, the adopted extension of the ITI algorithm uses a novel technique to convert $2^{r-1}-2$ into a series of $2^i$ sub-components that are computed using easy-to-implement cyclic shifts.\par
Based on this approach, the adopted algorithm needs to perform $\lfloor log(r-1)\rfloor + wt(r-2)-1$ multiplications and $\lfloor log(r-2)\rfloor + wt(r-2)-1$ squaring operations to compute the inverse, where $wt(r-2)$ indicates the Hamming weight of $r-2$ written in binary format. Thus, it is a scalable algorithm in terms of $r$ and much more efficient than inverse computation in the matrix domain.\par
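For intuition, a bit-packed toy version of the Fermat-based inversion is shown below (plain square-and-multiply, not the optimized ITI schedule; the helper names and the toy value $r=11$, for which 2 is primitive modulo $r$, are ours):
\begin{verbatim}
def pmul(a, b, r):
    # multiply in F_2[x]/(x^r - 1); ints with bit i holding the coefficient of x^i
    mask, c = (1 << r) - 1, 0
    for i in range(r):
        if (a >> i) & 1:
            c ^= ((b << i) | (b >> (r - i))) & mask   # add x^i * b, cyclically
    return c

def pinv(a, r):
    # a^(2^(r-1) - 2) = prod_{i=1}^{r-2} a^(2^i), by plain square-and-multiply;
    # the ITI schedule computes the same power with only
    # floor(log(r-1)) + wt(r-2) - 1 multiplications.
    result, sq = 1, pmul(a, a, r)
    for _ in range(r - 2):
        result = pmul(result, sq, r)
        sq = pmul(sq, sq, r)
    return result

r, a = 11, 0b1011                 # a = 1 + x + x^3 (odd weight, invertible)
assert pmul(a, pinv(a, r), r) == 1
\end{verbatim}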
We also need polynomial multiplication before generating the public key; the technical solution we developed is explained in the next paragraph. Finally, for each set of experiments, we saved the generated public and private keys in a file such that the keys can be easily accessed by the encryption and decoding scripts, respectively.\par
\emph{Encryption}: To compute the ciphertexts, polynomial multiplication is the basic operation performed in the encryption module (see Section \ref{section-Encaps}). We developed the following approach to compute the multiplication of two polynomials $\textbf{a}=(a_0+a_1x+a_2x^2+\ldots+a_{r-1}x^{r-1})$ and $\textbf{b}=(b_0+b_1x+b_2x^2+\ldots+b_{r-1}x^{r-1})$ of degree $r-1$.\par
Assuming that $\textbf{a}.\textbf{b}=\textbf{c}=(c_0+c_1x+c_2x^2+\ldots+c_{r-1}x^{r-1})$, we computed the binary coefficients of $\textbf{c}$ as $c_0=a_0b_0\oplus a_1b_{r-1}\oplus \ldots \oplus a_{r-1}b_1$, $c_1=a_0b_1\oplus a_1b_0\oplus \ldots \oplus a_{r-1}b_2$, $\ldots$, $c_{r-1}=a_0b_{r-1}\oplus a_1b_{r-2}\oplus \ldots \oplus a_{r-1}b_0$. We observe that
\begin{equation} \label{eq-MUL}
c_i=\sum_{j=0}^{r-1}a_jb_{(i-j+r) \ mod \ r} \ \ \ mod \ 2 \ \ for \ i\in \{0,1,2,\ldots,r-1\}
\end{equation}
We implemented Eq.~(\ref{eq-MUL}) to perform the multiplications required for computing ciphertext $\textbf{C}_0$. The same approach is also used to perform the polynomial multiplications in the key generation module.\par
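For illustration, Eq.~(\ref{eq-MUL}) can be transcribed literally into Python as follows (an $O(r^2)$ sketch; our MATLAB code implements the same formula):
\begin{verbatim}
def poly_mul_direct(a, b):
    # literal transcription of Eq. (18); a and b are lists of r bits
    r = len(a)
    return [sum(a[j] * b[(i - j + r) % r] for j in range(r)) % 2
            for i in range(r)]
\end{verbatim}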
\emph{Decoding}: We implemented a BGF decoder based on Algorithm 1 provided in \cite{aragon2017bike} with a 128-bit security level, i.e., $\lambda=128$, $NbIter=5$, $\tau=3$. In each iteration, the syndrome vector is first updated using the received ciphertext/updated error vector and the private key. Then, the number of unsatisfied parity-check ($upc$) equations for each bit node $i \in \{0,1,\ldots,n-1\}$ is counted and compared with the threshold $T=max(0.0069722.|\textbf{S}| + 13.530, 36)$, where $|\textbf{S}|$ is the Hamming weight of the syndrome vector updated at each iteration. If it is larger than $T$, the relevant bit $i$ in the error vector is flipped. The next iteration is executed using the updated error vector. Finally, after $NbIter=5$ iterations, if the updated syndrome $\textbf{S}$ is equal to $\textbf{e}\textbf{H}^T$, the vector $\textbf{e}$ is returned as the recovered error vector; otherwise, the decoder returns \emph{failure}.\par
Note that, in the first iteration of the BGF decoder, two additional steps are performed that are related to the Black and Gray lists of the bit nodes. These two lists are created and maintained in the first iteration to keep track of those bit flips that are considered uncertain. The Black list includes those bit nodes that have been just flipped (i.e., $upc>T$) while the Gray list maintains the index of those bit nodes for which $upc$ is very close to the threshold such that $upc>T-\tau$. Then, to gain more confidence in the flipped bits, a Black/Gray bit node is flipped if its updated $upc$ is larger than the empirically set threshold $(w/2+1)/2+1$.
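A single simplified iteration of this loop can be sketched as follows (the Black/Gray bookkeeping of the first BGF iteration is omitted, and the names are ours):
\begin{verbatim}
import numpy as np

def bf_iteration(H, s, e):
    # one simplified bit-flipping step on an r x n parity-check matrix H,
    # current syndrome s, and current error estimate e
    upc = (H.T @ s).astype(int)                       # unsatisfied checks per bit
    T = max(0.0069722 * int(s.sum()) + 13.530, 36)    # threshold from the text
    flips = (upc > T).astype(np.uint8)
    e = e ^ flips                                     # flip the suspicious bits
    s = (s + H @ flips) % 2                           # syndrome shifts by H.flips
    return e, s.astype(np.uint8)
\end{verbatim}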
\begin{figure*}[t!]
\centering
\includegraphics[width=0.3\textwidth, height=1.3in]{figs/3-a.jpg}
\includegraphics[width=0.3\textwidth, height=1.3in]{figs/3-b.jpg}
\includegraphics[width=0.3\textwidth, height=1.3in]{figs/3-c.jpg}\\
\includegraphics[width=0.3\textwidth, height=1.3in]{figs/3-d.jpg}
\includegraphics[width=0.3\textwidth, height=1.3in]{figs/3-e.jpg}
\includegraphics[width=0.3\textwidth, height=1.3in]{figs/3-f.jpg}
\caption{Result of linear extrapolation technique proposed in \cite{sendrier2020low} for estimation of DFR with varying values of $f$. (a) $f=5$, (b) $f=10$, (c) $f=15$, (d) $f=20$, (e) $f=25$, (f) $f=30$.}
\label{fig:DFR}
\end{figure*}
\subsection{Experimental Methodology}
As mentioned before, there exists no formal mathematical model for the precise computation of the DFR in BF-based decoders. To circumvent this, in prior works, the DFR is estimated empirically. We adopt a similar empirical approach for DFR estimation in this work, but with an emphasis on weak-keys, a feature that has been overlooked in prior works, specifically in the context of BGF decoders (i.e., the decoder recommended for the BIKE mechanism submitted to NIST). To conduct our analysis and visualize the impact of weak-keys upon BIKE (i.e., the BGF decoder), we leveraged our MATLAB implementation detailed above in Section \ref{impl}. We started our analysis by crafting Type 1 weak-keys (see Section \ref{t1}) for varying values of $r$ (selected such that 2 is primitive modulo $r$) and by incrementing the parameter $f$ from 5 to 40 in steps of 5 for each value of $r$.
Ideally, one needs to perform the analysis at the value of $r$ that results in the DFR corresponding to the required level of security. This is because prior research has shown that the average DFR must be upper bounded by $2^{-\lambda}$ to ensure $\lambda$ bits of IND-CCA security \cite{sendrier2020low}. Therefore, for 128 bits of security (i.e., the minimum requirement for NIST standardization), the DFR of the deployed decoder should be $2^{-128}$. In other words, $2^{128}$ ciphertexts must be generated and applied to the decoder to record a single failure, which is impracticable even on a powerful and efficient computing platform. In view of this bottleneck, prior research (see \cite{sendrier2020low}) has resorted to extrapolation techniques applied to the DFR curve obtained with some small values of $r$ (small compared with that needed for a DFR of $2^{-128}$, but sufficiently large to estimate the overall trend of the DFR). This technique is based on the assumption (supported by empirical data) that $log(DFR_{\lambda}(r))$ is a concave and decreasing function for $DFR_{\lambda}(r)\geq 2^{-\lambda}$. More precisely, in this approach, the DFR is empirically obtained for some smaller values of $r$ (which result in relatively large DFRs that can be measured using simulation), and then the last two points on the DFR curve are linearly extrapolated to obtain the third point that corresponds to the desired value of $r$ needed for the target security level (e.g., $r=12323$ in the BIKE BGF decoder for 128-bit security).
We adopt a similar methodology for our analysis --- i.e., for each value of $f$ (i.e., 5 to 40), we compute DFR with two relatively small values of $r$ before extrapolating them to $r = 12323$ (i.e., the same $r$ value proposed in BIKE \cite{BIKE} corresponding to DFR of $2^{-128}$). Precisely, we compute DFR at each point with at least 1000 failures, ensuring the confidence interval of 95\% \cite{sendrier2020low}. Our analysis revealed that, as per expectations, $DFR_{w}$ increases with $f$, thereby allowing us to move the tested values of $r$ from $r_1=9739$ and $r_2=9817$ (for $f=5, 10, 15$, and $20$) to $r_1=10103$ and $r_2=10181$ (for $f=25$), and $r_1=10181$ and $r_2=10273$ (for $f=30$). Moreover, for $f=35$ and $f=40$, since we expected large values for $DFR_w$, we did not need to perform the extrapolation approach, i.e., DFR was directly measured at $r=12323$.
Once we obtain DFR for each value of $f$ (5-40), Eq.~(\ref{eq.Basic_cond}) suggests that for providing the $\lambda$-bit of IND-CCA security, the $P_w = \eta_w.DFR_w$ term must be smaller than $2^{-\lambda}$, where $\eta_w= \frac{\mathcal{|K}_w|}{\mathcal{|K|}}$. If $P_w$ is less than $2^{-128}$ (i.e., for 128 bit security), then weak-keys have no significant impact upon DFR and thus on the BIKE IND-CCA security. Otherwise (i.e., $P_w$ is greater than $2^{-128}$), the IND-CCA security of BIKE is impacted by weak-keys and thus can potentially be of concern.
\subsection{Results}
Fig.~\ref{fig:DFR} presents the actual $DFR_w$ (i.e., $log_2(DFR_w)$) for the two values of $r$ that we tested for each value of $f$ (i.e., 5 to 40), along with the results of the linear extrapolation for $r = 12323$ (i.e., the proposed value in BIKE \cite{BIKE} for 128-bit security). As is evident from Figure \ref{fig:DFR}, the DFR corresponding to $r = 12323$ (i.e., obtained through linear extrapolation) increases with $f$ (see $log_2(DFR_w) = -96.28$ for $f=5$ vs. $log_2(DFR_w) = -18.99$ for $f= 30$). For larger values of $f$, i.e., 35 and 40, the corresponding values of $DFR_w$ are observed to be 0.8 and 1 ($log_2(DFR_w)$ of $-0.32$ and 0), respectively, even when no extrapolation is performed and the decoder is tested directly with $r=12323$. To gain insight into the IND-CCA security of the BIKE mechanism in the presence of weak-keys, we are interested in finding the values of the term $P_w = \eta_w.DFR_w$. In accordance with our prior discussion presented in sub-sections \ref{t1} - \ref{t3} (see Eqs.~(\ref{eq:t1})-(\ref{eq:t3})), $\eta_w$ in this equation depends on $f$ (in Type 1 weak-keys) and on the multiplicity parameter (in Type 2 and 3 weak-keys). The values of $\eta_w$ for Type 1 - Type 3 weak-keys with varying parameters (i.e., $f$ and the multiplicity factor) are computed using Eqs.~(\ref{eq:t1})-(\ref{eq:t3}) and shown in Fig.~\ref{fig:Eta}. As expected, for large parameters $f$ and $m$, the number of weak-keys is negligible compared with the size of the whole key space. This results in negligible values of $\eta_w$ at larger values of $f$ and $m$.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.3\textwidth, height=1.2in]{figs/4-a.jpg}
\includegraphics[width=0.3\textwidth, height=1.2in]{figs/4-b.jpg}
\includegraphics[width=0.3\textwidth, height=1.2in]{figs/4-c.jpg}
\caption{ $\eta_w$ for three types of weak-keys: (a) Type 1, (b) Type 2, and (c) Type 3.}
\label{fig:Eta}
\end{figure*}
\begin{table}[!ht]
\caption{The effect of Type I weak-keys on IND-CCA security of the BIKE scheme with parameters $(r,w,t)=(12323, 142, 134)$ and $\lambda=128$ (see Equation (\ref{eq.Basic_cond})).}
\label{Experiments}
\begin{center}
\resizebox{0.42\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|}
\hline
\bfseries $f$ & \bfseries $log_2(\eta_w)$ & \bfseries $log_2(DFR_w)$& \bfseries $\eta_w.DFR_w$\\
\hline
5 & -10.225 & -96.28 & $2^{-106.50}$\\
\hline
$10$ & $-48.168$ & $-93.34$ & $2^{-141.51}$\\
\hline
15 & -86.6952 & -79.99 & $2^{-166.69}$\\
\hline
20 & -125.8586 & -72.14 & $2^{-198.00}$\\
\hline
25 & -165.7205 & -60.91 & $2^{-226.63}$\\
\hline
30 & -206.3566 & -18.99 & $2^{-225.35}$\\
\hline
35 & -247.8609 & -0.32 & $2^{-248.18}$ \\
\hline
40 & -290.3535 & 0 & $2^{-290.35}$\\
\hline
\end{tabular}%
}
\end{center}
\end{table}
Table \ref{Experiments} summarizes the impact of Type 1 weak-keys on the IND-CCA security of the BIKE scheme. As is evident from Table 2, for $f=5$, the term $P_w = \eta_w.DFR_w$ results in a value that is not appropriate for ensuring 128 bits of security (i.e., the minimum requirement for NIST standardization). Precisely, for $f=5$, the corresponding value of $P_w$ is $2^{-106.50}\nleq2^{-128}$, which suggests that the corresponding security level with this parameter is not adequate for NIST standardization. In fact, the contribution of Type 1 weak-keys with $f=5$ to the average DFR of the decoder is larger than the maximum allowed value. To re-affirm our observations, we repeated our experiments for $f=5$, this time with $r=10009$, which is closer to $r=12323$, and performed the extrapolation procedure with $r=9817$ and $r=10009$. The result shows that $log_2(DFR_w)$ (at $r=12323$) in this case increases further to $-80.5$ as compared with the previously obtained value of $-96.28$. This again re-confirms that the term $P_w = \eta_w.DFR_w$ in this case is greater than the required value of $2^{-128}$ (i.e., $2^{-90.73}$ in this case). It is noteworthy that we only obtained 13 failures out of 0.3M ciphertexts in this case (vs. 1000 in the previous case) due to the larger value of $r$. However, our repeated experiments with decoders suggest that the failure rate generally stays consistent when tested with sufficiently large numbers of ciphertexts. Also, note that the overall contribution of Type 1 weak-keys to the average DFR is the sum of the values shown in the last column of Table 2 (as well as the values for other $f$s that we have not considered in this work, e.g., $f=6, 7, 8$, etc.), which may further impact the overall average DFR negatively. However, the first row of Table 2 alone suffices to demonstrate the negative effect of weak-keys on the IND-CCA security of the BIKE scheme, which may need immediate attention from the research community prior to NIST standardization. Note that we have not repeated our experiments for Type 2 and 3 weak-keys. We expect similar results for small values of $m$ in Types 2 and 3 (e.g., $m=8$), since their dependent parameter (i.e., the multiplicity) exhibits a correlation with the $f$ of Type 1 weak-keys (see Fig.~\ref{fig:Eta}). We leave this analysis for future work to obtain definitive insights.
\section{A Key-Check Algorithm}\label{Section.5}
In light of the analysis above, herein we propose an algorithm that can potentially supplement the BIKE mechanism in selecting private keys that are not weak, and thus can aid in ensuring IND-CCA security. The proposed algorithm is based on the structure of each type of weak-keys reviewed in Section \ref{Section.3}. The detailed working of the algorithm is depicted in Algorithm 1, and each involved step is explained below. \par
Assume that $\textbf{h}=(\textbf{h}_0 \ \textbf{h}_1)$ indicates a private key generated by a user, where $\textbf{h}_i$ ($i \in\{0,1\}$) is a polynomial with maximum degree $r-1$. We consider each polynomial $\textbf{h}_i$ as $\textbf{h}_i=\sum_{j\in Supp(\textbf{h}_i)}x^j$, where $Supp(\textbf{h}_i)=\{p_{1}^{(i)},p_{2}^{(i)},\ldots,p_{w/2}^{(i)}\}$ is called the support of $\textbf{h}_i$ and includes the positions of the non-zero coefficients in $\textbf{h}_i$ (or the non-zero bits in the corresponding binary vector). The proposed key-check algorithm takes $\textbf{h}_i$ ($i \in\{0,1\}$) as the input and returns either $Weak$ or $Normal$ as the output. It first initializes the distance vector $\textbf{D}=\{D_1,D_2,\ldots,D_{\lfloor r/2\rfloor}\}$ by assigning 0 to all its elements, i.e., $D_d=0$ ($d\in\{1,2,\ldots,\lfloor r/2\rfloor\}$). Note that, at the end of the algorithm, every element $D_d$ will indicate the multiplicity of distance $d$ ($d\in\{1,2,\ldots,\lfloor r/2\rfloor\}$) in the private key. The largest element of $\textbf{D}$ is then compared with a specific threshold (e.g., 5, 10, etc.) to decide whether the key should be considered $Weak$ or $Normal$. \par
\begin{table}
\label{tab:freq}
\resizebox{0.5\textwidth}{!}{%
\begin{tabular}{l}
\hline
\textbf{Algorithm 1:} Key-Check Algorithm\\
\hline
\textbf{Input:} Multiplicity threshold $T$, $\textbf{h}_i=\sum_{j\in Supp(\textbf{h}_i)}x^j$ for $i\in\{0,1\}$ \\
\ \ \ \ \ \ \ \ \ \ and $Supp(\textbf{h}_i)=\{p_{1}^{(i)},p_{2}^{(i)},\ldots,p_{w/2}^{(i)}\}$, $r$, and $w$\\
\textbf{Output:} $Weak$ or $Normal$\\
1: \textbf{for} $i=0:1$\\
2: \ \ \ $\textbf{D}=\{D_1, D_2, \ldots, D_{\lfloor r/2\rfloor}\}=\textbf{0}$\\
3: \ \ \ \textbf{for} $j=1:w/2$ \\
4: \ \ \ \ \ \ \textbf{for} $k=j+1:w/2$ \\
5: \ \ \ \ \ \ \ \ \ $temp=distance(p_j^{(i)},p_k^{(i)})$ \\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $=min[(p_j^{(i)}-p_k^{(i)}+r \ mod \ r), (p_k^{(i)}-p_j^{(i)}+r \ mod \ r)]$\\
6: \ \ \ \ \ \ \ \ \ $D_{temp}=D_{temp} + 1$\\
7: \ \ \ \textbf{if} $max(\textbf{D}) > T:$\\
8: \ \ \ \ \ \ \textbf{return} $Weak$ \\
9: \textbf{for} $j=1:w/2$\\
10: \ \ \ \textbf{for} $k=1:w/2$\\
11: \ \ \ \ \ \ \textbf{if} $|\textbf{h}_0\star x^{p_j^{(0)}-p_k^{(1)}}\textbf{h}_1|>T$ \\
12: \ \ \ \ \ \ \ \ \ \textbf{return} $Weak$ \\
13: \textbf{return} $Normal$
\end{tabular}
}
\end{table}
For every block $\textbf{h}_i$ ($i \in \{0,1\}$), the algorithm computes the distance between the position of every non-zero coefficient (specified by $p_j^{(i)}$) and the positions of the remaining non-zero coefficients located to its right-hand side (note that the \textbf{for} loop in line 4 starts at $j+1$). Then, the multiplicity counter $D_{temp}$ associated with the computed distance $temp$ is increased by 1. This is repeated for all the $w/2$ non-zero coefficients of $\textbf{h}_i$. Finally, if the maximum element of $\textbf{D}$ is larger than the multiplicity threshold $T$, the key is identified as a weak-key.\par
Lines 9-13 of the algorithm check the key against the weak-key structure of Type 3. In Type 3, the component-wise product of block $\textbf{h}_0$ and a $\sigma$-bit shift of block $\textbf{h}_1$ must have a size larger than $T$, where $\sigma$ is the difference between the position of a non-zero coefficient in $\textbf{h}_0$ (specified by $p_j^{(0)}$) and the position of a non-zero coefficient in $\textbf{h}_1$ (specified by $p_k^{(1)}$). In fact, applying a circular shift of $p_j^{(0)}-p_k^{(1)}$ bits on $\textbf{h}_1$ (by applying the $x^{p_j^{(0)}-p_k^{(1)}}$ term) will move the non-zero coefficient of $\textbf{h}_1$ located at position $p_k^{(1)}$ to position $p_j^{(0)}$. In this case, the two polynomials ($\textbf{h}_0$ and $x^{p_j^{(0)}-p_k^{(1)}}\textbf{h}_1$) will have a non-zero coefficient at the same position $p_j^{(0)}$, which may correspond to a high number of intersections between the associated columns in the parity-check matrix. Finally, if the multiplicity of the intersections stays below the threshold, the key is identified as a normal key.
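For reference, a direct Python transcription of Algorithm 1 (with our own helper names) is given below:
\begin{verbatim}
def key_check(supp0, supp1, r, T):
    # Python transcription of Algorithm 1; supp0/supp1 are the supports of
    # h0 and h1 (positions of their w/2 non-zero coefficients).
    def dist(i, j):
        return min((i - j + r) % r, (j - i + r) % r)

    # lines 1-8: any single distance with multiplicity above T (Types 1 and 2)
    for supp in (supp0, supp1):
        D = {}
        for a in range(len(supp)):
            for b in range(a + 1, len(supp)):
                d = dist(supp[a], supp[b])
                D[d] = D.get(d, 0) + 1
        if D and max(D.values()) > T:
            return 'Weak'

    # lines 9-12: alignment of h0 against the relevant cyclic shifts of h1 (Type 3)
    s1 = set(supp1)
    for p in supp0:
        for q in supp1:
            shift = (p - q) % r
            overlap = sum(1 for u in supp0 if (u - shift) % r in s1)
            if overlap > T:
                return 'Weak'
    return 'Normal'
\end{verbatim}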
\section{Conclusion}\label{Section.6}
This paper investigated the impact of weak-keys on the IND-CCA security of the BIKE post-quantum key encapsulation mechanism. We first implemented the BIKE scheme with the parameters suggested in the BIKE technical specification. Then, we performed extensive experiments to estimate the DFR of the BIKE BGF decoder for weak-keys. Our analysis suggests that weak-keys pose a threat to the IND-CCA security of the BIKE scheme and thus need immediate attention from the relevant research community prior to NIST standardization. We believe that this issue can be addressed by a potential key-check algorithm that we propose to supplement the BIKE mechanism. Theoretically, our key-check algorithm can prevent users from adopting weak private keys. The empirical analysis of the key-check algorithm and of the Type 2 and 3 weak-keys is left for future work to reach an affirmative understanding of the effect of weak-keys on the BIKE mechanism.
\section*{Acknowledgement}
\noindent This work was partially funded by the Cyber Security Cooperative Research Centre (CSCRC).
\bibliographystyle{IEEEtran}
\section{State-changing parallel transport}\label{sec:transport}
Let us apply the formalism above to a parallel transport process that modifies not the location of the entangling interval, but rather the state of the system. For definiteness, we work on the AdS$_3$ cylinder with a choice of time slice in the boundary CFT$_2$.
Consider a change of state obtained by acting with an element $\xi(z)$ of Diff($S^1$), starting from the vacuum of AdS$_3$. The operator that implements this is
\begin{equation} X_{\xi} = \frac{1}{2\pi i} \oint \xi(z) T(z)\, dz~,\end{equation}
where $T(z)$ is the stress tensor of the boundary CFT. In particular, the diffeomorphism $\xi(z)= z^n$ is implemented by the usual Virasoro mode operator $X_{z^n}=L_{n-1}$.
Under such a general transformation, the modular Hamiltonian $H_{\rm mod}$ associated to some interval on the boundary transforms as
\begin{equation} \delta_{\xi} H_{\rm mod} =[X_{\xi},H_{\rm mod}]~. \label{eq:PT}\end{equation}
Notice that this is just the parallel transport equation, Eq.~\eqref{eq:paralleltransport}, minus the zero mode piece.
Now imagine computing the curvature, Eq.~\eqref{eq:curvature2}, by taking the parallel transport along a small square, i.e., first performing a transformation $\xi_1$ followed by a transformation $\xi_2$, then subtracting the opposite order. The result for the curvature is derived in Appendix~\ref{app:general} and is given by
\begin{equation} F = P_0([X_{\xi_1},X_{\xi_2}])~, \label{eq:berrycurvature}\end{equation}
where $P_0$ projects to the zero mode of its argument, and the operators $X_{\xi_i}$ are assumed to have no zero modes themselves. We note that while we focus here on CFT$_2$, this is a quite general result that applies to any parallel transport process of the form Eq.~\eqref{eq:PT}. Eq.~\eqref{eq:berrycurvature} together with its application in an explicit example constitute the main results of this section.
The projection operator in Eq.~\eqref{eq:berrycurvature} is defined by the property that it gives a nonzero answer when evaluated on the modular Hamiltonian (and in general, any other operators that commute with it). Meanwhile, it evaluates to zero on any other operators, which we have assumed take the form $[\cdot, H_{\rm mod}]$ in the decomposition Eq.~\eqref{eq:paralleltransport}. It is possible to construct the projection explicitly in cases where the modular Hamiltonian is known, for instance in our case of CFT$_2$. Let $\theta$ be the spatial boundary coordinate on a constant time slice. The modular Hamiltonian for an interval of angular radius $\alpha$ centered around $\theta=0$ on the cylinder is~\cite{Cardy:2016fqc, Blanco:2013joa}
\begin{equation} H_{\rm mod} = \int_{-\alpha}^{\alpha} d\theta\, \frac{\cos{\theta}-\cos{\alpha}}{\sin{\alpha}} \,T_{00}(\theta)~. \label{eq:Hmodtheta}\end{equation}
Here, the units are chosen so that the stress energy tensor is dimensionless, $T_{00} \sim -c/12$ in the vacuum on the cylinder, with $T_{00}(\theta)\equiv-(T(\theta)+\overline{T}(\theta))$.
It will be useful to work in planar coordinates. We consider the conformal transformation
\begin{equation}
z=e^{i\theta}~\label{eq:cylindertoplane}
\end{equation}
to map the cylinder to the plane (with radial ordering). In particular, the interval $[-\alpha,\alpha]$ in the $\theta$-coordinate is mapped to the circle arc with opening angle $2\alpha$ in the $z$-plane. The stress tensor transforms as
\begin{equation}
T(\theta)=\left(\frac{\partial z}{\partial \theta} \right)^{2}T(z)+\frac{c}{12}\{z,\theta\}~,
\end{equation}
where the Schwarzian derivative is defined by
\begin{equation} \{z,\theta\}=\frac{z'''}{z'}-\frac{3}{2} \left(\frac{z''}{z'}\right)^2~.
\end{equation}
Applying the transformation Eq.~\eqref{eq:cylindertoplane}, we find that the modular Hamiltonian on the plane is given by
\begin{equation} H_{\rm mod} = \frac{1}{i}\oint_{|z|=1} \frac{\frac{1}{2}(1+z^2) - z \cos{\alpha}}{\sin{\alpha}} T(z) \,dz~. \label{eq:Hmodplane1}\end{equation}
Notice that in Eq.~\eqref{eq:Hmodplane1} we have converted to the \emph{complete} modular Hamiltonian by integrating over the full range of coordinates instead of $[-\alpha,\alpha]$.
The reason is that an integration over the full circle allows for an expansion of quantities in terms of Virasoro modes. Moreover, we have conveniently subtracted the vacuum energy of the cylinder in going from Eq. \eqref{eq:Hmodtheta} to Eq. \eqref{eq:Hmodplane1} and only kept the holomorphic part of the stress tensor.
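For completeness, the intermediate steps are as follows. With $z=e^{i\theta}$ one has $z'=iz$, $z''=-z$ and $z'''=-iz$, so that
\begin{equation}
\{z,\theta\}=\frac{-iz}{iz}-\frac{3}{2}\left(\frac{-z}{iz}\right)^2=-1+\frac{3}{2}=\frac{1}{2}~,
\end{equation}
and therefore $T(\theta)=-z^2T(z)+\frac{c}{24}$. Substituting this into Eq.~\eqref{eq:Hmodtheta} together with $\cos{\theta}=\frac{1}{2}(z+z^{-1})$ and $d\theta=dz/(iz)$ reproduces the integrand of Eq.~\eqref{eq:Hmodplane1}, with the constant $c/24$ piece cancelling against the subtracted vacuum energy.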
For simplicity, we will take $\alpha = \pi/2$ so that the interval extends along half of the cylinder (from $z=-i$ to $z=i$ in the Euclidean plane). The generalization to intervals with arbitrary $\alpha$ is straightforward. With this convention the modular Hamiltonian simplifies to
\begin{equation}
H_{\rm mod} =\frac{1}{2 i}\oint (1+z^2)T(z)\,dz~. \label{eq:Hmodplane}
\end{equation}
We can also express this in terms of the Virasoro modes on the plane,
\begin{equation}
L_n= \frac{1}{2\pi i}\oint z^{n+1}T(z)dz~\label{eq:Ln}~,
\end{equation}
which satisfy the Virasoro algebra
\begin{equation} \label{eq:virasoroalgebra}
[L_m,L_n]=(m-n)L_{m+n}+\frac{c}{12}m(m^2-1)\delta_{m+n,0}~.
\end{equation}
Then, Eq.~\eqref{eq:Hmodplane} can be re-expressed as
\begin{equation} \label{eq:HmodVir}
H_{\rm mod}=\pi (L_{-1}+L_{1})~.
\end{equation}
In the following, it will be useful to write formulae in terms of the diffeomorphism $\xi$ directly, rather than in terms of the corresponding operator $X_{\xi}$. In particular, we identify the modular Hamiltonian $H_{\rm mod}$ with the vector field $\xi(z)=\pi(1+z^2)$, as follows from Eq.~\eqref{eq:Hmodplane}. Moreover, if we take an operator of the form
\begin{equation} X_{\xi} = \frac{1}{2\pi i} \oint \xi(z) T(z)\, dz~,\label{eq:Xchi}\end{equation}
the commutator with $H_{\rm mod}$ can also be expressed directly in terms of $\xi$. Using Eqs.~\eqref{eq:Hmodplane} and \eqref{eq:Xchi}, applying the OPE
\begin{equation} T(z) T(w) = \frac{c/2}{(z-w)^4} + \frac{2T(w)}{(z-w)^2} + \frac{\partial T(w)}{z-w} + ...\end{equation}
and integrating by parts we find
\begin{equation}
[H_{\rm mod},X_{\xi}] = \frac{1}{2i} \oint \left[2z \xi(z)-(1+z^2)\xi'(z)\right]T(z)\, dz~.\label{eq:nonzeromode1}
\end{equation}
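In terms of modes, Eq.~\eqref{eq:nonzeromode1} reproduces $[H_{\rm mod},L_n]=\pi\left[(1-n)L_{n+1}-(1+n)L_{n-1}\right]$, as follows directly from the Virasoro algebra. A one-line \texttt{sympy} check for $\xi=z^{n+1}$ (our notation):

\begin{verbatim}
import sympy as sp

z, n = sp.symbols('z n')
xi = z**(n + 1)                                  # X_xi = L_n
integrand = 2*z*xi - (1 + z**2)*sp.diff(xi, z)   # from Eq. (nonzeromode1)

# expect (1-n) z^{n+2} - (1+n) z^n, i.e. (1-n) L_{n+1} - (1+n) L_{n-1}
print(sp.simplify(integrand - ((1 - n)*z**(n + 2) - (1 + n)*z**n)))  # 0
\end{verbatim}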
To implement Eq. \eqref{eq:berrycurvature} for the modular Berry curvature one needs to define the operator $P_0$ which projects onto the zero mode. Following the general prescription in Section \ref{sec:density}, one would like to decompose an arbitrary operator $X$ into the image and the kernel of the adjoint action of $H_{\rm mod}$,
\begin{equation} \label{eq:decomposition1}
X=\kappa H_{\rm mod}+[H_{\rm mod},Y]~,
\end{equation}
where $\kappa$ is the zero mode that needs to be extracted.
However, it turns out that there is a subtlety associated with the above decomposition in the case of the Virasoro algebra. In general, there are operators which are neither in the kernel, nor in the image of the adjoint action\footnote{For finite-dimensional vector spaces this is not the case if the kernel and image are disjoint, as follows from a simple dimension counting. In the infinite-dimensional set-up the situation is more complicated, e.g., one can write down linear maps which are injective but not surjective.}, which leads to an ambiguity in the definition of the zero mode projection $P_0$. We refer to Appendix \ref{sec:NonDiagonalization} for a discussion of these issues in the case of the Virasoro algebra.
For this reason, we will consider a different class of transformations, i.e., those which diagonalize the adjoint action of the modular Hamiltonian $H_{\rm mod}$ (see \cite{Das:2020goe} where a similar diagonalization in terms of so-called modular eigenmodes was considered). Therefore, we start from the eigenvalue equation
\begin{equation} \label{eq:eigenvalueeq}
[H_{\rm mod},X_{\lambda}]=\lambda X_{\lambda}~,
\end{equation}
where we have used the short-hand notation $X_{\lambda}\equiv X_{\xi_{\lambda}}$ for the operator associated to the transformation $\xi_{\lambda}$. Using Eq.~\eqref{eq:nonzeromode1} it is not difficult to see that Eq.~\eqref{eq:eigenvalueeq} is solved by
\begin{equation} \label{eq:eigenfunction}
\xi_{\lambda}(z)=\pi(1+z^2)\left(\frac{1-iz}{z-i} \right)^{-i\lambda/2\pi}~.
\end{equation}
In particular, we see that the operator with eigenvalue zero, $\lambda =0$, is the modular Hamiltonian itself, as one would expect from Eq.~\eqref{eq:eigenvalueeq}. Notice that the solutions in Eq.~\eqref{eq:eigenfunction} go to zero at the endpoints of the interval:
\begin{equation}\label{eq:vanishing}
\xi_{\lambda}(z)\to 0 \hspace{10pt} {\rm as} \hspace{10pt} z\to \pm i~.
\end{equation}
The eigenfunctions of $H_{\rm mod}$ therefore correspond to the transformations which change the state, but not the location of the boundary interval. They are not analytic at $z=\pm i$,\footnote{Note that due to Eq.~\eqref{eq:vanishing}, it is valid to apply a single integration by parts. Thus, Eq.~\eqref{eq:nonzeromode1} is maintained.} so strictly speaking they are not part of the Virasoro algebra (defined in the usual way as the space of smooth vector fields on the circle). However, they seem to be the natural transformations to consider in this context. We will refer to them as \emph{state-changing} transformations as opposed to the shape-changing transformations in Section \ref{sec:IntervalTransport}.
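In practice, Eq.~\eqref{eq:nonzeromode1} turns the eigenvalue equation Eq.~\eqref{eq:eigenvalueeq} into the first-order differential equation $\pi\left(2z\,\xi_{\lambda}(z)-(1+z^2)\xi_{\lambda}'(z)\right)=\lambda\,\xi_{\lambda}(z)$, of which Eq.~\eqref{eq:eigenfunction} is the solution. A short symbolic check (a \texttt{sympy} sketch with our variable names):

\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
lam = sp.symbols('lambda', real=True)

# Eq. (eigenfunction)
xi = sp.pi*(1 + z**2)*((1 - sp.I*z)/(z - sp.I))**(-sp.I*lam/(2*sp.pi))

# [H_mod, X_xi] = lam X_xi becomes a first-order ODE for xi(z)
lhs = sp.pi*(2*z*xi - (1 + z**2)*sp.diff(xi, z))
print(sp.simplify(lhs - lam*xi))    # 0
\end{verbatim}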
From Eq.~\eqref{eq:eigenvalueeq} combined with the Jacobi identity, these eigenfunctions form an algebra with commutation relations
\begin{equation} \label{eq:algebra}
[X_{\lambda},X_{\mu}]=(\lambda-\mu)X_{\lambda+\mu}~,
\end{equation}
which defines a continuous version of the Virasoro algebra\footnote{A Virasoro algebra with continuous index also appears in the context of the so-called dipolar quantization of 2d CFT \cite{Ishibashi:2015jba, Ishibashi:2016bey} which is related to the sine-square deformation~\cite{Gendiar:2008udd,Katsura:2011ss}, as well as in the study of non-equilibrium flows in CFT~\cite{Doyon:2013paa}.} with generators $X_{\lambda}$ labeled by a continuous parameter $\lambda \in \mathbb{R}$. Note that in the following we are leaving out the central extension (so strictly speaking we are working with a continuous version of the Witt algebra). We will return to discuss how to include the central extension in Section~\ref{sec:extension}.
It is natural to define the transformations in Eq.~\eqref{eq:eigenfunction} to have support only on the subregion $A$. In the case at hand, this makes all the contour integrals collapse to integrals over the semicircle from $-i$ to $i$, e.g.,~the $\lambda=0$ eigenfunction does not correspond to the \emph{complete} modular Hamiltonian, but simply to the half-sided one. The state-changing vector fields, which might look unfamiliar in terms of the $z$-coordinate, take a more familiar form when we map the entanglement wedge to a hyperbolic black hole geometry using \cite{Casini:2011kv}.
This can be seen in the following way. Starting with the boundary CFT$_d$ on the Euclidean cylinder $\mathbb{R} \times S^{d-1}$ with metric
\begin{equation}\label{cylinder}
ds^2 = dt_E^2 + d \theta^2 + \sin^2 \theta ~ d\Omega_{d-2}^2~,
\end{equation}
we consider a fixed sphere at $t_E=0$, $\theta = \theta_0$. We can apply the following conformal transformation considered in~\cite{Casini:2011kv}:
\begin{eqnarray} \label{CHM}
\tanh t_E &=& \frac{\sin \theta_0 \sin \tau }{\cosh u + \cos \theta_0 \cos \tau}~, \nonumber \\
\tan \theta &=& \frac{\sin \theta_0 \sinh u}{\cos \theta_0 \cosh u + \cos \tau}~,
\end{eqnarray}
which conformally maps the causal development of the sphere to the hyperbolic geometry $\mathbb{R}\times\mathbb{H}^{d-1}$ given by
\begin{equation}
ds^2 = \Omega^2 \left(d\tau^2 + du^2 + \sinh^2 u ~ d \Omega^2_{d-2} \right)~,\\
\end{equation}
with conformal factor
\begin{equation}
\Omega^2=\frac{\sin^2\theta_0}{(\cosh u +\cos \theta_0\cos\tau)^2-\sin^2\theta_0\sin^2\tau}~.
\end{equation}
Taking $d=2$ and $\theta_0 = \pi /2$ for the half interval entangling surface, the transformation Eq.~(\ref{CHM}) at the $\tau=0$ (or equivalently $t_E=0$) time slice reduces simply to
\begin{equation}
\tan \theta =\sinh u~.
\end{equation}
Written in terms of the coordinate $z=e^{i\theta}$ this leads to
\begin{equation} \label{eq:coordtrans}
e^{u}=\frac{1-iz}{z-i}~.
\end{equation}
Recall that the boundary region $A$ corresponds to $|z|=1$ and $-\pi/2\leq \arg(z)\leq\pi/2$ in the plane, so it is mapped to $u\in \mathbb{R}$. Moreover, the components of the vector field transform according to
\begin{equation} \label{eq:vectorfieldtrans}
\xi_{\lambda}(z)\frac{\partial}{\partial z}= \xi_{\lambda}(u)\frac{\partial}{\partial u}~
\end{equation}
with
\begin{equation}\label{eq:uzcoordtransf}
du =-2i\frac{dz}{1+z^2}~,
\end{equation}
so that the transformations take the simple form
\begin{equation} \label{eq:uvariable}
\xi_{\lambda}(u)=-2\pi i\,e^{-i\lambda u/2\pi}~.
\end{equation}
Hence, we find that the state-changing transformations, when written in terms of the $u$-variable, are simply plane wave solutions with frequency $\lambda/2\pi$ in this black hole background. Therefore, they are natural objects to consider in this geometry.
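The coordinate change can also be checked symbolically; a small \texttt{sympy} sketch (our notation) verifies both Eq.~\eqref{eq:uzcoordtransf} and the plane-wave form Eq.~\eqref{eq:uvariable}:

\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
lam = sp.symbols('lambda', real=True)

w = (1 - sp.I*z)/(z - sp.I)                   # w = e^u, Eq. (coordtrans)
p = -sp.I*lam/(2*sp.pi)
xi_z = sp.pi*(1 + z**2)*w**p                  # Eq. (eigenfunction)

dudz = sp.simplify(sp.diff(w, z)/w)           # du/dz = d(log w)/dz
print(sp.simplify(dudz + 2*sp.I/(1 + z**2)))  # 0, cf. Eq. (uzcoordtransf)

# xi(u) = xi(z) du/dz is the plane wave -2*pi*i*exp(-i*lam*u/(2*pi))
print(sp.simplify(xi_z*dudz + 2*sp.pi*sp.I*w**p))   # 0, cf. Eq. (uvariable)
\end{verbatim}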
We can reintroduce both the right- and the left-movers by replacing $u\to u+i\tau$ in Eq.~\eqref{eq:coordtrans}, so that $z$ is allowed to take values in the half plane $\mathrm{Re}\, z\geq 0$ (the radial direction in the $z$-plane corresponds to time evolution in $\tau$). Eq.~\eqref{eq:vectorfieldtrans} is therefore modified according to
\begin{equation}
\xi_{\lambda}(z)\frac{\partial}{\partial z}=\xi_{\lambda}(u+i\tau)\left(\frac{\partial}{\partial u}-i\frac{\partial}{\partial \tau}\right)~, \quad \xi_{\lambda}(\bar{z})\frac{\partial}{\partial \bar{z}}=-\xi_{\lambda}(-u+i\tau)\left(\frac{\partial}{\partial u}+i\frac{\partial}{\partial \tau}\right)~.
\end{equation}
By setting $\lambda=0$ and adding the right- and left-moving contributions, we see that the modular Hamiltonian indeed acts by time translation in the black hole geometry:
\begin{equation}
H_{\rm mod}\sim \frac{\partial }{\partial \tau}~.
\end{equation}
Working in the algebra associated to the eigenfunctions of $H_{\rm mod}$, we do have a unique decomposition of the form Eq.~\eqref{eq:decomposition1}: one simply decomposes an arbitrary operator into eigenoperators, which have either $\lambda=0$ or $\lambda\neq 0$. Given such a decomposition it is easy to write down an operation which extracts the zero mode $\kappa$, namely a linear functional $P_0$ which satisfies\footnote{For technical reasons we set $P_0(H_{\rm mod})\sim \delta(0)$, instead of $P_0(H_{\rm mod})\sim 1$ as one might have naively expected. This results from the plane-wave normalizability of the eigenfunctions, Eq.~\eqref{eq:uvariable}. It ensures the modular Berry curvature is finite when evaluated on wave packets in Section \ref{sec:example}.}
\begin{equation} \label{eq:projconditions}
P_0(H_{\rm mod})\sim \delta(0)~, \hspace{10pt} P_0([H_{\rm mod},Y])=0~.
\end{equation}
In the $u$-coordinate such a functional can be written as
\begin{equation} \label{eq:projectionoperator}
P_0(X_{\xi})=\lim_{\Lambda\to \infty}\frac{i}{2\pi}\int_{-\Lambda}^{\Lambda}\xi(u)\,du~.
\end{equation}
Using the coordinate change Eq.~\eqref{eq:uzcoordtransf}, we can represent the projection in the $z$-coordinate as
\begin{equation}\label{eq:projectioninz}
P_0(X_\xi) =\lim_{\Lambda\to \infty}\frac{i}{2\pi}\int_{-\Lambda}^{\Lambda}\xi(u)\, du = \frac{1}{\pi}\int_{-i}^{i}\frac{\xi(z)}{(1+z^2)^2}\,dz~.
\end{equation}
When applied to the eigenfunctions of $H_{\rm mod}$ the projection becomes
\begin{equation}
P_0(X_{\lambda})=\lim_{\Lambda\to \infty}\int_{-\Lambda}^{\Lambda} e^{-i\lambda u/2\pi}\,du=4\pi^2\delta(\lambda)~,
\end{equation}
which is a standard representation of the Dirac delta function. To show that $P_0$ vanishes on commutators of the form $[H_{\rm mod},Y]$, it suffices to remark that one can take $Y$ to satisfy $[H_{\rm mod},Y]=\lambda Y$ with $\lambda\neq 0$ without loss of generality. This shows that Eq.~\eqref{eq:projectionoperator} defines a good projection operator in the sense of Eq.~\eqref{eq:projconditions}.
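As an illustration, for a Gaussian profile $c(\lambda)=e^{-\lambda^2}$ the corresponding wave packet is $\xi(u)=-i\sqrt{\pi}\,e^{-(u/4\pi)^2}$, and the projection indeed returns $2\pi c(0)$, a statement we will revisit in Section \ref{sec:algebra}. A numerical sketch (the profile is our choice):

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# profile c(lam) = exp(-lam**2), c(0) = 1; doing the lambda integral
# analytically gives xi(u) = -1j*sqrt(pi)*exp(-(u/(4*pi))**2)
xi_im = lambda u: -np.sqrt(np.pi)*np.exp(-(u/(4*np.pi))**2)  # Im xi(u)

# P0(X_xi) = (i/2pi) * int xi(u) du = -(1/2pi) * int Im xi(u) du here
P0 = -quad(xi_im, -300.0, 300.0)[0]/(2*np.pi)
print(P0, 2*np.pi)       # both ~ 6.2832, i.e. 2*pi*c(0)
\end{verbatim}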
\subsection{Example} \label{sec:example}
We now have all the ingredients to compute the curvature in an explicit example. We consider a general perturbation of the form
\begin{align}
z'=z+\epsilon \, \xi(z)+\mathcal{O}(\epsilon^2)~, \label{eq:transf}
\end{align}
where $\xi(z)$ is a wave packet
\begin{equation}
\xi(z)=\frac{1}{2\pi}\int_{-\infty}^{\infty} c(\lambda) \xi_{\lambda}(z)\, d\lambda~,
\end{equation}
with $\xi_{\lambda}(z)$ defined in Eq.~\eqref{eq:eigenfunction}. We start by obtaining the correction to the transformed modular Hamiltonian upon acting with Eq.~\eqref{eq:transf}. Let us expand both the modular Hamiltonian and the parallel transport operator to first order in the small parameter $\epsilon$:
\begin{equation}
H'_{\rm mod} =H^{(0)}+\epsilon\, H^{(1)}+\mathcal{O}(\epsilon^2)~, \indent S = S^{(0)} + \epsilon\, S^{(1)} + \mathcal{O}(\epsilon^2)~.
\end{equation}
Using that $z=z'-\epsilon \xi(z')+\mathcal{O}(\epsilon^2)$, one can expand the transformed $H_{\rm mod}$ to first order in $\epsilon$.
One finds that $H^{(0)}=H_{\rm mod}$ is the original modular Hamiltonian, while the correction is given by
\begin{equation} \label{eq:H1}
H^{(1)}=-\frac{1}{2i}\oint \left[2z\xi(z)-(1+z^2) \xi'(z)\right] T(z)\, dz~.
\end{equation}
Here, we have neglected the Schwarzian contribution to the stress tensor transformation for simplicity. It will be treated separately in Section \ref{sec:extension}. We now expand the parallel transport equation
\begin{equation}
\delta H_{\rm mod}=[S,H_{\rm mod}]
\end{equation}
to first order in $\epsilon$. This gives two separate equations:
\begin{equation}
0= [S^{(0)}, H^{(0)}]~, \quad
H^{(1)} = [S^{(0)},H^{(1)}]+[S^{(1)},H^{(0)}]\label{eq:transportfirst}~.
\end{equation}
Solving Eq.~\eqref{eq:transportfirst} for the correction $S^{(1)}$ to the parallel transport operator gives the solution
\begin{equation}
S^{(0)}=0~, \indent S^{(1)}=X_{\xi}~.
\end{equation}
Both $S^{(0)}$ and $S^{(1)}$ are defined only up to a zero mode, meaning that one can add to them an extra operator $Q$ for which $[Q,H_{\rm mod}]=0$ (e.g., the modular Hamiltonian itself) and the parallel transport equation would still be satisfied.
To compute the curvature we need to consider two different parallel transport operators $S_1$ and $S_2$ which we take to be defined according to the transformations
\begin{equation}
\xi_1(z)=\frac{1}{2\pi}\int_{-\infty}^{\infty} c_1(\lambda) \xi_{\lambda}(z)\, d\lambda~, \quad \xi_2(z)=\frac{1}{2\pi}\int_{-\infty}^{\infty} c_2(\lambda) \xi_{\lambda}(z)\, d\lambda~,
\end{equation}
respectively. After projecting out their zero modes, we take the commutator and project to the zero modes again to obtain the value of the curvature component. Therefore, we need to compute
\begin{equation} \label{eq:comm}
[S^{(1)}_1-\kappa_1H^{(0)},S^{(1)}_2-\kappa_2H^{(0)}]~,
\end{equation}
where $\kappa_i=P_0(S_i)$ is the zero mode coefficient of the parallel transport operator $S_i$. We can split Eq.~\eqref{eq:comm} into terms that we can treat separately. Notice that the term proportional to $[H^{(0)},H^{(0)}]$ vanishes and can be dropped. Moreover, the definition of the projection operator immediately implies
\begin{equation}
P_0([S^{(1)}_1,H^{(0)}])=P_0([S^{(1)}_2,H^{(0)}])=0~.
\end{equation}
To evaluate the last commutator we use the commutation relations in Eq.~\eqref{eq:algebra} to obtain
\begin{equation}
[S^{(1)}_1,S^{(1)}_2]=\frac{1}{4\pi^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (\lambda_1-\lambda_2)c_1(\lambda_1) c_2(\lambda_2) X_{\lambda_1+\lambda_2}\, d\lambda_1d\lambda_2~.
\end{equation}
Applying the projection operator sets $\lambda_1=-\lambda_2$, so that we find
\begin{equation}
P_0([S^{(1)}_1,S^{(1)}_2])=2\int_{-\infty}^{\infty}\lambda\, c_1(\lambda)c_2(-\lambda)\, d\lambda~.
\end{equation}
Therefore, the final result for the modular Berry curvature associated to the state-changing transport problem is given by
\begin{equation}\label{eq:curvature}
F=2\int_{-\infty}^{\infty}\lambda\, c_1(\lambda)c_2(-\lambda)\, d\lambda~.
\end{equation}
Note that the curvature appropriately vanishes when two perturbations lie along the same direction, $c_1(\lambda)=c_2(\lambda)$. If we take the modes to be peaked at the eigenfunctions $\xi_{\lambda_i}(z)$ themselves, $c_i(\lambda)=\delta(\lambda-\lambda_i)$, the above formula reduces to
\begin{equation}
F=(\lambda_1-\lambda_2)\delta(\lambda_1+\lambda_2)~,
\end{equation}
which is a local formula in terms of the parameters $\lambda_i$.
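As a concrete illustration, for displaced Gaussian profiles $c_1(\lambda)=e^{-(\lambda-a)^2}$ and $c_2(\lambda)=e^{-(\lambda-b)^2}$ (our choice), Eq.~\eqref{eq:curvature} evaluates to $F=(a-b)\sqrt{\pi/2}\,e^{-(a+b)^2/2}$, which is manifestly antisymmetric under $a\leftrightarrow b$ and vanishes for coinciding packets. A \texttt{sympy} sketch:

\begin{verbatim}
import sympy as sp

lam, a, b = sp.symbols('lambda a b', real=True)
c1 = sp.exp(-(lam - a)**2)       # sample Gaussian profiles
c2 = sp.exp(-(lam - b)**2)

# Eq. (curvature): F = 2 * int lam * c1(lam) * c2(-lam) dlam
F = 2*sp.integrate(lam*c1*c2.subs(lam, -lam), (lam, -sp.oo, sp.oo))
print(sp.simplify(F))    # sqrt(2)*sqrt(pi)*(a - b)*exp(-(a + b)**2/2)/2
\end{verbatim}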
\subsection{Lie algebra} \label{sec:algebra}
To diagonalize the adjoint action, we saw that we must work with a continuous version of the Virasoro algebra. Viewed in terms of vector fields on the circle, these are the non-smooth vector fields of Eq.~\eqref{eq:eigenfunction}, which have support only along the interval. When mapped to the real line, they are just plane waves, Eq.~\eqref{eq:uvariable}. In the last section, we performed parallel transport using wave packets constructed out of these eigenfunctions. In terms of the coordinates on the real line,
\begin{equation}\label{eq:wavepacket2}
\xi(u)=\frac{1}{2\pi}\int_{-\infty}^{\infty} c(\lambda) \xi_\lambda(u) d\lambda~.
\end{equation}
Now we would like to be more precise about the sense in which the corresponding vector fields form a Lie algebra. This amounts to imposing extra conditions on $c(\lambda)$ for these to form a closed algebra, along with any other desirable properties.
The simplest choice would be to demand that the $\xi(u)$ be smooth. Then, since the smoothness of functions is preserved under pointwise multiplication, the corresponding vector fields $\xi(u)\partial_u$ will form a closed algebra. However, an arbitrary $\xi(u)$ will not necessarily have finite zero mode projection, nor will there necessarily exist a natural definition for a dual space. To define sensible wave packets we will impose two additional requirements:
\begin{itemize}
\item There is a notion of Fourier transform that maps the space to itself,
\item The $\xi(u)$ are integrable. This means that the projection, Eq.~\eqref{eq:projectionoperator}, is finite, and this property is preserved under commutation of the vector fields $\xi(u)\partial_u$~. It also allows us to define a dual space in terms of distributions.
\end{itemize}
To accomplish this, it is convenient to work with wave packets $\xi(u)$ that are \emph{Schwartz functions}. These are smooth functions which, together with all of their derivatives, decay faster than any reciprocal power of $u$: $\sup_u|u^\alpha\partial^\beta \xi(u)| < \infty$ for all integers $\alpha,\beta\geq 0$. This definition excludes, for example, polynomials, but includes polynomials weighted by a Gaussian $e^{-c |u|^2}$ with $c>0$. By the Leibniz rule, the Schwartz space $\mathcal{S}$ is closed under pointwise multiplication, so the corresponding vector fields form a closed Lie algebra. We write $\mathcal{S}$ for the space of Schwartz functions and $\mathfrak{s}$ for the corresponding algebra of vector fields.
Since these functions are integrable, it is natural to define a dual space $\mathcal{S}'$ consisting of linear functionals $T:\mathcal{S}\rightarrow \mathbb{C}$, in terms of distributions:
\begin{equation} T[\xi(u)] = \int_{-\infty}^\infty \xi(u) T(u)du~. \end{equation}
A pairing between Schwartz functions and dual elements can be defined from this as $\left<T, \xi\right> \equiv T[\xi(u)]$. Likewise, there is also a dual space $\mathfrak{s}^*$ consisting of linear functionals on $\mathfrak{s}$, the algebra of vector fields. This is inherited from the dual space $\mathcal{S}'$, i.e., it consists of the space of distributions evaluated on Schwartz functions. There is a pairing $\left<\cdot,\cdot\right>$ between $\mathfrak{s}$ and $\mathfrak{s}^*$ which descends from the pairing on $\mathcal{S}$ and $\mathcal{S}'$.
Notice that, evaluated on the wave packets Eq.~\eqref{eq:wavepacket2}, the projection operator Eq.~\eqref{eq:projectionoperator}
\begin{equation} P_0: \xi(u) \mapsto 2\pi c(0)~ \label{eq:projectionoperator2}\end{equation}
is a linear functional, and thus it is an element of the dual space. The pairing is given by $\left<P_0, \xi\right> = P_0(\xi) = 2\pi c(0)$.
In the coordinates on the circle, recall that this dual element can be expressed from Eq.~\eqref{eq:projectioninz} as
\begin{equation} P_0: \xi(z) \mapsto \frac{1}{\pi} \int dz\, \frac{\xi(z)}{(1+z^2)^2}~. \label{eq:projectioninz2}\end{equation}
Notice that this dual element is not a smooth quadratic form on the circle as is typically considered in treatments of the dual space of the Virasoro algebra, but rather a more general distribution that involves singularities at $z=\pm i$\footnote{In the usual discussion of the Virasoro algebra the dual space is identified with the space of smooth quadratic differentials. Formally, one could argue that distributions such as $\delta(z-z_0)$ and $\delta'(z-z_0)$ are also part of some suitably defined notion of the dual space. Indeed, they define linear functionals
\begin{equation}
\xi\mapsto \xi(z_0)~,\hspace{10pt} \xi\mapsto -\xi'(z_0)~,
\end{equation}
which evaluate a function (or its derivative) at some point $z_0$. The projection operator $P_0$ in Eq. \eqref{eq:projectioninz2}, when integrated over the full circle and properly regularized, can be regarded in this fashion. See Appendix \ref{sec:NonDiagonalization} for more details, for example, Eqs. \eqref{eq:residue} - \eqref{eq:projop2}.}. A standard definition of the dual space is an attempt to get a space that is roughly the same size as the algebra itself. For infinite-dimensional spaces the formal dual is much larger and one needs some additional structure, e.g., that of a Hilbert space, to limit it.
We emphasize that there is considerable freedom in these definitions. A different choice would amount to taking a different set-up for varying the state in the parallel transport process. Our definitions allow us to perform parallel transport using wavefunctions that are `physical' in the sense of being Fourier transformable and integrable. The existence of a natural dual space also allows for contact with a geometrical picture in terms of coadjoint orbits, which we describe in the next section.
\subsection{Central extension}\label{sec:extension}
We have so far only considered changing the state with a transformation of the circle. When the transformations are diffeomorphisms of the circle, the group $\rm{Diff}(S^1)$ gets centrally extended to the full Virasoro group, $\rm{Diff}(S^1)\times \mathbb{R}$. Here we are considering a continuous version of the Virasoro algebra generated by the transformations in Eq.~\eqref{eq:eigenfunction}. For the central extension, we proceed in direct analogy with the Virasoro case. In the following, the vector fields $\xi(z)$ should be understood to have non-zero support only between $z=\pm i$, so that this is the only part of the integral over the full circle that contributes.
We consider pairs $(\xi,\alpha)$, where $\xi$ is a vector field of the form Eq.~\eqref{eq:eigenfunction}, which diagonalizes the adjoint action, and $\alpha\in \mathbb{R}$. The Lie bracket is defined as \begin{equation} \label{eq:Liebracket}
\left[(\xi,\alpha),(\chi,\beta)\right]=\left(-[\xi,\chi],-\frac{1}{48\pi}\oint dz\,(\xi(z)\chi'''(z)-\xi'''(z)\chi(z))\right)~,
\end{equation}
where $[\xi,\chi]:=\xi\chi'-\chi\xi'$ is the commutator of vector fields. This is identical to the commutators for the Virasoro algebra, with the only difference being that we integrate only over half the circle, and also consider transformations $\xi$ which are not smooth at the endpoints. In terms of the operators $X_\lambda$, this extends the algebra in Eq.~\eqref{eq:algebra} to
\begin{equation} [\bar{X}_{\bar\lambda},\bar{X}_{\bar \mu}]=(\bar{\lambda}-\bar{\mu})\bar{X}_{\bar{\lambda}+\bar{\mu}}+\frac{c}{12}\bar{\lambda}(\bar{\lambda}^2+1)\delta(\bar{\lambda}+\bar{\mu})~,
\end{equation}
where we have defined rescaled barred variables through $X_\lambda = -2\pi \bar{X}_\lambda$ and $\lambda= -2\pi\bar{\lambda}$ to bring this to a form that more closely resembles the usual Virasoro algebra with discrete labels.
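The consistency of this continuous central term can be checked directly: linearity of the cocycle condition means that the $\bar{\lambda}^3$ and $\bar{\lambda}$ pieces must separately satisfy the Jacobi identity on the support $\bar{\lambda}+\bar{\mu}+\bar{\nu}=0$, just as for the discrete Virasoro cocycle $m(m^2-1)$. A minimal \texttt{sympy} check (our notation):

\begin{verbatim}
import sympy as sp

lam, mu, c = sp.symbols('lambda mu c', real=True)
nu = -(lam + mu)                        # delta-function support

omega = lambda x: c/12*x*(x**2 + 1)     # central two-cocycle

# cyclic sum of the central parts of [[X_a, X_b], X_c] on the support
jac = ((lam - mu)*omega(lam + mu)
       + (mu - nu)*omega(mu + nu)
       + (nu - lam)*omega(nu + lam))
print(sp.simplify(jac))                 # 0
\end{verbatim}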
One often introduces a new generator, denoted by $c$, which commutes with all other elements in the algebra, to write
\begin{equation}
(\xi,\alpha)=\xi(z) \partial_z- i\alpha c~.
\end{equation}
By definition, the central element $c$ commutes with $H_{\rm mod}$, i.e., $[H_{\rm mod},c]=0$. Therefore, we can think about the central element as another zero mode in the parallel transport problem.
Luckily, the situation for the central element is simpler than for the modular Hamiltonian itself. From the form of $H_{\rm mod}$, Eq. \eqref{eq:HmodVir}, and the algebra, Eq. \eqref{eq:Liebracket}, we see that the central element $c$ does not appear in commutators of the form $[H_{\rm mod},X]$. Therefore, the projection onto the coefficient of $c$ is simply given by the linear functional
\begin{equation}
(\xi,\alpha)\to \alpha~.
\end{equation}
One way to include the information of the central term is to make the Berry curvature give a $U(1)\times U(1)$-valued number (organized in terms of an extra element which we take to be $c$). More precisely, we define the zero mode projection operator $P^c_0$, which depends on $c$, by
\begin{equation}
P^c_0((X_{\xi},\alpha))=P_0(X_{\xi})- \alpha c~.
\end{equation}
The first term is the usual zero mode, while the second term keeps track of the central zero mode. It is easy to see how the result for the Berry curvature gets modified. Using Eq. \eqref{eq:berrycurvature} with $P_0^c$ instead of $P_0$, we see that the formula for the Berry curvature is given by
\begin{equation} \label{eq:berrycurvaturec}
F=P_0([X_{\xi_1},X_{\xi_2}])+\frac{c}{48\pi}\oint dz\, \left(\xi_1(z)\xi_2'''(z)-\xi_1'''(z)\xi_2(z)\right)~.
\end{equation}
As a consistency check, we can go back to our example in Section \ref{sec:example} and consider the contribution from the Schwarzian term in Eq.~(\ref{eq:H1}). Expanding the parallel transport equation, we need to solve
\begin{equation}
\label{eq:transportfirst2}
H^{(1)}= [S^{(1)}, H^{(0)}]~,
\end{equation}
where the change in the modular Hamiltonian due to the Schwarzian derivative to first order is given by
\begin{equation}\label{eq:Hschwartz}
H^{(1)}_{\rm Schw} = \frac{c}{24i} \oint dz (1+z^2) \xi'''(z)~,
\end{equation}
having used that $\{z',z\}=\epsilon \xi''' + {\cal{O}}(\epsilon^2)$. On the full circle, integrating by parts three times shows that $H^{(1)}_{\rm Schw}=0$ (equivalently, no diffeomorphism has a $\xi'''$ containing $z^{-1}$ or $z^{-3}$, which would give a pole). The situation is a bit more subtle on the half circle, since due to non-differentiability at the endpoints it is no longer valid to integrate by parts multiple times. However, it is still the case that none of the eigenfunctions, Eq.~\eqref{eq:eigenfunction}, have a $\xi'''$ containing $z^{-1}$ or $z^{-3}$, and so the Schwarzian contribution vanishes. Thus, in either case the solution to Eq.~(\ref{eq:transportfirst2}) with the new Lie bracket Eq.~\eqref{eq:Liebracket} is still given by $S^{(1)}=X_{\xi}$.
The extra contribution to the commutator $[S^{(1)}_1,S^{(1)}_2]$ due to the central charge then reproduces precisely the central term in Eq.~\eqref{eq:berrycurvaturec}.
Note that while it is not possible to integrate by parts multiple times in Eq.~\eqref{eq:Hschwartz} on the half circle, we have \emph{defined} the central extension as the version that obeys integration by parts three times, which means it also respects the antisymmetry of the commutator, $[X_{\xi}, X_{\chi}] = - [X_{\chi},X_{\xi}]$; this is reflected in the antisymmetric combination appearing in the central term of Eq.~\eqref{eq:Liebracket}. Likewise, one can check that the Jacobi identity is satisfied, since this involves a single integration by parts onto a commutator, which vanishes at the interval endpoints by Eq.~\eqref{eq:vanishing}. These properties are sufficient to ensure the consistency of the central extension.
\section{Kinematic space example}\label{app:kinematic}
We will now describe a version of the state-based parallel transport summarized in Section~\ref{sec:states}, which reproduces some of the results from kinematic space for CFT$_2$ on a time-slice. As we saw in Section~\ref{sec:IntervalTransport}, the parallel transport process for kinematic space could also be derived in the operator-based transport language. In this way of formulating the problem, the geometrical description of kinematic space in terms of coadjoint orbits~\cite{Penna:2018xqq} is more readily transparent.
We will start by setting up some geometry that is relevant for this problem. Consider the group $SL(2,\mathbb{R})$. Its Lie algebra $\mathfrak{sl}(2,\mathbb{R})$ consists of generators $t_\mu$, $\mu=0,1,2$ satisfying the commutation relations $[t_\mu,t_\nu]=\epsilon_{\mu \nu}\,^\rho t_\rho$, where the indices are raised with the metric $\eta_{\mu\nu}$ of signature $(-,+,+)$. We will make use of an explicit finite-dimensional representation by $2\times 2$ matrices given by
\begin{equation} \label{generators}
t_0= \frac{1}{2}
\begin{pmatrix}
0 & 1\\
-1 & 0
\end{pmatrix}~,
\quad t_1= \frac{1}{2}
\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}~,
\quad t_2= \frac{1}{2}
\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}~.
\end{equation}
This basis will be most convenient for the calculation of the Berry curvature. It can be easily expressed in terms of the basis used in Section~\ref{sec:bulk} as $t_0=\frac{1}{2}(L_-+L_+),t_1=\frac{1}{2}(L_--L_+),t_2=L_0$.
Now consider embedding coordinates $(X^0,X^1,X^2)$ describing $3$-dimensional Minkowski spacetime
with metric
\begin{equation}
ds^2=-(dX^0)^2+(dX^1)^2+(dX^2)^2~. \label{eq:MinkowskiMetric}
\end{equation}
Recall that $SL(2,\mathbb{R})/\mathbb{Z}_2\cong SO(2,1)$. A convenient parametrization for the algebra $\mathfrak{sl}(2,\mathbb{R})$ is given through the isomorphism to Mink$_3$:
\begin{equation}
\frac{1}{2}\begin{pmatrix}
X^2& X^1+X^0\\
X^1-X^0& -X^2
\end{pmatrix} \leftrightarrow (X^0,X^1,X^2)~.\label{eq:MinkowskiMap}
\end{equation}
The reason to express $\mathfrak{sl}(2,\mathbb{R})$ in this way is that the coadjoint orbits of the Lie group can be realized geometrically in Minkowski space. Any element of $\mathfrak{sl}(2,\mathbb{R})$ exponentiates to a group element lying in one of three conjugacy classes (up to an overall factor $\pm 1$). These can be classified by the value of $\epsilon \equiv |\mbox{tr}(g)|/2$ for $g\in SL(2,\mathbb{R})$: $\epsilon<1$ is elliptic, $\epsilon=1$ is parabolic and $\epsilon>1$ is hyperbolic.
\begin{figure}[t!]
\centerline{\includegraphics[scale=.35]{dSplot.png}}
\caption{The dS$_2$ hyperboloid describing kinematic space, which is a coadjoint orbit of $SO(2,1)$. The arrow points to a special point that corresponds to the coherent state $\ket{\phi}$.}
\label{fig:dSplot}
\end{figure}
We will assume that our representative is in the diagonal class
\begin{equation}
\Lambda=\rm{diag}(\lambda,-\lambda)/2~\label{eq:representative}
\end{equation}
with $\lambda \in \mathbb{R}$. Since $|\mbox{tr}(e^\Lambda)|/2=\cosh(\lambda/2)>1$ for all $\lambda\neq 0$, this is a hyperbolic element. Other choices lead to different orbits.
Consider a general group element
\begin{equation} \label{eq:elementsl2r}
g=\begin{pmatrix} a & b\\ c& d \end{pmatrix} \in SL(2,\mathbb{R})~,
\end{equation}
with $a,b,c,d\in \mathbb{R}$ and $ad-bc=1$. The coadjoint orbit is generated by the adjoint action of $\Lambda$ with arbitrary $g$,
\begin{equation} \label{eq:conjugation}
g \cdot \Lambda \cdot g^{-1} = \begin{pmatrix}
\frac{\lambda}{2}(bc+ad)&-\lambda ab\\
\lambda cd & -\frac{\lambda}{2}(bc+ad)
\end{pmatrix}~.
\end{equation}
The determinant is constant along the orbit, $\mbox{det}(g \cdot \Lambda \cdot g^{-1}) = -\lambda^2/4~.$ Applying the map to Minkowski space, Eq.~\eqref{eq:MinkowskiMap}, this results in the condition
\begin{align}
-(X^0)^2+(X^1)^2+(X^2)^2&=\lambda^2~.\label{eq:EmbeddingConstraint}
\end{align}
This is the defining equation of a single-sheeted hyperboloid with radius $\lambda$.
Take the embedding coordinates
\begin{align}
X^0 &= \lambda \cot{t}~,\nonumber\\
X^1 &= \lambda \csc{t} \cos{\theta}~,\nonumber\\
X^2 &= \lambda \csc{t} \sin{\theta}~.\label{eq:Embedding}
\end{align}
These satisfy Eq.~\eqref{eq:EmbeddingConstraint} and from Eq.~\eqref{eq:MinkowskiMetric} result in the induced metric
\begin{equation} ds^2 = \lambda^2 \csc^2{t} (-dt^2 + d\theta^2)~.\label{eq:dS2metric}\end{equation}
This is just the metric on $\mbox{dS}_2\simeq SO(1,2)/SO(1,1)$. We saw that this describes the coadjoint orbit passing through the representative, Eq.~\eqref{eq:representative}.
The group $SO(2,1)$ can be thought of as a fiber bundle whose base space is the coadjoint orbit $SO(1,2)/SO(1,1)$ and whose fiber is the stabilizer $SO(1,1)$. We want to consider an appropriate section of this fiber bundle. The discussion below follows closely \cite{Oblak:2017ect}. Using the embedding coordinates Eq.~\eqref{eq:Embedding} and the map Eq.~\eqref{eq:MinkowskiMap}, we obtain the constraints
\begin{align}
2\mathrm{tr}(g\, \Lambda\, g^{-1}\, t_0)&=-X_0 = -\lambda\cot{t}~,\nonumber \\
2\mathrm{tr}(g\, \Lambda\, g^{-1}\, t_1)&= X_1 = \lambda\cos{\theta}\csc{t}~,\nonumber\\
2\mathrm{tr}(g\, \Lambda\, g^{-1}\, t_2)&= X_2 = \lambda\sin{\theta}\csc{t}~. \label{eq:systemequations}
\end{align}
Solving this system of equations, Eq.~\eqref{eq:systemequations}, we obtain
\begin{equation}
b=-\frac{\cot{t}+\cos{\theta}\csc{t}}{2a}~, \quad c=\frac{a(1-\sin{\theta}\csc{t})}{\cot{t}+\cos{\theta}\csc{t}}~, \quad d= \frac{1 +\sin{\theta}\csc{t}}{2a}~.
\end{equation}
We have the freedom to impose $a=1$, in which case the expressions simplify somewhat. Applying this back to Eq.~\eqref{eq:elementsl2r}, we obtain a section $g: \mbox{dS}_2 \rightarrow SL(2,\mathbb{R})$ for the bundle given by
\begin{equation} \label{section}
g= \begin{pmatrix}
1 & -\frac{1}{2}(\cos{t}+\cos{\theta})\csc{t}\\
\tan{(\frac{t-\theta}{2})}& \frac{1}{2}(1+\csc{t}\sin{\theta})
\end{pmatrix}~.
\end{equation}
Notice that $g$ reduces to the identity for $t=\theta=\pi/2$ which corresponds to the point of intersection of the hyperboloid with the axis labeled by the $t_2$ generator.
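One can verify Eq.~\eqref{section} directly: $g$ has unit determinant, and conjugating the representative reproduces the constraints in Eq.~\eqref{eq:systemequations}. A \texttt{sympy} sketch (the spot-check values are arbitrary):

\begin{verbatim}
import sympy as sp

t, th, lam = sp.symbols('t theta lambda', real=True)

t0 = sp.Matrix([[0, 1], [-1, 0]])/2
t1 = sp.Matrix([[0, 1], [1, 0]])/2
t2 = sp.Matrix([[1, 0], [0, -1]])/2

g = sp.Matrix([[1, -(sp.cos(t) + sp.cos(th))/(2*sp.sin(t))],
               [sp.tan((t - th)/2), (1 + sp.sin(th)/sp.sin(t))/2]])
print(sp.simplify(g.det()))            # 1, so g lands in SL(2,R)

vals = {t: 1.1, th: 0.4, lam: 2.0}     # arbitrary spot-check point
gn = g.subs(vals).evalf()
X = gn*(lam*t2).subs(vals)*gn.inv()    # g Lambda g^{-1}, Eq. (conjugation)

print(sp.N(2*(X*t0).trace() + (lam/sp.tan(t)).subs(vals)))             # ~0
print(sp.N(2*(X*t1).trace() - (lam*sp.cos(th)/sp.sin(t)).subs(vals)))  # ~0
print(sp.N(2*(X*t2).trace() - (lam*sp.sin(th)/sp.sin(t)).subs(vals)))  # ~0
\end{verbatim}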
Now we will apply some of these tools to the problem of state-based parallel transport for the group $SL(2,\mathbb{R})$, with the aim of describing kinematic space. Recall that to define a state-based Berry phase it is necessary to choose a suitable `Hamiltonian' with an eigenstate $\ket{\phi}$ that serves as the base state for the parallel transport process. The `Hamiltonian' is one which generates a specified subgroup of $SL(2,\mathbb{R})$, which we interpret as a flow in time. The state is acted on by group elements in a unitary representation, which we denote by $\mathcal{D}(g)$, $\mathcal{D}(u)$ for $g\in SL(2,\mathbb{R}), u\in \mathfrak{sl}(2,\mathbb{R})$. In the coadjoint orbit language, eigenstates of subalgebras of the symmetry algebra are known as \emph{coherent states}. Specifically, we will choose our Hamiltonian to be $t_2$, which generates an $\mathfrak{so}(1,1)$ subalgebra. This exponentiates to the hyperbolic group element
\begin{equation}\label{expgen}
\mathcal{J}= e^{\eta t_2/2}~
\end{equation}
with $\eta\in \mathbb{R}$. Taking $X\rightarrow \mathcal{J} X \mathcal{J}^{-1}$ using the isomorphism, Eq.~\eqref{eq:MinkowskiMap}, we see the adjoint action with respect to $\mathcal{J}$ acts geometrically as
\begin{align}
X^0&\rightarrow X^0 \cosh{(\eta/2)} + X^1 \sinh{(\eta/2)}~,\\
X^1&\rightarrow X^0 \sinh{(\eta/2)} + X^1 \cosh{(\eta/2)}~,\\
X^2&\rightarrow X^2~,
\end{align}
in other words, it acts as a boost with rapidity $-\eta/2$ in the $X^0-X^1$ direction in embedding space.
We define our coherent state through the condition that the boost leaves it invariant up to a phase,
\begin{equation}\label{eq:coherenteigen}
\mathcal{D}(\mathcal{J})\ket{\phi}=e^{i \eta \zeta}\ket{\phi}~, \indent \mathcal{D}(t_2)\ket{\phi}= 2\zeta\ket{\phi}~,
\end{equation}
with $\zeta\in \mathbb{R}$ since $\mathcal{D}(\mathcal{J})$ is assumed to be unitary and $\mathcal{D}(t_2)$ Hermitian in the representation. By a theorem of Perelomov~\cite{Perelomov:1986tf} (see also~\cite{Yaffe:1981vf}), coherent states are in one-to-one correspondence with points on an orbit. Our state $\ket{\phi}$ corresponds to the point $(0,0,1)$ on the dS$_2$ hyperboloid that is left fixed by the action of the boost (see Figure~\ref{fig:dSplot}). It is geometrically simple to see that the action of the other generators $t_0, t_1$ does not leave this point invariant, which corresponds to the statement that $\ket{\phi}$ is not also an eigenstate of these generators.
Recall that the Maurer-Cartan form is given by
\begin{equation}
\Theta=g^{-1}dg~.
\end{equation}
The Berry phase is
\begin{equation} \label{eq:thetaformula}
\theta(\gamma)=\oint_\gamma{A}~, \quad A=i\bra{\phi}\mathcal{D}(\Theta)\ket{\phi}~.
\end{equation}
We now use Eq.~\eqref{section} to evaluate the pullback of the Maurer-Cartan form from $SL(2,\mathbb{R})$ to dS$_2$. Taking the expectation value of the generators in the state $\ket{\phi}$, then applying the commutation relations, the eigenvalue condition Eq.~\eqref{eq:coherenteigen} and using $\zeta\in \mathbb{R}$, we see that only $t_2$ has a nonvanishing expectation value in $\ket{\phi}$. Thus, only this part contributes to the Berry phase. We find
\begin{equation} \label{eq:integrand}
A=i\bra{\phi}\mathcal{D}(\Theta)\ket{\phi}=i\zeta\csc{t}\cos{\left(\frac{t+\theta}{2}\right)}\sec{\left(\frac{t-\theta}{2}\right)}(dt-d\theta)~.
\end{equation}
From this we can define the Berry curvature
\begin{equation}\label{Berrycurv}
F=dA=\frac{i\zeta}{\sin^2{t}} dt\wedge d\theta~.
\end{equation}
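The exterior derivative can be checked symbolically. Writing $A=A_t\,dt+A_\theta\,d\theta$ for Eq.~\eqref{eq:integrand}, the $dt\wedge d\theta$ component of $F$ is $\partial_t A_\theta-\partial_\theta A_t$; a \texttt{sympy} sketch (our notation):

\begin{verbatim}
import sympy as sp

t, th, zeta = sp.symbols('t theta zeta', real=True)

f = sp.cos((t + th)/2)/(sp.sin(t)*sp.cos((t - th)/2))
A_t, A_th = sp.I*zeta*f, -sp.I*zeta*f     # Eq. (integrand)

F_coeff = sp.diff(A_th, t) - sp.diff(A_t, th)
print(sp.simplify(F_coeff - sp.I*zeta/sp.sin(t)**2))   # 0, Eq. (Berrycurv)
\end{verbatim}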
Using Stokes' theorem one can write the integral of the Berry connection in Eq.~\eqref{eq:integrand} as
\begin{equation}\label{Berryphase}
\theta(\gamma)=i\zeta \int_B{\frac{1}{\sin^2{t}}dt\wedge d\theta}~,
\end{equation}
where $B$ is any two-dimensional region with boundary $\partial B =\gamma$.
For a CFT$_2$ restricted to a time-slice, kinematic space consists of the space of intervals on this time-slice. Given a causal ordering based on containment of intervals, this is just a dS$_2$ spacetime, Eq.~\eqref{eq:dS2metric}, with a time coordinate set by the interval radius, $(\theta_R-\theta_L)/2$~\cite{Czech:2015qta}. The curvature, Eq.~\eqref{Berrycurv}, is a volume form on kinematic space. Recalling the relation between time and interval size, it matches the kinematic space curvature, Eq.~\eqref{eq:curvature3}, derived from the operator-based method in Section~\ref{sec:IntervalTransport} (note that an exact matching of the normalization is unimportant, as the overall normalization for the modular Berry phase will be at any rate affected by the choice of normalization for the modular Hamiltonian). The Berry phase, Eq.~\eqref{Berryphase}, computes the volume of region $B$ within this dS$_2$ spacetime. It also precisely reproduces the Berry phase for kinematic space derived by other means in~\cite{Czech:2019vih,Czech:2017zfq}.
\section{General formulation}\label{app:general}
We will derive a general formula for the curvature assuming that there is a unique way of separating out the zero mode. As we discuss in the next appendix, this is not generally true when the state-changing transformations are elements of the Virasoro algebra; however, it does hold for the transformations that we consider in the main text. The results of Section~\ref{sec:transport} utilize the formula for the curvature presented in this appendix.
Consider a Lie algebra ${\mathfrak g}$ and a trajectory of elements $X(\lambda)\in {\mathfrak g}$ specified by some parameter $\lambda$. We write ${\rm Ad}_X$ for the adjoint action of $X$ on ${\mathfrak g}$, ${\rm Ad}_X(Y)=[X,Y]$. We make the assumption that the kernel of ${\rm Ad}_X$ and the image of ${\rm Ad}_X$ do not intersect anywhere along the path, which is guaranteed if $[X,Y]\neq 0$ implies $[X,[X,Y]]\neq 0$. Moreover, we will be interested in smooth trajectories $X(\lambda)$ along which the kernel and image of ${\rm Ad}_X$ vary smoothly. In particular, we will assume their dimensions do not jump.
Crucially, we will make the further assumption\footnote{For finite-dimensional Lie algebras the dimensions of the kernel and the image add up to the total dimension of the Lie algebra. Since they do not intersect, this then implies that the kernel and image of ${\rm Ad}_X$ together span the full Lie algebra. For infinite-dimensional Lie algebras the situation is more complicated, as we explain in Appendix \ref{sec:NonDiagonalization}.} that any $Y$ can be uniquely decomposed as
$Y=K+I$ with $K$ in the kernel and $I$ in the image of ${\rm Ad}_X$. We will call the corresponding projection operators $P_K$ and $P_I$, with the property that
\begin{equation} \label{eq:projectrors}
P_I+P_K = 1~.
\end{equation}
Notice that we are not using an inner product, which means that the projectors are not orthogonal in any sense.
Besides the projectors $P_K$ and $P_I$, we will denote ${\rm Ad}_X$ simply by $A$, and its inverse by $A^{-1}$. Note that $A$ has a kernel, so strictly speaking it is not invertible; but since by assumption $A$ defines a non-degenerate map from the image of ${\rm Ad}_X$
to itself, it does have a well-defined inverse on this subspace. The map $A^{-1}$ is defined to be this inverse on the image and zero everywhere else. These operators then obey the following set of identities:
\begin{eqnarray}\label{eq:identities}
A P_K = P_K A & = & 0~, \\ \label{eq:identity1}
A^{-1} P_K = P_K A^{-1} & = & 0~, \label{eq:identity2}\\
AA^{-1} = A^{-1} A & = & P_I~. \label{eq:identity3}
\end{eqnarray}
We now vary $X$ to $X+\delta X$ by some small change $\delta \lambda$ along the path. In particular, we can use the above identities to express the variations of $P_K$, $P_I$ and $A^{-1}$ in terms of the variation of $A$. After some algebra we find that
\begin{eqnarray} \label{deltaPK}
\delta P_K &=& -\delta P_I = -P_K \delta A A^{-1} P_I - P_I A^{-1} \delta A P_K~, \\
\delta A^{-1} & = & -A^{-1} \delta A A^{-1} + P_I A^{-2} \delta A P_K + P_K \delta A A^{-2} P_I~. \label{deltaAinverse}
\end{eqnarray}
In particular, we used
\begin{equation}
P_K\delta P_I = P_K\delta A^{-1}A P_I~, \quad P_I\delta P_I =P_I A^{-1}\delta A P_K~,
\end{equation}
for deriving Eq.~\eqref{deltaPK} and
\begin{equation}
P_K\delta A^{-1} =\delta P_I A^{-1}~, \quad P_I\delta A^{-1} = A^{-1}\delta P_I -A^{-1}\delta A A^{-1}~,
\end{equation}
for Eq.~\eqref{deltaAinverse}. We also used that $P_I A^{-1}=A^{-1}P_I=A^{-1}$ and $P_I A=AP_I=A$.
Given a variation $\delta X$, we want to express it as
\begin{equation}
\delta X = [S,X] + P_K\delta X~, \label{eq:decomposition}
\end{equation}
where $P_K\delta X$ is in the kernel of ${\rm Ad}_X$. Moreover, we want to remove the modular zero mode from $S$, so that $S$ is uniquely defined.
We do this by requiring that $P_K S = S P_K=0$, and with the above equations it is then easy to see that
\begin{equation}
S=-A^{-1}(\delta X)~.
\end{equation}
We are now going to compute the parallel transport along a small square, by first doing the variation $\delta_1 X$ and then
$\delta_2 X$, and then subtracting the reverse order. For the difference, we get
\begin{equation}
F= (1-(A^{-1}+\delta_1 A^{-1})(\delta_2 X)) (1 - A^{-1}(\delta_1 X)) - (1 \leftrightarrow 2)~.
\end{equation}
The first-order terms vanish, so it is necessary to expand to second order. One term arising
at second order is
\begin{equation}
F_1 = - [A^{-1} (\delta_1 X), A^{-1} (\delta_2 X)]~.
\end{equation}
There is also another term coming from the variations of $A^{-1}$, which evaluates to
\begin{equation} \label{eq:F2}
F_2 = (A^{-1} \delta_1 A A^{-1} -
P_I A^{-2} \delta_1 A P_K - P_K \delta_1 A A^{-2} P_I
)(\delta_2 X) - (1 \leftrightarrow 2)~.
\end{equation}
In order to simplify Eq.~\eqref{eq:F2} further, we need several other identities. For example, multiplying
\begin{equation} \label{eq:identity3-1}
A([Y,Z])=[AY,Z]+[Y,AZ]
\end{equation}
by $A^{-1}$ we get the identity
\begin{equation} \label{eq:identity3-2}
A^{-1}([AY,Z]+[Y,AZ])=P_I([Y,Z])~.
\end{equation}
From this it follows that
\begin{equation} \label{eq:identity4}
A^{-1} [Y,P_K Z] = A^{-1}[P_I Y, P_K Z] = P_I([A^{-1}Y,P_K Z])~,
\end{equation}
where we used Eqs.~\eqref{eq:projectrors}, \eqref{eq:identity1} and \eqref{eq:identity3}.
Next we consider the first term in $F_2$ minus the same term with $1$ and $2$ interchanged. It is given by
\begin{equation}
F_2^1 =A^{-1} \delta_1 A A^{-1}(\delta_2 X) - (1 \leftrightarrow 2)~.
\end{equation}
We use $\delta_1 A Y=[\delta_1 X,Y]$ to rewrite it as
\begin{eqnarray}
F_2^1 & = & A^{-1} ([\delta_1 X,A^{-1}(\delta_2 X)] + [A^{-1}(\delta_1 X),\delta_2 X]) \nonumber \\
& = & A^{-1} ([(AA^{-1}+P_K)\delta_1 X,A^{-1}(\delta_2 X)] + [A^{-1}(\delta_1 X),(AA^{-1}+P_K)\delta_2 X]) \nonumber \\
& = & P_I ([A^{-1}(\delta_1 X),A^{-1}(\delta_2 X)]) + A^{-1}([P_K \delta_1 X,A^{-1}(\delta_2 X)] +
[A^{-1}(\delta_1 X),P_K\delta_2 X])~. \nonumber\\
\end{eqnarray}
In the last equality we make use of Eq.~\eqref{eq:identity3-2}. Applying Eq.~\eqref{eq:identity4} to the last two terms gives
\begin{equation}
F_2^1=P_I ([A^{-1}(\delta_1 X),A^{-1}(\delta_2 X)] + [A^{-2}(\delta_1 X),P_K\delta_2 X]- [A^{-2}(\delta_2 X),P_K \delta_1 X] )~.
\end{equation}
The second term in $F_2$ reads
\begin{eqnarray}
F_2^2 &=& - P_I A^{-2} \delta_1 A P_K(\delta_2 X) + P_I A^{-2} \delta_2 A P_K(\delta_1 X) \nonumber \\
&=&-
A^{-2}([\delta_1 X,P_K \delta_2 X] - [\delta_2 X,P_K \delta_1 X])~.
\end{eqnarray}
Using the identity Eq.~\eqref{eq:identity4} twice it follows that
\begin{equation}
F_2^2 = - P_I
([A^{-2}(\delta_1 X),P_K \delta_2 X] - [A^{-2} (\delta_2 X),P_K \delta_1 X])~.
\end{equation}
The last term to consider is
\begin{eqnarray}
F_2^3 &=&P_K \delta_2 A A^{-2} P_I
(\delta_1 X) - P_K \delta_1 A A^{-2} P_I
(\delta_2 X) \nonumber \\
&=& P_K([\delta_2 X, A^{-2}(\delta_1 X)]-[\delta_1 X, A^{-2}(\delta_2 X)] )~.
\end{eqnarray}
This expression does not admit an obvious simplification. Combining all terms we see that the first term in $F_2^1$ cancels part of $F_1$, the second and third terms in $F_2^1$
cancel against $F_2^2$, so that we are left with a simple and compact expression for the full curvature:
\begin{equation} \label{eq:curv}
F = - P_K( [A^{-1} (\delta_1 X), A^{-1} (\delta_2 X)] + [\delta_1 X, A^{-2}(\delta_2 X)] - [\delta_2 X, A^{-2}(\delta_1 X)])~.
\end{equation}
One can easily check that the curvature commutes with $X$.
Notice that only the $P_I$ components of $\delta X$ contribute to the curvature due to the
observation that
\begin{equation} \label{eq:PKPIPK}
P_K([P_I Y, P_K Z]) = P_K([A A^{-1} Y, P_K Z])=P_K A([A^{-1}Y,P_K Z])=0~,
\end{equation}
where we used Eq.~\eqref{eq:identity3-1}. Moreover, we find that
\begin{eqnarray}
W &=& A^2 ([A^{-2} (\delta_1 X), A^{-2} (\delta_2 X)]) \nonumber \\
&=& 2 [A^{-1} (\delta_1 X), A^{-1} (\delta_2 X)] +
[P_I \delta_1 X, A^{-2}(\delta_2 X)] + [ A^{-2}(\delta_1 X),P_I \delta_2 X]
\end{eqnarray}
is almost the same as Eq.~\eqref{eq:curv}, except for the factor of two, and the appearance of the projector $P_I$. It is obvious that $P_KW=0$ and if we add $P_KW$ to $F$ we can drop the $P_I$ in the resulting expression, as follows from Eq.~\eqref{eq:PKPIPK}. Therefore, the final expression for the curvature reads
\begin{equation}
F = P_K( [A^{-1} (\delta_1 X), A^{-1} (\delta_2 X)] )~.
\end{equation}
The simple form of this result
suggests that there is a shorter derivation and it would be interesting to further investigate this possibility.
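As a toy illustration in a finite-dimensional setting where the decomposition assumption holds, take $\mathfrak{so}(3)$ with base point $X=t_z$: the kernel of ${\rm Ad}_X$ is spanned by $t_z$ itself, and for the deformations $\delta_1X=t_x$, $\delta_2X=t_y$ the formula gives $F=t_z$, the familiar Berry curvature of a spin. A numerical sketch (our construction):

\begin{verbatim}
import numpy as np

# so(3) basis (T_i)_{jk} = -eps_{ijk}, so that [T_i, T_j] = eps_{ijk} T_k
def T(i):
    M = np.zeros((3, 3))
    j, k = (i + 1) % 3, (i + 2) % 3
    M[j, k], M[k, j] = -1.0, 1.0
    return M

bracket = lambda X, Y: X @ Y - Y @ X

X = T(2)                      # base point; ker(Ad_X) is spanned by T(2)
d1, d2 = T(0), T(1)           # deformations in the image of Ad_X

# Ad_X maps T(0) -> T(1) and T(1) -> -T(0) on the image; invert by hand
Ainv_d1, Ainv_d2 = -T(1), T(0)
assert np.allclose(bracket(X, Ainv_d1), d1)
assert np.allclose(bracket(X, Ainv_d2), d2)

F = bracket(Ainv_d1, Ainv_d2)   # lies in the kernel, so P_K acts trivially
print(np.allclose(F, X))        # True: F = T(2)
\end{verbatim}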
\section{Non-diagonalization for Virasoro}\label{sec:NonDiagonalization}
There are subtleties in expressing a Virasoro generator $X$ as $X=X_0+[H_{\rm mod},Y]$ with $X_0$ a zero mode of the modular Hamiltonian $H_{\rm mod}$ in the Virasoro algebra. We will give here a summary of why the assumed decomposition, Eq.~\eqref{eq:decomposition}, used to derive the curvature cannot be applied to the full Virasoro algebra, and hence why we have chosen to restrict to a different set of transformations.
We will first be more precise about the notion of `generator.' A generator of Diff($S^1$) can be expressed as
\begin{equation} \label{eq:generator}
X=\sum_n c_n L_n~,
\end{equation}
where the modes $L_n$ satisfy the Virasoro algebra, Eq.~\eqref{eq:virasoroalgebra}.
We can equivalently represent $X$ as a function on $S^1$, $f(\theta)=\sum c_n e^{in\theta}$, or as a vector field,
$\xi = \sum c_n z^{n+1} \partial_z$ in radial quantization. For the arguments we are interested in, the central charge can be considered separately; see Section~\ref{sec:extension}.
One can ask what values of $c_n$ are allowed in Eq.~\eqref{eq:generator}. This leads to different `definitions' of the Virasoro algebra. Some choices that are preserved under commutation are:
\begin{itemize}
\item algebraic: require only a finite number of the $c_n$ to be non-zero~,
\item semi-algebraic: require that $c_n=0$ for $n$ sufficiently negative (alternatively, one could require $c_n=0$ for $n$ sufficiently positive)~,
\item analytic: require the function $f$ or vector field $\xi$ to be smooth~.
\end{itemize}
In the case where the generators are self-adjoint, the semi-algebraic choice reduces to the algebraic one.
For each of these choices of infinite-dimensional Lie algebras, we can ask to what extent the statement that any generator $X$ can be written
as $X=X_0+[H_{\rm mod},Y]$ with $X_0$ a zero mode of the modular Hamiltonian $H_{\rm mod}$ holds.
\subsection{Algebraic and semi-algebraic case}
In the algebraic case, one can prove that the only algebra element that commutes with $H_{\rm mod}$ is $H_{\rm mod}$ itself. First, recall that
\begin{equation} H_{\rm mod}=\pi(L_1+L_{-1})~.\end{equation}
Now consider elements with only a finite number of non-zero $c_n$, running from $n=-L,...,K$, with $K$ and $L$ positive. Then, the commutator
\begin{equation}
[H_{\rm mod},\sum_{n=-L}^{K} c_n L_n] = \sum_{n=-L-1}^{K+1} c'_n L_n~
\end{equation}
maps a vector space of dimension $K+L+1$ into a vector space of dimension $K+L+3$. Its kernel is one-dimensional, so its cokernel
must be three-dimensional. Therefore, the space of generators which can be written as $[H_{\rm mod},X]$ has codimension three. In fact, one can write every generator as
\begin{equation} \label{j1}
X = aH_{\rm mod} + bL_2 + cL_{-2} + [H_{\rm mod},Y]~,
\end{equation}
for some $a,b,c$, which can be seen iteratively by taking a suitable $Y$ with $L=K=1$ and combining $H_{\rm mod}, L_2, L_{-2}$ to isolate $L_0$, then taking a suitable $Y$ with $L=1,K=2$ combined with all the previous generators to isolate $L_3$, and so on and so forth. Crucially, this decomposition is not unique. For instance, we could have equally well written a similar decomposition with $L_3,L_{-3}$ instead of $L_2,L_{-2}$.
To solve
\begin{equation}
L_{-2} = [H_{\rm mod},Y]~,
\end{equation}
it is necessary to express $Y$ as an infinite series $Y=\sum_{k\leq -3} c_k L_k$, which is not part of the algebra:
\begin{equation}
Y= \frac{1}{4} L_{-3} - \frac{2}{4\cdot 6} L_{-5} + \frac{2}{6 \cdot 8} L_{-7} - \frac{2}{8 \cdot 10}L_{-9} + \ldots
\end{equation}
If we denote by $Y_k$ the sum of the first $k$ terms which truncates at $L_{-2k-1}$, then we have
\begin{equation}
\frac{1}{\pi}[H_{\rm mod},Y_k]=L_{-2} +\frac{(-1)^{k+1}}{k+1} L_{-2k-2}~,
\end{equation}
so that for large $k$ this becomes `close' to $L_{-2}$. We can introduce a metric so that this notion of closeness becomes more precise, e.g.,
\begin{equation}
|| \sum_n c_n L_n ||^2 \equiv \sum_n |c_n|^2\label{eq:norm}
\end{equation}
defines a metric on the Lie algebra. But the Lie algebra is not complete with respect to this metric, i.e., limits of Lie algebra elements
which converge in this norm will not in general converge to an element of the Lie algebra.
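The truncation identity above is straightforward to verify with the structure constants $[L_m,L_n]=(m-n)L_{m+n}$; the following \texttt{sympy} sketch encodes the series coefficients as $c_j=(-1)^{j+1}/(2j(j+1))$ for the mode $L_{-2j-1}$ (our indexing):

\begin{verbatim}
import sympy as sp

def ad_H(Y):
    """(1/pi)[H_mod, Y] for Y = {n: c_n}, with [L_m, L_n] = (m-n) L_{m+n}."""
    out = {}
    for n, cn in Y.items():
        for m in (-1, 1):                  # H_mod = pi (L_{-1} + L_1)
            out[m + n] = out.get(m + n, sp.S(0)) + (m - n)*cn
    return {k: v for k, v in out.items() if v != 0}

for k in range(1, 6):
    Yk = {-(2*j + 1): sp.Rational((-1)**(j + 1), 2*j*(j + 1))
          for j in range(1, k + 1)}
    print(k, ad_H(Yk))   # {-2: 1, -2k-2: (-1)**(k+1)/(k+1)}
\end{verbatim}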
Even ignoring the fact that the algebra is not complete with respect to Eq.~\eqref{eq:norm}, there is the additional issue that this way of interpreting $L_{-2}$ as the commutator of $H_{\rm mod}$ with an element $Y$ outside the algebra is too permissive. Indeed, we can also find an infinite series $Y$ obeying
\begin{equation}
[H_{\rm mod},Y]=H_{\rm mod}~,
\end{equation}
which looks like
\begin{equation}
Y=\ldots + c_6 L_6 + c_4 L_4 + c_2 L_2 + c_{-2} L_{-2} + c_{-4} L_{-4} + c_{-6} L_{-6} +\ldots
\end{equation}
This also has the property that if one truncates $Y$, the $Y_k$ obeys $[H_{\rm mod},Y_k]=H_{\rm mod}+Z_k$, with $Z_k$ small defined with respect to the above norm. This would not allow for a decomposition separating out the zero mode part from the image of the adjoint action without intersection.
Notice that considering the semi-algebraic rather than algebraic case also does not fix the issue. A semi-infinite series in one direction can either remove $L_2$ or $L_{-2}$
from the expression Eq.~(\ref{j1}), but not both.
\subsection{Analytic case}
In the analytic case, the equation $[H_{\rm mod},X]=Y$ is the differential equation
\begin{equation}\label{eq:analyticdiffeq}
(1+z^2)X'(z) -2z X(z) = -\frac{1}{\pi}Y(z)~,
\end{equation}
where we replaced everything by the corresponding smooth function. This differential equation is equivalent to
\begin{equation}
\frac{d}{dz} \left( \frac{X(z)}{1+z^2} \right) = -\frac{1}{\pi}\frac{Y(z)}{(1+z^2)^2}~.
\end{equation}
Therefore,
\begin{equation} \label{eq:X(z)}
X(z) = -\frac{c_0}{2}(1+z^2) -\frac{1}{\pi}(1+z^2) \int^z \frac{Y(z')}{(1+z'^2)^2}\, dz'~,
\end{equation}
where $c_0$ is an integration constant, and the integration is along the circle. The differential equation does not have an analytic solution for all $Y(z)$. In fact, we will argue that in order to find an analytic solution we require three conditions on $Y$, so that once again the space of smooth vector fields which can be written as $[H_{\rm mod},X]$ has codimension three.
The first two conditions come from exploring the behavior of the integrand near $z=\pm i$, where we find that there will be logarithmic branch cut singularities unless the residues at $z=\pm i$ vanish. Thus, the first two conditions on $Y(z)$ for Eq.~\eqref{eq:X(z)} to be analytic are
\begin{equation}\label{eq:vanishresidue}
{\rm Res}_{z=\pm i} \, \frac{Y(z)}{(1+z^2)^2} =0~.
\end{equation}
Note that it is admissible for $Y(z)/(1+z^2)^2$ to have a double pole at $z=\pm i$, as these integrate
to a single pole, which is then canceled by the $(1+z^2)$ prefactor in Eq.~\eqref{eq:X(z)}. Therefore, the double poles do not give rise to singularities.
There is also another condition, namely that the contour integral of $X'(z)$ around the unit circle vanishes so that we get a periodic
function $X(z)$ after integration. Since polynomials in $z$ are automatically periodic, it suffices to consider the behavior of the integrand, $Y(z)/(1+z^2)^2$. Assuming that $Y(z)$ is analytic except possibly at $z=0$, this amounts to the condition
\begin{equation} \label{eq:third condition}
{\rm Res}_{z=0} \, \frac{Y(z)}{(1+z^2)^2} =0~.
\end{equation}
Note that poles near $z=\pm i$ do not affect the periodicity so we can subtract them before applying this condition if necessary, and we also assume the residues vanish as above, so that we have a well-defined integral.
To see how this works in practice, it is useful to evaluate this for a trial function $Y$ inspired by the algebraic case:
\begin{equation}
Y_0 = a(1+z^2) + b z^{-1} + c z^3~,
\end{equation}
which contains $L_2$, $L_{-2}$ and $H_{\rm mod}$. We notice that
\begin{align}
\frac{Y_0}{(1+z^2)^2} &= \frac{i(b+c)}{4(z-i)^2} +\frac{-b-ia+c}{2(z-i)} + \ldots~, \\
\frac{Y_0}{(1+z^2)^2} &= \frac{-i(b+c)}{4(z+i)^2} +\frac{-b+ia+c}{2(z+i)} + \ldots
\end{align}
near $z=\pm i$ respectively. The residue of $Y_0/(1+z^2)^2$ at $z=i$ equals $-(iY_0(i)+Y_0'(i))/4$ and the residue at $z=-i$ equals
$(iY_0(-i)-Y_0'(-i))/4$, and these are required to vanish by Eq.~\eqref{eq:vanishresidue}. This translates to $b=c$ and $a=0$. Recall that the differential equation, Eq.~\eqref{eq:analyticdiffeq}, extracts the non-zero mode part, i.e., the vector fields that can be written as $[H_{\rm mod}, X]$. We could also ask how to extract the zero mode part. In this case the most natural choice seems to be to extract $a$, which is given by the difference of the two residues, as the coefficient of the zero mode.
Even in the case $b=c$ and $a=0$ with vanishing residues, we see that $X$ will now involve a term $(1+z^2)\log z$ since $Y=z^{-1}+z^3=(z^2+1)^2z^{-1}-2z$. This still has a branch cut singularity, and therefore will not be single-valued. This is where a version of the third condition, Eq.~\eqref{eq:third condition}, is necessary. To be more precise about this requirement, take a finite polynomial in $z,z^{-1}$ for $Y$.
We first subtract the harmless double poles and the harmful single poles (which we require to vanish independently) so that we get
an expression of the type
\begin{equation} \label{j3}
Z(z)\equiv\frac{Y(z) - A - Bz -Cz^2- Dz^3}{(1+z^2)^2}~,
\end{equation}
where the coefficients $A,B,C,D$ are chosen so as to cancel the single and double poles. To accomplish this, it is necessary for an overall factor $(1+z^2)^2$ to factor out of the numerator. The choice of coefficients can then be determined by the requirement that the numerator of $Z$ and its derivative both vanish at $z=\pm i$.
Explicitly, they are given by
\begin{eqnarray} \label{j4}
A & = & \frac{1}{4}(2Y(-i)+2Y(i)+iY'(-i)-iY'(i))~, \\
B & = & \frac{1}{4}(3iY(-i)-3iY(i)-Y'(-i)-Y'(i))~, \\
C & = & \frac{i}{4}(Y'(-i)-Y'(i))~, \\
D & = & \frac{1}{4}(iY(-i)-iY(i)-Y'(-i)-Y'(i))~. \label{eq:D}
\end{eqnarray}
With this choice of coefficients the expression, Eq.~\eqref{j3}, is now
well-behaved everywhere, i.e., the numerator has a factor $(1+z^2)^2$, and the quotient is also a finite polynomial in $z$
and $z^{-1}$. The only problematic contribution to the integral is coming from the $z^{-1}$ term which does not become a periodic function
when integrated. So the remaining number is the coefficient in front of $z^{-1}$ in the polynomial $Z(z)$ in Eq.~(\ref{j3}).
We denote by $Y_-$ the terms in $Y$ with a negative power of $z$. The non-negative powers in $Y$ only give rise to non-negative powers
in $Z$ and are never problematic. So we can equivalently consider
\begin{equation} \label{j5}
Z_-(z)\equiv\frac{Y_-(z) - A - Bz -Cz^2- D z^3}{(1+z^2)^2}~,
\end{equation}
and we are interested in the coefficient in front of $z^{-1}$ in $Z_-(z)$. We can extract this using a small contour integral.
But we might as well extract it using a large contour integral as $Z_-$ is analytic everywhere except at $0$ and $\infty$. Then the integral is dominated by $D$, so it is necessary that $D=0$ for the integral to be single-valued. In fact, $D$ is equal to the sum of the residues at $z=i$ and $z=-i$, as can be seen from Eq.~\eqref{eq:D}, so this version of the third condition with the double poles subtracted out reduces to
\begin{equation}
{\rm Res}_{z=i} \frac{Y_-}{(1+z^2)^2} + {\rm Res}_{z=-i} \frac{Y_-}{(1+z^2)^2} =0~.
\end{equation}
Since the residues of the complete $Y/(1+z^2)^2$ have to vanish separately, we could equivalently require the same
condition for $Y_+$.
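The pole subtractions can be checked in the same way. The sketch below (illustrative only, with a generic Laurent polynomial standing in for $Y$) verifies that the coefficients in Eqs.~\eqref{j4}--\eqref{eq:D} make the numerator of $Z$ and its derivative vanish at $z=\pm i$, and that $D$ equals the sum of the two residues:
\begin{verbatim}
from sympy import symbols, I, diff, residue, simplify

z = symbols('z')
ys = symbols('y:7')
Y = sum(yk * z**(k - 3) for k, yk in enumerate(ys))  # powers z^-3 .. z^3

Yi, Ymi = Y.subs(z, I), Y.subs(z, -I)
Ypi, Ypmi = diff(Y, z).subs(z, I), diff(Y, z).subs(z, -I)

A = (2*Ymi + 2*Yi + I*Ypmi - I*Ypi) / 4
B = (3*I*Ymi - 3*I*Yi - Ypmi - Ypi) / 4
C = I*(Ypmi - Ypi) / 4
D = (I*Ymi - I*Yi - Ypmi - Ypi) / 4

N = Y - A - B*z - C*z**2 - D*z**3
print([simplify(e) for e in
       [N.subs(z, I), N.subs(z, -I),
        diff(N, z).subs(z, I), diff(N, z).subs(z, -I)]])  # [0, 0, 0, 0]

res_sum = residue(Y/(1 + z**2)**2, z, I) \
        + residue(Y/(1 + z**2)**2, z, -I)
print(simplify(D - res_sum))                              # 0
\end{verbatim}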
For more general non-polynomial $Y$, we can apply the same argument, except that now $Y_-$ is analytic outside the unit disk and
$Y_+$ is analytic inside the unit disk. By the version of the Riemann-Hilbert problem that applies to simple closed curves, a decomposition of analytic functions on the circle of the type $Y_- + Y_+$ exists.
\subsection{Issues from non-diagonalization}
In this subsection, we will show that the ambiguities in the diagonalization of the Virasoro algebra with respect to the adjoint action translate to ambiguities in the projection operator. This leads to different answers for the Berry curvature that are physically inequivalent. In this case, we want to construct a zero-mode projector $P_0$ so that it evaluates to zero for the integrand of Eq.~\eqref{eq:nonzeromode1}, while it gives a non-zero value for Eq.~\eqref{eq:Hmodplane}. We emphasize that this is a different projection operator than the one considered in Section~\ref{sec:transport}, in particular it is finite rather than a delta function. A contour integral prescription can be devised in such a way as to satisfy the properties:
\begin{itemize}
\item The functional is non-zero on the modular Hamiltonian, i.e., $P_0\left(H_{\rm mod}\right)=1$~,
\item It projects out the commutator of the modular Hamiltonian with anything else, i.e., $P_0\left([H_{\rm mod},X_{\xi}]\right)=0~$, for any vector field $\xi(z)$~.
\end{itemize}
There are several different choices that obey both of these properties:
\begin{align}
P^{(1)}_0(X_{\xi})&\equiv-\frac{1}{\pi^2}\int_{|z+i\epsilon|=1}\frac{\xi(z)}{(1+z^2)^2}\,dz~,\\
P^{(2)}_0(X_{\xi})&\equiv\frac{1}{\pi^2}\int_{|z-i\epsilon|=1}\frac{\xi(z)}{(1+z^2)^2}\,dz~,\\
P^{(3)}_0(X_{\xi})&\equiv \frac{1}{2}\left(P^{(1)}_0(X_{\xi})+P^{(2)}_0(X_{\xi})\right)~. \label{eq:projop}
\end{align}
By explicitly computing the residues, one can express these in terms of the diffeomorphism $\xi$ and its derivative evaluated at the endpoints of the interval as
\begin{align} \label{eq:residue}
P^{(1)}_0(X_{\xi})&=\frac{1}{2\pi}\left[\xi(-i)+i\xi'(-i)\right]~,\\
P^{(2)}_0(X_{\xi})&=\frac{1}{2\pi}\left[\xi(i)-i\xi'(i)\right]~,\\
P^{(3)}_0(X_{\xi})&=\frac{1}{4\pi}\left[i\,\xi'(-i)-i\,\xi'(i)+\xi(-i)+\xi(i)\right]~.\label{eq:projop2}
\end{align}
Note that the sum of the two contour integrals, corresponding to $P_0^{(2)}-P_0^{(1)}$, does not satisfy the required properties, as it vanishes on the modular Hamiltonian. The difference of the contours, Eqs.~\eqref{eq:projop} and \eqref{eq:projop2}, is perhaps the most symmetrical choice. It can be seen to result from the decomposition, Eq.~\eqref{j1}, by additionally imposing that the linear functional evaluated on the extra terms $L_2,L_{-2}$ in the decomposition give zero. However, recall that this decomposition was not unique. A different choice would have resulted in a different linear functional, and therefore a different $P_0$.
Moreover, we have considered the possibility of defining a zero mode projector $P_0$ using very early or very late time modular flow. However, we found that this prescription is also ambiguous and depends on whether one considers very early or very late times.
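As a cross-check of the endpoint expressions, Eqs.~\eqref{eq:residue}--\eqref{eq:projop2}, the contour integrals can be evaluated by residues for a vector field that is regular at $z=0$ (otherwise the residue at the origin, cf.~Eq.~\eqref{eq:third condition}, contributes separately). The following \texttt{sympy} sketch, an illustration of ours, also confirms the two defining properties for $P^{(1)}_0$:
\begin{verbatim}
from sympy import symbols, I, pi, residue, diff, simplify

z = symbols('z')
cs = symbols('a:5')
xi = sum(ak * z**k for k, ak in enumerate(cs))  # polynomial, regular at z = 0
f = xi / (1 + z**2)**2

# |z + i*eps| = 1 encloses only z = -i;  |z - i*eps| = 1 encloses only z = +i
P1 = simplify(-(1/pi**2) * 2*pi*I * residue(f, z, -I))
P2 = simplify((1/pi**2) * 2*pi*I * residue(f, z, I))

xp = diff(xi, z)
print(simplify(P1 - (xi.subs(z, -I) + I*xp.subs(z, -I))/(2*pi)))  # 0
print(simplify(P2 - (xi.subs(z, I) - I*xp.subs(z, I))/(2*pi)))    # 0

# P0(H_mod) = 1 for xi = pi*(1 + z^2) ...
print(P1.subs(dict(zip(cs, (pi, 0, pi, 0, 0)))))                  # 1

# ... and P0([H_mod, X_xi]) = 0, in the bracket convention of the text
xiH = pi*(1 + z**2)
eta = diff(xiH, z)*xi - xiH*diff(xi, z)
print(simplify((eta.subs(z, -I)
                + I*diff(eta, z).subs(z, -I))/(2*pi)))            # 0
\end{verbatim}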
\begin{figure}[t!]
\centerline{\includegraphics[scale=.5]{contour.png}}
\caption{One simple choice of linear functional, constructed from the difference of $|z-i\epsilon|=1$ and $|z+i\epsilon|=1$ contours. When considering a non-restricted set of generators, there is an ambiguity in the choice of projection. For instance, it is also possible to choose either of these contours separately (but not their sum) and still satisfy the required properties for the linear functional. This ambiguity is tied to the fact that the adjoint action is not diagonalizable over the Virasoro algebra.}
\label{fig:contour}
\end{figure}
It is also easy to see that this has a direct physical implication by leading to different results for the curvature. For instance, consider an infinitesimal diffeomorphism of the form
\begin{equation}\label{conformal transformation}
\theta\rightarrow \theta +2\epsilon \sin{(m \theta)}~,
\end{equation}
where $m \in \mathbb{Z}$. The parameter $\epsilon$ is assumed to be small and dimensionless.
One can consider a parallel transport process consisting of a series of such infinitesimal transformations, where $m$ can vary from step to step. It is described by a function $m(\lambda)$, where $\lambda$ denotes the point along the path evaluated in the continuum limit.
Mapping from the cylinder to the plane using Eq.~\eqref{conformal transformation} and expanding to first order in $\epsilon$, this sinusoidal perturbation becomes
\begin{align}
\xi(z) =z+\epsilon(z^{m+1}-z^{-m+1})+\mathcal{O}(\epsilon^2)~. \label{conformal transformation 2}
\end{align}
Up to terms that are higher order in $\epsilon$, Eq.~\eqref{conformal transformation 2} can be inverted to $z = \xi - \epsilon(\xi^{m+1}-\xi^{-m+1}) + \mathcal{O}(\epsilon^2)$. Inserting this in Eq.~\eqref{eq:Hmodplane} for $H_{\rm mod}$, we find the correction to the modular Hamiltonian:
\begin{equation}
H^{(1)}=\pi\left[(m+1)(L_{-m+1}+ L_{m-1}) +(m-1)(L_{-m-1}+L_{m+1})\right]~.
\end{equation}
Recall that expanding the parallel transport equation $\delta H_{\rm mod}=[S,H_{\rm mod}]$ order by order in $\epsilon$ gave Eq.~\eqref{eq:transportfirst}.
Solving for the correction to the parallel transport operator gives
\begin{equation}
S^{(1)}= L_m-L_{-m}~, \indent S^{(0)}=0~.
\end{equation}
Take two transformations of the form Eq. \eqref{conformal transformation 2} with different values for the integer $m$, say $m_1$ and $m_2$. This gives two different parallel transport operators, $S_1$ and $S_2$. To compute the curvature, Eq.~\eqref{eq:berrycurvature}, we are interested in computing the commutator
\begin{equation} \label{eq:commutator}
[S^{(1)}_1-\kappa_1H^{(0)},S^{(1)}_2-\kappa_2H^{(0)}]~,
\end{equation}
where $\kappa_i=P_0(S_i)$ is the zero mode coefficient of the parallel transport operator $S_i$. We can split Eq.~\eqref{eq:commutator} into terms that we can treat separately. Notice that the term proportional to $[H^{(0)},H^{(0)}]$ is zero and can be neglected. By definition, the projection operator vanishes on $[S^{(1)}_i,H^{(0)}]$, so this contribution to the curvature is zero. An explicit computation shows that
\begin{equation}
[S^{(1)}_1,S^{(1)}_2]= (m_1-m_2)(L_{m_1+m_2}-L_{-m_1-m_2})+(m_1+m_2)(L_{-m_1+m_2}-L_{m_1-m_2})~.
\end{equation}
We will now project onto the zero modes of each of the terms. This is where the ambiguity enters since the result depends on the choice of linear functional. We find
\begin{align}
F^{(1)} = P^{(1)}_0([S^{(1)}_1,S^{(1)}_2])&= \frac{2i}{\pi}(m_2^2-m_1^2)\sin\left(\frac{m_1\pi}{2}\right)\sin\left(\frac{m_2\pi}{2}\right)~,\\
F^{(2)} = P^{(2)}_0([S^{(1)}_1,S^{(1)}_2])&= -F^{(1)}~,\\
F^{(3)} = P^{(3)}_0([S^{(1)}_1,S^{(1)}_2])&= 0~.
\end{align}
Notice that when $m_1$ and $m_2$ are both even, all curvatures agree and in fact identically vanish. Indeed, it is possible to argue that the curvature defined in this way always vanishes for diffeomorphisms that vanish at the interval endpoints. However, in general they do not agree and the result is ambiguous.
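These projections are easy to evaluate explicitly for sample mode numbers. The sketch below (our illustration; the choice $m_1=1$, $m_2=3$ is arbitrary) computes $F^{(1)}$ from the endpoint form of $P^{(1)}_0$ and matches the closed-form expression above:
\begin{verbatim}
from sympy import symbols, I, pi, sin, diff, simplify

z = symbols('z')
m1, m2 = 1, 3   # sample (odd) mode numbers

def bracket(f, g):
    # vector-field bracket in the conventions of the text: (f'g - f g') d/dz
    return diff(f, z)*g - f*diff(g, z)

def P1(xi):
    # endpoint form of P_0^{(1)}
    return (xi.subs(z, -I) + I*diff(xi, z).subs(z, -I)) / (2*pi)

S1 = z**(m1 + 1) - z**(-m1 + 1)   # vector field of S_1^{(1)} = L_{m1} - L_{-m1}
S2 = z**(m2 + 1) - z**(-m2 + 1)

F1 = simplify(P1(bracket(S1, S2)))
print(F1)                          # -16*I/pi
print(simplify(F1 - 2*I/pi*(m2**2 - m1**2)
               * sin(m1*pi/2)*sin(m2*pi/2)))   # 0
\end{verbatim}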
\section{Geometric Berry phases}\label{sec:berry}
Geometric phases can arise in quantum mechanics when a Hamiltonian depends continuously on certain parameters, such as an external magnetic field. This results in a state that differs from the starting state by a phase under a closed path in parameter space. Several generalizations of this notion have recently arisen in studies of conformal field theory and holography, relying for instance on the fact that entanglement can act as a connection that relates the Hilbert spaces of different subsystems.
The applications to holography utilize group-based generalizations of the familiar geometric phases of quantum mechanics. In this section, we will review the tools that are relevant, making a distinction between two different approaches for group-based parallel transport depending on whether it is applied to states (a Schr\"{o}dinger-type picture) or density matrices (a Heisenberg approach). Before moving on to new results, we give some examples of how these different approaches have so far been applied to holography.
\subsection{States}\label{sec:states}
We begin by describing the parallel transport of states that transform under a unitary representation of a group (see~\cite{Oblak:2017ect} for applications to the Virasoro group). The basic idea is to generalize beyond a path in a space of parameters, as in quantum mechanics, to a path in a group representation. A gauge connection can be defined relating different tangent spaces along the path. If some unitaries in the representation act trivially on a starting state, this constitutes a redundancy by which the state may not return to itself under a closed path through the group manifold.
Specifically, consider a group $G$ with Lie algebra $\mathfrak{g}$, and a unitary representation $\mathcal{D}$ which acts on a Hilbert space $\mathcal{H}$. Take a state $\ket{\phi}\in\mathcal{H}$ that is an eigenstate of all elements in a `stabilizer' subalgebra $\mathfrak{h}\subset \mathfrak{g}$, or equivalently it is left invariant up to a phase under the action of the corresponding subgroup $H\subset G$. Let $U(\gamma(t))\in \mathcal{D}$ with $\gamma(t)\in G$, $t\in [0,T]$ be a continuous path through this representation, which corresponds to a continuous path of states $\ket{\phi(t)} = U(\gamma(t))\ket{\phi}$. The states $\ket{\phi(t)}$ for all $\gamma(t)$ are often called generalized coherent states, and they parametrize the coset space $G/H$~\cite{Perelomov:1986tf,Yaffe:1981vf}.
The Berry connection is defined as
\begin{equation} A = i \bra{\phi(t)} d \ket{\phi(t)} = i \bra{\phi} U^{-1} d U \ket{\phi}~,\end{equation}
where $d$ is the exterior derivative on the group manifold, and we have used $U^\dagger = U^{-1}$ since the representation is unitary. The connection is just $A = i \bra{\phi} \mathcal{D}(\Theta) \ket{\phi}$ with $\Theta$ the Maurer-Cartan form associated to the group, $\Theta(\dot{\gamma}(t)) = \left.\frac{d}{d\tau}\right|_{\tau=t}[\gamma(t)^{-1} \gamma(\tau)]$. Under action by an element of the stabilizer subgroup, the state changes by a phase $\ket{\phi(t)} \rightarrow e^{i \alpha} \ket{\phi(t)}$. The connection then transforms as a gauge field, $A \rightarrow A -d\alpha$.
The associated Berry curvature is
\begin{equation} F = dA~, \end{equation}
and the geometric phase is defined as
\begin{equation} \theta(\gamma) = \int_\gamma A~.\end{equation}
This phase is in general gauge dependent, but is gauge invariant when the path $\gamma$ is closed. In this case, we can write
\begin{equation} \theta(\gamma) = \oint_\gamma A = \int_{B \,|\, \partial B = \gamma} F~,\end{equation}
where in the second equality we have used Stokes' theorem to convert this to the flux of the Berry curvature through any surface $B$ with boundary $\gamma$. This measures the phase picked up by the state $\ket{\phi}$ under a closed trajectory through the group representation.
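The familiar spin Berry phase illustrates the construction. In the following numerical sketch (ours, purely illustrative), a spin-$\tfrac{1}{2}$ coherent state is transported around a circle of constant polar angle $\theta_0$; the gauge-invariant product of nearest-neighbor overlaps recovers the standard result $\theta(\gamma)=-2\pi\sin^2(\theta_0/2)$, minus half the enclosed solid angle:
\begin{verbatim}
import numpy as np

def coherent(theta, phi):
    # spin-1/2 coherent state |theta, phi>
    return np.array([np.cos(theta/2), np.exp(1j*phi)*np.sin(theta/2)])

theta0 = 1.0
phis = np.linspace(0.0, 2*np.pi, 4001)
states = np.array([coherent(theta0, p) for p in phis])

# gauge-invariant discrete holonomy from nearest-neighbor overlaps
overlaps = np.einsum('ki,ki->k', states[:-1].conj(), states[1:])
berry_phase = -np.angle(np.prod(overlaps))

print(berry_phase)                      # approximately -1.444
print(-2*np.pi*np.sin(theta0/2)**2)     # minus half the solid angle
\end{verbatim}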
Similar techniques are relevant in the study of Nielsen complexity, which describes the geometry of the space of states related by unitaries, starting from a given reference state. A specific path through unitaries is known as a `circuit.' In conformal field theory, one can choose a reference state such as a primary that is invariant under a subset of the conformal symmetry. Defining the complexity further requires a notion of distance between states. Certain choices have relations to the Berry connection or curvature of state-based parallel transport~\cite{Caputa:2018kdj,Bueno:2019ajd,Erdmenger:2020sup,Flory:2020eot,Flory:2020dja,Chagnet:2021uvi} (for the application of similar mathematical structures to a description of other definitions for complexity, see~\cite{Caputa:2021sib}).
\begin{figure}[t!]
\centerline{\includegraphics[scale=.3]{geodesicplot.png}}
\caption{(a) Kinematic space can be defined as the space of pairs of spacelike separated points in a CFT, which are in correspondence with bulk minimal area spacelike geodesics ending on these points. The blue curve is one such geodesic, in the special case that the endpoints lie on the same time slice. (b) The parallel transport of operators in kinematic space can be related to lengths in the bulk AdS spacetime. Depicted here is a constant time slice of anti-de Sitter spacetime. Pairs of points on the boundary define bulk geodesics (blue, solid curves). As the interval position is varied, these trace out an envelope in the bulk (dashed purple circle). The length of this envelope is directly related to the Berry phase associated to the boundary parallel transport of bilocal operators evaluated at the endpoints~\cite{Czech:2017zfq}.}
\label{fig:geodesicplot}
\end{figure}
Another application arises in a subfield of holography known as `kinematic space,' which studies the geometric properties of the space of spacelike pairs of points in a CFT$_d$ and their role in probing the geometry of the bulk anti-de Sitter (AdS) spacetime~\cite{Czech:2015qta,deBoer:2015kda,Czech:2016xec,deBoer:2016pqk}. It was demonstrated that certain bilocal operators in a CFT pick up phases under a parallel transport that displaces the location of the spacelike points where they are evaluated. In the bulk AdS spacetime this was shown to compute the length of a curve traced out by geodesics limiting to these point pairs on the boundary (see Figure~\ref{fig:geodesicplot})~\cite{Czech:2017zfq}. As we show in Appendix~\ref{app:kinematic}, these results for kinematic space can be understood using the language of state-based parallel transport.
\subsection{Density matrices} \label{sec:density}
Consider a subregion $A$ on a time slice of a CFT. Associated to this region is an algebra of operators $\mathcal A_A$. Assuming some short distance cutoff, the state is described by a reduced density matrix $\rho_A$, obtained from tracing the full state over the complement $\bar{A}$ of $A$. From this we can define the modular Hamiltonian $H_{\rm mod}$ through $\rho_A = e^{- H_{\rm mod}}/(\mbox{tr } e^{- H_{\rm mod}})$. The modular Hamiltonian encodes information about the entanglement properties of the state. It will be formally useful to refer to the `complete' modular Hamiltonian $H_{\rm mod, A} - H_{\rm mod, \bar{A}}$. We will often drop the subscript $A$, and additionally allow the modular Hamiltonian to depend on some parameter $H_{\rm mod}(\lambda)$. This could for instance encode changes in the size of region $A$ as was studied in~\cite{Czech:2017zfq, Czech:2019vih}, or a change of state as we describe in the next section.
The physical data associated to $A$ is not the set of operators in $\mathcal{A}_A$, but rather their expectation values. As such, there can be symmetries, i.e., transformations which act on the algebra while leaving no imprint on measurable quantities. We define a \emph{modular zero mode} $Q_i$ as a Hermitian operator that commutes with the modular Hamiltonian,
\begin{equation} [Q_i,H_{\rm mod}] = 0~. \end{equation}
The modular zero mode can be exponentiated to the unitary
\begin{equation} V = e^{-i \sum_i s_i Q_i}~. \label{eq:V}\end{equation}
Under the flow $\mathcal O \rightarrow V^\dagger \mathcal O V$, the expectation values of algebra elements are left unchanged while taking the algebra to itself. The transformation by modular zero modes therefore constitutes a kind of gauge redundancy.
Given an operator, it is often useful to separate the zero mode part out from a contribution that is non-ambiguous. In the finite-dimensional case, we can compute the zero mode contribution by using the projection operator
\begin{equation} P_0[\mathcal O] = \sum_{E, q_i, q_i'} \ket{E, q_i} \bra{E, q_i} \mathcal O \ket{E, q_i'}\bra{E,q_i'}~, \label{eq:projection} \end{equation}
where $\ket{E,q_i}$ are simultaneous eigenstates of $H_{\rm mod}$ and $Q_i$. Note that later we will be working with an infinite-dimensional algebra, where this formula no longer applies. We will show how to define an appropriate projection relevant for that situation in Section~\ref{sec:transport}.
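For concreteness, the following finite-dimensional sketch (an illustration of ours, with a randomly chosen Hermitian test operator) implements Eq.~\eqref{eq:projection} as the block-diagonal truncation in the energy eigenbasis, and checks that it is a projector whose image commutes with $H_{\rm mod}$ and which annihilates commutators $[H_{\rm mod},\mathcal{O}]$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# toy modular Hamiltonian with a degenerate spectrum
E = np.diag([0.0, 1.0, 1.0, 2.0])
U0 = np.linalg.qr(rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4)))[0]
H = U0 @ E @ U0.conj().T

def P0(O, H, tol=1e-8):
    """Keep only the blocks of O that connect equal energies of H."""
    w, U = np.linalg.eigh(H)
    Od = U.conj().T @ O @ U
    mask = np.abs(w[:, None] - w[None, :]) < tol
    return U @ (Od * mask) @ U.conj().T

O = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))
O = (O + O.conj().T) / 2                     # Hermitian test operator

Z = P0(O, H)
print(np.allclose(Z @ H - H @ Z, 0))         # True: P0[O] is a zero mode
print(np.allclose(P0(Z, H), Z))              # True: P0 is a projector
print(np.allclose(P0(H @ O - O @ H, H), 0))  # True: kills commutators
\end{verbatim}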
The zero mode frame redundancy leads to a Berry transport problem for operators. Imagine a process that modifies the algebra $\mathcal{A}_A$ depending on a parameter $\lambda$, for instance by changing the interval $A$ or the state. We start by diagonalizing the modular Hamiltonian,
\begin{equation} H_{\rm mod} = U^\dagger \Delta U~, \label{eq:HmodinU}\end{equation}
where $\Delta$ is a diagonal matrix of eigenvalues. $H_{\rm mod}, U$ and $\Delta$ are functions of $\lambda$ that vary along the path. Taking the derivative gives the `parallel transport equation,'
\begin{equation} \dot{H}_{\rm mod} = [\dot{U}^\dagger U, H_{\rm mod}] + U^\dagger \dot{\Delta} U~, \label{eq:paralleltransport}\end{equation}
where $\cdot = \partial_\lambda$. The first term on the right-hand side lies in the image of the adjoint action, $[\cdot, H_{\rm mod}].$ The second term encodes the change of spectrum under the parallel transport. It is a zero mode since it commutes with the modular Hamiltonian, in other words, it lies in the kernel of the adjoint action. We will assume that there is a unique decomposition into the image and kernel of the adjoint action, so that the entire zero mode contribution can be isolated from the second term: $P_0[\dot{H}_{\rm mod}] = U^\dagger \dot{\Delta} U$. For a discussion of subtleties associated with this assumption for the Virasoro algebra, see Appendix~\ref{sec:NonDiagonalization}.
This equation exhibits a redundancy due to the presence of modular zero modes. For instance, the modular Hamiltonian together with Eq.~\eqref{eq:paralleltransport} could be equally well expressed in terms of $U\rightarrow \tilde U = UV$ where $V$ given by Eq.~\eqref{eq:V} is generated by a modular zero mode. Instead of Eq.~\eqref{eq:HmodinU} this gauge choice leads to
\begin{equation} H_{\rm mod} = V^\dagger U^\dagger \Delta U V~.\end{equation}
A reasonable choice for fixing this ambiguity is to impose that
\begin{equation} P_0[\partial_\lambda\tilde U^\dagger \tilde U] = 0~.\label{eq:projectionassump}\end{equation}
Since $V$ preserves the zero mode space, $P_0[V^\dagger \dot{U}^\dagger U V] = V^\dagger P_0[\dot{U}^\dagger U]V$ from Eq.~\eqref{eq:projection}. Likewise, $\dot{V^\dagger} V$ is a modular zero mode from Eq.~\eqref{eq:V}, so it projects to itself. Thus, this condition reduces to
\begin{equation} -V^\dagger \dot{V} + V^\dagger P_0[\dot{U^\dagger}U] V = 0~, \end{equation}
where we have used $\dot{V^\dagger} V = - V^\dagger \dot{V}$ since $V$ is unitary. We therefore obtain a more familiar expression for parallel transport of the operator $V$,
\begin{equation} \left(\partial_\lambda - \Gamma\right) V = 0~, \end{equation}
where
\begin{equation} \Gamma = P_0[\dot{U^\dagger} U] \end{equation}
is a Berry connection that encodes information about how the zero mode frame changes as we vary the modular Hamiltonian. It transforms as $\Gamma \rightarrow V^\dagger \Gamma V - V^\dagger \dot{V}$ under $U\rightarrow U V$. After performing the parallel transport around a closed loop, $\dot{U}^\dagger U$ has a definite value by Eq.~\eqref{eq:projectionassump}. However, $U$ itself may differ by a modular zero mode,
\begin{equation} U(\lambda_f) = U(\lambda_i) e^{-i \sum_i \kappa_i Q_i}~. \end{equation}
Here, $\lambda_f=\lambda_i$ are the endpoints of a closed path. The coefficients $\kappa_i$ contain information about the loop.
There is also a curvature, $F$, associated to this parallel transport process. We can evaluate the curvature by performing parallel transport around a small loop. Here, `small' means that we replace the derivatives with infinitesimal transformations. We can think of the operator $S_{\delta \lambda}=\tilde{U}^\dagger \delta_{\lambda}\tilde U$ as a generator of parallel transport. It transforms as a gauge field \begin{equation}
S_{\delta \lambda} \to V^{\dagger} S_{\delta \lambda} V + V^{\dagger}\delta_{\lambda }V
\end{equation}
under a change of modular frame $\tilde{U}\to \tilde{U}V$ and satisfies $P_0[S_{\delta \lambda}]=0$ by Eq.~\eqref{eq:projectionassump}. The curvature $F$ associated to this gauge field is what we call the \emph{modular Berry curvature}. It can be represented in the usual way by performing two consecutive infinitesimal transformations $\lambda_i \to \lambda_i+\delta_1\lambda$, followed by $\lambda_i+\delta_1\lambda \to \lambda_i+\delta_1\lambda+\delta_2\lambda$. Doing the same with $(1 \leftrightarrow 2)$ and taking the difference gives a closed loop with
\begin{equation}\label{eq:curvature2}
F=(1+S_{\delta_2\lambda}(\lambda_i+\delta_1\lambda))(1+S_{\delta_1\lambda}(\lambda_i))-(1 \leftrightarrow 2)~.
\end{equation}
Here, we use that the holonomy operator along the line $[\lambda_i,\lambda_i+\delta\lambda]$ is given by
\begin{equation}
\exp \left(\int_{\lambda_i}^{\lambda_i+\delta \lambda} \tilde{U}^{\dagger}\delta_{\lambda}\tilde{U} \right) =1+S_{\delta \lambda}(\lambda_i)~.
\end{equation}
In Appendix~\ref{app:general}, we will derive a general expression for the curvature, Eq.~\eqref{eq:curvature2}, and we apply it in Section~\ref{sec:transport} to the case of state-changing parallel transport.
\subsubsection{Example: Shape-changing parallel transport}\label{sec:IntervalTransport}
As an example, we will review how this framework for parallel transport of operators can be applied to the modular Hamiltonians of intervals whose location is varied in the CFT vacuum. This reduces to the study of kinematic space, which, as we show in Appendix~\ref{app:kinematic}, can also be described using state-based parallel transport.
We consider our subregion $A$ to be an interval on a fixed time slice of the CFT with endpoints located at $\theta_L$ and $\theta_R$. Generalizing to subregions with endpoints which are not in the same time slice is straightforward. The modular Hamiltonian associated to $A$ can be written in terms of $\mathfrak{sl}(2,\mathbb{R})$ generators as
\begin{align}\label{eq:Kplus}
H_{\rm mod} & = s_1 L_1 + s_0 L_0 + s_{-1}L_{-1}~.
\end{align}
Here, we have omitted the $\bar{L}$ operators for simplicity. The coefficients in Eq.~(\ref{eq:Kplus}) depend on $\theta_L,\theta_R$ and can be determined by requiring that the generator keeps the interval fixed. Explicitly, they are given by
\begin{equation} \label{eq:coefficients}
s_1 = \frac{2\pi \cot \left(\frac{\theta_R -\theta_L}{2}\right)}{e^{i \theta_R} + e^{i \theta_L}}~, ~~~
s_0 = -2\pi \cot\left(\frac{\theta_R - \theta_L}{2}\right)~, ~~~
s_{-1} = \frac{2\pi \cot \left(\frac{\theta_R -\theta_L}{2}\right)}{e^{-i \theta_R} + e^{-i \theta_L}}~.
\end{equation}
In the case where $A$ extends over half of the circle, for example $\theta_R=-\theta_L=\pi/2$, the modular Hamiltonian found from Eq.~\eqref{eq:coefficients} is $H_{\rm mod}=\pi(L_1+L_{-1})$.
We now construct a one-parameter family of modular Hamiltonians by changing the shape of the interval. The simplest trajectory is given by just changing one of the endpoints, e.g., taking the parameter $\lambda = \theta_L$. The change in modular Hamiltonian is now captured by the parallel transport equation Eq.~\eqref{eq:paralleltransport}, which in this case reads
\begin{equation} \delta_{\theta_L}H_{\rm mod} = [S_{\delta \theta_L}, H_{\rm mod}]~. \label{eq:shapechanging}\end{equation}
We can solve Eq.~\eqref{eq:shapechanging} for the shape-changing parallel transport operator $S_{\delta \theta_L}$ by first diagonalizing the action of the modular Hamiltonian
\begin{equation}\label{eq:eigen}
[H_{\rm mod},V_\mu] = i \mu V_\mu~,
\end{equation}
with $\mu \in {\mathbb{R}}$. It is not difficult to see that the following operators are solutions
\begin{align}
V_{-2\pi}=\partial_{\theta_L}H_{\rm mod}~,\quad V_0=H_{\rm mod}~, \quad V_{2\pi}=\partial_{\theta_R}H_{\rm mod}~,
\end{align}
with $\mu=-2\pi,0,2\pi$ respectively. The operators $V_{2\pi}$ and $V_{-2\pi}$ saturate the modular chaos bound~\cite{DeBoer:2019kdj}.
Importantly, notice that this class of deformations is characterized by imaginary eigenvalues in Eq.~\eqref{eq:eigen}. The generator of modular parallel transport therefore takes the form
\begin{equation}
S_{\delta \theta_L} = -\frac{i}{2\pi} \partial_{\theta_L}H_{\rm mod}~.
\end{equation}
For this particular operator Eq.~(\ref{eq:projectionassump}) is automatically satisfied, since it can be written as a commutator with $H_{\rm mod}$. Similarly, one can show that $S_{\delta \theta_R} = \frac{i}{2\pi} \partial_{\theta_R}H_{\rm mod}$. Then, using Eq.~(\ref{eq:curvature2}) one can compute the modular Berry curvature for this shape-changing transport to be
\begin{equation} \label{eq:curvature3}
F=[S_{\delta\theta_L},S_{\delta \theta_R}] =-\frac{i}{4\pi}\frac{H_{\rm mod}}{\sin^2\left(\frac{\theta_R - \theta_L}{2}\right)}~.
\end{equation}
In particular, applying the projection $P_0$ to this expression does not change it, as the curvature is proportional to a zero mode. In Appendix \ref{app:kinematic}, we rederive the result in Eq.~\eqref{eq:curvature3} from the point of view of kinematic space. The curvature, Eq.~\eqref{eq:curvature3}, is simply the volume form on kinematic space.
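The statements of this subsection can be confirmed in the two-dimensional representation of $\mathfrak{sl}(2,\mathbb{R})$ given later in Eq.~\eqref{Lgenerators}, identifying $L_{\pm 1}$ with $L_\pm$. The sketch below (ours; it assumes only the coefficients in Eq.~\eqref{eq:coefficients}) verifies the eigenvalue equation for $V_{-2\pi}$ and the curvature, Eq.~\eqref{eq:curvature3}:
\begin{verbatim}
from sympy import symbols, Matrix, I, pi, exp, sin, cot, diff, simplify

tL, tR = symbols('theta_L theta_R', real=True)

# two-dimensional representation: L_1 -> L_plus, L_{-1} -> L_minus
L0 = Matrix([[1, 0], [0, -1]]) / 2
L1 = Matrix([[0, 0], [-1, 0]])
Lm1 = Matrix([[0, 1], [0, 0]])

ct = cot((tR - tL) / 2)
H = (2*pi*ct/(exp(I*tR) + exp(I*tL))) * L1 - 2*pi*ct * L0 \
    + (2*pi*ct/(exp(-I*tR) + exp(-I*tL))) * Lm1

def msimp(M):
    # rewrite trig functions in exponentials so entries cancel to zero
    return M.applyfunc(lambda e: simplify(e.rewrite(exp)))

V = diff(H, tL)                                    # candidate V_{-2*pi}
print(msimp(H*V - V*H + 2*pi*I*V))                 # zero matrix

SL = -I/(2*pi) * diff(H, tL)
SR = I/(2*pi) * diff(H, tR)
F = SL*SR - SR*SL
print(msimp(F + I/(4*pi) * H / sin((tR - tL)/2)**2))  # zero matrix
\end{verbatim}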
\section{Coadjoint orbit interpretation}\label{sec:coadjoint}
Various versions of the parallel transport problem we consider exhibit connections to the geometry of symplectic manifolds known as coadjoint orbits. For the state-based parallel transport summarized in Section~\ref{sec:states} applied to the Virasoro algebra, connections to coadjoint orbits were described in~\cite{Oblak:2017ect}. In Appendix~\ref{app:kinematic}, we additionally explain how to use state-based parallel transport to obtain coadjoint orbits of $SO(2,1)$, which describe kinematic space~\cite{Penna:2018xqq}. We will begin by reviewing the notion of coadjoint orbits, and then we explain how our operator-based parallel transport can be related to the geometry of orbits.
Consider a Lie group $G$ with Lie algebra $\mathfrak{g}$. Let $\mathfrak{g}^*$ be the dual space, i.e., the space of linear maps $T:\mathfrak{g}\rightarrow \mathbb{C}$. This defines an invariant pairing $\left<T,X\right> \equiv T(X)$ for $X\in \mathfrak{g}, T\in \mathfrak{g}^*$.
The group $G$ acts on the algebra $\mathfrak{g}$ through the adjoint action,
\begin{equation} \mbox{Ad}_g(X) = \left.\frac{d}{d\lambda}\left(g e^{\lambda X} g^{-1}\right)\right|_{\lambda=0}~, \indent g\in G, \, X\in \mathfrak{g}~.\label{eq:Ad}\end{equation}
For matrix groups such as $SO(2,1)$, which we consider in Appendix~\ref{app:kinematic}, Eq.~\eqref{eq:Ad} is just $\mbox{Ad}_g(X) = g X g^{-1}$.
The adjoint action of the algebra on itself can be defined from this as
\begin{equation} \mbox{ad}_X(Y) = \frac{d}{d\rho} \left.(\mbox{Ad}_{e^{\rho X}}(Y))\right|_{\rho=0} = [X,Y]~, \indent X,Y\in \mathfrak{g}~.\end{equation}
The adjoint action descends to an action on the dual space. This \emph{coadjoint action} $\mbox{ad}_X^*$ on $\mathfrak{g}^*$ is defined implicitly through
\begin{equation} \label{eq:mcDef} \left<\mbox{ad}_X^* z, Y\right> = \left<z, \mbox{ad}_X Y\right>~, \indent z\in \mathfrak{g}^*, \, X,Y\in \mathfrak{g}~. \end{equation}
For a given $T\in \mathfrak{g}^*$, the orbit $\mathcal{O}_T = \{\mbox{ad}_X^*(T)\, | \, X\in \mathfrak{g} \}$ generated by the coadjoint action is known as a \emph{coadjoint orbit}.
Let $x_1,x_2$ be coadjoint vectors tangent to the orbit $\mathcal{O}_T$, and let $X_1, X_2$ be the adjoint vectors that are dual to these through the invariant pairing. Then, the \emph{Kirillov-Kostant symplectic form} associated to this orbit is~\cite{Witten:1987ty, alekseev1988, kirillov2004, Alekseev:2018pbv}
\begin{equation} \omega(x_1, x_2) = \left<T,[X_1,X_2]\right>~. \end{equation}
This is manifestly anti-symmetric and $G$-invariant. It is also closed and nondegenerate~\cite{Witten:1987ty}, and hence it defines a symplectic structure on $\mathcal{O}_T$. Thus, coadjoint orbits are naturally symplectic manifolds. For matrix groups, the algebra and dual space are isomorphic through the Cartan-Killing form, which is non-degenerate in this case. It suffices to consider an orbit of the adjoint action, and these generate symplectic manifolds. This is the setting of Appendix~\ref{app:kinematic}. We emphasize that in the general case this is not true and one must work in the dual space.
It will be useful to review the case of the Virasoro group, along with a suitable generalization given by the algebra described in Sections~\ref{sec:algebra} and~\ref{sec:extension} that applies to our case of interest. Recall that the Virasoro group consists of $\rm{Diff}(S^1)$ together with its central extension, $\widehat{\rm{Diff}(S^1)}=\rm{Diff}(S^1)\times \mathbb{R}$. For our problem, we are considering a continuous version of the ordinary Virasoro algebra, with a central extension described in Section~\ref{sec:extension}. In either case, the formulae will be the same, with the difference that in the second scenario the vector fields $\xi$ should be understood to be non-differentiable at the interval endpoints, with vanishing support outside the interval. Thus, in the latter case all integrals should be understood to cover only the range of the interval rather than the full circle.
For either algebra we consider elements $\xi(z)\partial_z - i \alpha c$ where $\xi(z) \partial_z$ is a vector field on the circle (smooth for Virasoro, and of the form Eq.~\eqref{eq:eigenfunction} for its generalization) and $\alpha\in \mathbb{R}$ is a parameter for the central extension, generated by the algebra element $c$. The only non-trivial commutators are
\begin{equation} \left[\xi_1(z)\partial_z, \xi_2(z)\partial_z\right] = -(\xi_1 \xi_2'-\xi_1' \xi_2)\partial_z + \frac{i c}{48\pi}\oint dz \, (\xi_1 \xi_2''' - \xi_1''' \xi_2)~.\end{equation}
In the Virasoro case, using $L_n =z^{n+1} \partial_z$ the bracket Eq.~\eqref{eq:Liebracket} indeed leads to the usual form of the Virasoro algebra, Eq.~\eqref{eq:virasoroalgebra}.
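As a sanity check (an illustrative computation of ours), one can verify with \texttt{sympy} that this bracket reproduces both the vector-field part $(m-n)L_{m+n}$ and, for $\xi_1=z^{m+1}$, $\xi_2=z^{-m+1}$, the standard central term $\frac{c}{12}m(m^2-1)$:
\begin{verbatim}
from sympy import symbols, I, pi, residue, diff, simplify

z, c = symbols('z c')
m, n = symbols('m n', integer=True)

xi1, xi2 = z**(m + 1), z**(n + 1)

# vector-field part: -(xi1*xi2' - xi1'*xi2) = (m - n) z^(m+n+1)
vec = simplify(-(xi1*diff(xi2, z) - diff(xi1, z)*xi2))
print(vec)                            # (m - n)*z**(m + n + 1)

# central term for m = -n = 3, via the residue at z = 0; the expected
# Virasoro value is (c/12)*m*(m^2 - 1) = 2c
k = 3
f1, f2 = z**(k + 1), z**(-k + 1)
integrand = f1*diff(f2, z, 3) - diff(f1, z, 3)*f2
central = I*c/(48*pi) * 2*pi*I * residue(integrand, z, 0)
print(simplify(central))              # 2*c
\end{verbatim}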
For both algebras we can define a pairing between an adjoint vector $(\xi, \alpha)$ and a coadjoint vector $(T,\beta)$ given by
\begin{equation} \left<(T,\beta),(\xi,\alpha)\right> = -\left[\oint dz \, T(z)\xi(z) + \alpha \beta\right].\end{equation}
Now consider algebra elements $X_{\xi_1}=(\xi_1, \alpha_1)$ and $X_{\xi_2}=(\xi_2, \alpha_2)$, and let $x_{\xi_1},x_{\xi_2}$ be the corresponding dual elements. The Kirillov-Kostant symplectic form through dual element $(T, \beta)$ is
\begin{align}
\omega(x_{\xi_1},x_{\xi_2}) = \left<(T,\beta), [X_{\xi_1},X_{\xi_2}]\right> = \oint dz \left[T (\xi_1 \xi_2'-\xi_1' \xi_2) + \frac{\beta}{48\pi} (\xi_1 \xi_2''' - \xi_1''' \xi_2)\right]~.
\end{align}
Focusing now on the case of our non-smooth generalization of the Virasoro algebra, we can define the coadjoint orbit $\mathcal{O}_{T_*}$ through the unorthodox element $T_* = (P_0,c)$ of the dual space defined by the projection operator, Eq.~\eqref{eq:projectionoperator2}, together with its central extension $c$ in the full algebra. Again considering elements $x_{\xi_1},x_{\xi_2}$ in the dual space that correspond to algebra elements $X_{\xi_1},X_{\xi_2}$ through the pairing, and using Eq.~\eqref{eq:projectioninz}, this becomes
\begin{equation}
\omega(x_{\xi_1},x_{\xi_2}) = \left<T_*, [X_{\xi_1},X_{\xi_2}]\right> = P_0([X_{\xi_1},X_{\xi_2}]) + \frac{c}{48\pi} \oint dz \left[ (\xi_1 \xi_2''' - \xi_1''' \xi_2)\right]~.
\end{equation}
This is precisely Eq.~\eqref{eq:berrycurvaturec} for the curvature. Thus, the modular Berry curvature for state-changing parallel transport is now related to the symplectic form on this orbit.
What is the holographic bulk interpretation of such a non-standard orbit? We will argue that the corresponding geometry is related to the backreaction of a cosmic brane.
\section{Discussion}
We have considered the case of boundary parallel transport of a fixed interval under a change in global state, which is in contrast to the situation considered in~\cite{Czech:2019vih} where the state is held fixed while the interval location is varied. However, a general parallel transport process will change both the state and the location of the interval. In such a situation, the curvature will contain cross-terms between the $X_\lambda$'s of Eq.~\eqref{eq:projectionoperator} and the $V_\mu$'s of Section~\ref{sec:IntervalTransport}. Both are eigenoperators of the adjoint action of the modular Hamiltonian, $[H_{\rm mod}, X_\lambda] = \lambda X_\lambda$ and $[H_{\rm mod}, V_\mu] = i \mu V_\mu$, but notice that the eigenvalue of the $X_\lambda$'s is real while that of $V_\mu$ is purely imaginary. By the Jacobi identity, the commutator $[X_\lambda, V_\mu]$ will have an eigenvalue that is the sum of the two, thus it has both a real and imaginary part. This is never zero, which means $[X_\lambda, V_\mu]$ does not have a zero mode. The curvature, Eq.~\eqref{eq:berrycurvature}, is given by the projection onto this zero mode, which means that computed in these directions that mix changes of state and interval location, it must vanish. Thus, it appears to be sufficient to consider state and interval location-based transport separately.
In the bulk, we have demonstrated an abstract connection between state-changing parallel transport of boundary intervals and a certain family of Euclidean bulk solutions. The holographic dual of the modular Berry curvature was argued to be an entanglement wedge symplectic form on this geometry. This is similar in spirit to the results of \cite{Belin:2018bpg,Belin:2018fxe}, but in the case of mixed states. However, a direct phase space interpretation of this symplectic form in Lorentzian signature is not so obvious. Associating a phase space, i.e., a solution space of a proper initial value problem, to an entanglement wedge involves some subtleties, e.g., the possibility of edge modes \cite{Donnelly:2016auv,Speranza:2017gxd,Donnelly:2020xgu} and boundary ambiguities at the RT surface that must be fixed by a suitable choice of boundary conditions. Possibly, one could exploit the relation to the hyperbolic black hole and identify the relevant phase space with the one associated to the (outside of the) black hole. This would lead to a geometric setup for which the Lorentzian continuation is better behaved. In particular, this approach requires a further study of the choice of boundary conditions that are natural to impose at the horizon.
\begin{figure}[t!]
\centerline{\includegraphics[scale=.5]{gluing.png}}
\caption{An example of a time-dependent geometry limiting to different boundary states $\ket{\psi_i}$ at each time. Could the Berry phase associated to state-dependent parallel transport compute the length of a curve (such as the thick orange curve) in such a geometry?}
\label{fig:gluing}
\end{figure}
It would also be interesting to explore a bulk description within a single Lorentzian geometry. For instance, one could imagine constructing a time-dependent geometry by gluing together certain slowly varying time-independent geometries that are each dual to different boundary states. Since this will not in general give an on-shell solution, one could try to turn on suitable sources on the boundary as a function of time, in such a way that time evolution
under the modified Hamiltonian (with sources) provides precisely the sequence of states under consideration. In such a situation, one could look for a corresponding on-shell bulk solution with modified asymptotics. It would be interesting to explore whether the Berry phase associated to state-changing parallel transport computes a length within a time-dependent geometry (see Figure~\ref{fig:gluing}).
Additionally, it would be interesting to explore further the connections to Uhlmann holonomy described in~\cite{Kirklin:2019ror}. This is a version of parallel transport constructed from purifications of density matrices, subject to certain maximization conditions on transition probabilities. Through appropriate insertion of stress tensors at the boundary, this is claimed in \cite{Faulkner:2015csl,Faulkner:2016mzt,Lewkowycz:2018sgn,Banerjee:2011mg} to describe the shape-changing transport problem considered in Section~\ref{sec:IntervalTransport}. In this setting, the Berry curvature associated to a parallel transport process that changes the state was argued to be dual to the symplectic form of the entanglement wedge. While similar in spirit to much of this work, it would be interesting to study its relation to our approach in light of key differences, such as the need for diagonalizing the adjoint action and the use of non-smooth vector fields.
The problem we study also has relevance for thermalization in 2d CFT. For example, the Krylov complexity contains information about operator growth in quantum chaotic systems. Roughly speaking, this is given by counting the operators that result under nested commutators with respect to a `Hamiltonian' of the system. In~\cite{Caputa:2021ori}, the Krylov complexity was studied for the case where this Hamiltonian takes the form of Eq.~\eqref{eq:HmodVir}, using an oscillator representation of the Virasoro algebra. This is similar to the modular Berry transport process we have considered, with the exception again of the use of non-smooth vector fields.
In studying operator-based parallel transport, we uncovered some subtleties regarding the diagonalization of the adjoint action for arbitrary Virasoro generators (an explanation of these issues was given in Appendix~\ref{sec:NonDiagonalization}). For this reason we considered a set of certain non-smooth vector fields on the circle, Eq.~\eqref{eq:eigenfunction}, which explicitly diagonalize the adjoint action so that the curvature results of Appendix~\ref{app:general} may be applied. It would be interesting to further study this issue. For instance, we found that the adjoint action could not be diagonalized over the usual Virasoro algebra, defined as the set of smooth vector fields on the circle.\footnote{This is actually not uncommon in the case of infinite-dimensional vector spaces. For example, when one tries to diagonalize the derivative operator on the space of polynomial functions one naturally finds exponential functions, which are not part of the original space. The non-analyticities we found should be regarded in the same way.} Instead, we saw that the set of generators not expressible as $[H_{\rm mod}, X]$ was dimension three, larger than the dimension of the kernel (which is in this case one-dimensional and generated by $H_{\rm mod}$). Furthermore, there was an ambiguity in the non-zero mode piece. One could ask whether it is possible to consider parallel transport generated by elements of the usual Virasoro algebra, and perhaps resolve the ambiguities in the decomposition by taking a suitable choice of norm. Along these lines, one could consider only Virasoro algebra elements that are contained within physical correlators. It would be interesting to apply techniques from algebraic quantum field theory to see if this eliminates some of the ambiguities we have encountered.
To properly diagonalize the adjoint action we were led to consider vector fields on the circle that are non-differentiable on the endpoints of the interval. These form a continuous version of the Virasoro algebra. Our Berry curvature can be understood formally as the Kirillov-Kostant symplectic form on an orbit associated to this algebra. It would be interesting to conduct a more rigorous study of this algebra and its central extension. It is also worth noting that we considered a dual space of distributions on the circle, which is larger than the set of smooth quadratic differentials considered in the classification of~\cite{Witten:1987ty}. For this reason, the orbits we consider differ considerably from known Virasoro orbits since the associated representative, Eq.~\eqref{eq:projectioninz2}, is not a quadratic form on the circle. To our knowledge, such orbits have not been studied before in the literature. We have identified at least one physical implication of such unconventional orbits, and thus it would be interesting to revisit the classification of Virasoro orbits using more general duals.
\section{Bulk phase space interpretation}\label{sec:bulk}
A Berry curvature for pure states constructed from Euclidean path integrals was shown to be equal to the integral of the bulk symplectic form over a Cauchy slice extending into the bulk in \cite{Belin:2018fxe,Belin:2018bpg} (see also~\cite{Marolf:2017kvq}). The notion of Uhlmann holonomy is one particular generalization of Berry phases to mixed states, and it was argued in \cite{Kirklin:2019ror} that its holographic dual is the integral of the bulk symplectic form over the entanglement wedge. However, the arguments for arriving at this result for Uhlmann holonomy are purely formal, and to the best of our knowledge this identification has not been worked out in an explicit example. The derivation also lacks a precise definition for the entanglement wedge symplectic form, which we will provide.
In this section, we will comment on a possible bulk interpretation of the modular Berry curvature for state-changing parallel transport. We will see that the result for the curvature that we obtained in the previous sections is closely related to an integral of a bulk symplectic form on a geometry with a conical singularity. See~\cite{Lewkowycz:2013nqa, Dong:2016fnf, Dong:2013qoa, Hung:2011nu, Krasnov:2001cu} for a related discussion of this geometry.
\subsection{The conical singularity geometry} \label{sec:conicalsing}
We consider a Euclidean geometry obtained through the backreaction of a codimension-2 brane homologous to the boundary interval $A$. This leads to a family of Euclidean bulk solutions, which we denote by $\mathcal{M}_n$, where the parameter $n$ sets the tension of the brane~\cite{Dong:2016fnf}:
\begin{equation}
\mathcal{T}_n=\frac{n-1}{4n G}~.
\end{equation}
In the limit $n\to 1$, the cosmic brane becomes tensionless and settles on the location of the usual RT surface associated to the entangling region, but for non-zero tension the brane backreacts on the geometry. The resulting geometries $\mathcal{M}_n$ are used in the context of the holographic computation of R\'enyi entropies $S_n$ in the boundary CFT, and we will argue that these are also relevant for a holographic interpretation of the modular Berry curvature.
Let us first examine the boundary dual of the backreaction process. Inserting a cosmic brane anchored at the boundary points $z_1$ and $z_2$ corresponds to the insertion of twist fields $\mathcal{O}_{n}$ in the CFT at $z_1$ and $z_2$ \cite{Hung:2011nu}. The field $\mathcal{O}_{n}(z)$ is a (spinless) conformal primary of dimension~\cite{Calabrese:2004eu}
\begin{equation}
\Delta_n=\frac{c}{12}\left(n-\frac{1}{n}\right)~.
\end{equation}
We use the fact that the insertion of the cosmic brane can be computed as a correlation function of $\mathbb{Z}_n$ twist operators $\mathcal{O}_n, \mathcal{O}_{-n}$ in the boundary theory \cite{Hung:2011nu,Dong:2016fnf}.
\begin{figure}[t!]
\centerline{\includegraphics[scale=.3]{EntanglementWedge.png}}
\caption{The conical singularity geometry $\mathcal{M}_n$ and entanglement wedge region $\Sigma_n$ corresponding to the boundary region $A$. The thick striped line corresponds to the cosmic brane extending from $-i$ to $i$. The backreaction process creates a conical singularity of opening angle $2\pi/n$.}
\label{fig:EntanglementWedge}
\end{figure}
Geometrically, we can think about the twist field as creating a conical singularity at the insertion point. Let us denote the two-dimensional geometry obtained from $\mathcal{O}_{n}(z_1), \mathcal{O}_{-n}(z_2)$ by $\mathcal{B}_n$. We are interested in the stress tensor profile on the boundary of the backreacted geometry, which by this reasoning is given by the stress tensor on the plane in the background of two twist fields: \begin{equation} \label{eq:twistfields}
\langle T(z)\rangle_{\mathcal{B}_n}= \frac{\langle T(z)\mathcal{O}_n(z_1)\mathcal{O}_{-n}(z_2)\rangle_{\mathbb{C}} }{\langle\mathcal{O}_n(z_1)\mathcal{O}_{-n}(z_2)\rangle_{\mathbb{C}} }~.
\end{equation}
Using the general form of the three-point function in a CFT in terms of conformal dimensions, it now follows that $T(z)$ has poles of order two at $z_1$ and $z_2$ respectively.
To describe the geometry $\mathcal{M}_n$ explicitly, we consider the complex plane with coordinate $z$ which is flat everywhere except for two conical singularities at $z=z_1$ and $z=z_2$. The singular points are assumed to have a conical deficit of magnitude
\begin{equation}
\Delta \varphi = 2\pi\left(1-\frac{1}{n}\right).
\end{equation}
We can use a uniformizing function $f(z)$ to map the $z$-plane with conical singularities to the smooth covering space, which we denote by $\widetilde{\mathcal{B}}_n$; this is a complex plane with coordinate $z'$ defined by
\begin{equation} \label{eq:conical}
z'=f(z)=\left(\frac{z-z_1}{z-z_2} \right)^{\frac{1}{n}}~.
\end{equation}
This maps $z_1 \to 0$ and $z_2 \to \infty$ so that the interval between $z_1$ and $z_2$ goes to the positive real axis $[0,\infty)$. The power of $\frac{1}{n}$ removes the conical singularity by gluing the $n$ sheets of the $z$-plane together, each represented by a wedge of opening angle $\frac{2\pi}{n}$.
In terms of the coordinate $z'$ we extend $\widetilde{\mathcal{B}}_n$ into the bulk by introducing a `radial' coordinate $w'$ with metric of the form
\begin{equation} \label{eq:metrichyperbolic}
ds^2=\frac{dw'^2+dz'd\bar{z}'}{w'^2}~.
\end{equation}
Here, we restrict the range of $z'$ by the identification $z'\sim e^{2\pi i/n}z'$, as this represents a fundamental domain $\widetilde{\mathcal{B}}_n/\mathbb{Z}_n$ in the covering space. The bulk coordinate approaches the boundary in the limit $w'\to 0$. The metric in Eq.~\eqref{eq:metrichyperbolic} is a wedge of three-dimensional hyperbolic space $\mathbb{H}^3$. We now use the following transformation:
\begin{equation}
w'= w \frac{1}{N}\sqrt{f'(z)\bar{f}'(\bar{z})}~,\hspace{10pt} z'=f(z)-w^2\frac{1}{N}\frac{f'(z)\bar{f}''(\bar{z})}{2\bar{f}'(\bar{z})}~,
\end{equation}
where $f(z)$ is defined in Eq.~\eqref{eq:conical} and
\begin{equation}
N=1+w^2\frac{f''(z)\bar{f}''(\bar{z})}{4f'(z)\bar{f}'(\bar{z})}~.
\end{equation}
This transformation reduces to the conformal transformation in Eq.~\eqref{eq:conical} when we go to the boundary $w\to 0$. The metric in the new coordinates reads
\begin{equation}\label{eq:FGmetric}
ds^2 =\frac{dw^2}{w^2}+\frac{1}{w^2}\left(dz-w^2\frac{6}{c}\bar{T}(\bar{z})d\bar{z}\right)\left(d\bar{z}-w^2\frac{6}{c}T(z) dz\right)~,
\end{equation}
where
\begin{equation} \label{eq:stresstensor}
T(z)=\frac{c}{12}\{f(z),z\}=\frac{c}{24}\left(1-\frac{1}{n^2}\right)\frac{(z_1-z_2)^2}{(z-z_1)^2(z-z_2)^2}~,
\end{equation}
with a similar expression holding for the anti-holomorphic component of the stress tensor $\bar{T}(\bar{z})$. The metric Eq.~\eqref{eq:FGmetric} falls into the class of Ba\~{n}ados geometries \cite{Banados:1998gg}, and $T(z)$ has the interpretation of the expectation value of the stress tensor in the boundary CFT on $\mathcal{B}_n$. Therefore, Eq.~\eqref{eq:stresstensor} agrees with the expression, Eq.~\eqref{eq:twistfields}, in terms of twist fields. The formula for $T(z)$ can also be seen more directly from the way the stress tensor in a CFT transforms under a conformal transformation. Starting from the vacuum stress tensor in the $z'$-coordinate, $T(z')=0$, and applying Eq.~\eqref{eq:conical}, the transformation picks up precisely the Schwarzian contribution in Eq. \eqref{eq:stresstensor}.
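The Schwarzian in Eq.~\eqref{eq:stresstensor} can be confirmed symbolically; the sketch below (ours, with the exponent $a=1/n$ of the map, Eq.~\eqref{eq:conical}, treated as a free parameter) checks that $\{f,z\}$ takes the quoted form:
\begin{verbatim}
from sympy import symbols, diff, simplify, Rational

z, z1, z2 = symbols('z z1 z2')
a = symbols('a', positive=True)            # a = 1/n

f = ((z - z1)/(z - z2))**a                 # uniformizing map
fp = diff(f, z)
schwarzian = diff(f, z, 3)/fp - Rational(3, 2)*(diff(f, z, 2)/fp)**2

expected = Rational(1, 2)*(1 - a**2)*(z1 - z2)**2 \
           / ((z - z1)**2*(z - z2)**2)
print(simplify(schwarzian - expected))     # 0, so T = (c/12){f, z}
\end{verbatim}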
We can also give a description for these geometries in the language of Chern-Simons (CS) theory. It is known that Euclidean AdS$_3$ can be described by two copies of a Chern-Simons theory with gauge connections $A, \bar{A}$ valued in $\mathfrak{sl}(2,\mathbb{C})$, and where the Chern-Simons coupling is related to Newton's constant by $k=(4G_3)^{-1}$~\cite{cmp/1104202513}. We can expand these connections (with complex coefficients) over $\mathfrak{sl}(2,\mathbb{R})$ generators $L_0,L_\pm$ satisfying $[L_0,L_\pm] = \mp L_\pm$, $[L_+,L_-] = 2 L_0$. In an explicit two-dimensional representation of the algebra, these are
\begin{equation} \label{Lgenerators}
L_0= \frac{1}{2}
\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}~,
\quad L_+=
\begin{pmatrix}
0 & 0\\
-1 & 0
\end{pmatrix}~,
\quad L_-=
\begin{pmatrix}
0 & 1\\
0 & 0
\end{pmatrix}~.
\end{equation}
We can then describe the geometries, Eq.~\eqref{eq:FGmetric}, using the connections
\begin{equation} \label{eq:CSconnections}
A=\frac{1}{2w}\begin{pmatrix} dw& -2\, dz \\ w^2\frac{12}{c}T(z)\, dz & -dw \end{pmatrix}, \hspace{20pt} \bar{A}=-\frac{1}{2w}\begin{pmatrix} dw & w^2\frac{12}{c}\bar{T}(\bar{z})\,d\bar{z} \\ -2\,d\bar{z} & -dw \end{pmatrix}~.
\end{equation}
Each metric in this family of solutions corresponds to a choice of gauge connections, Eq.~\eqref{eq:CSconnections}, with the same $T(z), \bar{T}(\bar z)$ through the relation $ds^2 =\frac{1}{2}\mbox{tr}((A-\bar{A})^2)$.
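This relation between the connections and the metric is a short computation. In the following sketch (ours), the differentials are represented by commuting symbols, which is appropriate for the symmetric tensor products appearing in $ds^2$:
\begin{verbatim}
from sympy import symbols, Matrix, simplify, expand

w, c, T, Tb = symbols('w c T Tbar')
dw, dz, dzb = symbols('dw dz dzbar')   # commuting stand-ins for differentials

A = Matrix([[dw, -2*dz], [w**2*(12/c)*T*dz, -dw]]) / (2*w)
Ab = -Matrix([[dw, w**2*(12/c)*Tb*dzb], [-2*dzb, -dw]]) / (2*w)

ds2 = ((A - Ab)**2).trace() / 2
expected = dw**2/w**2 \
    + (dz - w**2*(6/c)*Tb*dzb)*(dzb - w**2*(6/c)*T*dz)/w**2
print(simplify(expand(ds2) - expand(expected)))   # 0
\end{verbatim}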
It will be useful to extract the radial dependence in Eq.~\eqref{eq:CSconnections} by using a suitable gauge transformation
\begin{equation} A = b a b^{-1} + b db^{-1}~, \indent \bar{A} = b^{-1} \bar{a} b + b^{-1}db~, \end{equation}
with gauge parameters
\begin{equation} \label{eq:gaugefieldbdy}
a=\begin{pmatrix} 0 & -dz \\ \frac{6}{c}T(z)\,dz & 0 \end{pmatrix}~, \hspace{10pt} \bar{a}=\begin{pmatrix} 0 & -\frac{6}{c}\bar{T}(\bar{z})\,d\bar{z} \\ d\bar{z} & 0 \end{pmatrix}~, \hspace{10pt}b(w)=\begin{pmatrix} \frac{1}{\sqrt{w}} & 0 \\ 0 & \sqrt{w} \end{pmatrix}~.
\end{equation}
\subsection{Symplectic form}\label{sec:sympform}
We now turn our attention to the bulk symplectic form. It is useful to work in the Chern-Simons formulation of three-dimensional gravity. For a similar discussion of the symplectic structure of $3$d gravity in this setting, especially as pertains to the connection to coadjoint orbits, see~\cite{Cotler:2018zff, Barnich:2017jgw, Coussaert:1995zp, Kraus:2021cwf}.
The CS action with CS coupling $k$ and gauge connection $A$ is given by
\begin{equation} S_{\rm CS} = \int \mathcal{L}_{\rm CS} = \frac{k}{4\pi}\int \mbox{tr}\left(A \wedge dA + \frac{2}{3} A\wedge A \wedge A\right)~.\end{equation}
We would like to evaluate the symplectic form. Taking the variation of the action for a single copy gives
\begin{equation} \delta \mathcal{L}_{\rm CS} = \frac{k}{2\pi}\mbox{tr}\left(\delta A \wedge F\right) + d\Theta ~\end{equation}
in terms of field strength $F = dA + A\wedge A$, and where $\Theta = \frac{k}{4\pi}\mathrm{tr}(A\wedge \delta A)$. The symplectic form for CS theory on some spatial region $\Sigma$ is then given by
\begin{equation}
\omega= \int_\Sigma \delta \Theta = \frac{k}{4\pi}\int_{\Sigma} \mathrm{tr}(\delta_1 A \wedge \delta_2 A)~.
\end{equation}
In the following, we will assume that $\Sigma$ is topologically a disk, i.e., it has a single boundary but no singularities in the interior. The symplectic form is a two-form on the space of classical solutions satisfying $F=0$. Because we are working with a disk which admits no nontrivial cycles, a variation $\delta A$ which leaves this condition invariant is of the form
\begin{equation} \label{eq:deltaA}
\delta A =d_A\zeta\equiv d\zeta+[A,\zeta]
\end{equation}
for some gauge transformation $\zeta$, as follows from $\delta F = d_A \delta A = d_A^2 \zeta = 0$.
We now consider the symplectic form for such a transformation. Using the identity
\begin{equation}
\mathrm{tr}\left([A,\zeta] \wedge \delta A\right)=-\mathrm{tr}\left(\zeta\wedge [A,\delta A]\right)~
\end{equation}
and integrating by parts we obtain
\begin{align}
\omega&=\frac{k}{4\pi}\int_{\Sigma} \mathrm{tr}(d_A\zeta \wedge \delta A)=\frac{k}{4\pi}\oint_{\partial \Sigma} \mathrm{tr}(\zeta \wedge \delta A)-\frac{k}{4\pi}\int_{\Sigma} \mathrm{tr}(\zeta \wedge d_A\delta A) \nonumber \\ \label{eq:sympform}
&=\frac{k}{4\pi}\oint_{\partial \Sigma} \mathrm{tr}(\zeta \wedge \delta A)~.
\end{align}
From Eq.~\eqref{eq:sympform} we see that the symplectic form $\omega$ is localized at the boundary of $\Sigma$.
Suppose that $\partial \Sigma$ lies in the asymptotic boundary of the geometry, in the $w=0$ plane, and that we have gauged away the radial dependence. Using the explicit form of the connections, Eqs.~\eqref{eq:CSconnections} and~\eqref{eq:gaugefieldbdy}, we can evaluate the symplectic form in Eq.~\eqref{eq:sympform}. We see that the field variation can be expressed in terms of the stress tensor as
\begin{equation} \label{eq:deltaA2}
\delta A =\frac{6}{c}\begin{pmatrix} 0 & 0 \\ \delta T & 0 \end{pmatrix} dz~.
\end{equation}
It is also possible to solve Eq.~\eqref{eq:deltaA} for $\delta T$. Decomposing $\zeta$ over the $\mathfrak{sl}(2,\mathbb{R})$ generators as $\zeta=\zeta_{-}L_{-1}+\zeta_0L_0+\zeta_+L_{1}$
and using the form of the gauge field in Eq.~\eqref{eq:gaugefieldbdy}, one can compute $d_A\zeta$. Matching with Eq.~\eqref{eq:deltaA2} gives a solution of the form
\begin{equation}
\delta T = \frac{c}{12}\xi''' + 2 T \xi' + \partial T \xi~, \label{eq:stresstransform}
\end{equation}
where we have written $\xi\equiv-\zeta_{-}$ for the component of the gauge transformation associated to the $L_{-1}$ generator. This is the usual stress tensor transformation law. From the form of the gauge transformation and the variation $\delta A$ in Eq.~\eqref{eq:deltaA}, and using the Brown-Henneaux relation $(4G_3)^{-1} = c/6$ combined with the gravitational value for the CS coupling, we find that
\begin{equation}
\omega=\frac{1}{4\pi}\oint_{\partial \Sigma} dz\, \xi \wedge \delta T~.
\end{equation}
Using Eq.~\eqref{eq:stresstransform} the symplectic form becomes
\begin{equation}
\omega=\frac{1}{4\pi}\oint_{\partial \Sigma} dz \left( \frac{c}{12}\xi \wedge \xi''' +2 T\, \xi \wedge \xi' \right).
\end{equation}
Plugging in two diffeomorphisms $\xi_1$ and $\xi_2$, the final result for the symplectic form reads:
\begin{equation} \label{eq:sympformfinal}
\omega= \frac{1}{2\pi}\oint_{\partial \Sigma} dz\, \left( T \left(\xi_1\xi_2'-\xi_2\xi_1'\right)+\frac{c}{24}\left( \xi_1\xi_2'''-\xi_2\xi_1'''\right) \right)~.
\end{equation}
When the stress tensor $T(z)=T$ is a constant, Eq.~\eqref{eq:sympformfinal} is reminiscent of the Kirillov-Kostant symplectic form on the coadjoint orbit $\rm{Diff}(S^1)/U(1)$ (or $\rm{Diff}(S^1)/SL(2,\mathbb{R})$ for the vacuum stress tensor) of the Virasoro group $\widehat{\rm{Diff}(S^1)}$ with central charge $c$. We will now argue that in order to match the modular Berry curvature we need to consider a non-standard orbit corresponding to the conical singularity geometry described in Section \ref{sec:conicalsing}.
\subsection{Contour prescription}
Let us return to the Euclidean geometry $\mathcal{M}_n$, which is obtained from the backreaction of a cosmic brane with tension $\mathcal{T}_n$. We showed that the stress tensor profile at the boundary is given by Eq.~\eqref{eq:stresstensor}. Let us now restrict to transformations which leave the interval at the boundary fixed. This corresponds to Dirichlet boundary conditions $\delta A=0$ at the cosmic brane.
We consider the symplectic form
\begin{equation}
\omega_{n}=\frac{k}{4\pi}\int_{\Sigma_n}\mathrm{tr}(\delta_1 A \wedge \delta_2 A)~,
\end{equation}
supported on some region $\Sigma_{n}$ which corresponds to the entanglement wedge in the geometry $\mathcal{M}_n$, see Figure \ref{fig:EntanglementWedge}. The subscript in the symplectic form indicates that it depends on $n$. The entanglement wedge has two boundary components:
\begin{equation}
\partial \Sigma_{n} = \gamma_n \cup \mathrm{Brane}_n~,
\end{equation}
where $\gamma_n$ is the entangling region at the asymptotic boundary extending between $z_1$ and $z_2$ and $\mathrm{Brane}_n$ is the cosmic brane anchored at those points. In Section~\ref{sec:sympform}, we have seen that the bulk symplectic form localizes to the boundary of $\Sigma_n$ (using that the region is topologically trivial), because $\mathrm{tr}(\delta_1 A \wedge \delta_2 A)=d\eta$ is an exact form with $\eta=\mathrm{tr}(\xi \wedge \delta A)$. The expression for $\omega_n$ therefore reduces to a boundary term of the form
\begin{equation}
\omega_n =\frac{k}{4\pi}\left[\int_{\gamma_n}\eta+\int_{\mathrm{Brane}_n}\eta\right]~.
\end{equation}
The contribution at the cosmic brane vanishes due to the boundary conditions we put on the field variations there, i.e., $\delta A=0$ at $\mathrm{Brane}_n$. We are therefore left with the integral over the entangling region $\gamma_n$ at the asymptotic boundary. There, $\eta$ takes the form
\begin{equation}
k \, \eta = \xi \wedge \delta T =\frac{c}{12}\left(\xi_1\xi_2'''-\xi_2\xi_1'''\right)+2\, T[\xi_1,\xi_2]~,
\end{equation}
in terms of the boundary stress tensor profile $T$ of the geometry $\mathcal{M}_n$, where $[\xi_1,\xi_2]\equiv\xi_1\xi_2'-\xi_2\xi_1'$ denotes the bracket of the boundary vector fields. Plugging in Eq.~\eqref{eq:stresstensor} with $z_1=i$ and $z_2=-i$, we find that
\begin{equation} \label{eq:omegan}
\omega_n =\frac{c}{12\pi}\left(1-\frac{1}{n^2} \right)\int_{\gamma_n}\frac{[\xi_1,\xi_2]}{(z^2+1)^2}\, dz+\frac{c}{48\pi}\int_{\gamma_n}\left(\xi_1\xi _2'''-\xi_2\xi_1'''\right)dz~.
\end{equation}
Note that the integrand is singular at the endpoints of the integration region $\gamma_n$. Therefore, we should implement some kind of regularization procedure for the integral to avoid the twist field insertion points. A standard choice would be the \emph{principal value} prescription, where we excise a small ball of size $\epsilon$ around each of the singularities located at the endpoints of $\gamma_n$. After computing the integral, we take $\epsilon \to 0$. The resulting expression for $\omega_n$ is UV divergent ($\omega_n \sim \log \epsilon$).
In the limit $n\to 1$ the first term in Eq.~\eqref{eq:omegan} vanishes. This is expected, since as the cosmic brane becomes tensionless the geometry reduces to pure $\mathrm{AdS}_3$, for which the bulk symplectic form is identically zero (up to the central charge term). To extract a non-zero answer from $\omega_n$, we first take a derivative with respect to $n$ and define
\begin{equation} \label{eq:bulksymp}
\omega \equiv \lim_{n\to 1}\frac{\partial}{\partial n} \frac{\omega_n}{k}~.
\end{equation}
This corresponds to studying the first-order correction to the backreaction process. The appearance of the operator $\lim_{n\to 1}\partial_n$ is not unfamiliar in the context of computing entanglement entropy using Euclidean solutions with conical singularities of the form $\mathcal{M}_n$\footnote{In fact, the entanglement entropy $S$ associated to the subregion $A$ can be computed by the formula $S=-\lim_{n\to 1}\partial_n\log Z_n$, where $\log Z_n\sim -I\left[\mathcal{M}_n\right]$ is the classical action evaluated on the conical singularity geometry $\mathcal{M}_n$.}. Eq.~\eqref{eq:bulksymp} is our proposal for the bulk symplectic form associated to the entanglement wedge, and we will now show that it matches the modular Berry curvature.
To make the connection with the boundary computation, we rewrite the integral over the entangling region in terms of the variable $u$ defined in Eq.~\eqref{eq:coordtrans}. Notice that the unit semicircle $-\pi/2\leq \arg(z)\leq\pi/2$ is mapped to the line $u\in [-\infty,\infty]$, since $z=1$ goes to $u=0$. In particular, the points $u=\pm \Lambda$ correspond to
\begin{equation} \label{eq:endpoints}
z=\frac{1+ie^{\pm \Lambda}}{e^{\pm \Lambda}+i} \sim e^{\pm i\left(\frac{\pi}{2} - \epsilon\right)}~,
\end{equation}
if we identify $\Lambda$ with the UV regulator by $\Lambda = -\log\frac{\epsilon}{2}$. In the limit $\Lambda \to \infty$ (equivalently $\epsilon \to 0$), the endpoints approach $z = \pm i$ along the unit circle, so Eq.~\eqref{eq:endpoints} is precisely the principal value prescription for $\gamma_n$.
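Indeed, expanding Eq.~\eqref{eq:endpoints} at large $\Lambda$ (for the upper endpoint, say),
\begin{equation}
z=\frac{1+ie^{\Lambda}}{e^{\Lambda}+i}=i\,\frac{1-ie^{-\Lambda}}{1+ie^{-\Lambda}}=i+2e^{-\Lambda}+\mathcal{O}(e^{-2\Lambda})\approx i\,e^{-i\epsilon}~,\qquad \epsilon=2e^{-\Lambda}~,
\end{equation}
which reproduces the identification $\Lambda=-\log\frac{\epsilon}{2}$ quoted above.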
Moreover, under the transformation in Eq.~(\ref{eq:coordtrans}) the integration measure changes as Eq.~\eqref{eq:uzcoordtransf}. Therefore, we can represent the integral over the entangling region $\gamma_n$ in terms of the $u$-variable as
\begin{equation}
\frac{1}{\pi}\int_{-i}^{i}\frac{\xi(z)}{(1+z^2)^2}\,dz=\lim_{\Lambda\to \infty}\frac{i}{2\pi}\int_{-\Lambda}^{\Lambda}\xi(u)\, du~,
\end{equation}
which is precisely the projection operator $P_0(X_{\xi})$ in Eq.~\eqref{eq:projectionoperator}. Thus, we can rewrite the symplectic form $\omega_n$ as
\begin{equation}
\omega_n =\frac{c}{12}\left(1-\frac{1}{n^2} \right)P_0(\left[X_{\xi_1},X_{\xi_2}\right])+\frac{c}{48\pi} \int_{-i}^i \left(\xi_1\xi _2'''-\xi_2\xi_1'''\right)dz~.
\end{equation}
Taking the derivative with respect to $n$ and setting $n\to 1$ according to Eq.~\eqref{eq:bulksymp}, and using $\lim_{n\to 1}\partial_n\left(1-n^{-2}\right)=2$ together with the gravitational value $k=c/6$ of the CS coupling, gives the final result:
\begin{equation}
\omega = P_0(\left[X_{\xi_1},X_{\xi_2}\right])~,
\end{equation}
which agrees with the curvature $F$ in Eq.~(\ref{eq:berrycurvature}). Notice that the information about the central zero mode discussed in Section \ref{sec:extension} is also contained in $\omega_n$: it simply corresponds to taking $\lim_{n\to 1} \omega_n$ directly.
\section{Introduction}
A particular goal of holography is to understand the emergence of geometry from the boundary conformal field theory. Recent applications of quantum information theory in holography have given a means of directly probing the geometry of the bulk, and thus have provided a promising avenue for addressing this question.
One geometrical application of entanglement is an auxiliary space for holography known as kinematic space, which can be defined as the space of pairs of spacelike separated points in a CFT$_d$~\cite{Czech:2016xec, deBoer:2016pqk}. Perturbations of entanglement entropy are seen to propagate as fields on this space~\cite{deBoer:2015kda}. For CFT$_2$, kinematic space can additionally be obtained from the set of entanglement entropies associated to intervals~\cite{Czech:2015qta}. While fixed by the asymptotic conformal symmetry, kinematic space provides a tool for the reconstruction of bulk geometry in certain sufficiently symmetrical and controlled settings. For instance, it reconstructs geometry for locally AdS$_3$ spacetimes~\cite{Asplund:2016koz}. It also probes the geometry only outside of entanglement shadow regions that are inaccessible to spacelike geodesics~\cite{Freivogel:2014lja}. This auxiliary space is a symplectic manifold; specifically, it is a particular coadjoint orbit of the conformal group~\cite{Penna:2018xqq}.
The drawback here is of course the reliance on symmetries and special geometries. Is it possible to use entanglement to probe more general geometries? To this end, transport for $2$d kinematic space was generalized to a parallel transport process for the modular Hamiltonian~\cite{Czech:2017zfq, Czech:2019vih}.\footnote{For approaches to general reconstruction using null surfaces rather than extremal surfaces, see~\cite{Maldacena:2015iua, Engelhardt:2016wgb, Engelhardt:2016crc}. To move beyond entanglement shadow regions and geodesic barriers~\cite{Engelhardt:2013tra} one could also use timelike geodesics as probes. These are dual to circuit complexity as defined by the Nielsen geometric approach. They describe an auxiliary symplectic geometry which is also a coadjoint orbit of the conformal group, just a different one than kinematic space~\cite{Chagnet:2021uvi}.} In this setup, there is an associated Berry connection on kinematic space that computes lengths of curves in the bulk. More generally, a modular Berry connection can be shown to relate frames for CFT algebras associated to different states and subregions. Entanglement provides a connection that sews together nearby entanglement wedges and probes the geometry near the extremal surface. This connection builds spacetime from entanglement, reminiscent of the ER=EPR proposal~\cite{Maldacena:2013xja}. While the modular Hamiltonian admits a particularly simple, local description only in special cases, the parallel transport of modular Hamiltonians is true more generally, and its bulk description relies only on leading order in $1/N$ and sufficient smoothness of the extremal surface.
The parallel transport of modular Hamiltonians has been studied in the setting where the interval shape is varied, which connects to kinematic space~\cite{Czech:2016xec}. Shape-changing parallel transport has also been applied to study cases in holography where the modular chaos bound is saturated, which is governed by a certain algebra of modular scrambling modes that generate null deformations close to the extremal surface~\cite{DeBoer:2019kdj}. We are interested in generalizing beyond the case where the shape or interval location is varied, to consider modular parallel transport governed by a change of global \emph{state} (see also~\cite{Kirklin:2019ror} for a similar approach). For instance, one could imagine acting on a CFT on the cylinder by a large diffeomorphism contained in the Virasoro algebra. This would modify the algebra of operators on the interval. The redundancy under certain symmetries, known as \emph{modular zero modes}, which change the algebra but leave physical observables fixed, results in a connection and non-trivial parallel transport, even when the interval remains fixed. A general modular transport problem would consist of an amalgamation of these two kinds of parallel transport, with a simultaneous modification of both the state and interval shape.
Ultimately, we consider special transformations which do not lie in the Virasoro algebra as typically defined, since they are not analytic: they vanish at the interval endpoints and are non-differentiable at these points. The reason for this is technical: to uniquely isolate the zero mode contribution it is necessary to have a decomposition into kernel and image of the adjoint action of the modular Hamiltonian. As we explain in Appendix~\ref{sec:NonDiagonalization}, this is not possible for the Virasoro algebra. This is a subtlety that, to our knowledge, has not been previously studied. For a large class of transformations which obey certain properties, we derive a general expression for the Berry curvature in Appendix~\ref{app:general}. We also explain how these non-standard vector fields have a simple interpretation as plane waves in the hyperbolic black hole geometry using the map of Casini, Huerta and Myers~\cite{Casini:2011kv}.
We define a suitable algebra of vector fields on the circle constructed from wave packets of these eigenstates. Much as similar parallel transport problems for state-based parallel transport and for kinematic space are governed by the geometry of symplectic manifolds known as coadjoint orbits, here that is the case as well. We show that the Berry curvature for state-based parallel transport is equal to the Kirillov-Kostant symplectic form on an orbit associated to this algebra of vector fields.
State-changing parallel transport can also be related to bulk geometry. This has the advantage of accessing different geometrical data in the bulk, compared to the setting where only the interval shape is varied. We find that the Berry curvature for a fixed interval and changing state computes the symplectic form for a Euclidean conical singularity geometry obtained from the backreaction of a cosmic brane, subject to a suitable principal value prescription for regulating divergences near the interval endpoints. To match the curvature, we must impose Dirichlet boundary conditions at the location of the extremal surface. We interpret this as describing (and defining) a symplectic form associated to the entanglement wedge. In the discussion, we connect to earlier work on the holographic interpretation of the Berry curvature, and comment on the relation to the entanglement wedge symplectic form in the case of operator-based parallel transport.
Modular parallel transport, either in the case of a changing shape or a changing state, is a parallel transport of \emph{operators} and density matrices. It is distinct from existing algebraic applications of parallel transport of \emph{states}, which for instance transform under unitary representations of a symmetry group. As part of this work we hope to clarify some of the differences, as well as various applications of each. In particular, we both review how kinematic space for CFT$_2$ can be understood in the language of operator-based parallel transport in Section~\ref{sec:states}, while also providing a new derivation of this same kinematic space using state-based parallel transport in Appendix~\ref{app:kinematic}. This gives two different ways of viewing the same problem, both utilizing group theory, reminiscent of the `Heisenberg' versus `Schr\"{o}dinger' pictures for quantum mechanics.\\
\noindent {\bf Outline}: We begin in Section~\ref{sec:berry} by giving a summary of both state and operator-based parallel transport, and providing a few examples of each. In Section~\ref{sec:transport}, we derive the boundary parallel transport process for transformations that diagonalize the adjoint action and compute the curvature in an example. We go into further detail in Section~\ref{sec:coadjoint} about the algebraic structure and the connection to coadjoint orbits. In Section~\ref{sec:bulk}, we present our proposal for the bulk dual using the symplectic form for Euclidean conical singularity solutions created from the backreaction of a cosmic brane. We end with a discussion about some subtleties and suggest future research directions. In Appendix~\ref{app:kinematic}, we provide a derivation of kinematic space using state-based parallel transport, and in Appendix~\ref{app:general} we derive a general expression for the curvature for operator-based parallel transport, which applies for any algebra. Finally, in Appendix~\ref{sec:NonDiagonalization} we discuss some subtleties about diagonalization of the adjoint action for the Virasoro algebra.
\section*{Acknowledgements}
It is a pleasure to thank Ben Freivogel for initial collaboration. We also thank Raf Bocklandt, Bowen Chen, Bartlomiej Czech, Jackson R. Fliss, Victor Godet, Kurt Hinterbichler, Jani Kastikainen, Esko Keski-Vakkuri, Lampros Lamprou, Sergey Shadrin, Erik Verlinde and Zi-zhi Wang for discussions. DP is supported by NAWA “Polish Returns 2019.” RE and CZ are supported by the ERC Consolidator Grant QUANTIVIOL. JdB and JvdH are supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013), ERC Grant agreement ADG 834878. This work is supported by the Delta ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW).
\section{Introduction}\label{sec:intro}
\paragraph{Polynomial interpolation problem.} The classical polynomial
interpolation problem starts with a set of data points,
$(a_1, b_1)$, \dots, $(a_{n+1},$ $b_{n+1})$, where $a_i, b_i\in\mathbb{Q}$, $a_i\neq a_j$
for $i\neq j$, and asks for
a univariate polynomial $f\in \mathbb{Q}[x]$ s.t. $f(a_i)=b_i$ for $i=1, \dots, n+1$.
Such a polynomial of degree $\leq n$ exists and is unique; depending on the choice of linear basis, several formulas, bearing the names of Newton, Lagrange, and Vandermonde, have become classical.
In theoretical computer science (TCS), the following problem, studied since the 1970's, also bears the name of interpolation of polynomials:
given a black-box access to a multivariate polynomial $f\in \mathbb{F}[x_1, \dots, x_n]$
($\mathbb{F}$ a field), compute $f$ by writing out its sum-of-monomials expression.
Several algorithms have been
proposed to solve this problem \cite{Zip79,BT88,KY88,Zip90,ks01}. A natural
generalization is to consider expressing $f$ using
the more powerful arithmetic circuits. In this more general setting,
the problem is called the reconstruction problem for arithmetic circuits
\cite[Chap. 5]{sy2010}, and the sum-of-monomials
expression of $f$ is viewed as a subclass of arithmetic circuits, namely depth-2
$\Sigma \Pi$ circuits. Reconstruction problems for various models
gained quite momentum recently
\cite{bc98,shpilka07,sv08,ks09,ams10,gkl11,kay12,gkl2012,GKQ14}.
As mentioned, for interpolation of univariate polynomials, different formulas
depend on different choices of linear bases. On the other hand, for interpolation
(or more precisely, reconstruction) of multivariate polynomials in the TCS
setting, the algorithms depend on the computation models
crucially. In the latter
context, to our best knowledge, only the linear basis of monomials (viewed as
depth-2 $\Sigma \Pi$ circuits) has been considered.
\paragraph{Schubert polynomials.} In this
paper, we consider the interpolation of multivariate polynomials in the
TCS
setting, but in another linear basis of multivariate polynomials, namely the
Schubert polynomials. This provides another natural direction for
generalizing the multivariate polynomial interpolation problem. Furthermore, as
will be explained below, such an interpolation algorithm can be used to compute
certain quantities in
geometry that are of great interest, yet not well-understood.
Schubert polynomials were discovered by Lascoux and Sch\"utzenberger \cite{LS82}
in the study of cohomology rings of flag manifolds in 1980's. See \Cref{def:skew}
or \Cref{def:schub} for the definition\footnote{\Cref{def:schub} is one of the
classical definitions of
Schubert polynomials, while \Cref{def:skew} defines Schubert polynomials in the
context of skew Schubert polynomials. }, and
\cite{MD_schubert,Manivel} for detailed results. For now we only
point out that (1) Schubert polynomials in $\mathbb{Z}[x_1,
\dots, x_n]$ are indexed by ${\mathbf{v}}=(v_1, \dots, v_n)\in \mathbb{N}^n$, denoted by
$Y_{{\mathbf{v}}}$;\footnote{In the literature, it is more common for Schubert polynomials to be indexed by permutations instead of $\mathbb{N}^n$. These two index sets are
equivalent, through the correspondence between
permutations and $\mathbb{N}^n$ as described in \Cref{sec:prel}. We adopt $\mathbb{N}^n$ in the introduction
because they are easier to work with when dealing with a fixed number of
variables.} (2) $Y_{{\mathbf{v}}}$ is homogeneous of degree
$\sum_{i=1}^nv_i$.\footnote{Algorithms in this work run in time polynomial in the degree of the polynomial. By (2), this is equivalent to the indices of Schubert polynomials being given in unary.}
Schubert polynomials have many distinguished properties. They form a linear basis
of multivariate polynomials, and yield a generalization of Newton
interpolation formula to the multivariate case \cite[Sec. 9.6]{Lascoux_symmetric}.
Also, Schur polynomials are special Schubert polynomials (\Cref{fact:schub}
(1)).
A Schubert polynomial can contain exponentially many monomials: the
complete homogeneous symmetric polynomials are special Schubert polynomials
(\Cref{fact:schub} (2)).
It is not clear to us whether Schubert polynomials have polynomial-size
arithmetic circuits. Because of these reasons, interpolation in the Schubert basis
could not be covered by the aforementioned results for the reconstruction
problems, unless the
arithmetic circuit complexity of Schubert polynomials is understood better: at
present, we are only able to put Schubert polynomials in $\mathrm{VNP}$.
While Schubert polynomials are mostly studied due to their deep geometric
meanings (see e.g. \cite{KM05}), they do have certain algorithmic aspects that
were studied shortly after their introduction in 1982. Indeed, an early paper on Schubert polynomials by Lascoux and Sch\"utzenberger was concerned with using them to compute the Littlewood-Richardson coefficients \cite{LS85}.
That procedure has been implemented in the program system {\tt Symmetrica}
\cite{KKL92}, which includes a set of routines to work with Schubert
polynomials. On the other hand, the complexity-theoretic study of the algorithmic aspects of Schubert polynomials seems to be lacking, and we hope that this paper serves as a modest step in this direction.
\paragraph{Our results.} Our main result is about deterministic interpolation of
sparse polynomials with integer
coefficients in the Schubert basis, modulo an oracle that computes Schubert
polynomials. The complexity is measured by the bit size of the representation.
\begin{theorem}\label{thm:interpolate}
Suppose we are given (1) black-box access to some polynomial $f\in\mathbb{Z}[x_1, \dots,
x_n]$, $f=\sum_{{\mathbf{v}}\in \Gamma} a_{{\mathbf{v}}} Y_{{\mathbf{v}}}$, $a_{\mathbf{v}}\neq 0\in
\mathbb{Z}$ with the promise that $\deg(f)\leq d$ and $|\Gamma|\leq m$; (2) an oracle that
computes the evaluation of Schubert polynomials on nonnegative
integral points. Then there exists a deterministic algorithm, that outputs the
expansion of $f$ in the basis of Schubert polynomials. The algorithm runs in time
polynomial in $n$, $d$, $m$, and $\log(\sum_{{\mathbf{v}}\in \Gamma}|a_{{\mathbf{v}}}|)$.
\end{theorem}
In fact, \Cref{thm:interpolate} relies on the algorithm in
\Cref{thm:interpolate_good}, which applies to a more general setting: it only requires a linear basis whose leading monomials are ``easy to isolate.'' See
\emph{Our techniques} for a more detailed discussion.
\Cref{thm:interpolate} generalizes and derandomizes a result by Barvinok
and Fomin, who in \cite{BF97} present a randomized algorithm
that interpolates sparse symmetric polynomials in the Schur basis. As mentioned,
Schur polynomials are special Schubert polynomials, and the Jacobi-Trudi formulas
for Schur polynomials yield efficient
algorithms to compute them. So to recover Barvinok and Fomin's result,
apply our interpolation algorithm to symmetric
polynomials, and replace the $\text{\#P}$ oracle computing Schubert polynomials by
the efficient algorithm computing Schur polynomials. Likewise, for those Schubert
polynomials with efficient evaluation procedures\footnote{For example, there are determinantal formulas \cite[Sec. 2.6]{Manivel} for
Schubert polynomials indexed by 2143-avoiding permutations ($\nexists
i<j<k<\ell$ s.t. $\sigma(j)<\sigma(i)<\sigma(\ell)<\sigma(k)$), and 321-avoiding
permutations ($\nexists i<j<k$ s.t. $\sigma(k)<\sigma(j)<\sigma(i)$).
2143-avoiding permutations are also known as vexillary permutations and form a
generalization of Grassmannian permutations.}, we
can get rid of the $\text{\#P}$ oracle to obtain a polynomial-time interpolation
algorithm.
Our second result concerns the evaluation of Schubert polynomials. In fact, we
shall work with a generalization of Schubert polynomials, namely \emph{skew
Schubert polynomials} as defined by Lenart and Sottile \cite{LS03}. We will
describe the definition in \Cref{sec:prel}. For now, we only remark that skew
Schubert polynomials generalize Schubert polynomials in a way analogous to how
skew Schur polynomials generalize Schur polynomials. A skew Schubert polynomial,
denoted by
$Y_{{\mathbf{w}}/{\mathbf{v}}}$, is
indexed by ${\mathbf{v}}\leq {\mathbf{w}}\in \mathbb{N}^m$ where $\leq$ denotes the Bruhat
order\footnote{Bruhat order on codes is inherited from the Bruhat order on
permutations through the correspondence between codes and permutations as in \Cref{sec:prel}.}.
Schubert polynomials
can be defined by setting ${\mathbf{w}}$ to correspond to the permutation maximal in the
Bruhat order.
\begin{theorem}\label{thm:eval}
Given ${\mathbf{v}}, {\mathbf{w}}\in\mathbb{N}^m$ in unary, and ${\mathbf{a}}\in\mathbb{N}^n$ in binary,
computing $Y_{{\mathbf{w}}/{\mathbf{v}}}({\mathbf{a}})$ is in $\text{\#P}$.
\end{theorem}
\begin{corollary}\label{cor:eval}
Given ${\mathbf{v}}\in\mathbb{N}^n$ in unary, and ${\mathbf{a}}\in\mathbb{N}^n$ in binary,
computing $Y_{{\mathbf{v}}}({\mathbf{a}})$ is in $\text{\#P}$.
\end{corollary}
Note that in \Cref{thm:eval} we have ${\mathbf{v}}, {\mathbf{w}}\in \mathbb{N}^m$, while in
\Cref{cor:eval} we have ${\mathbf{v}}\in \mathbb{N}^n$. This is because, while for
Schubert polynomials we know $Y_{\mathbf{v}}$ for ${\mathbf{v}}\in\mathbb{N}^n$ depends on $n$
variables, such a relation is not clear for skew Schubert polynomials.
Finally, we also study these polynomials in the framework of algebraic complexity.
\begin{theorem}\label{thm:vnp}
Skew Schubert polynomials, and therefore Schubert polynomials, are in $\mathrm{VNP}$.
\end{theorem}
\paragraph{Applications of our algorithms.}
A long-standing open problem about Schubert polynomials is to give a
combinatorially positive
rule, or in other words, a $\text{\#P}$ algorithm for the \emph{generalized
Littlewood-Richardson (LR) coefficients} $a^{{\mathbf{v}}, {\mathbf{u}}}_{{\mathbf{w}}}$, defined as the coefficients of the expansion of products of two Schubert polynomials in the Schubert basis, $Y_{{\mathbf{v}}}\cdot Y_{{\mathbf{u}}}=\sum_{{\mathbf{w}}} a^{{\mathbf{v}}, {\mathbf{u}}}_{{\mathbf{w}}}\, Y_{{\mathbf{w}}}$. They are also the coefficients of the expansion of skew Schubert polynomials in the Schubert basis \cite{LS03}.
geometry, since they are the intersection numbers of Schubert varieties in the
flag manifold. See \cite{ABS14,MPP14} for recent developments on this.
The original LR coefficients are a special case when replacing Schubert with Schur
in the above definition. It is known that the original LR coefficients are
$\text{\#P}$-complete, by the celebrated LR rule, and a result of
\cite{Nara06}. Therefore the generalized LR coefficients are also $\text{\#P}$-hard
to
compute, while as mentioned, putting this problem in $\text{\#P}$ is considered to be
very difficult -- in fact, we are not aware of any non-trivial complexity-theoretic upper
bound. On the other hand, by interpolating skew Schubert polynomials in the
Schubert basis, we have the following.
\begin{corollary}\label{cor:psharpp}
Given ${\mathbf{w}}, {\mathbf{v}}\in \mathbb{N}^m$, let $\Gamma=\{{\mathbf{u}}\in \mathbb{N}^m \mid a^{{\mathbf{v}},
{\mathbf{u}}}_{{\mathbf{w}}}\neq 0\}$. Then there exists a deterministic algorithm that, given
access to an $\text{\#P}$ oracle,
computes $(a^{{\mathbf{v}}, {\mathbf{u}}}_{{\mathbf{w}}} | {\mathbf{u}}\in \mathbb{N}^m)$ in time polynomial in
$|{\mathbf{w}}|, |\Gamma|$, and $\log(\sum_{{\mathbf{u}}\in \Gamma} a^{{\mathbf{v}},
{\mathbf{u}}}_{{\mathbf{w}}})$.
\end{corollary}
The algorithm in \Cref{cor:psharpp} has the benefit of running in time polynomial
in the
\emph{bit size} of $a^{{\mathbf{v}}, {\mathbf{u}}}_{{\mathbf{w}}}$.
Therefore, when $|\Gamma|$ is small compared to $\sum_{{\mathbf{u}}\in \Gamma} a^{{\mathbf{v}},{\mathbf{u}}}_{{\mathbf{w}}}$,
our algorithm is expected to lead to a notable saving, compared to those algorithms that are solely based on positive rules,
e.g. \cite{Kogan01}. (\cite{Kogan01} furthermore only deals with the case of
Schubert times Schur.)
Of course, in practice we need to take into account the time for evaluating Schubert polynomials.
In addition, we note that Barvinok and Fomin's original motivation is to compute e.g. the Littlewood-Richardson coefficients,
Kostka numbers, and the irreducible characters of the symmetric group.
See \cite[Sec. 1]{BF97} for the definitions and importance of these numbers.
Since our algorithm can recover theirs (without referring to a $\text{\#P}$ oracle), it can be used to compute these quantities as well.
Note that our algorithm is moreover deterministic.
Our original motivation of this work was to better understand this approach of Barvinok and Fomin to compute the
LR coefficients. This topic has recently received attention in complexity theory \cite{Nara06,MNS12,BI13_positivity}, due to
its connection to the geometric complexity theory (GCT) \cite{GCT_JACM,MNS12}.
Though this direction of generalization does not apply to GCT directly, we believe it helps in a better understanding
(e.g. a derandomization) of this approach of computing the LR coefficients.
\paragraph{Our techniques.} We achieve \Cref{thm:interpolate} by first formalizing some natural properties of a
linear basis of (subrings of) multivariate polynomials (\Cref{subsec:good}).
These are helpful for the interpolation purpose.
If a basis satisfies these properties, we call this basis \emph{interpolation-friendly}, or I-friendly for short.
Then we present a deterministic interpolation algorithm for I-friendly bases (\Cref{thm:interpolate_good}).
We then prove that the Schubert basis is interpolation-friendly\footnote{While the proofs
for these properties of Schubert polynomials are easy, and should be
known by experts, we include complete proofs as we could not find
complete proofs or explicit statements.
Most properties of Schubert polynomials, e.g. the
$\text{\#P}$ result, \Cref{lem:dom}, \Cref{prop:schub_bound}, can also
be obtained by a combinatorial tool called RC graphs \cite{BB93}. We
do not attempt to get optimal results (e.g. in \Cref{cor:long_trans} and
\Cref{prop:schub_bound}), but are content with bounds that are good
enough for our purpose. } (\Cref{sec:schub_good}).
Technically, for the interpolation algorithm, we combine the structure
of the Barvinok-Fomin algorithm with several ingredients from the
elegant deterministic interpolation algorithm for sparse polynomials in the
monomial basis by Klivans and Spielman \cite{ks01}. We deduce the key property for
Schubert polynomials to be I-friendly, via the transition formula of Lascoux
and Sch\"utzenberger \cite{LS85}. The concept of I-friendly bases and
the corresponding interpolation algorithm may be of independent interest, since
they may be used to apply to other bases of (subrings of) the multivariate
polynomial
ring, e.g. Grothendieck polynomials and Macdonald polynomials
\cite{Lascoux_polynomial}.
We would like to emphasize a subtle point of Schubert polynomials
that is crucial for our algorithm: for $Y_{{\mathbf{v}}}$, if the monomial
${\mathbf{x}}^{\mathbf{u}}$ is in $Y_{{\mathbf{v}}}$, then ${\mathbf{v}}$ dominates ${\mathbf{u}}$ reversely;
that
is, $v_n\geq u_n$, $v_n+v_{n-1}\geq u_n+u_{n-1}$, \dots, $v_n+\dots+v_1\geq
u_n+\dots +u_1$. While by no means a difficult property and known to
experts, it is interesting
that the only reference we can find is a footnote in Lascoux's book \cite[pp. 62,
footnote 4]{Lascoux_polynomial}, so we prove it in \Cref{lem:dom}. On
the other hand, in the literature a weaker property is often mentioned, that is
${\mathbf{v}}$ is
no less than ${\mathbf{u}}$ in the reverse lexicographic order.
However, this order turns out to \emph{not} suffice for the purpose of interpolation.
\paragraph{Comparison with the Barvinok-Fomin algorithm.}
The underlying
structures of the Barvinok-Fomin algorithm and ours are quite similar. There are
certain major differences though.
From the mathematical side, note that Barvinok
and Fomin used the dominance order of monomials, which corresponds to the use of
upper triangular matrix in \Cref{subsec:good}. On the other hand, we make
use of the reverse dominance order, which corresponds to the use of lower
triangular matrix in \Cref{prop:schub_friendly}.
It is not hard to see that the dominance order could not work for all Schubert polynomials.
We also need to upgrade several points (e.g. the computation, and the bounds on coefficients) from Schur polynomials to Schubert polynomials.
From the algorithmic side, both our algorithm and the Barvinok-Fomin algorithm
reduce multivariate interpolation to univariate interpolation.
Here are two key differences. Firstly, Barvinok and Fomin relied on randomness to
obtain a set of linear forms
s.t. most of them achieve distinct values for a small set of vectors. We resort to
a deterministic construction of Klivans and Spielman for this set, therefore
derandomizing the Barvinok-Fomin algorithm. Secondly, our
algorithm has a recursive structure similar to that of the Barvinok-Fomin algorithm. But in each
recursive step, the approaches are different; ours is based on the method of the
Klivans-Spielman algorithm.
As a consequence, our
algorithm does not need to know the bounds on the coefficients in the
expansion, while the basic algorithm in \cite[Sec. 4.1]{BF97} does. Barvinok
and Fomin avoided the dependence on this bound via binary search and probabilistic
verification in \cite[Sec. 4.2]{BF97}. However, it seems difficult to derandomize
this probabilistic verification procedure.
\paragraph{Organization.} In \Cref{sec:prel} we present certain
preliminaries. In \Cref{sec:sharp_p}
we define skew Schubert polynomials and Schubert polynomials, and present the
proof for \Cref{thm:eval}, \Cref{cor:eval} and \Cref{thm:vnp}.
In \Cref{sec:interpolate} we define interpolation-friendly bases, and present the
interpolation algorithm in such bases.
In \Cref{sec:schub_good} we prove that Schubert polynomials form an
I-friendly basis, therefore proving \Cref{thm:interpolate}. We remind the reader
that skew Schubert polynomials are only studied in \Cref{sec:sharp_p}.
\section{Preliminaries}\label{sec:prel}
\paragraph{Notations.} For $n\in \mathbb{N}$, $[n]:=\{1, \dots, n\}$.
Let ${\mathbf{x}}=(x_1, \dots, x_n)$ be a tuple of $n$ variables. When there is no ambiguity, ${\mathbf{x}}$ may also denote the set $\{x_1, \dots, x_n\}$.
For ${\mathbf{e}}=(e_1, \dots, e_n)\in \mathbb{N}^n$, the monomial with exponent ${\mathbf{e}}$
is ${\mathbf{x}}^{\mathbf{e}}:=x_1^{e_1}\dots x_n^{e_n}$.
Given $f\in \mathbb{Z}[{\mathbf{x}}]$ and ${\mathbf{e}}\in\mathbb{N}^n$, $\mathrm{Coeff}({\mathbf{e}}, f)$ denotes
the coefficient of ${\mathbf{x}}^{\mathbf{e}}$ in $f$. ${\mathbf{x}}^{\mathbf{e}}$ (or ${\mathbf{e}}$) is in $f$ if
$\mathrm{Coeff}({\mathbf{e}}, f)\neq 0$, and $E_f:=\{{\mathbf{e}}\in \mathbb{N}^n\mid
{\mathbf{x}}^{\mathbf{e}}\in f\}$.
Given two vectors ${\mathbf{c}}=(c_1, \dots, c_n)$ and ${\mathbf{e}}=(e_1, \dots, e_n)$ in
$\mathbb{Q}^n$, their inner product is $\langle {\mathbf{c}}, {\mathbf{e}}\rangle=\sum_{i=1}^n c_i e_i$.
Each ${\mathbf{c}}\in \mathbb{Q}^n$ defines a linear form ${\mathbf{c}}^\star$, which maps
${\mathbf{e}}\in\mathbb{Q}^n$ to $\langle {\mathbf{c}}, {\mathbf{e}}\rangle$.
\paragraph{Codes and permutations.} We call ${\mathbf{v}}=(v_1,
\dots, v_n)\in \mathbb{N}^n$ a \emph{code}. We identify ${\mathbf{v}}$ with ${\mathbf{v}}'=(v_1, \dots,
v_n, 0, \dots, 0)\in \mathbb{N}^m$, for $m\geq n$. The weight of the code ${\mathbf{v}}$, denoted
by $|{\mathbf{v}}|$, is $\sum_{i=1}^n v_i$. A code ${\mathbf{v}}\in\mathbb{N}^n$ is \emph{dominant} if
$v_1\geq v_2\geq \dots\geq v_n$. We define a \emph{partition} to be a dominant
code, and often represent it using $\alpha$.
${\mathbf{v}}$ is \emph{anti-dominant} if $v_1\leq v_2\leq \dots \leq v_k$ and
$v_{k+1}=\dots=v_n=0$.
For $N\in \mathbb{N}$, $S_N$ is the symmetric group on $[N]$. A permutation
$\sigma\in
S_N$ is written as
$\underline{\sigma(1),\sigma(2), \dots, \sigma(N)}$. We identify $\sigma$ with
$\sigma'=\underline{\sigma(1), \dots, \sigma(N), N+1, \dots, M}\in S_M,$
for $M\geq N$. The length of $\sigma$, denoted by $|\sigma|$, is the number of
inversions of $\sigma$.
That is, $|\sigma| = | \{ (i,j) : i < j; \sigma(i) > \sigma(j) \} |$.
Given a permutation $\sigma\in S_N$, we can associate a code
${\mathbf{v}}\in \mathbb{N}^n$,\footnote{This is known as the \emph{Lehmer code} of
a permutation; see e.g. \cite[Section 2.1]{Manivel}.} by assigning
$v_i=|\{j: j>i, \sigma(j)<\sigma(i)\}|$.
On the other hand, given a code ${\mathbf{v}} \in \mathbb{N}^n$ we
associate a permutation $\sigma\in S_N (N \geq n)$ as follows. ($N$ will be clear from the
construction procedure.) To start, $\sigma(1)$ is assigned as $v_1+1$. $\sigma(2)$
is the $(v_2+1)$th number, skipping $\sigma(1)$ if necessary (i.e. if $\sigma(1)$ is within the first $(v_2 + 1)$ numbers).
$\sigma(k)$ is then the $(v_k+1)$th number, skipping some of $\sigma(1), \dots, \sigma(k-1)$ if
necessary.
For example, it can be verified that $\underline{316245}$ gives the code $(2, 0, 3, 0, 0, 0)=(2, 0, 3)$ and vice versa.
Given a code ${\mathbf{v}}$ its associated permutation is denoted as $\langle
{\mathbf{v}}\rangle$. Conversely, the code of a permutation $\sigma\in S_N$ is denoted as
$\acode{\sigma}$. It is clear that $|\sigma|=|{\mathbf{v}}|$.
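To make the correspondence concrete, the following short Python sketch implements both directions (the function names are ours); the asserts reproduce the example above.
\begin{verbatim}
def code_of(sigma):
    # Lehmer code: v_i = #{ j > i : sigma(j) < sigma(i) }
    n = len(sigma)
    return [sum(sigma[j] < sigma[i] for j in range(i + 1, n))
            for i in range(n)]

def perm_of(v):
    # sigma(k) is the (v_k + 1)-th smallest unused value; N = max_i (v_i + i)
    N = max(vi + i + 1 for i, vi in enumerate(v))  # i is 0-indexed here
    unused = list(range(1, N + 1))
    sigma = [unused.pop(vi) for vi in v]
    return sigma + unused  # remaining values, in increasing order

assert perm_of([2, 0, 3]) == [3, 1, 6, 2, 4, 5]
assert code_of([3, 1, 6, 2, 4, 5]) == [2, 0, 3, 0, 0, 0]
\end{verbatim}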
\paragraph{Bases.} Let $R$ be a (possibly nonproper)
subring of the polynomial ring
$\mathbb{Z}[{\mathbf{x}}]$. Suppose $M$ is a basis of $R$ as a $\mathbb{Z}$-module.
$M$ is usually indexed by some \emph{index set} $\Lambda$, and
$M=\{t_{\lambda}\mid \lambda\in\Lambda\}$.
For example $\Lambda=\mathbb{N}^n$ for $R=\mathbb{Z}[{\mathbf{x}}]$, and $\Lambda=\{\text{partitions in
}\mathbb{N}^n\}$ for $R=\{\text{symmetric polynomials}\}$.
$f\in R$ can be
expressed uniquely as $f=\sum_{\lambda \in \Gamma} a_{\lambda} t_{\lambda}$,
$a_{\lambda}\neq 0\in \mathbb{Z}$, $t_{\lambda}\in M$, and a finite $\Gamma\subseteq
\Lambda$.
\paragraph{A construction of Klivans and Spielman.} We present a
construction of Klivans and Spielman that is the key to the derandomization here.
Given positive integers
$m$, $n$, and $0<\epsilon<1$, let $t=\lceil
m^2n/\epsilon \rceil$. Let $d$ be another positive integer, and fix a prime $p$
larger than $t$ and $d$. Now define a set of $t$ vectors in $\mathbb{N}^n$ as $\mathrm{KS}(m, n,
\epsilon, d, p):=\{{\mathbf{c}}^{(1)}, \dots, {\mathbf{c}}^{(t)}\}$ by
$
{\mathbf{c}}^{(k)}_i=k^{i-1}\ \mod p,
$ for $i\in[n]$. Note that ${\mathbf{c}}^{(k)}_i\leq p=O(m^2nd/\epsilon)$.
The main property we need from $\mathrm{KS}$ is the following.
\begin{lemma}[{\cite[Lemma 3]{ks01}}]\label{lem:ks}
Let ${\mathbf{e}}^{(1)}, \dots, {\mathbf{e}}^{(m)}$ be $m$ distinct vectors from $\mathbb{N}^n$ with
entries in
$\{0, 1, \dots, d\}$. Then
\begin{eqnarray}
\mathrm{Pr}_{k\in[t]}[\langle {\mathbf{c}}^{(k)}, {\mathbf{e}}^{(j)}\rangle \text{ are distinct for } j\in[m] ]\geq 1- m^2n/t\geq 1-\epsilon \nonumber
\end{eqnarray}
\end{lemma}
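For concreteness, here is a Python sketch of the construction together with an empirical check of \Cref{lem:ks} on a toy instance (all names are ours, and the prime is found by brute force):
\begin{verbatim}
import math

def is_prime(x):
    return x > 1 and all(x % q for q in range(2, math.isqrt(x) + 1))

def ks_vectors(m, n, eps, d):
    # KS(m, n, eps, d, p): c^(k)_i = k^(i-1) mod p for k = 1..t
    t = math.ceil(m * m * n / eps)
    p = next(x for x in range(max(t, d) + 1, 10 * max(t, d)) if is_prime(x))
    return [[pow(k, i, p) for i in range(n)] for k in range(1, t + 1)]

# fraction of KS vectors separating a fixed set of exponent vectors
exps = [(3, 0, 1), (0, 2, 2), (1, 1, 1), (0, 0, 3)]
cs = ks_vectors(m=len(exps), n=3, eps=1/3, d=3)
good = sum(len({sum(c * e for c, e in zip(cv, ev)) for ev in exps}) == len(exps)
           for cv in cs)
print(good / len(cs))  # at least 1 - eps = 2/3 by the lemma
\end{verbatim}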
\paragraph{On $\text{\#P}$ and $\mathrm{VNP}$.} The standard definition
of $\text{\#P}$ is as follows:
function $f:\cup_{n\in \mathbb{N}}\{0, 1\}^n\to \mathbb{Z}$ is in $\text{\#P}$, if there exists a
polynomial-time Turing machine $M$ and a polynomial $p$, s.t. for any $x\in\{0,
1\}^n$, $f(x)=|\{y\in\{0, 1\}^{p(n)}$ $\text{ s.t. } M \text{ accepts } (x,
y)\}|$.
In the proof of \Cref{thm:eval} in \Cref{sec:sharp_p}, we find it
handy to consider the class of Turing machines that
output a nonnegative integer (instead
of just accept or reject), and functions $f:\cup_{n\in\mathbb{N}}\{0, 1\}^n\to
\mathbb{N}$ s.t. there exists a
polynomial-time Turing machine $M$ and a polynomial $p$, s.t. for any $x\in\{0,
1\}^n$, $f(x)=\sum_{y\in\{0, 1\}^{p(n)}} M(x, y)$. As described in \cite{CSW13},
such functions are also in $\text{\#P}$, as we can construct a usual Turing machine
$M'$, which takes $3$ arguments $(x, y, z)$, and $M'(x, y, z)$ accepts if and
only if $z<M(x, y)$. Then $\sum_{y, z}M'(x, y, z)=\sum_y M(x, y)$. Note that $z\in\{0, 1\}^{q(n)}$ for some polynomial $q$ as $M$ is polynomial-time.
The reader is referred to \cite{sy2010} for basic notions like arithmetic
circuits. $\text{VP}$ denotes the class of polynomial families
$\{f_n\}_{n\in\mathbb{N}}$ s.t. each $f_n$ is a polynomial in $\mathsf{poly}(n)$ variables, of
$\mathsf{poly}(n)$ degree, and can be computed by an arithmetic circuit of size
$\mathsf{poly}(n)$. $\mathrm{VNP}$ is the class of polynomials $\{g_n\}_{n\in\mathbb{N}}$ s.t.
$g_n(x_1, \dots, x_n)=\sum_{(c_1, \dots, c_m)\in\{0, 1\}^m} f_n(x_1, \dots, x_n,
c_1, \dots,
c_m)$ where $m=\mathsf{poly}(n)$, and $\{f_n\}_{n\in\mathbb{N}}$ is in $\text{VP}$. Valiant's criterion
is useful to put polynomial families in $\mathrm{VNP}$.
\begin{theorem}[Valiant's criterion, \cite{Val79_1}]\label{thm:valiant}
Suppose $\phi:\{0, 1\}^\star\to \mathbb{N}$ is a function in $\text{\#P}/\mathsf{poly}$. Then the
polynomial family $\{f_n\}_{n\in\mathbb{N}}$ defined by
$f_n=\sum_{{\mathbf{e}}\in\{0,1\}^n}\phi({\mathbf{e}}){\mathbf{x}}^{\mathbf{e}}$ is in $\mathrm{VNP}$.
\end{theorem}
\section{Skew Schubert polynomials in $\text{\#P}$ and $\mathrm{VNP}$ }\label{sec:sharp_p}
In this section we first define skew Schubert polynomials via the labeled
Bruhat order as in \cite{LS03}. We also indicate how Schubert polynomials form a
special case of skew Schubert polynomials. We then put these polynomials, and
therefore Schubert polynomials, in $\text{\#P}$ and $\mathrm{VNP}$.
Also note that it is more convenient to work with permutations instead of codes in this section.
\paragraph{Definition of skew Schubert polynomials.} The Bruhat
order on permutations in $S_N$ is defined
by its covers: for $\sigma, \pi\in S_N$, $\sigma\mathrel{\dot{<}} \pi$ if (i)
$\sigma^{-1}\pi$ is a transposition $\tau_{ik}$ for $i<k$, and (ii)
$|\sigma|+1=|\pi|$. Assuming (i), condition (ii) is
equivalent to:
\begin{enumerate}
\item[(a)] $\sigma(i) < \sigma(k)$;
\item[(b)] for any $j$ such that $i<j<k$, either $\sigma(j)>\sigma(k)$, or
$\sigma(j)<\sigma(i)$.
\end{enumerate}
This is because $\pi$ gets an inversion added due to the transposition $\tau_{ik}$.
So in no position between $i$ and $k$ can $\sigma$ take a value between $\sigma(i)$ and $\sigma(k)$.
Else, the number of inversions in $\pi$ will change by more than 1.
Taking the transitive closure gives the Bruhat order ($\leq$). The maximal
element in Bruhat order is $\pi_0=\underline{N, N-1, \dots, 1}$, whose code is
${\mathbf{d}}=(N-1, N-2, \dots, 1)$.
The \emph{labeled Bruhat order} is the key to the definition
of skew Schubert polynomials. Although it is called an order, it is actually a directed graph with multiple labeled edges, whose vertices are the permutations in $S_N$. For $\sigma\mathrel{\dot{<}}\pi$ s.t. $\sigma^{-1}\pi=\tau_{st}$, each $j$ with $s\leq j< t$, and $b=\sigma(s)=\pi(t)$, add a labeled directed edge $\sigma
\xrightarrow{(j, b)} \pi$. That is, for each $\sigma\mathrel{\dot{<}}\pi$, there are $t-s$ edges
between them.
For any saturated chain $C$ in this graph, we associate a monomial
${\mathbf{x}}^{{\mathbf{e}}(C)}$, where ${\mathbf{e}}(C)=(e_1, \dots, e_{N-1})$, and $e_i$ counts the
number of $i$ appearing as the first coordinate of a label in $C$. A chain $$
\sigma_0\xrightarrow{(j_1, b_1)}\sigma_1\xrightarrow{(j_2,
b_2)}\dots\xrightarrow{(j_m, b_m)} \sigma_m
$$
is \emph{increasing} if its sequence of labels is increasing in the lexicographic
order on pairs of integers.
Now we arrive at the definition of skew Schubert polynomials.
\begin{definition}\label{def:skew}
Let ${\mathbf{d}}$ and $\pi_0$ be as above. Given two
permutations $\sigma$ and $\pi$, s.t. $\sigma$ is no larger than $\pi$ in the
Bruhat order ($\sigma\leq \pi$), the \emph{skew Schubert
polynomial}
\begin{equation}\label{eq:skew}
Y_{\pi/\sigma}({\mathbf{x}}):=\sum_{C} {\mathbf{x}}^{{\mathbf{d}}}/{\mathbf{x}}^{{\mathbf{e}}(C)},
\end{equation}
summing over all increasing chains in the labeled Bruhat order from
$\sigma$ to $\pi$. The \emph{Schubert polynomial}
$Y_{\sigma}:=Y_{\pi_0/\sigma}$.
\end{definition}
Note that along any chain in the labeled Bruhat order, each edge increases the length $|\sigma|$ by $1$.
So the number of edges in an increasing chain from $\sigma$ to $\pi$ is $|\pi| - |\sigma|$.
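To illustrate \Cref{def:skew}, the following Python sketch evaluates $Y_{\pi/\sigma}({\mathbf{a}})$ by brute-force enumeration of increasing chains; it is a direct transcription of Eq.~\eqref{eq:skew} (exponential time, intended only for very small $N$; positions are $0$-indexed, and all names are ours):
\begin{verbatim}
def skew_schubert_eval(sigma, pi, a):
    # sigma, pi: permutations of [N] as tuples; a: values of x_1..x_{N-1}
    N = len(sigma)
    d = [N - 1 - j for j in range(N - 1)]       # code of pi_0 = (N-1, ..., 1)

    def covers(s):
        # covers s <. s tau_{ik}: s(i) < s(k), no value in between them
        for i in range(N):
            for k in range(i + 1, N):
                if s[i] < s[k] and not any(s[i] < s[j] < s[k]
                                           for j in range(i + 1, k)):
                    t = list(s); t[i], t[k] = t[k], t[i]
                    yield tuple(t), i, k, s[i]  # label value b = s(i) = t(k)

    def chains(s, last):
        # increasing chains from s to pi, as label sequences (j, b)
        if s == pi:
            yield []
            return
        for t, i, k, b in covers(s):
            for j in range(i, k):               # one edge per s <= j < t
                if (j, b) > last:
                    for rest in chains(t, (j, b)):
                        yield [(j, b)] + rest

    total = 0
    for C in chains(tuple(sigma), (-1, -1)):
        e = [0] * (N - 1)
        for j, _ in C:
            e[j] += 1
        term = 1
        for j in range(N - 1):                  # the monomial x^d / x^{e(C)}
            term *= a[j] ** (d[j] - e[j])       # nonnegative for such chains
        total += term
    return total

assert skew_schubert_eval((1, 2), (2, 1), (5,)) == 1   # Y_id = 1
assert skew_schubert_eval((2, 1), (2, 1), (5,)) == 5   # Y_<(1,0)> = x_1
\end{verbatim}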
\paragraph{Skew Schubert polynomials in $\text{\#P}$ and $\mathrm{VNP}$.} Before describing the
$\text{\#P}$ algorithm for skew Schubert polynomials, we
note the following. First, by the correspondence between codes and permutations,
from a code ${\mathbf{v}}\in\mathbb{N}^n$ we can compute $\langle {\mathbf{v}}\rangle\in S_N$ in time
polynomial in $|{\mathbf{v}}|$. Also, we have $N=\max_{i\in[n]}\{v_i+i\}\leq |{\mathbf{v}}|+n$.
Second, the length of the path from $\sigma$ to $\pi$ is $|\pi|-|\sigma|$.
To start with let us see how \Cref{cor:eval} follows from \Cref{thm:eval}.
\begin{proof}[Proof of \Cref{cor:eval}]
Given ${\mathbf{v}}$, we can compute $\langle {\mathbf{v}}\rangle\in S_N$. Note that $N\leq
|{\mathbf{v}}|+n$. Then form ${\mathbf{w}}=(N-1, N-2, \dots, 1)$. Invoke \Cref{thm:eval}
with $({\mathbf{v}}, {\mathbf{w}}, {\mathbf{a}})$.
\end{proof}
\begin{proof}[Proof of \Cref{thm:eval}]
Consider the following Turing machine $M$: the input to $M$ are (1) codes ${\mathbf{v}}, {\mathbf{w}} \in \mathbb{N}^m$ in unary;
(2) ${\mathbf{a}} = (a_1, \ldots, a_n) \in \mathbb{N}^n$ in binary; and
(3) a sequence ${\mathbf{s}}$ of triplets of integers $(s_i,j_i,t_i) \in [N] \times [N] \times [N]$ where $s_i\leq j_i < t_i$, $N= |{\mathbf{w}}|+m$, and
$i\in[\ell]$, $\ell=|{\mathbf{w}}|-|{\mathbf{v}}|$.
(Note that all the conditions on ${\mathbf{s}}$ can be checked efficiently.)
The output of $M$ is a nonnegative integer.
Given the input $({\mathbf{v}}, {\mathbf{w}}, {\mathbf{a}}, {\mathbf{s}})$, $M$ uses ${\mathbf{s}}$ as a guide to compute an increasing chain from
$\langle {\mathbf{v}}\rangle$ to $\langle {\mathbf{w}}\rangle$ in the labeled Bruhat order of $S_N$.
Let $\sigma_0=\langle {\mathbf{v}}\rangle$.
Suppose the chain to be constructed is $\sigma_0\xrightarrow{(j_1, b_1)}
\sigma_1\xrightarrow{(j_2, b_2)}\dots\xrightarrow{(j_\ell, b_\ell)} \sigma_\ell$.
At step $i$, $0\leq i<\ell$, $M$ also maintains an exponent vector ${\mathbf{e}}_i\in\mathbb{N}^{N-1}$, and the label $(j_i, b_i)$.
To start, $M$ sets ${\mathbf{e}}_0=(0, \dots, 0)$, and $(j_0, b_0)=(0, 0)$ (lexicographically minimal).
Then when the step $i-1$ finishes, $M$ maintains ${\mathbf{e}}_{i-1}$, $\sigma_{i-1}$, and $(j_{i-1}, b_{i-1})$.
Then at step $i$, based on $(s_i, j_i, t_i)$, $M$ performs the following.
First, it computes $\sigma_i=\sigma_{i-1}\tau_{s_i t_i}$, and checks whether $|\sigma_{i-1}|+1=|\sigma_i|$:
if equal, then continue; otherwise, return $0$ (not a valid chain).
Second, it sets ${\mathbf{e}}_i$ by adding $1$ to the $j_i$th component of ${\mathbf{e}}_{i-1}$, and keeping the other components same.
Third, it computes $b_i=\sigma_i(t_i)$, and checks whether $(j_i, b_i)$ is larger than $(j_{i-1}, b_{i-1})$
in the lexicographic order: if it is larger, then continue; otherwise, return $0$ (not a valid chain).
When the $\ell$th step finishes, $M$ obtains $\sigma_\ell$ and ${\mathbf{e}}_\ell$.
It first checks whether $\sigma_\ell=\langle {\mathbf{w}}\rangle$: if equal, then continue.
Otherwise, return $0$ (not a valid chain).
$M$ then computes ${\mathbf{a}}^{\mathbf{d}}/{\mathbf{a}}^{{\mathbf{e}}_\ell}$ as the output.
This finishes the description of $M$.
Clearly $M$ runs in time polynomial in the input size.
$M$ terminates within $\ell$ steps where $\ell=|{\mathbf{w}}|-|{\mathbf{v}}|$, and recall that ${\mathbf{w}}$ and ${\mathbf{v}}$ are given in unary.
Finally note that $Y_{{\mathbf{w}}/{\mathbf{v}}}({\mathbf{a}})$ is equal to $\sum_{{\mathbf{s}}}M({\mathbf{v}}, {\mathbf{w}}, {\mathbf{a}}, {\mathbf{s}})$,
where ${\mathbf{s}}$ runs over all sequences of triplets of indices as described at the beginning of the proof.
By the discussion of $\text{\#P}$ at the end of \Cref{sec:prel}, this puts the
evaluation of $Y_{{\mathbf{w}}/{\mathbf{v}}}$ in $\text{\#P}$.
\end{proof}
\begin{proof}[Proof of \Cref{thm:vnp}] Let us outline the proof of \Cref{thm:vnp}, which basically follows from the proof of \Cref{thm:eval}.
By Valiant's criterion (\Cref{thm:valiant}), to put $Y_{{\mathbf{w}}/{\mathbf{v}}}$ in $\mathrm{VNP}$, it suffices
to show that the coefficient of any monomial in $Y_{{\mathbf{w}}/{\mathbf{v}}}$ is in $\text{\#P}$.
Therefore we consider the following Turing machine $M'$, which is modified from the Turing machine $M$ in the proof of \Cref{thm:eval} as follows.
Firstly, $M'$ takes input $({\mathbf{v}}, {\mathbf{w}}, {\mathbf{a}}, {\mathbf{s}})$, where ${\mathbf{a}}$ is thought of as an exponent vector, and is given in unary.
Second, in the last step, $M'$ checks whether ${\mathbf{d}}-{\mathbf{e}}_\ell$ equals ${\mathbf{a}}$ or not.
If equal, then output $1$. Otherwise output $0$.
It is clear that this gives a $\text{\#P}$ algorithm to compute the coefficient of ${\mathbf{x}}^{\mathbf{a}}$.
The only small problem here is that, in the references we could find (\cite[Prop. 2.20]{B_act}, \cite[Thm. 2.3]{Koi05}), Valiant's criterion is only stated for multilinear polynomials.\footnote{It is
well-known though that Valiant's criterion works for even non-multilinear
polynomials. However, the only reference we are aware of is \cite[pp. 10,
Footnote 1]{Rap_survey}, where no details are provided. Therefore, we describe the
procedure to overcome this for completeness. }
But it only takes a little more effort to overcome this;
essentially, this is because the degree is assumed to be polynomially bounded, so ${\mathbf{a}}$ can be given in unary.
First, note that $N$ (as in the beginning of the proof of \Cref{thm:eval}) is an upper bound on the individual degree of each $x_i$.
Then introduce $N$ copies for each variable $x_i$.
Since ${\mathbf{a}}=(a_1, \dots, a_n)$ is given in unary, we can assume w.l.o.g. that each $a_i$ is an $N$-bit string
$(a_{i,1}, \dots, a_{i,N})$, and if $a_i=k$, the first $k$ bits are set to $1$, and the rest $0$.
(These conditions are easy to enforce in the definition of $M'$.)
We then use $a_{i,j}$ to control whether the $j$th copy $x_{i,j}$ contributes, or just $1$, via the multilinear formula $a_{i,j}x_{i,j}+1-a_{i,j}$; identifying all copies $x_{i,j}$ with $x_i$ then recovers the original polynomial. For instance, with $N=3$ and $a_i=2$, the factor contributed by $x_i$ is $x_{i,1}\cdot x_{i,2}\cdot 1=x_i^2$.
After this slight modification, Valiant's criterion applies, and the proof is concluded.
\end{proof}
\section{Sparse interpolation in an interpolation-friendly basis}\label{sec:interpolate}
\subsection{Interpolation-friendly bases}\label{subsec:good}
Let $M=\{t_\lambda\mid \lambda \in \Lambda\}$ be a basis of a $\mathbb{Z}$-module
$R\subseteq \mathbb{Z}[x_1, \dots, x_n]$, indexed by $\lambda\in\Lambda$.
Given a function $K:\Lambda \times \mathbb{N} \to \mathbb{R}^+$, $M$ is called \emph{$K$-bounded},
if $\forall t_{\lambda}\in M$, the absolute values of the coefficients in the
monomial expansion of $t_\lambda\in M$, $t_\lambda\in\mathbb{Z}[x_1, \dots, x_n]$ are
bounded by $K(\lambda, n)$.\footnote{Note that while $\Lambda$ depends on $n$
already, we feel that it is clearer to explicitly designate $n$ as a parameter
of $K$, as seen e.g. in \Cref{prop:schub_bound}.}
For a nonsingular $(0, 1)$-matrix $A$,
$L_A:=\{{\mathbf{c}}^\star \in (\mathbb{Q}^n)^{\star}\mid \exists {\mathbf{c}}'\in (\mathbb{Z}^+)^n,
{\mathbf{c}}=A{\mathbf{c}}'\}$.
Note that ${\mathbf{c}}'$ is a vector with positive integer components. Also recall that
for ${\mathbf{c}}\in \mathbb{Q}^n$, ${\mathbf{c}}^\star$ denotes the linear form determined by ${\mathbf{c}}$.
$M$ is \emph{$L_A$-compatible}, if for every $t_\lambda\in M$, we can associate an
exponent ${\mathbf{e}}_\lambda$ s.t. (1) for any ${\mathbf{c}}^\star \in
L_A$, ${\mathbf{c}}^\star$ achieves the maximum uniquely at ${\mathbf{e}}_\lambda$ over
$E_{t_\lambda}=\{{\mathbf{e}}\in\mathbb{N}^n\mid {\mathbf{x}}^{\mathbf{e}}\in t_\lambda\}$;
(2) ${\mathbf{e}}_\lambda\neq {\mathbf{e}}_{\lambda'}$ for $\lambda\neq
\lambda'$; (3) the coefficient of ${\mathbf{x}}^{{\mathbf{e}}_\lambda}$ in $t_\lambda$ is
$1$. ${\mathbf{x}}^{{\mathbf{e}}_\lambda}$ (resp. ${\mathbf{e}}_\lambda$) is called the \emph{leading
monomial} (resp. \emph{leading exponent}) of $t_\lambda$ w.r.t. $L_A$. By
the conditions (1) and (2), the leading monomials are distinct across
$M$, and for each $t_\lambda$, the leading monomial is unique. We
assume that from $\lambda$ it is easy to compute ${\mathbf{e}}_\lambda$, and vice versa.
In fact, for Schubert polynomials, $\lambda$ ranges over $\Lambda=\mathbb{N}^n$, and ${\mathbf{e}}_\lambda=\lambda$.
Combining the above two definitions, we say $M$ is $(K,
L_A)$-interpolation-friendly, if
(1) $M$ is $K$-bounded; (2) $M$ is $L_A$-compatible. We also call it $(K,
L_A)$-friendly for short, or I-friendly when $K$ and $L_A$ are
understood from the context.
\begin{itemize}
\item A trivial example is the monomial basis for $\mathbb{Z}[{\mathbf{x}}]$, where $K=1$, $A$ is the identity transformation, and a leading monomial for ${\mathbf{x}}^{\mathbf{e}}$ is just itself;
\item For symmetric polynomials, the basis of Schur polynomials (indexed by
partitions $\alpha$) is $(K, L_A)$-friendly, where (1) $K(\alpha,
n)=\sqrt{|\alpha|!}$
by \Cref{prop:schur_bound}, (2)
$A=(r_{i,j})_{i,j\in[n]}$ where $r_{i,j}=0$ if $i>j$, and $1$ otherwise, by
the fact that every exponent
vector in $\mathrm{s}_\alpha$ is dominated by $\alpha$ (\cite[Sec.
2.2]{BF97}). ($A$ is the upper triangular matrix with $1$'s on the diagonal
and above.)
The associated leading monomial for $\mathrm{s}_\alpha$ is ${\mathbf{x}}^\alpha$; a small numerical check of this appears right after this list. In retrospect, the fact that Schur polynomials form an I-friendly basis is the mathematical underpinning of the Barvinok-Fomin algorithm \cite{BF97}.
\end{itemize}
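As a small numerical illustration of $L_A$-compatibility in the Schur case, the sketch below checks that ${\mathbf{c}}^\star$ with ${\mathbf{c}}=A{\mathbf{c}}'$ attains its maximum uniquely at the leading exponent of $\mathrm{s}_{(2,1)}(x_1, x_2)=x_1^2x_2+x_1x_2^2$ (names are ours):
\begin{verbatim}
def unique_max_at(A, c_prime, exps, e_lead):
    # c = A c'; verify <c, e> < <c, e_lead> for every other exponent e
    n = len(c_prime)
    c = [sum(A[i][j] * c_prime[j] for j in range(n)) for i in range(n)]
    dot = lambda e: sum(ci * ei for ci, ei in zip(c, e))
    return all(dot(e) < dot(e_lead) for e in exps if e != e_lead)

A = [[1, 1], [0, 1]]   # upper triangular with 1's on and above the diagonal
print(unique_max_at(A, (1, 1), [(2, 1), (1, 2)], (2, 1)))  # True
\end{verbatim}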
\subsection{Sparse interpolation in an interpolation-friendly basis}
In this section we perform deterministic sparse polynomial interpolation in an interpolation-friendly basis.
The idea is to combine the structure of the Barvinok-Fomin algorithm with some ingredients from the Klivans-Spielman algorithm.
We first briefly review the idea of the Klivans-Spielman algorithm \cite[Section 6.3]{ks01}.
Suppose we want to interpolate $f\in\mathbb{Z}[x_1, \dots, x_n]$ of degree $d$ with $m$ monomials.
Their algorithm makes use of the map
\begin{equation}\label{eq:phi}
\phi_{{\mathbf{c}}}(x_1, \dots, x_n)=(y^{c_1}, \dots, y^{c_n})
\end{equation}
where ${\mathbf{c}}=(c_1, \dots, c_n)\in (\mathbb{Z}^+)^n$.
If ${\mathbf{c}}$ satisfies the property:
$
\forall {\mathbf{e}}\neq {\mathbf{e}}'\in f, \langle {\mathbf{c}}, {\mathbf{e}}\rangle \neq \langle {\mathbf{c}}, {\mathbf{e}}'\rangle,
$
then we can reduce interpolation of multivariate polynomials to interpolation of univariate polynomials:
first apply the univariate polynomial interpolation (based on the Vandermonde matrix) to get a set of coefficients.
Then to recover the exponents, modify $\phi_{\mathbf{c}}$ as
\begin{equation}\label{eq:phi_prime}
\phi_{{\mathbf{c}}}'(x_1, \dots, x_n)=(p_1 y^{c_1}, \dots, p_n y^{c_n}),
\end{equation}
where $p_i$'s are distinct primes, and get another set of coefficients.
Comparing the two sets of coefficients we can compute the exponents.
Note that the components of ${\mathbf{c}}$ need to be small for the univariate interpolation to be efficient.
To obtain such a ${\mathbf{c}}$, Klivans and Spielman exhibit a small set of test vectors ${\mathbf{c}}$ -- of size polynomial in $n$, $m$, $d$, and $1/\epsilon$ for an error probability $\epsilon\in(0, 1)$ -- s.t. with probability $1-\epsilon$, a vector chosen uniformly from this set satisfies the above property.
Furthermore, the components of these vectors are bounded by $O(m^2nd/\epsilon)$.
Their construction was reviewed in \Cref{lem:ks}, \Cref{sec:prel}.
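To see the role of the primes in $\phi'_{\mathbf{c}}$: if the coefficient at some power of $y$ comes from a single monomial $a\,{\mathbf{x}}^{\mathbf{e}}$ (the distinctness property), then $\phi_{\mathbf{c}}$ contributes $a$ while $\phi'_{\mathbf{c}}$ contributes $a\, p_1^{e_1}\cdots p_n^{e_n}$, and ${\mathbf{e}}$ is read off by trial division, as in the following Python sketch (names are ours):
\begin{verbatim}
def recover_exponent(b, b_prime, primes):
    # b, b_prime: coefficients of the same power of y under phi_c, phi'_c
    q, e = b_prime // b, []
    for p in primes:
        ei = 0
        while q % p == 0:
            q //= p
            ei += 1
        e.append(ei)
    return e

assert recover_exponent(7, 7 * 2**3 * 5**2, [2, 3, 5]) == [3, 0, 2]
\end{verbatim}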
Now suppose $M$ is a $(K, L_A)$-friendly basis for $R\subseteq \mathbb{Q}[{\mathbf{x}}]$, and we
want
to recover $f=\sum_{\lambda\in \Gamma}a_\lambda t_\lambda$ of degree $\leq d$, and
$|\Gamma|=m$. To apply the above idea to an arbitrary I-friendly basis $M$, the
natural
strategy is to extract the leading monomials w.r.t. $L_A$. However, as each basis
polynomial can be quite complicated, there are many other non-leading monomials
which may interfere with the leading ones. Specifically, we need to explain the
following:
\begin{enumerate}
\item[(1)] Whether extremely large coefficients appear after the map $\phi_{\mathbf{c}}$,
therefore causing the univariate interpolation procedure to be inefficient?
\item[(2)] Whether some leading monomials are preserved after the map
$\phi_{\mathbf{c}}$? (That is, will the image of every leading monomial under
$\phi_{\mathbf{c}}$ be cancelled by non-leading monomials?)
\end{enumerate}
It is immediate to realize that I-friendly bases are designed to overcome the above issues.
(1) is easy: by the $K$-bounded property, for any monomial ${\mathbf{x}}^{\mathbf{u}}$ in $f$, the absolute value of $\mathrm{Coeff}({\mathbf{u}}, f)$ is bounded by
$K\cdot (\sum_\lambda |a_\lambda|)$.
Therefore, the coefficients of the image of $f$ under $\phi_{\mathbf{c}}$ are bounded by $O\big(\binom{n+d}{d} \cdot K\cdot (\sum_\lambda|a_\lambda|)\big)$.
(2) is not hard to overcome either; see the proof of \Cref{thm:interpolate_good} below.
These properties are used implicitly in the Barvinok-Fomin algorithm.
There is one final note: if, unlike in the monomial basis case, the procedure cannot produce all leading monomials in one shot,
we may need to get one $t_\lambda$ and its coefficient, subtract that term off, and recurse.
This requires us to compute $t_\lambda$'s efficiently.
As this is the property of $t_\lambda$, not directly related to the interpolation problem,
we assume an oracle which takes an index $\lambda\in\Lambda$ and an input ${\mathbf{a}}\in\mathbb{N}^n$, and returns $t_\lambda({\mathbf{a}})$.
\begin{theorem}\label{thm:interpolate_good}
Let $M=\{t_{\lambda}\mid \lambda \in \Lambda\}$ be a $(K, L_A)$-friendly basis for
$R\subseteq \mathbb{Z}[{\mathbf{x}}]$, $K=K(\lambda, n)$, $A$ a $(0, 1)$ invertible matrix. Given
an access to an
oracle $\mathcal{O}=\mathcal{O}(\lambda, {\mathbf{a}})$ that computes basis polynomial
$t_{\lambda}({\mathbf{a}})$ for ${\mathbf{a}}\in\mathbb{N}^n$, there exists a deterministic algorithm
that, given a black box containing $f=\sum_{\lambda\in\Gamma}a_\lambda t_\lambda$,
$0\neq a_\lambda\in \mathbb{Z}$, with the promise that $\deg(f)\leq d$ and $|\Gamma|\leq
m$, computes such an expansion
of $f$ in time $\mathsf{poly}(n, d, m, \log(\sum_\lambda|a_\lambda|), \log K)$.
\end{theorem}
\begin{proof}
We first present the algorithm. Recall the maps $\phi_{\mathbf{c}}$ and $\phi'_{\mathbf{c}}$
defined in \Cref{eq:phi} and \Cref{eq:phi_prime}.
\begin{description}
\item[Input:] A black box $\mathcal{B}$ containing $f\in R\subseteq \mathbb{Z}[x_1, \dots, x_n]$
with the promises:
(1) $\deg(f) \leq d$; (2) $f$ has $\leq m$ terms in the $M$-basis.
An oracle $\mathcal{O}=\mathcal{O}(\lambda, {\mathbf{a}})$ computing $t_{\lambda}({\mathbf{a}})$ for ${\mathbf{a}}\in\mathbb{N}^n$.
\item[Output:] The expansion $f=\sum_{\lambda\in\Gamma}a_\lambda t_\lambda$.
\item[Algorithm:] \begin{enumerate}
\item By \Cref{lem:ks}, construct the Klivans-Spielman set $\mathrm{KS}=\mathrm{KS}(m, n, 1/3, nd,
p)$.
\item For every vector ${\mathbf{c}}$ in $\mathrm{KS}$, do:
\begin{enumerate}
\item $f_{\mathbf{c}}\leftarrow 0$. $i\leftarrow 0$. ${\mathbf{d}}\leftarrow A{\mathbf{c}}$.
\item While $i< m$, do:
\begin{enumerate}
\item Apply the map $\phi_{\mathbf{d}}$ to $\mathcal{B}-f_{\mathbf{c}}$ (with the help of $\mathcal{O}$),
and use the univariate interpolation
algorithm to obtain $g(y)=\sum_{j=0}^{k}b_jy^j$. If $g(y)\equiv 0$,
break.
\item Apply the map $\phi_{\mathbf{d}}'$ to $\mathcal{B}-f_{\mathbf{c}}$ (with the help of $\mathcal{O}$),
and use the univariate interpolation algorithm to obtain
$g'(y)=\sum_{j=0}^{k}b_j'y^j$.
\item From $b_k$ and $b_k'$, compute the corresponding monomial ${\mathbf{x}}^{\mathbf{e}}$, and
its coefficient $a_{\mathbf{e}}$.
From ${\mathbf{x}}^{\mathbf{e}}$, compute the corresponding label $\nu\in \Lambda$, and set
$a_\nu\leftarrow a_{\mathbf{e}}$.
\item $f_{\mathbf{c}}\leftarrow f_{\mathbf{c}}+a_\nu t_\nu$. $i\leftarrow i+1$.
\end{enumerate}
\end{enumerate}
\item Take the majority of $f_{\mathbf{c}}$ over ${\mathbf{c}}$ and output it.
\end{enumerate}
\end{description}
We prove the correctness of the above algorithm.
As before, let ${\mathbf{e}}_\lambda$ be the leading vector of $t_\lambda$ w.r.t. $L_A$.
Note that as $A^{T}$ is a $(0, 1)$-matrix, the entries of the vector
$A^{T}{\mathbf{e}}_\lambda$ lie in $\{0, 1, \dots, nd\}$. By the property of $\mathrm{KS}(m,
n, 1/3, nd, p)$, at least a $2/3$ fraction of the vectors ${\mathbf{c}}\in\mathrm{KS}$
satisfy that the values $\langle {\mathbf{c}}, A^{T}{\mathbf{e}}_\lambda\rangle$ are distinct over $\lambda\in\Gamma$; call
these vectors ``distinguishing.'' We shall show that for any distinguishing
vector, the algorithm outputs the correct expansion, so step (3) succeeds.
Fix a distinguishing vector ${\mathbf{c}}$. As ${\mathbf{e}}_\lambda\neq {\mathbf{e}}_{\lambda'}$ for
$\lambda\neq \lambda'$ and $A$ is invertible, there exists a unique
$\nu\in\Gamma$ s.t. $\langle{\mathbf{c}}, A^T{\mathbf{e}}_\nu\rangle$ is the maximum of $\langle{\mathbf{c}},
A^T{\mathbf{e}}_\lambda\rangle$ over $\lambda\in\Gamma$.
As $\langle {\mathbf{c}}, A^{T}{\mathbf{e}}_\lambda\rangle=\langle A{\mathbf{c}},
{\mathbf{e}}_\lambda\rangle=\langle {\mathbf{d}}, {\mathbf{e}}_\lambda\rangle$,
by the definition of $M$ being $L_A$-compatible,
we know that within each $t_\lambda$, ${\mathbf{d}}^\star$ achieves the unique
maximum at ${\mathbf{e}}_\lambda$ over $E_{t_\lambda}$, the set of all exponent vectors in
$t_\lambda$. Thus over all ${\mathbf{e}}\in f$,
${\mathbf{d}}^\star$ achieves the unique maximum at ${\mathbf{e}}_\nu$. This means that
$y^{\langle{\mathbf{d}},{\mathbf{e}}_\nu\rangle}$ is the monomial in $g(y)$ of maximum degree,
and cannot be cancelled by other terms. As
$\mathrm{Coeff}({\mathbf{e}}_\nu, t_\nu)=1$, $\mathrm{Coeff}({\mathbf{e}}_\nu, f)$ is just
the coefficient of $t_\nu$ in the expansion of $f$. So we have justified that from
Step (2.b.i) to (2.b.iii), the algorithm extracts the monomial of maximum degree
in $g(y)$,
computes the corresponding coefficient and exponent in $f$, and interprets them as a
term $a_\nu t_\nu$.
To continue computing other terms, we just need to note that ${\mathbf{c}}$ is still
distinguishing
w.r.t. $f$ minus some of the terms of $f$. This justifies Step (2.b.iv).
To analyze the running time, the FOR-loop in Step (2) and the WHILE-loop
in Step (2.b) take $O(m^2n)$ and $m$ rounds, respectively. In the univariate
polynomial interpolation step, as the components in
${\mathbf{c}}$ are bounded by $O(m^2n\cdot nd)$ and $A$ is $(0, 1)$, the components in
${\mathbf{d}}$ are bounded by
$O(m^2n^3d)$. It follows that $k=\deg(g)=\langle{\mathbf{d}},
{\mathbf{e}}_\nu\rangle=O(m^2n^3d^2)$. By the $K$-bounded property, the coefficients of
$g(y)$ are of magnitude $O\big(\binom{n+d}{d}\cdot K\cdot
(\sum_\lambda|a_\lambda|)\big)$. So the running time
for the univariate interpolation step, and therefore for the whole algorithm,
is $\mathsf{poly}(m, n, d, \log(\sum_\lambda|a_\lambda|),\log K)$.
\end{proof}
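To see the skeleton of Step (2) in action, the following hedged Python sketch runs the while-loop in the simplest instance -- the monomial basis, where $t_\lambda={\mathbf{x}}^\lambda$, $K=1$, $A=I$ and the oracle $\mathcal{O}$ is trivial. A single, hard-coded vector ${\mathbf{c}}$ is assumed to be distinguishing; a full implementation would iterate over the whole KS set and take the majority as in step (3). The toy instance and all names are ours.

```python
import sympy as sp

x = sp.symbols("x1 x2 x3"); y = sp.symbols("y")
secret = 5*x[0]**2*x[2] + 7*x[1]            # hidden inside the black box
bb = sp.lambdify(x, secret)                 # evaluation access only

d, m = 3, 2
c = (1, 4, 9)                               # assumed distinguishing
p = (2, 3, 5)                               # distinct primes
k = d * max(c)                              # degree bound for g(y)

f_rec = sp.Integer(0)                       # plays the role of f_c
for _ in range(m):
    def res(pt):                            # black box minus recovered part
        return bb(*pt) - int(f_rec.subs(dict(zip(x, pt))))
    pts = [(t, res(tuple(t**ci for ci in c))) for t in range(1, k + 2)]
    g = sp.Poly(sp.interpolate(pts, y), y)                # Step (2.b.i)
    if g.is_zero:
        break
    pts2 = [(t, res(tuple(pi * t**ci for pi, ci in zip(p, c))))
            for t in range(1, k + 2)]
    g2 = sp.Poly(sp.interpolate(pts2, y), y)              # Step (2.b.ii)
    b, b2 = g.LC(), g2.LC()                 # matched leading coefficients
    e = [sp.multiplicity(pi, int(b2) // int(b)) for pi in p]  # Step (2.b.iii)
    f_rec += int(b) * sp.prod([xi**ei for xi, ei in zip(x, e)])  # Step (2.b.iv)

print(sp.expand(f_rec - secret))            # 0: the expansion was recovered
```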
\section{Schubert polynomials form an interpolation-friendly basis}\label{sec:schub_good}
In this section, our ultimate goal is to prove \Cref{prop:schub_bound} and
\Cref{prop:schub_friendly}, which establish that Schubert polynomials form an
I-friendly
basis. The main theorem \Cref{thm:interpolate} follows immediately. For this, we
need to review some properties of Schur polynomials,
the definition of Schubert polynomials via divided differences, and the transition
formula \cite{LS85}. The transition formula is the main technical tool to deduce
the properties of Schubert polynomials we shall need for \Cref{prop:schub_bound}
and \Cref{prop:schub_friendly}. These include \Cref{lem:dom} which helps us to find
the matrix $A$ needed for the $L_A$-compatible property, and an alternative proof
that evaluating Schubert polynomials is in $\text{\#P}$.
\paragraph{Schur polynomials.} For a positive integer $\ell$, the complete
symmetric polynomial
$h_\ell({\mathbf{x}})\in\mathbb{Z}[{\mathbf{x}}]$ is the sum over all monomials of degree $\ell$, with
coefficient $1$ for every monomial. We also define $h_0({\mathbf{x}})=1$, and
$h_\ell({\mathbf{x}})=0$ for any $\ell < 0$. For a partition $\alpha=(\alpha_1, \dots,
\alpha_n)$ in $\mathbb{N}^n$, the Schur polynomial $\mathrm{s}_\alpha({\mathbf{x}})$ in $\mathbb{Z}[{\mathbf{x}}]$
can be defined by the Jacobi-Trudi formula as
$\mathrm{s}_\alpha({\mathbf{x}})=\det[h_{\alpha_i-i+j}({\mathbf{x}})]_{i, j\in[n]}.$
Note that $\deg(\mathrm{s}_\alpha({\mathbf{x}}))=|\alpha|$.
Via this determinantal expression, we have
\begin{proposition}[{\cite[Sec. 2.4]{BF97}}]\label{prop:compute_schur}
For ${\mathbf{a}}\in\mathbb{Z}^n$, $\mathrm{s}_\alpha({\mathbf{a}})$ can be computed using
$O(|\alpha|^2\cdot n + n^3)$
arithmetic operations, and the bit lengths of intermediate numbers are polynomial
in those of ${\mathbf{a}}$.
\end{proposition}
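To make \Cref{prop:compute_schur} concrete, here is a Python sketch of the evaluation scheme (helper names ours): the $h_\ell({\mathbf{a}})$ are accumulated from the generating function $\prod_i (1-a_it)^{-1}$, and the Jacobi-Trudi determinant is taken with a fraction-free (Bareiss) elimination, so all intermediate numbers stay integral.

```python
def h_values(a, dmax):
    # h[l] = h_l(a_1, ..., a_n), from prod_i 1/(1 - a_i t) = sum_l h_l t^l
    h = [1] + [0] * dmax
    for ai in a:
        for l in range(1, dmax + 1):
            h[l] += ai * h[l - 1]
    return h

def det_bareiss(M):
    # fraction-free determinant, exact over the integers
    M = [row[:] for row in M]; n = len(M); sign = 1; prev = 1
    for k in range(n - 1):
        if M[k][k] == 0:
            piv = next((i for i in range(k + 1, n) if M[i][k] != 0), None)
            if piv is None:
                return 0
            M[k], M[piv] = M[piv], M[k]; sign = -sign
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
        prev = M[k][k]
    return sign * M[-1][-1]

def schur_eval(alpha, a):
    # s_alpha(a) = det[h_{alpha_i - i + j}]_{i,j in [n]}  (Jacobi-Trudi)
    n = len(alpha)
    h = h_values(a, max(alpha) + n)
    H = lambda l: h[l] if l >= 0 else 0       # h_l = 0 for l < 0
    return det_bareiss([[H(alpha[i] - (i + 1) + (j + 1)) for j in range(n)]
                        for i in range(n)])

print(schur_eval((1, 1, 0), (1, 2, 3)))       # e_2(1, 2, 3) = 11
```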
Littlewood's theorem shows Schur polynomials have positive
coefficients. We also need the following bound on coefficients -- the Kostka
numbers -- in $\mathrm{s}_\alpha({\mathbf{x}})$.
\begin{proposition}[{\cite[Sec. 2.2]{BF97}}]\label{prop:schur_bound}
For any ${\mathbf{e}}\in\mathbb{N}^n$, $0\leq \mathrm{Coeff}({\mathbf{e}}, \mathrm{s}_\alpha({\mathbf{x}}))\leq
\sqrt{|\alpha|!}$.
\end{proposition}
\paragraph{Definition of Schubert polynomials via divided differences.}
We follow the approach in \cite{Lascoux_symmetric,Lascoux_parallel}.
For $i\in[n-1]$, let
$\chi_i$ be the switching operator on $\mathbb{Z}[x_1, \dots, x_n]$:
$$f^{\chi_i}(x_1, \dots,
x_i, x_{i+1}, \dots, x_n):=f(x_1, \dots, x_{i+1}, x_i, \dots, x_n).
$$
Then the divided difference operator $\partial_i$ on $\mathbb{Z}[x_1, \dots, x_n]$ is
$\partial_i(f):= \frac{f-f^{\chi_i}}{x_i-x_{i+1}}.$
\begin{definition}\label{def:schub}
For ${\mathbf{v}}=(v_1, \dots, v_n)\in \mathbb{N}^n$, the Schubert polynomial $Y_{\mathbf{v}}\in \mathbb{Z}[x_1,
\dots, x_n]$ is defined recursively as follows:
\begin{enumerate}
\item If ${\mathbf{v}}$ is dominant, then $Y_{\mathbf{v}}=x_1^{v_1} x_2^{v_2}\dots x_n^{v_n}$.
\item If $v_i>v_{i+1}$, then $Y_{{\mathbf{v}}'}=\partial_i Y_{\mathbf{v}}$ where
${\mathbf{v}}'=(v_1, \dots, v_{i+1}, v_i-1, \dots, v_n)$.
\end{enumerate}
\end{definition}
It is not hard to see that this defines $Y_{{\mathbf{v}}}$ for any ${\mathbf{v}}$.
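As an executable illustration of \Cref{def:schub}, the following Python/sympy sketch (ours, and certainly not an efficient route) computes $Y_{\mathbf{v}}$ by repeatedly lifting a non-dominant code ${\mathbf{v}}$ to the code ${\mathbf{w}}$ with ${\mathbf{w}}'={\mathbf{v}}$ and applying $\partial_i$; we take the termination of this naive lifting for granted.

```python
import sympy as sp

def schubert(v):
    # Y_v per the definition: dominant codes are monomials; otherwise lift.
    n = len(v); x = sp.symbols(f"x1:{n + 1}")
    if all(v[i] >= v[i + 1] for i in range(n - 1)):       # dominant code
        return sp.prod([xi**vi for xi, vi in zip(x, v)])
    i = next(i for i in range(n - 1) if v[i] < v[i + 1])
    w = list(v); w[i], w[i + 1] = v[i + 1] + 1, v[i]      # then w' = v
    f = schubert(tuple(w))
    g = f.subs({x[i]: x[i + 1], x[i + 1]: x[i]}, simultaneous=True)
    return sp.expand(sp.cancel((f - g) / (x[i] - x[i + 1])))   # partial_i

print(schubert((0, 2)))    # x1**2 + x1*x2 + x2**2, i.e. s_2(x1, x2)
print(schubert((1, 0, 1)))
```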
We list some basic facts about Schubert polynomials.
\begin{fact}[{\cite{Manivel}}]\label{fact:schub}
\begin{enumerate}
\item If ${\mathbf{v}}=(v_1, \dots, v_n)$ is anti-dominant with $k$ being the last
nonzero index, $Y_{\mathbf{v}}$ equals the Schur polynomial $\mathrm{s}_\alpha(x_1, \dots,
x_k)$ where $\alpha=(v_k, \dots, v_1)$.
\item As a special case of (1), if ${\mathbf{v}}=(0, \dots, 0, w, 0, \dots, 0)$ where $w$
is at the $k$th position,
then $Y_{\mathbf{v}}({\mathbf{x}})$ is the complete homogeneous symmetric polynomial
$h_w(x_1, \dots, x_k)$.
\item If ${\mathbf{v}}=[v_1, \dots, v_k, 0, \dots, 0]\in \mathbb{N}^n$, then
$Y_{{\mathbf{v}}}\in\mathbb{Z}[x_1, \dots, x_k]$.
\end{enumerate}
\end{fact}
\paragraph{The transition formula and its applications.}
Given a code ${\mathbf{v}}\in\mathbb{N}^n$, let
$k$ be the largest $t$ s.t. $v_t$ is nonzero, and ${\mathbf{v}}'=[v_1, \dots, v_k-1, 0,
\dots, 0]$. For convenience let $\sigma=\aperm{{\mathbf{v}}'}$.
Then the transition formula of Lascoux and Sch\"utzenberger \cite{LS85} is:
$$Y_{\mathbf{v}}=x_k Y_{{\mathbf{v}}'}+ \sum_{{\mathbf{u}}} Y_{{\mathbf{u}}},$$
where ${\mathbf{u}}\in \mathbb{N}^n$ satisfies that: (i) $\aperm{{\mathbf{u}}}\sigma^{-1}$ is a
transposition $\tau_{ik}$ for $i<k$; (ii) $|{\mathbf{u}}|=|{\mathbf{v}}|$. Assuming (i),
condition (ii) is
equivalent to:
\begin{multline}\label{eqn:trans}
\text{(a) } \sigma(i) < \sigma(k);
\text{(b) for any } j \text{ s.t. } i<j<k, \\
\text{ either } \sigma(j)>\sigma(k),
\text{ or } \sigma(j)<\sigma(i).
\end{multline}
Let $\Psi_{\mathbf{v}}$ be the set of codes with weight
$|{\mathbf{v}}|$ appearing in the transition formula for ${\mathbf{v}}$, and
$\Phi_{{\mathbf{v}}}=\Psi_{{\mathbf{v}}}\cup \{{\mathbf{v}}'\}$. Any ${\mathbf{u}}\in
\Psi_{\mathbf{v}}$ is uniquely determined by the transposition $\tau_{ik}$, therefore, by
some $i\in [k-1]$.
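To make the formula executable, here is a short Python sketch (helper names ours; positions $0$-based) that converts the Lehmer code into its permutation, applies tests (a) and (b), and returns ${\mathbf{v}}'$ together with $\Psi_{\mathbf{v}}$; the multiplication by $\tau_{ik}$ is realised as a swap of the one-line entries of $\sigma$ at positions $i$ and $k$.

```python
def perm_from_code(c, N):
    # Lehmer code (padded with zeros) -> permutation of {1, ..., N}
    c = list(c) + [0] * (N - len(c))
    avail = list(range(1, N + 1))
    return [avail.pop(ci) for ci in c]

def code_from_perm(s):
    return [sum(s[j] < s[i] for j in range(i + 1, len(s)))
            for i in range(len(s))]

def transition(v):
    # returns (v', Psi_v) as in the transition formula; 0-based positions
    k = max(t for t in range(len(v)) if v[t] != 0)
    vp = list(v); vp[k] -= 1
    N = len(v) + sum(v)                     # a safely large symmetric group
    s = perm_from_code(vp, N)
    psi = []
    for i in range(k):
        ok = s[i] < s[k] and all(s[j] > s[k] or s[j] < s[i]
                                 for j in range(i + 1, k))
        if ok:                              # conditions (a) and (b)
            u = s[:]; u[i], u[k] = u[k], u[i]
            psi.append(tuple(code_from_perm(u)[:len(v)]))
    return tuple(vp), psi

print(transition((0, 2)))   # ((0, 1), [(2, 0)]):  Y_{02} = x_2 Y_{01} + Y_{20}
```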
The transition formula yields the following simple, yet rarely
mentioned\footnote{The only reference we know of is in
\cite[pp. 62, Footnote 4]{Lascoux_polynomial}. } property of Schubert
polynomials.
This is the key to show that the Schubert basis is $L_A$-compatible for some
appropriate $A$. For completeness we include a proof here.
Given ${\mathbf{v}}$ and ${\mathbf{u}}$ in $\mathbb{N}^n$, ${\mathbf{v}}$ dominates ${\mathbf{u}}$ reversely, denoted
as ${\mathbf{v}}\triangleright{\mathbf{u}}$, if
$v_n\geq u_n$, $v_n+v_{n-1}\geq
u_n+u_{n-1}$, \dots, $v_n+\dots +v_2\geq u_n+\dots +u_2$,
$v_n+\dots+v_1=u_n+\dots+u_1$.
\begin{lemma}\label{lem:dom}
For ${\mathbf{u}}\in \mathbb{N}^n$, if ${\mathbf{x}}^{{\mathbf{u}}}$ is in $Y_{\mathbf{v}}$, then ${\mathbf{v}}\triangleright {\mathbf{u}}$.
Furthermore, $\mathrm{Coeff}({\mathbf{v}}, Y_{\mathbf{v}})=1$.
\end{lemma}
\begin{proof}
We first induct on the weight. When the weight is $1$, the claim holds
trivially. Assume the claim holds for weight $\leq w$, and consider
${\mathbf{v}}\in\mathbb{N}^n$ with $|{\mathbf{v}}|=w+1$. We now induct on the reverse dominance order,
from small to large. The smallest one with weight $w+1$ is $[w+1, 0, \dots, 0]$.
As $Y_{[w+1, 0, \dots, 0]}=x_1^{w+1}$, the claim holds.
For the induction step, we make use of the transition formula. Suppose
$k$ is the largest $i$ s.t. $v_i$ is nonzero, ${\mathbf{v}}'=[v_1, \dots, v_k-1, 0,
\dots, 0]$, and $\sigma=\aperm{{\mathbf{v}}'}$.
Then by the transition formula, $Y_{\mathbf{v}}=x_k Y_{{\mathbf{v}}'}+ \sum_{{\mathbf{u}}} Y_{{\mathbf{u}}}$,
where ${\mathbf{u}}\in \mathbb{N}^n$
satisfies that: (i) $\aperm{{\mathbf{u}}}\sigma^{-1}$ is a
transposition $\tau_{ik}$ for $i<k$; (ii) $|{\mathbf{u}}|=|{\mathbf{v}}|$. Assuming (i), the
condition (ii) is
equivalent to that: (a) $\sigma(i) < \sigma(k)$; (b) for any $j$ s.t. $i<j<k$,
either $\sigma(j)>\sigma(k)$, or $\sigma(j)<\sigma(i)$. Thus ${\mathbf{u}}$ and ${\mathbf{v}}'$
can only differ at positions $i$ and $k$, and $u_i > v'_i=v_i$, $u_k\leq
v'_k<v_k$. It follows
that ${\mathbf{v}}\triangleright{\mathbf{u}}$, and it is clear that ${\mathbf{v}}\neq {\mathbf{u}}$. By the induction
hypothesis on $|{\mathbf{v}}|$, each monomial
in $Y_{{\mathbf{v}}'}$ is reverse dominated by ${\mathbf{v}}'$, and $\mathrm{Coeff}({\mathbf{v}}',
Y_{{\mathbf{v}}'})=1$. As $Y_{{\mathbf{v}}'}$ depends only
on $x_1, \dots, x_k$ by \Cref{fact:schub} (3), each monomial in
$x_kY_{{\mathbf{v}}'}$ is reverse dominated by ${\mathbf{v}}$. By the induction
hypothesis on the reverse dominance order, every monomial in $Y_{\mathbf{u}}$ is
reverse dominated by ${\mathbf{u}}$, thus is reverse dominated by ${\mathbf{v}}$ and cannot be
equal to ${\mathbf{v}}$. Thus
$\mathrm{Coeff}({\mathbf{v}}, Y_{\mathbf{v}})=1$, which is from $x_kY_{{\mathbf{v}}'}$. This finishes
the induction step.
\end{proof}
We then deduce another property of Schubert polynomials from the transition
formula. Starting with ${\mathbf{v}}_0$, we can form a chain of transitions
$
{\mathbf{v}}_0 \to {\mathbf{v}}_1 \to {\mathbf{v}}_2 \to \dots \to {\mathbf{v}}_i \to \dots
$
where ${\mathbf{v}}_{i}\in \Psi_{{\mathbf{v}}_{i-1}}$. The following lemma shows that
long enough transitions lead to anti-dominant codes.
\begin{lemma}[{\cite[Lemma 3.11]{LS85}}]\label{lem:long_trans}
Let ${\mathbf{v}}_0\to {\mathbf{v}}_1\to \dots \to {\mathbf{v}}_\ell$ be a sequence of codes in $\mathbb{N}^n$,
s.t. ${\mathbf{v}}_{i}\in \Psi_{{\mathbf{v}}_{i-1}}$, $i\in[\ell]$. If none of ${\mathbf{v}}_i$'s are
anti-dominant, then $\ell \leq n\cdot |{\mathbf{v}}_0|$.
\end{lemma}
Based on \Cref{lem:long_trans}, we have the following corollary.
Recall that for ${\mathbf{v}}\in\mathbb{N}^n$, $\Phi_{\mathbf{v}}$ is the
collection of codes (not necessarily of weight $|{\mathbf{v}}|$) in the transition
formula for ${\mathbf{v}}$.
\begin{corollary}\label{cor:long_trans}
Let ${\mathbf{v}}_0\to {\mathbf{v}}_1\to \dots \to {\mathbf{v}}_\ell$ be a sequence of codes in $\mathbb{N}^n$,
s.t. ${\mathbf{v}}_{i}\in \Phi_{{\mathbf{v}}_{i-1}}$, $i\in[\ell]$. If none of ${\mathbf{v}}_i$'s are
anti-dominant, then $\ell \leq n\cdot (|{\mathbf{v}}_0|^2+|{\mathbf{v}}_0|)$.
\end{corollary}
\begin{proof}
For $w\in[|{\mathbf{v}}_0|]$, let $i_w$ be the last index $i$ in $[\ell]$ s.t. ${\mathbf{v}}_i$
is of weight $w$. As none of ${\mathbf{v}}_i$'s are anti-dominant, by
\Cref{lem:long_trans}, $i_{w-1}-i_w\leq n\cdot w+1 \leq n\cdot |{\mathbf{v}}_0|+1$.
The result then follows.
\end{proof}
In fact, by following the proof of \Cref{lem:long_trans} as in \cite{LS85},
it is not hard to show that in \Cref{cor:long_trans} the same bound as in \Cref{lem:long_trans}, namely $\ell \leq n \cdot |{\mathbf{v}}_0|$, holds.
From \Cref{cor:long_trans}, a $\text{\#P}$ algorithm for Schubert polynomials can also be derived.
\begin{proof}[An alternative proof of \Cref{cor:eval}]
Consider the following Turing
machine $M$: the input to $M$ is a code ${\mathbf{v}}\in\mathbb{N}^n$ in unary, a point
${\mathbf{a}}=(a_1, \dots, a_n)\in\mathbb{N}^n$ in binary, and a sequence ${\mathbf{s}}$ of pairs of
indices $(s_i, t_i)\in
[n]\times [n]$, $s_i\leq t_i$, and $i \in [\ell]$ where $\ell:=n\cdot
(|{\mathbf{v}}|^2+|{\mathbf{v}}|)$.
The output of $M$ is a
nonnegative integer. Given the input $({\mathbf{v}}, {\mathbf{a}}, {\mathbf{s}})$, $M$ computes a
sequence
of codes ${\mathbf{v}}_0\to{\mathbf{v}}_1\to\dots\to{\mathbf{v}}_\ell$, and keeps track of a
monomial ${\mathbf{x}}^{{\mathbf{e}}_0}\to{\mathbf{x}}^{{\mathbf{e}}_1}\to\dots\to{\mathbf{x}}^{{\mathbf{e}}_\ell}$, where
${\mathbf{e}}_i\in\mathbb{N}^n$. The pair
of indices $(s_{i+1}, t_{i+1})$ is used as the instruction to
obtain ${\mathbf{v}}_{i+1}$ from ${\mathbf{v}}_i$, and ${\mathbf{e}}_{i+1}$ from ${\mathbf{e}}_i$.
To start, ${\mathbf{v}}_0={\mathbf{v}}$, and ${\mathbf{e}}_0=(0, \dots, 0)$. Suppose at step $i$,
${\mathbf{e}}_i=(e_1, \dots, e_n)$, and ${\mathbf{v}}_i=(v_1, \dots, v_k, 0, \dots, 0)$,
$v_k\neq 0$. ($k$ is the maximal nonzero index in ${\mathbf{v}}_i$.)
If ${\mathbf{v}}_i$ is anti-dominant, then $Y_{{\mathbf{v}}_i}$ equals some Schur polynomial
by \Cref{fact:schub} (1). Using \Cref{prop:compute_schur} $M$ can
compute the evaluation of that Schur polynomial on ${\mathbf{a}}$ efficiently,
and then multiply by the value $\prod_{i\in[n]}a_i^{e_i}$ as the output. Note
that, as will be seen below, the weight of ${\mathbf{e}}_i$ is at most $\ell= n\cdot
(|{\mathbf{v}}|^2+|{\mathbf{v}}|)$, so the bit length of $\prod_{i\in[n]}a_i^{e_i}$ is polynomial in the input size.
In the following ${\mathbf{v}}_i$ is not anti-dominant. $M$ checks whether $t_{i+1}= k$.
If not, $M$ outputs $0$.
In the following $t_{i+1}=k$. $M$ then checks whether $s_{i+1}=t_{i+1}$.
If $s_{i+1}=t_{i+1}$, $M$ goes to step $i+1$ by setting ${\mathbf{v}}_{i+1}=(v_1, \dots, v_k-1, 0, \dots, 0)$,
and ${\mathbf{e}}_{i+1}=(e_1, \dots, e_{k-1}, e_k+1, e_{k+1}, \dots, e_n)$.
If $s_{i+1}<t_{i+1}$, then $M$ tests whether $s_{i+1}$ is an index in $\Psi_{{\mathbf{v}}_i}$, as follows.
It first computes the permutation $\sigma:=\langle (v_1, \dots, v_{k-1}, v_k-1, 0, \dots, 0)\rangle\in S_N$,
using the procedure described in \Cref{sec:prel}.
Note that $N= \mathrm{max}_{i\in[n]}\{v_i+i\}\leq |{\mathbf{v}}_i|+n\leq |{\mathbf{v}}_0|+n$.
Then it tests whether $s_{i+1}$ is in $\Psi_{{\mathbf{v}}_i}$, using \Cref{eqn:trans}.
If $s_{i+1}$ is not in $\Psi_{{\mathbf{v}}_i}$, then $M$ outputs $0$.
Otherwise, $M$ goes to step $i+1$ by setting ${\mathbf{v}}_{i+1}=\acode{\sigma\tau_{s_{i+1}, t_{i+1}}}$, and ${\mathbf{e}}_{i+1}={\mathbf{e}}_i$.
This finishes the description of $M$.
Clearly $M$ runs in time polynomial in the input size.
$M$ terminates within $\ell$ steps by \Cref{cor:long_trans}.
$M$ always outputs a nonnegative integer as Schur polynomials are polynomials with positive coefficients.
Finally note that $Y_{\mathbf{v}}({\mathbf{a}})$ is equal to $\sum_{\mathbf{s}} M({\mathbf{v}}, {\mathbf{a}}, {\mathbf{s}})$, where ${\mathbf{s}}$ runs over all the sequences of pairs of
indices as described at the beginning of the proof.
By the discussion on $\text{\#P}$ in \Cref{sec:prel}, this puts evaluating
$Y_{\mathbf{v}}$ on ${\mathbf{a}}$ in $\text{\#P}$.
\end{proof}
\paragraph{The Schubert basis is interpolation-friendly.} Now we are in the
position to prove that the Schubert basis is
interpolation-friendly.
\begin{proposition}\label{prop:schub_bound}
$\{Y_{{\mathbf{v}}}\in\mathbb{Z}[x_1, \dots, x_n] \mid {\mathbf{v}}\in\mathbb{N}^n\}$ is $K$-bounded for
$K({\mathbf{v}}, n)=n^{2n\cdot (|{\mathbf{v}}|^2+|{\mathbf{v}}|)}\cdot \sqrt{|{\mathbf{v}}|!}$.
\end{proposition}
\begin{proof}
The alternative proof of \Cref{cor:eval} for Schubert polynomials implies that $Y_{\mathbf{v}}$
can be written as a sum of at most $(n^2)^{n\cdot (|{\mathbf{v}}|^2+|{\mathbf{v}}|)}$
polynomials $f$,
where $f$ is of the form ${\mathbf{x}}^{\mathbf{e}}\cdot \mathrm{s}_\alpha$, $|\alpha|+|{\mathbf{e}}|=|{\mathbf{v}}|$.
The coefficients in Schur polynomial of degree $d$ are bounded by $\sqrt{d!}$ by \Cref{prop:schur_bound}.
The claim then follows.
\end{proof}
\begin{proposition}\label{prop:schub_friendly}
$\{Y_{{\mathbf{v}}}\in\mathbb{Z}[x_1, \dots, x_n] \mid {\mathbf{v}}\in\mathbb{N}^n\}$ is $L_A$-compatible for
$A=(r_{i,j})_{i, j\in[n]}$, $r_{i,j}=0$ if $i < j$, and $1$ otherwise. The
leading monomial of $Y_{{\mathbf{v}}}$ w.r.t. $L_A$ is ${\mathbf{x}}^{\mathbf{v}}$.
\end{proposition}
The matrix $A$ in \Cref{prop:schub_friendly} is the lower triangular matrix
of $1$'s on the diagonal and below. Compare with that for Schur polynomials,
described in \Cref{subsec:good}.
\begin{proof}
This follows easily from \Cref{lem:dom}: note that for any
${\mathbf{c}}=(c_1, \dots, c_n)\in(\mathbb{Z}^+)^n$, ${\mathbf{u}}\in Y_{{\mathbf{v}}}$, $\langle A{\mathbf{c}},
{\mathbf{u}}\rangle=c_1(u_1+\dots + u_n)+c_2(u_2+\dots+u_n)+\dots+c_n u_n\leq
c_1(v_1+\dots+v_n)+c_2(v_2+\dots+v_n)+\dots+c_nv_n=\langle A{\mathbf{c}}, {\mathbf{v}}\rangle$.
As $c_i>0$, the equality holds if and only if ${\mathbf{u}}={\mathbf{v}}$.
\end{proof}
Now we conclude the article by proving the main \Cref{thm:interpolate}.
\begin{proof}[Proof of \Cref{thm:interpolate}]
Note that $n^{2n\cdot(|{\mathbf{v}}|^2+|{\mathbf{v}}|)}\cdot \sqrt{|{\mathbf{v}}|!}$ is upper bounded
by
$2^{O(n\log n\cdot
(|{\mathbf{v}}|^2+|{\mathbf{v}}|)+|{\mathbf{v}}|\log(|{\mathbf{v}}|))}$, and recall $|{\mathbf{v}}|=\deg(Y_{\mathbf{v}})$.
Then
combine \Cref{prop:schub_bound}, \Cref{prop:schub_friendly}, and
\Cref{thm:interpolate_good}.
\end{proof}
\paragraph{Acknowledgement.}
Part of the work was done when
Youming was
visiting
the Simons Institute for the program Algorithms and Complexity in Algebraic
Geometry. We are grateful to Allen Knutson for his answer
at \url{http://mathoverflow.net/q/186603}. We would like to thank the anonymous
reviewers whose suggestions helped to improve the writing of this paper greatly.
Youming's research was supported by
Australian Research Council DE150100720.
Priyanka's research was supported by core grants for Centre for Quantum Technologies, NUS.
\bibliographystyle{alpha}
|
1,116,691,497,393 | arxiv | \section{Introduction}\label{sec.1}
The thermal conductivity of insulating (dielectric) crystals is
computed almost exclusively on the basis of the phonon Boltzmann
equation. We refer to \cite{HS} for a discussion more complete
than possible in this contribution. On the microscopic level the
starting point is the Born-Oppenheimer approximation (see
\cite{Teu} for a modern version), which provides an effective
Hamiltonian for the slow motion of the nuclei. Since their
deviation from the equilibrium position is small, one is led to a
wave equation with a \textit{weak} nonlinearity. As already
emphasized by R.~Peierls in his seminal work \cite{Pe}, physically
it is of importance to retain the structure resulting from the
atomic lattice, which forces the discrete wave equation.
On the other hand, continuum wave equations with weak nonlinearity
appear in the description of the waves in the upper ocean and in
many other fields. This topic is referred to as weak turbulence.
Again the theoretical treatment of such equations is based mostly
on the phonon Boltzmann equation, see e.g.~\cite{Za}. In these
applications one considers scales which are much larger than the
atomistic scale, hence quantum effects are negligible. For
dielectric crystals, on the other hand, quantum effects are of
importance at low temperatures. We refer to \cite{HS} and discuss
here only the classical discrete wave equation with a small
nonlinearity.
If one considers crystals with a single nucleus per unit cell,
then the displacement field is a 3-vector field over the crystal
lattice $\Gamma$. The nonlinearity results from the weakly
non-quadratic interaction potentials between the nuclei. As we
will see, the microscopic mechanism responsible for the validity
of the Boltzmann equation can be understood already in case the
displacement field is declared to be scalar, the nonlinearity to
be due to an on-site potential, and the lattice $\Gamma =
\mathbb{Z}^3$. This is the model I will discuss in my notes.
As the title indicates there is no complete proof available for
the validity of the phonon Boltzmann equation. The plan is to
explain the kinetic scaling and to restate our conjecture in terms
of the asymptotics of certain Feynman diagrams.
\section{Microscopic model}\label{sec.2}
\setcounter{equation}{0}
We consider the simple cubic crystal $\mathbb{Z}^3$. The
displacement field is denoted by
\begin{equation}\label{2.1}
q_x \in \mathbb{R}\,, \quad x \in \mathbb{Z}^3\,,
\end{equation}
with the canonically conjugate momenta
\begin{equation}\label{2.2}
p_x \in \mathbb{R}\,, \quad x \in \mathbb{Z}^3\,.
\end{equation}
We use units in which the mass of the nuclei is $m=1$. The
particles interact harmonically and are subject to an on-site
potential, which is divided into a quadratic part and a
non-quadratic correction. Thus the Hamiltonian of the system reads
\begin{equation}\label{2.4}
H= \frac{1}{2} \sum_{x \in \mathbb{Z}^3} \Big( p_x^2 +
\omega_0^2q_x^2\Big)+ \frac{1}{2}\sum_{x,y \in
\mathbb{Z}^3}\alpha(x-y)q_xq_y +\sum_{x \in
\mathbb{Z}^3}V(q_x)=H_0+\sum_{x \in \mathbb{Z}^3}V(q_x)\,.
\end{equation}
The coupling constants have the properties
\begin{equation}\label{2.5}
\alpha(x)=\alpha(-x)\,,
\end{equation}
\begin{equation}\label{2.5a}
|\alpha(x)|\leq\alpha_0 e^{-\gamma|x|}
\end{equation}
for suitable $\alpha_0$, $\gamma>0$, and
\begin{equation}\label{2.5b}
\sum_{x\in \mathbb{Z}^3} \alpha(x)=0\,,
\end{equation}
because of the invariance of the interaction between the
nuclei under the translation $q_x \leadsto q_x + a$.
For the anharmonic on-site potential we set
\begin{equation}\label{2.6}
V(u)=\sqrt{\varepsilon}\frac{1}{3}\lambda u^3
+\varepsilon(\lambda^2/18 \omega^2_0)u^4\,,\; u\in \mathbb{R}\,.
\end{equation}
$\varepsilon$ is the dimensionless scale parameter, eventually
$\varepsilon\to 0$. The quartic piece is added so as to make sure
that $H\geq 0$. In the limit $\varepsilon\to 0$ its contribution
will vanish and for simplicity of notation we will omit it from
the outset. Then the equations of motion are
\begin{eqnarray}\label{2.7}
&&\frac{d}{dt}q_x (t) = p_x (t)\,,\nonumber\\
&&\frac{d}{dt}p_x (t) = -\sum_{y\in \mathbb{Z}^3}\alpha(y-x)q_y(t)
- \omega_0^2 q_x (t)- \sqrt{\varepsilon}\lambda q_x (t)^2\,, \quad
x \in \mathbb{Z}^3\,.
\end{eqnarray}
We will consider only finite energy solutions. In particular, it
is assumed that $|p_x|\to 0$, $|q_x|\to 0$ sufficiently fast as
$|x|\to \infty$. In fact, later on there will be the need to
impose random initial data, which again are assumed to be
supported on finite energy configurations. In the kinetic limit
the average energy will diverge as $\varepsilon^{-3}$.
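Before passing to Fourier space, it may help to see the microscopic model as an executable object. The following Python sketch integrates (\ref{2.7}) with a velocity-Verlet scheme; the finite periodic box, the nearest-neighbour choice of $\alpha$, and all parameter values are assumptions of ours, standing in for finite-energy data on $\mathbb{Z}^3$.

```python
import numpy as np

# Nearest-neighbour couplings: alpha(0) = 6, alpha(e) = -1 for |e| = 1.
L, w0, lam, eps, dt = 16, 1.0, 1.0, 0.01, 0.02   # assumed parameter values

def pdot(q):
    s = -6.0 * q                      # -sum_y alpha(y - x) q_y ...
    for ax in range(3):
        s += np.roll(q, 1, axis=ax) + np.roll(q, -1, axis=ax)
    return s - w0**2 * q - np.sqrt(eps) * lam * q**2   # ... plus on-site terms

rng = np.random.default_rng(0)
q = rng.normal(scale=1e-2, size=(L, L, L))        # small random displacements
p = np.zeros_like(q)
for _ in range(2000):                             # velocity-Verlet steps
    p += 0.5 * dt * pdot(q)
    q += dt * p
    p += 0.5 * dt * pdot(q)
E_onsite = 0.5 * np.sum(p**2 + w0**2 * q**2)      # on-site harmonic part of H
```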
It is convenient to work in Fourier space. For $f:\mathbb{Z}^3 \to
\mathbb{R}$ we define
\begin{equation}\label{2.8}
\widehat{f}(k)=\sum_{x\in\mathbb{Z}^3}e^{-i2\pi k\cdot
x}f_x\,,
\end{equation}
$ k\in \mathbb{T}^3 = [ - \frac{1}{2},\frac{1}{2}]^3$, with inverse
\begin{equation}\label{2.9}
f_x=\int_{\mathbb{T}^3}dk e^{ i 2\pi k\cdot x}\widehat{f}(k)\,,
\end{equation}
$dk$ the 3-dimensional Lebesgue measure. The dispersion relation
for the harmonic part $H_0$ is
\begin{equation}\label{2.9a}
\omega(k) = \big(\omega_0^2 + \widehat{\alpha}(k)\big)^{1/2}\geq
\omega_0>0\,,
\end{equation}
since $\widehat{\alpha}(k)>0$ for $k\neq 0$ because of the
mechanical stability of the harmonic lattice with vanishing
on-site potential.
In Fourier space the equations of motion read
\begin{eqnarray}\label{2.10}
&&\hspace{-26pt}\frac{\partial}{\partial t}\widehat{q}(k,t) = \widehat{p}(k,t)\,,\nonumber\\
&&\hspace{-26pt}\frac{\partial}{\partial t}\widehat{p}(k,t) = -
\omega (k)^2
\widehat{q}(k,t)\nonumber\\
&&\hspace{36pt}-\sqrt{\varepsilon}\lambda \int_{\mathbb{T}^6}dk_1
dk_2 \delta(k-k_1-k_2) \widehat{q}(k_1,t) \widehat{q}(k_2,t)
\end{eqnarray}
with $k\in \mathbb{T}^3$. Here $\delta$ is the $\delta$-function
on the unit torus, that is to say, $\delta (k')$ carries a point mass
whenever $k'\in\mathbb{Z}^3$.
It will be convenient to concatenate $q_x$ and $p_x$ into a single
complex-valued field. We set
\begin{equation}\label{2.11}
a(k)= \frac{1}{\sqrt{2}} \big(\sqrt{\omega} \widehat{q}(k) + i
\frac{1}{\sqrt{\omega}} \widehat{p}(k)\big)
\end{equation}
with the inverse
\begin{equation}\label{2.12}
\widehat{q}(k) = \frac{1}{\sqrt{2}}\frac{1}{\sqrt{\omega}}\big(
a(k) + a(-k)^\ast\big)\,,\; \widehat{p}(k) = \frac{1}{\sqrt{2}} i
{\sqrt{\omega}}\big(-a(k) + a(-k)^\ast \big)\,.
\end{equation}
To have a concise notation, we introduce
\begin{equation}\label{2.13}
a(k,+)= a(k)^\ast \,,\;a(k,-)=a(k)\,.
\end{equation}
Then the $a$-field evolves as
\begin{eqnarray}\label{2.14}
&&\hspace{-10pt}\frac{\partial}{\partial t} a(k,\sigma,t) = i
\sigma \omega(k) a(k,\sigma,t)+ i \sigma \sqrt{\varepsilon}\lambda
\sum_{\sigma_1,\sigma_2=\pm1} \int_{\mathbb{T}^6}d k_1
d k_2\nonumber\\
&&\hspace{5pt}(8 \omega(k) \omega(k_1) \omega(k_2))^{-1/2}
\delta(-\sigma k+\sigma_1k_1+\sigma_2k_2) a(k_1,\sigma_1,t)
a(k_2,\sigma_2,t)\,.
\end{eqnarray}
\section{Kinetic limit and Boltzmann equation}\label{sec.3}
\setcounter{equation}{0}
The kinetic limit deals with a special class of initial
probability measures. Their displacement field has a support of
linear size $\varepsilon^{-1}$ and average energy of order
$\varepsilon^{-3}$. More specifically, these probability measures
have the property of being locally Gaussian and almost stationary
under the dynamics. Because of the assumed slow variation in space
the covariance of such probability measures changes only slowly,
i.e.~on the scale $\varepsilon^{-1}$, in time.
Let us assume then that the initial data for (\ref{2.14}) are random and
specified by a
Gaussian probability measure on phase space. It is assumed to have mean
\begin{equation}\label{3.1}
\langle a(k,\sigma)\rangle_\varepsilon^\textrm{G} =0\,
\end{equation}
and for the covariance we set
\begin{equation}\label{3.2}
\langle a(k,\sigma) a(k',\sigma) \rangle_\varepsilon^\textrm{G}
=0\,,
\end{equation}
\begin{equation}\label{3.3}
W^{\varepsilon} (y,k) = \varepsilon^3
\int_{(\mathbb{T}/\varepsilon)^3} d\eta e^{ i 2\pi
y\cdot\eta}\langle a(k-\varepsilon \eta/2,+) a(k+ \varepsilon
\eta/2,-)\rangle_\varepsilon^\textrm{G}\,,
\end{equation}
$y\in(\varepsilon\mathbb{Z})^3$, which defines the \textit{Wigner
function} rescaled to the lattice $(\varepsilon\mathbb{Z})^3$.
Local stationarity is ensured by the condition
\begin{equation}\label{3.4}
\lim_{\varepsilon \to 0} W^{\varepsilon}(\lfloor
r\rfloor_\varepsilon,k) = W^0(r,k)\,,
\end{equation}
where $\lfloor r\rfloor_\varepsilon$ denotes $r$ rounded to the lattice
$(\varepsilon\mathbb{Z})^3$. Note that $W^\varepsilon$ is normalized as
\begin{equation}\label{3.5}
\sum_{y\in (\varepsilon \mathbb{Z})^3}\int_{\mathbb{T}^3}dk
W^{\varepsilon}(y,k) = \int_{\mathbb{T}^3}dk \langle a(k,+) a(k,-)
\rangle^\textrm{G}_\varepsilon\,.
\end{equation}
The condition that the limit in (\ref{3.4}) exists thus implies
that the average phonon number increases as $\varepsilon^{-3}$,
equivalently the average total energy increases as
\begin{equation}\label{3.18}
\langle\int_{\mathbb{T}^3}d^3k \omega(k) a(k,+) a(k,-)
\rangle_\varepsilon^\textrm{G} = \langle
H_0\rangle_\varepsilon^\textrm{G}= \mathcal{O}(\varepsilon^{-3})\,.
\end{equation}
Let $\langle \cdot\rangle_t$ be the time-evolved measure at time
$t$. Its rescaled Wigner function is
\begin{eqnarray}\label{3.6}
W^{\varepsilon} (y,k,t) = \varepsilon^3
\int_{(\mathbb{T}/\varepsilon)^3} d\eta e^{ i 2\pi
y\cdot\eta}\langle a(k- \varepsilon \eta/2,+) a(k+ \varepsilon
\eta/2,-)\rangle_{t/\varepsilon}\,.
\end{eqnarray}
Kinetic theory claims that
\begin{equation}\label{3.7}
\lim_{\varepsilon\to 0}W^\varepsilon (\lfloor r
\rfloor_\varepsilon, k,t) = W(r,k,t)\,,
\end{equation}
where $W(r,k,t)$ is the solution of the phonon Boltzmann equation
\begin{eqnarray}\label{3.8}
&&\hspace{-20pt}\frac{\partial}{\partial t} W(r,k,t) + \frac{1}{2\pi} \nabla
\omega (k)\cdot \nabla_r W( r,k,t)\nonumber\\
&&\hspace{-5pt}= \frac{\pi}{2}\lambda^2 \sum_{\sigma_1,\sigma_2=\pm 1}
\int_{\mathbb{T}^6} d k_1 d k_2 (\omega(k) \omega(k_1)
\omega(k_2))^{-1} \delta
(\omega(k)+\sigma_1\omega(k_1)+\sigma_2\omega(k_2))\nonumber\\
&&\hspace{80pt}\delta(k+\sigma_1 k_1+\sigma_2k_2)\big(W(r,k_1,t)
W(r,k_2,t)\nonumber\\
&&\hspace{80pt} + \sigma_1
W(r,k,t)W(r,k_2,t)+\sigma_2W(r,k,t)W(r,k_1,t)\big)
\end{eqnarray}
to be solved with the initial condition $W(r,k,0)=W^0(r,k)$.
The free streaming part is an immediate consequence of the
evolution of $W$ as generated by $H_0$. The strength of the cubic
nonlinearity was assumed to be of order $\sqrt{\varepsilon}$,
which results in an effect of order 1 on the kinetic time scale.
The specific form of the collision operator will be explained in
the following section. It can be brought into a more familiar form
by performing the sum over $\sigma_1,\sigma_2$. Then the collision
operator has two terms. The first one describes the merging of two
phonons with wave number $k$ and $k_1$ into a phonon with wave
number $k_2=k+k_1$, while the second term describes the splitting
of a phonon with wave number $k$ into two phonons with wave
numbers $k_1$ and $k_2$, $k=k_1+k_2$. In such a collision process
energy is conserved and wave number is conserved modulo an integer
vector.
In (\ref{3.8}) the summand with $\sigma_1 =1= \sigma_2$ vanishes
trivially. However it could be the case that the condition for
energy conservation,
\begin{equation}\label{3.9}
\omega(k) + \omega(k') - \omega(k+k')=0\,,
\end{equation}
has also no solution. If so, the collision operator vanishes. In
fact, for nearest neighbor coupling only, $\alpha(0)=6$,
$\alpha(e)=-1$ for $|e|=1$, $\alpha(x)=0$ otherwise, it can be
shown that (\ref{3.9}) has no solution whenever $\omega_0 > 0$. To
have a non-zero collision term we have to require
\begin{equation}\label{3.10}
\int dk \int dk' \delta(\omega(k)+\omega(k')-\omega(k+k'))>0\,,
\end{equation}
which is an implicit condition on the couplings $\alpha(x)$. A
general condition to ensure (\ref{3.10}) is not known. A simple
example where (\ref{3.10}) can be checked by hand is
\begin{equation}\label{3.11}
\omega(k)=\omega_0 + \sum^3_{\alpha=1} (1-\cos(2\pi
k^\alpha))\,,\quad k=(k^1,k^2,k^3)\,.
\end{equation}
It corresponds to suitable nearest and next nearest neighbor
couplings.
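Numerically, one may probe (\ref{3.9}) for this dispersion relation by random sampling over the Brillouin zone; in the following sketch (with the assumed value $\omega_0=1$) the function $\omega(k)+\omega(k')-\omega(k+k')$ changes sign, so by continuity its zero set is nonempty.

```python
import numpy as np

w0 = 1.0                                   # assumed value of omega_0
def omega(k):                              # dispersion relation (3.11)
    return w0 + np.sum(1.0 - np.cos(2.0 * np.pi * k), axis=-1)

rng = np.random.default_rng(0)
k, kp = rng.uniform(-0.5, 0.5, size=(2, 10**6, 3))
F = omega(k) + omega(kp) - omega(k + kp)
print(F.min(), F.max())                    # min < 0 < max: (3.9) has solutions
```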
There is a second more technical condition which requires that
\begin{equation}\label{3.12}
\sup_k \int dk'\delta
(\omega(k)+\omega(k')-\omega(k+k'))=c_0<\infty\,.
\end{equation}
It holds for the dispersion relation (\ref{3.11}). This uniform
bound allows for a simple proof that the Boltzmann equation has a
unique solution for short times provided $W^0(r,k)$ is bounded.
\section{Feynman diagrams}\label{sec.4}
\setcounter{equation}{0}
Denoting by $\langle\cdot\rangle_t$ the average with respect to
the measure at time $t$ (in microscopic units), the starting point
of the time-dependent perturbation series is the identity
\begin{eqnarray}\label{4.1}
&&\hspace{-60pt} \langle\prod^n_{j=1} a(k_j,\sigma_j)\rangle_t =
\exp\big[it\big(\sum^n_{j=1} \sigma_j\omega(k_j)\big)\big]\langle
\prod^n_{j=1}
a(k_j,\sigma_j)\rangle^\textrm{G}\nonumber\\
&&\hspace{26pt} + i\sqrt{\varepsilon}\int^t_0 ds
\exp\big[i(t-s)\big(\sum^n_{j=1} \sigma_j\omega(k_j)\big)\big]\nonumber\\
&&\hspace{26pt}\Big(\sum^n_{\ell=1}
\sum_{\sigma',\sigma''=\pm1}\sigma_\ell
\int_{\mathbb{T}^6} dk'dk''\phi(k_\ell,k',k'')\delta(-\sigma_\ell k_\ell+\sigma'k'+\sigma''k'')\nonumber\\
&&\hspace{26pt} \langle (\prod^n_{j=1\;j\neq\ell} a(k_j,\sigma_j))
a(k',\sigma')a(k'',\sigma'')\rangle_s\Big)\,.
\end{eqnarray}
Here
\begin{equation}\label{4.2}
\phi(k,k',k'')=\lambda(8\omega(k)\omega(k')\omega(k''))^{-1/2}\,.
\end{equation}
One starts with $n=2$ and $\sigma_1=1$, $\sigma_2=-1$. Then on the
right hand side of (\ref{4.1}) there is the product of three
$a$'s. One resubstitutes (\ref{4.1}) with $n=3$, etc. Thereby one
generates an infinite series, in which only the average over the
initial Gaussian measure $\langle\cdot\rangle^\textrm{G}$ appears.
To keep the presentation transparent, let me assume that
$\langle\cdot\rangle^\textrm{G}$ is a translation invariant
Gaussian measure with
\begin{eqnarray}\label{4.3}
&&\hspace{-30pt}\langle a(k,\pm)\rangle^\textrm{G}=0\,,\;\langle
a(k,\sigma)a(k',\sigma)\rangle^\textrm{G}=0\,,\nonumber\\
&&\hspace{-30pt}\langle a(k,+)a(k',-)\rangle^\textrm{G}
=\delta(k-k') W(k)\,.
\end{eqnarray}
Then the measure at time $t$ is again translation invariant.
Kinetic scaling now merely amounts to considering the long times
$t/\varepsilon$. The Wigner function at that time is then
represented by the infinite series
\begin{equation}\label{4.4}
\langle a(q,-)a(p,+)\rangle_{t/\varepsilon}=\delta(q-p)
\Big(W(q)+\sum^\infty_{n=1}W^\varepsilon_n(q,t)\Big)\,.
\end{equation}
The infinite sum is only formal. Taking naively the absolute value
at iteration $2n$ one finds that
\begin{equation}\label{4.5}
|W^\varepsilon_n(q,t)|\leq
\varepsilon^n(t/\varepsilon)^{2n}((2n)!)^{-1}(2n)!
c^{2n}((2n+2)!/2^{n+1}(n+1)!)\,.
\end{equation}
Here $\varepsilon^n=(\sqrt{\varepsilon})^{2n}$,
$(t/\varepsilon)^{2n}/(2n)!$ comes from the time integration,
$(2n)!$ from the sum over $\ell$ in (\ref{4.1}), $c^{2n}$ from the
$k$-integrations and the initial $W(k)$, and the factor
$(2n+2)!/2^{n+1}(n+1)!$ from the Gaussian pairings in the
initial measure. Thus even at fixed $\varepsilon$ there are too
many terms in the sum.
Since no better estimate is available at present, we concentrate
on the structure of a single summand $W^\varepsilon_n(q,t)$.
$\delta(q-p)W^\varepsilon_n(q,t)$ is a sum of integrals. The
summation comes from\smallskip\\
-- the sum over $\sigma',\sigma''$ in (\ref{4.1})\\
-- the sum over $\ell$ in (\ref{4.1})\\
-- the sum over all pairings resulting from the average with
respect to the initial Gaussian measure
$\langle\cdot\rangle^\textrm{G}$.\smallskip\\
Since each single integral has a rather complicated structure, it
is convenient to visualize them as \textit{Feynman diagrams}.
A Feynman diagram is a graph with labels. Let us first explain the
graph. The graph consists of two binary trees. It is convenient to
draw them on a ``backbone" consisting of $2n+2$ equidistant
horizontal level lines which are labelled from 0 (bottom) to
$2n+1$ (top). The two roots of the tree are two vertical bonds
from line $2n+1$ to level line $2n$. At level $m$ there is
\textit{exactly one} branch point with two branches in either
tree. Thus there are exactly $2n$ branch points. At level 0 there
are then $2n+2$ branches. They are connected according to the
pairing rule, see Figure below.
In the Feynman graph each bond is oriented with arrows pointing
either up $(\sigma=+1)$ or down $(\sigma=-1)$. The left root is
down while the right root is up. If there is no branching the
orientation is inherited from the upper level. At a pairing the
orientation must be maintained. Thus at level 0 a branch with an
up arrow can be paired only with a branch with a down arrow, see
(\ref{4.3}). Every internal line in the graph must terminate at
either end by a branch point. Every such internal line admits
precisely two orientations.
\setlength{\unitlength}{1cm}
\begin{picture}
(10,8)(-3.5,0)
\put(-1,-0.1){$0$} \put(-1,0.9){$1$} \put(-1,1.9){$2$}
\put(-1,2.9){$3$} \put(-1,3.9){$4$} \put(-1,4.9){$5$}
\put(8,-0.1){$0$} \put(8,0.9){$t_1$} \put(8,1.9){$t_2$}
\put(8,2.9){$t_3$} \put(8,3.9){$t_4$} \put(8,4.9){$t$}
\put(1.3,4.1){$1$}\put(3.5,4.1){$2$}\put(2.9,3.1){$3$}\put(4,3.1){$4$}
\put(4.9,2.1){$5$}\put(6,2.1){$6$}\put(0.9,1.1){$7$}\put(2,1.1){$8$}
\put(2.4,5.2){$q$}\put(5.4,5.2){$p$}
\linethickness{0.1pt}
\put(0,0){\line(1,0){7}}
\put(0,1){\line(1,0){7}}
\put(0,2){\line(1,0){7}}
\put(0,3){\line(1,0){7}}
\put(0,4){\line(1,0){7}}
\put(0,5){\line(1,0){7}}
\thicklines
\put(1,-0.3){\line(0,1){1.3}}
\put(2,-0.15){\line(0,1){1.15}}
\put(3,-0.15){\line(0,1){3.15}}
\put(4,-0.15){\line(0,1){3.15}}
\put(5,-0.15){\line(0,1){2.15}}
\put(6,-0.3){\line(0,1){2.3}}
\put(1.5,1){\line(0,1){3}}
\put(2.5,4){\line(0,1){1}}
\put(3.5,3){\line(0,1){1}}
\put(5.5,2){\line(0,1){3}}
\put(1,1){\line(1,0){1}}
\put(3,3){\line(1,0){1}}
\put(1.5,4){\line(1,0){2}}
\put(5,2){\line(1,0){1}}
\put(1,-0.3){\line(1,0){5}}
\put(2,-0.15){\line(1,0){1}}
\put(4,-0.15){\line(1,0){1}}
\put(1.5,3.5){\vector(0,1){0.15}}
\put(1,0.5){\vector(0,1){0.15}}
\put(2,0.5){\vector(0,-1){0.15}}
\put(2.5,4.5){\vector(0,-1){0.15}}
\put(3,2.5){\vector(0,1){0.15}}
\put(4,2.5){\vector(0,1){0.15}}
\put(5,1.5){\vector(0,-1){0.15}}
\put(6,1.5){\vector(0,-1){0.15}}
\put(5.5,4.5){\vector(0,1){0.15}}
\put(3.5,3.5){\vector(0,-1){0.15}}
\end{picture}\\
\newpage
Next we insert the labels. The level lines 0 to $2n+1$ are
labelled by times $0<t_1\ldots<t_{2n}<t$. The left root carries
the label $q$ while the right root carries the label $p$. Each
internal line is labelled with a wave number $k$.
To each Feynman diagram one associates an integral through the
following steps.\smallskip\\
(i) The time integration is over the simplex $0\leq
t_1\ldots\leq t_{2n}\leq t$ as $dt_1\ldots dt_{2n}$.\smallskip\\
(ii) The wave number integration is over all internal lines as
$\int dk_1 \ldots \int dk_\kappa$, where $\kappa=3n-1$ is the
number of internal
lines.\smallskip\\
(iii) One sums over all orientations of the internal lines.\smallskip\\
The integrand is a product of three factors.\smallskip\\
(iv) There is a product over all branch points.
At each branch point there is a root, say wave vector $k_1$ and
orientation $\sigma_1$, and there are two branches, say wave vectors $k_2$,
$k_3$ and orientations $\sigma_2$, $\sigma_3$. Then each branch
point carries the weight
\begin{equation}\label{4.6}
\delta(-\sigma_1k_1+\sigma_2k_2+\sigma_3k_3)\sigma_1\phi(k_1,k_2,k_3)\,.
\end{equation}
If one regards the wave vector $k$ as a current with orientation
$\sigma$, then (\ref{4.6}) expresses Kirchhoff's rule for
conservation of the current.\smallskip\\
(v) By construction each bond carries a time difference
$t_{m+1}-t_m$, a wave vector $k$, and an orientation $\sigma$.
Then to this bond one associates the phase factor
\begin{equation}\label{4.7}
\exp[i(t_{m+1}-t_m)\sigma\omega(k)/\varepsilon]\,.
\end{equation}
The second factor is the product of such phase factors over all bonds.\smallskip\\
(vi) The third factor of the integrand is given by
\begin{equation}\label{4.8}
\prod^{n+1}_{j=1} W(k_j)\,,
\end{equation}
where $k_1,\ldots,k_{n+1}$ are the wave numbers of the bonds
between level 0 and level 1.\smallskip\\
(vii) Finally there is the prefactor
$(-1)^n\varepsilon^{-n}$.\smallskip
To illustrate these rules we give an example for $n=2$, see Figure
above. The associated integral, in which for transparency we keep the $\delta$-functions
from the pairings, is given by
\begin{eqnarray}\label{4.8a}
&&\hspace{-8pt}\varepsilon^{-2}\int_{0\leq t_1\leq\ldots\leq t_4 \leq
t}dt_1\ldots dt_4
\int_{\mathbb{T}^{24}}dk_1\ldots dk_8\nonumber\\
&&\delta(q+k_1-k_2)\delta(k_2+k_3+k_4)\delta(-p-k_5-k_6)
\delta(-k_1+k_7-k_8)\nonumber\\
&&\phi(q,k_1,k_2)\phi(k_2,k_3,k_4)\phi(p,k_5,k_6)\phi(k_1,k_7,k_8)\nonumber\\
&&\hspace{0pt}\delta(k_7-k_6)W(k_7)\delta(k_8-k_3)W(k_8)\delta(k_4-k_5)W(k_4)\nonumber\\
&&\exp\big[\{i(t-t_4)(-\omega_q+\omega_p)+
i(t_4-t_3)(\omega_1-\omega_2+\omega_p)\nonumber\\
&&+i(t_3-t_2)(\omega_1+\omega_3+\omega_4+\omega_p)
+i(t_2-t_1)(\omega_1+\omega_3+\omega_4-\omega_5-\omega_6)\nonumber\\
&& +it_1(\omega_7-\omega_8+\omega_3+\omega_4-\omega_5-\omega_6)\}/\varepsilon\big]
\end{eqnarray}
with $\omega_q=\omega(q)$, $\omega_p=\omega(p)$,
$\omega_j=\omega(k_j)$.
$\delta(q-p)W^\varepsilon_n(q,t)$ is the sum over all Feynman
diagrams with $2n+2$ levels and thus is a sum of oscillatory
integrals. In the limit $\varepsilon\to 0$ only a few leading
terms survive while all remainders vanish. E.g., the Feynman
diagram above is subleading. In fact, the conjecture
of kinetic theory can be stated rather concisely:\bigskip\\
\textbf{Kinetic Conjecture}: \textit{
In a leading Feynman diagram
the Kirchhoff rule never forces an identification of the form $\delta(k_j)$ with some wave vector $k_j$. In addition,
the sum of the $2(n-m+1)$ phases from the bonds between level lines $2m$
and $2m+1$ vanishes for every choice of internal wave
numbers. This cancellation must hold for $m=0,\ldots,n$.}\bigskip
Since we assumed the initial data to be spatially homogeneous, the
phonon Boltzmann equation (\ref{3.8}) simplifies to
\begin{eqnarray}\label{4.9}
&&\hspace{-16pt}\frac{\partial}{\partial t} W(k,t)= 4\pi \lambda^2
\sum_{\sigma_1,\sigma_2=\pm1} \int_{\mathbb{T}^6} d k_1 d k_2
\phi(k,k_1,k_2)^2 \delta
(\omega(k)+\sigma_1\omega(k_1)+\sigma_2\omega(k_2))\nonumber\\
&&\hspace{16pt}\delta(k+\sigma_1 k_1+\sigma_2k_2)\big(W(k_1,t)
W(k_2,t) + 2\sigma_2 W(k,t)W(k_1,t)\big)\,,
\end{eqnarray}
where we used the symmetry with respect to $(k_1,\sigma_1)$ and
$(k_2,\sigma_2)$. To (\ref{4.9}) we associate the Boltzmann
hierarchy
\begin{equation}\label{4.9a}
\frac{\partial}{\partial t}f_n= \mathcal{C}_{n,n+1} f_{n+1}\,,\;
n=1,2,\ldots\,,
\end{equation}
acting on the symmetric functions $f_n(k_1,\ldots,k_n)$ with
\begin{eqnarray}\label{4.10}
&&\hspace{-16pt}\mathcal{C}_{n,n+1}f_{n+1}(k_1,\ldots,k_n)=4\pi\lambda^2\sum^n_{\ell=1}
\sum_{\sigma',\sigma''=\pm 1}\int_{\mathbb{T}^6} d k' d k''
\phi(k_\ell,k',k'')^2\nonumber\\
&&\hspace{16pt}
\delta(\omega(k_\ell)+\sigma'\omega(k')+\sigma''\omega(k''))
\delta(k_\ell+\sigma'k'+\sigma''k'')\nonumber\\
&&\hspace{16pt}
[f_{n+1}(k_1,\ldots,k_{\ell-1},k',k_{\ell+1},\ldots,k_n,k'')+2\sigma''
f_{n+1}(k_1,\ldots,k_n,k')]\,.
\end{eqnarray}
Under the condition (\ref{3.12}) and provided
$\|W\|_\infty<\infty$, the hierarchy (\ref{4.9a}) has a unique
solution for short times. In case
\begin{equation}\label{4.11}
f_n(k_1,\ldots,k_n,0)=\prod^n_{j=1}W(k_j)\,,
\end{equation}
the factorization is maintained in time and each factor agrees with
the solution of the Boltzmann equation (\ref{4.9}). From
(\ref{4.9a}) one easily constructs the perturbative solution to
(\ref{4.9}) with the result
\begin{eqnarray}\label{4.12}
&&\hspace{-16pt}W(k,t)=W(k)+\sum^\infty_{n=1}
\frac{1}{n!}t^n(\mathcal{C}_{1,2}\ldots
\mathcal{C}_{n,n+1}W^{\otimes n+1})(k)\nonumber\\
&&\hspace{26pt} =W(k)+\sum^\infty_{n=1}W_n(k,t)\,.
\end{eqnarray}
The series in (\ref{4.12}) converges for $t$ sufficiently small.
For $n=1,2$ the oscillating integrals can be handled by direct
inspection with the expected results $\lim_{\varepsilon\to
0}W^\varepsilon_1(k,t)=W_1(k,t)$, $\lim_{\varepsilon\to
0}W^\varepsilon_2(k,t)=W_2(k,t)$. If the leading terms are as
claimed in the Kinetic Conjecture, then they agree with the series
(\ref{4.12}). The complete argument is a somewhat tricky counting
of diagrams, which would lead us too far astray. Thus the most
immediate project is to establish that all subleading diagrams
vanish in the limit $\varepsilon\to 0$. This would be a step
further when compared to the investigations \cite{BeCa},
\cite{ErSaY}.
Of course a complete proof must deal with the uniform convergence
of the series in (\ref{4.4}).\bigskip\\
{\bf Acknowledgements}. I thank Jani Lukkarinen for instructive
discussions and Gianluca Panati for a careful reading.
|
1,116,691,497,394 | arxiv | \section{Introduction}
The ingrained notion that light travels along a straight line was first defied in 2007 by Siviloglou {\it et al.} \cite{Siviloglou2007a}, who introduced a novel kind of light beam with the ability to self-accelerate along a parabolic trajectory upon free space propagation \cite{Siviloglou2007b}. Such light beams, known as Airy beams, are natural solutions of the normalised paraxial wave equation. Crucially, even though they seem to propagate along parabolic trajectories, their centroid propagates along straight lines, in accordance to the electromagnetic momentum conservation law. Along with the discovery of Airy beams, their fascinating properties prompted the development of novel applications, which have impacted a wide diversity of fields, such as, optical manipulation, microscopy, laser material processing, among others (see for example \cite{Efremidis2019} for an extensive review). More importantly, the experimental demonstration of Airy beams ignited the quest for other kinds of accelerating beams \cite{Bandres2008Accelerating, Bandres2009, Greenfield2011, Zhang2012, RosalesGuzman2013Airy, Ruelas2014, Patsyk2018, Aleahmad2012}. Of particular interest is the case of accelerating parabolic beams, which form a complete and infinite orthogonal family of solutions of the normalised paraxial wave equation \cite{Bandres2008Accelerating, Davis2009}. Such beams also propagate in free space in a non diffracting way describing parabolic trajectories.
Notably, most of the work carried out with accelerating beams has only considered the case of homogeneously polarised beams, while the manipulation of other degrees of freedom is gaining popularity, giving rise to a more general class of beams generally known as structured light fields. This is the case of complex vector light beams, classically entangled in their spatial and polarisation degrees of freedom, which feature a non-homogeneous polarisation distribution across the transverse plane \cite{Forbes2021,Roadmap, Rosales2018Review}. Such beams have gained popularity in recent times not only due to their unique traits, such as their quantum-like non-separability \cite{konrad2019quantum, Eberly2016, toninelli2019concepts, forbes2019classically, Toppel2014}, but also due to the many applications they are pioneering \cite{Hu2019, BergJohansen2015, Ndagano2018, Ndagano2017, Otte2020, Sit2017}. In vector beams, the spatial and polarisation degrees of freedom are coupled in a non-separable way, which generates the non-homogeneous polarisation distribution. Importantly, while the polarisation degree of freedom is restricted to a two-dimensional space, the spatial one is not, as any of the unbounded solution sets of the wave equation, either in its exact or paraxial version, can be used. Examples of vector beams that have been experimentally demonstrated are Bessel, Laguerre-, Ince- and Mathieu-Gauss beams, amongst others, all of which propagate along straight trajectories \cite{Zhan2009,Dudley2013,Otte2018a,Yao-Li2020,Rosales2021}. Along this line, previous works have demonstrated the acceleration of vectorial fields, in which case the polarisation structure rotates around the optical axis while the beam still propagates along a straight line \cite{Agela2021}. Perhaps the only known case of a vector beam capable of propagating along a parabolic trajectory is the Airy-vortex vector beam \cite{Zhou2015}.
We propose and experimentally demonstrate a new family of vector beams, which we term Accelerating Vector Waves (AVWs), that are non-separable weighted superpositions of the polarisation and spatial degrees of freedom encoded in the orthogonal set of accelerating waves. These beams exhibit two interesting properties, namely, that their non-homogeneous polarisation distributions propagate in free space along parabolic trajectories maintaining a maximum degree of coupling, and that, even though the non-homogeneous transverse polarisation distribution of an individual AVW contains different states of elliptical polarisation, all of them are located on a great circle on the Poincar\'e sphere representation for polarisation. Here, we start by describing these beams theoretically, then move to their implementation in the laboratory, and finally show experimental results to showcase their novel features. Due to their intriguing properties, we expect AVWs will attract the wide interest of the optical community, stemming not only from their potential applications but also from their fundamental aspects.
\section{Theory}
Accelerating parabolic waves (APWs) are solutions of the paraxial wave equation in parabolic coordinates. They are non-diffracting beams that accelerate during free-space propagation. Their experimentally realisable finite--energy form is given by \cite{Bandres2008Accelerating}
\begin{align}
\phi_n(\eta, \xi, z) =& \exp[i(z/2k\kappa^2 - ia)(\eta^2-\xi^2)/2]\times\\ \nonumber
&\exp[i(z/2k\kappa^2-ia)^3/3]\Theta_n(\eta)\Theta_n(i\xi),
\end{align}
where the parabolic coordinates $(\eta, ~\xi)$ are related to the Cartesian coordinates by $(\eta^2/2 - \xi^2/2,~\eta\xi) = (x/\kappa-(z/2k\kappa^2)^2+iaz/k\kappa^2, ~y/\kappa)$, $\kappa$ is a transverse scale parameter, $k$ is the wave number, $z$ is the propagation distance and $a$ is a parameter that controls the exponential aperture of the beam at $z=0$. The functions $\Theta_n(\cdot)$ correspond to the solutions of the differential equation
\begin{align}\label{eq:quartic}
\left(-\frac{1}{2}\dd{}{\eta}{2} + \frac{\eta^4}{4}\right)N(\eta) = E\:N(\eta),
\end{align}
which corresponds to the one--dimensional Schr\"odinger equation with potential $V(\eta) = \eta^4/4$ (known as quartic potential) and $m=\hbar=1$ \cite{Banerjee1978}. Importantly, the eigen-solutions $\Theta_n$ ($n\in \mathbb{N}$) of Eq. \ref{eq:quartic} form an orthogonal set of functions, whose parity is governed by $n$. Since these eigen-solutions cannot be expressed in a closed form, a suitable numerical method must be employed to obtain them \cite{Driscoll2014}. In particular, we are interested in square integrable eigen-solutions of Eq.~\ref{eq:quartic}. Fig.~\ref{fig:Concept}(a) shows the intensity profiles of the scalar APWs for $n=\{0,1,2,3\}$. As can be seen, for $n=0$ the intensity profile of the beam contains only one main lobe of maximum intensity and additional subsequent lobes of decaying intensity. In general, for $n>0$ the intensity profile is formed by $n+1$ lobes of maximum intensity.
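For readers who wish to reproduce the $\Theta_n$, a minimal finite-difference discretisation of Eq.~\ref{eq:quartic} is sketched below; the grid parameters are assumed values, and a spectral discretisation (cf. \cite{Driscoll2014}) would converge much faster.

```python
import numpy as np

N, box = 2000, 12.0                       # assumed resolution and half-width
eta = np.linspace(-box, box, N)
h = eta[1] - eta[0]
# H = -(1/2) d^2/d eta^2 + eta^4/4 with Dirichlet walls at +-box
diag = 1.0 / h**2 + eta**4 / 4.0
off = -0.5 / h**2 * np.ones(N - 1)
E, V = np.linalg.eigh(np.diag(diag) + np.diag(off, 1) + np.diag(off, -1))
Theta = V[:, :4] / np.sqrt(h)             # Theta_0..Theta_3, L2-normalised
print(E[:4])                              # lowest quartic-potential eigenvalues
```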
\begin{figure}[t]
\centering
\includegraphics[width=.46\textwidth]{concept}
\caption{(a) Intensity profiles of accelerating parabolic scalar waves of orders $n=\{0, 1, 2, 3\}$. (b) Schematic representation of the non-separable superposition of two orthogonal scalar modes carrying orthogonal polarisations to generate an accelerating vector wave $\AVW{}_{n,m}(\eta,\xi,z)$. (c) Illustration of the free-space propagation trajectory of an AVW along the $z$ axis. Green and orange ellipses represent right- and left-handed circular polarisation, respectively, and white lines represent linear polarisation.}\label{fig:Concept}
\end{figure}
Mathematically, the accelerating Vector Waves (AVWs) are constructed as a superposition of two scalar APWs with orthogonal polarisations, such that at the $z$--plane and for indices $n,m$ they are given by
\begin{align}\nonumber
\AVW{}_{n,m}(\eta,\xi,z) =& \cos\alpha \:\phi_n(\eta,\xi,z)\hatt{e}_1(\theta,\varphi) + \\ \label{eq:avw}
&\sin\alpha \exp(i\beta)\:\phi_m(\eta,\xi,z)\hatt{e}_2(\theta,\varphi),
\end{align}
where the weighting factor $\alpha\in[0,\pi/2]$ allows the field to change from scalar to vector, and the parameter $\beta\in[0,\pi]$ controls the inter-modal phase. The basis vectors
\begin{align}
\hatt{e}_1(\theta,\varphi) = \cos(\theta/2)\hatt{e}_R + \exp(i\varphi)\sin(\theta/2)\hatt{e}_L,\\
\hatt{e}_2(\theta,\varphi) = \sin(\theta/2)\hatt{e}_R - \exp(i\varphi)\cos(\theta/2)\hatt{e}_L,
\end{align}
represent the general elliptical polarisation basis. Note that we can obtain the left/right-handed circular polarisation basis by setting $\theta=\pi$ and $\varphi=0$, and the horizontal/vertical basis with $\theta=\pi/2$ and $\varphi=0$. Without loss of generality, here we will restrict our results to the circular polarisation basis, only briefly mentioning some theoretical examples of the horizontal/vertical basis. Fig.~\ref{fig:Concept}(b) illustrates conceptually the above description for the specific case $\AVW{}_{0,1}(\eta,\xi,z)$ with $\alpha=\pi/4$ and $\beta=0$, as polarisation distributions overlaid onto their corresponding intensity profiles. Left and middle panels show the two scalar modes $\phi_n(\eta,\xi,z)\hatt{e}_R$ and $\phi_m(\eta,\xi,z)\hatt{e}_L$ with Right Circular Polarisation (RCP) and Left Circular Polarisation (LCP), respectively, represented by green and orange ellipses for the first and second case, respectively. Notice that the intensity patterns of the scalar modes are different, as required to obtain vector modes. In a similar way, the right panel presents the non-separable superposition of both scalar modes (Eq.~\ref{eq:avw}). The parabolic trajectory described by AVWs can be seen schematically in Fig.~\ref{fig:Concept}(c) for $\AVW{}_{2,3}(\eta,\xi,z)$ propagating along the $z$ axis. Mathematically, this is expressed in the transverse shift $y_s = [z/(2k)]^2/\kappa^3$ \cite{Davis2008}, which is independent of the indices $n$, $m$.
\section{Experimental details}
We implemented the AVW described above using a Digital Micromirror Device (DMD) and following the technique that we proposed and fully characterised in a previous article \cite{Rosales2020}. This device is polarisation-insensitive, very flexible and versatile, allowing the generation of vector modes with arbitrary spatial distributions, such as elliptical or parabolic \cite{Rosales2020,Rosales2021,Yao-Li2020,Xiaobohu2021}. In essence, a DMD is illuminated with two modes carrying orthogonal polarisations, impinging at slightly different angles but exactly at the geometric centre of the hologram displayed on the DMD. The hologram contains a superposition of the two transmittance functions that generate the constituting scalar modes of Eq.~\ref{eq:avw}, each with an additional unique linear spatial grating that redirects the mode along a specific angle, and whose periods are carefully chosen to guarantee both generated beams co-propagate along the same axis, where the desired vector beam is created. Transmittance functions are calculated as the inverse Fourier transform of the desired modes $\phi_n(\eta, \xi, z)$ \cite{Davis2008, Davis2009}, thus we add a lens in a $2f$ configuration, where $f$ is the focal length of the lens ($f=200$ mm in our case), and measure at its back focal plane. Intensity patterns of the generated beams were captured with a high resolution CCD camera (FL3-U3-120S3C-C with resolution of $4000\times3000$ pixels and a pixel size of 1.55 $\mu$m). Polarisation reconstruction was achieved through Stokes polarimetry, using a set of intensity measurements as detailed in \cite{Rosales2021}. Fig.~\ref{fig:vectormodes}(a) shows an example of the experimentally measured Stokes parameters $S_0$, $S_1$, $S_2$ and $S_3$ for the specific mode $\AVW{}_{1,2}(\eta,\xi,z=0)$. Reconstructed intensity and polarisation distributions of a set of representative examples of the experimentally generated $\AVW{}_{n,m}(\eta,\xi,z=0)$ modes using the circular polarisation basis are presented in Fig.~\ref{fig:vectormodes}(b), both for the theory (top row) and the experiment (bottom row). Notice the high similarity between the two, demonstrating the good performance of our generation method.
\begin{figure}[t]
\centering
\includegraphics[width=.46\textwidth]{Experimental}
\caption{(a) Example of experimentally measured Stokes parameters $S_0$, $S_1$, $S_2$ and $S_3$ for $\AVW{}_{1,2}(\eta,\xi,z=0)$. (b) Theoretical and experimental reconstructed intensity and polarisation distribution from the Stokes parameters for the modes $\AVW{}_{0,2}(\eta,\xi,0)$, $\AVW{}_{1,2}(\eta,\xi,0)$, $\AVW{}_{1,3}(\eta,\xi,0)$ and $\AVW{}_{2,3}(\eta,\xi,0)$.}
\label{fig:vectormodes}
\end{figure}
\section{Results and discussion}
\begin{figure}[t]
\centering
\includegraphics[width=.48\textwidth]{Trajectory}
\caption{Shift of the $y$-coordinate of the maximum intensity lobe position for three AVWs upon free-space propagation. The continuous curve represents the expected position predicted by theory, whereas the data points correspond to experiment for the cases $\AVW{}_{0,2}(\eta,\xi,z)$ (squares), $\AVW{}_{1,2}(\eta,\xi,z)$ (triangles) and $\AVW{}_{2,3}(\eta,\xi,z)$ (circles). Notice that all three cases accelerate in an identical way. Insets show the transverse polarisation distribution overlapped with the intensity profile of the $\AVW{}_{1,2}(\eta,\xi,z)$ at three different planes.}
\label{fig:trajectory}
\end{figure}
The vector modes described by equation \ref{eq:avw} and shown in Fig.~\ref{fig:vectormodes} propagate along parabolic trajectories maintaining not only their intensity and polarisation distribution but also a maximum coupling between both. We corroborated this by tracking the transverse spatial coordinates $(x,y)$ of one of the lobes of maximum intensity as a function of their propagation distance $z$. We observed that while the $x$ coordinate remains almost constant, the $y$ coordinate shifts following a quadratic trend. Figure \ref{fig:trajectory}, in which the coordinate $y$ is plotted against the propagation distance $z$, clearly shows such behaviour for a representative set of AVWs given by $\AVW{}_{0,2}(\eta,\xi,z)$, $\AVW{}_{1,2}(\eta,\xi,z)$ and $\AVW{}_{2,3}(\eta,\xi,z)$. Since all modes shown were generated with the same initial parameters $k$ and $\kappa$, they accelerated in an identical way. Insets show examples of the polarisation distribution overlapped with the intensity distribution at three propagation distances $z=0$ mm, $z=20$ mm and $z=60$ mm for the mode $\AVW{}_{1,2}(\eta,\xi,z)$.
As mentioned earlier, AVWs can be generated with arbitrary degrees of non-separability, evolving from scalar to vector, via the parameter $\alpha$ (see Eq.~\ref{eq:avw}). More precisely, as $\alpha$ increases from 0 to $\pi/2$ the mode changes monotonically from a pure scalar mode with right-handed circular polarisation ($\alpha=0$) to a pure scalar mode with left-handed circular polarisation ($\alpha=\pi/2$), passing through a pure vector mode ($\alpha=\pi/4$). Intermediate values of $\alpha$ produce vector modes with intermediate degrees of non-separability, which can be measured through the concurrence or Vector Quality Factor (VQF), a measure borrowed from quantum mechanics that quantifies the degree of coupling between the spatial and polarisation degrees of freedom \cite{McLaren2015,Ndagano2015,Zhaobo2019}. Experimentally, the VQF can be quantified directly from the Stokes parameters as \cite{Selyem2019,Manthalkar2020},
\begin{equation}
VQF=\sqrt{1-\left(\frac{\mathbb{S}_1}{\mathbb{S}_0} \right)^2-\left(\frac{\mathbb{S}_2}{\mathbb{S}_0} \right)^2-\left(\frac{\mathbb{S}_3}{\mathbb{S}_0} \right)^2},
\label{concurrence}
\end{equation}
where $\mathbb{S}_i$ ($i=0,1,2,3$) is the number that results from integrating the Stokes parameter $S_i$ over the entire transverse profile, \textit{i.e.}, $\mathbb{S}_i=\iint_{-\infty}^\infty S_{i}\, dA$. Figure \ref{fig:VQF} shows a representative example of the VQF as a function of $\alpha$ for the specific case $\AVW{}_{2,3}(\eta,\xi,z=0)$. As expected, the VQF increases from 0 to 1 as $\alpha$ increases from 0 to $\pi/4$, and then it decreases back to zero as $\alpha$ reaches the value $\pi/2$. Insets show the intensity profile overlapped with the polarisation distribution for three key values, namely $\alpha=0, \pi/4$ and $\pi/2$.
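A minimal computational sketch of Eq.~\ref{concurrence}, in which the measured maps $S_0,\dots,S_3$ are assumed to be stacked in a single array (names and shapes are illustrative), reads:
\begin{verbatim}
import numpy as np

# VQF from spatially integrated Stokes parameters (Eq. 5).
# S: array of shape (4, Ny, Nx) holding the maps S_0..S_3; dA: pixel area.
def vqf(S, dA=1.0):
    Sint = S.reshape(4, -1).sum(axis=1) * dA   # integrate each map
    return np.sqrt(1.0 - np.sum((Sint[1:] / Sint[0])**2))
\end{verbatim}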
\begin{figure}[t]
\includegraphics[width=.46\textwidth]{VQF}
\caption{Vector Quality Factor (VQF) as a function of the weighting coefficient $\alpha\in[0,\pi/2]$ for the mode $\AVW{}_{2,3}(\eta,\xi,z=0)$. Insets show the intensity and polarisation distributions for $\alpha=0,\pi/4$, and $\pi/2$.}
\label{fig:VQF}
\end{figure}
Finally, we analyse the distribution of polarisation states of AVWs on the Poincar\'e sphere. It turns out that for a particular AVW all of them are mapped onto a great circle. Fig.~\ref{fig:Sphere}(a) shows numerical simulations in the circular (top row) and horizontal/vertical linear (bottom row) bases. It can be seen that in the first case the great circles intersect the North and South poles, meaning all polarisation states in the AVW are oriented at exactly the same polarisation angle and only differ in their ellipticity, containing all polarisation states from circular to linear. Interestingly, a change in the inter-modal phase $\beta$ produces a rotation of the great circle around the $S_3$ axis (Fig.~\ref{fig:Sphere}(a) top left panel), while a change in the weighting coefficient $\alpha$ changes the covered portion of the great circle from a complete circle to an incomplete arc, in such a way that the arc length is proportional to $\alpha$ (Fig.~\ref{fig:Sphere}(a) top right panel).
Similarly, in the case of the horizontal/vertical linear polarisation basis (Fig.~\ref{fig:Sphere}(a) bottom row), the great circles intersect the two points where the $S_1$ axis crosses the sphere, and rotate around the $S_1$ axis when changing $\beta$. The effect of $\alpha$ is as in the circular basis. For instance, for $\alpha=\pi/2$ and $\beta=0$, the polarisation distribution is mapped to the equator of the Poincar\'e sphere. For the case of the circular polarisation basis, we corroborated this experimentally in Fig.~\ref{fig:Sphere}(b), which shows a remarkable similarity to its theoretical counterpart.
\begin{figure}[h!]
\centering
\includegraphics[width=.46\textwidth]{HOPS}
\caption{Geometric representation on the Poincar\'e sphere of the transverse polarisation distribution of the vector mode $\AVW{}_{0,2}(\eta,\xi,z=0)$. Left column shows modes with increasing values of the inter-modal phase, namely $\beta=0$ (blue), $\beta=\pi/4$ (purple), $\beta=\pi/2$ (green) and $\beta=3\pi/4$ (yellow). Right column shows modes with different weighting coefficients, namely $\alpha=\pi/12, \pi/6, \pi/4$ and $\pi/3$, each shown with a different inter-modal phase (the same values as in the left column) for the sake of better visibility. (a) Theoretical results for the case of AVW in the circular (top row) and linear (bottom row) polarisation basis. (b) Corresponding experimental results for the case of the circular basis.}
\label{fig:Sphere}
\end{figure}
In summary, we have introduced theoretically and demonstrated experimentally a new family of vector beams with the ability to accelerate along parabolic trajectories upon free-space propagation. Such accelerating beams differ quite dramatically from common vector beams, which always propagate along straight trajectories. This family of vector beams is constructed as a weighted superposition of the spatial and polarisation degrees of freedom carrying an inter-modal phase. To generate them, the spatial degree of freedom is encoded in a set of orthogonal solutions of the one--dimensional Schr{\"o}dinger equation with a quartic potential, known as accelerating waves. An important feature of such modes is their propagation-invariant spatial and polarisation structures, as we corroborated experimentally. Further, the weighting coefficient allows tuning from purely scalar to completely vectorial, passing through intermediate states, which was also corroborated experimentally using the well-known measure of concurrence from quantum mechanics adapted for vector beams. Another important feature of these accelerating vector modes lies in their transverse polarisation distribution, which is mapped onto great circles on the Poincar\'e sphere. In particular, in the circular polarisation basis the great circles intersect the North and South poles, and contain states of polarisation from linear to circular, all with the same polarisation angle. For comparison, cylindrical vector modes are mapped either to an equatorial line or to the whole Poincar\'e sphere (known as full-Poincar\'e modes). Notably, the inter-modal phase allows one to rotate the circle of polarisation around the Poincar\'e sphere, leaving the points on the North and South poles fixed. Given their interesting properties, we expect AVWs to find applications in fields such as optical manipulation and laser material processing, among others.
\begin{funding}
Consejo Nacional de Ciencia y Tecnología (PN2016-3140); National Natural Science Foundation of China (61975047).
\end{funding}
\input{References.bbl}
\end{document}
Recently, unsupervised language models like $\mathrm{BERT}$ \citep{devlin-etal-2019-bert} have become popular within natural language processing (NLP).
These pre-trained sentence encoders, known affectionately as $\mathrm{BERT}$oids \citep{rogers-etal-2020-primer}, have pushed forward the state of the art in many NLP tasks.
Given their impressive performance, a natural question to ask is whether models like these implicitly learn to encode linguistic structures, such as part-of-speech tags or dependency trees.
There are two strains of research that investigate this question. On
one hand, \defn{stimuli-analysis} compares the relative probabilities a language model assigns to words which could fill a gap in a
cloze-style task.
This allows the experimenter to test whether neural models do well at capturing specific linguistic phenomena, such as subject--verb agreement \citep{linzen-etal-2016-assessing, gulordava-etal-2018-colorless} or negative-polarity item licensing \citep{marvin-linzen-2018-targeted, warstadt-etal-2019-investigating}.
Another strain
of research directly analyses the neural network's representations; this is called \defn{probing}.
Probes are supervised models which attempt to predict a target linguistic structure using a model's representation as its input \citep[e.g.][]{alain2016understanding, conneau-etal-2018-cram, hupkes2018visualisation}; if the probe is able to perform the task well, then it is argued that the model has learnt to implicitly encode that structure in its representation.\footnote{Methods which analyse stimuli are also sometimes termed `probes' \citep[e.g.][]{ niven-kao-2019-probing}, but in this paper we use the term to refer specifically to supervised models.} \looseness=-1
Work from this inchoate probing literature is frequently cited to support the claim that models like $\mathrm{BERT}$ encode a large amount of syntactic knowledge. %
For instance, consider the two excerpts below demonstrating how a couple of syntactic probing papers have been interpreted:\footnote{\citet{jawahar-etal-2019-bert} and \citet{hewitt-manning-2019-structural} are more reserved about their claims; these examples merely show how such work is frequently interpreted, regardless of intent.}%
\begin{aquote}{\citealp{tian-etal-2020-skep}}
\textit{[The training objectives of BERT/GPT-2/XLNet] have shown great abilities to capture dependency between words and syntactic structures \citep{jawahar-etal-2019-bert}}
\end{aquote}
\begin{aquote}{\citealp{soulos-etal-2020-discovering}} \textit{Further work has found impressive degrees of syntactic structure in Transformer encodings \citep{hewitt-manning-2019-structural}} \end{aquote}
Our position in this paper is simple: %
we argue that the literature on syntactic probing is methodologically flawed, owing to a conflation of syntax with semantics.
We contend that no existing probing work has rigorously tested whether $\mathrm{BERT}$ encodes
syntax, and \textit{a fortiori} this literature should not be used to support this claim. %
To investigate whether syntactic probes actually probe syntax (or instead rely on semantics), we train two probes (\cref{sec:two_probes}) on the output representations produced by three pre-trained encoders %
on normal sentences---$\mathrm{BERT}$ \citep{devlin-etal-2019-bert}, $\operatorname{\mathrm{GPT-2}}$ \citep{gpt2}, and $\mathrm{RoBERTa}$ \citep{roberta}. We then evaluate these probes on a novel corpus of syntactically well-formed sentences made up of pseudowords (\cref{sec:gen_jabb}), and find that their performance drops substantially in this setting: on one probe, the average $\mathrm{BERT}$oid $\mathrm{UUAS}$ is reduced by $15.4$ points, and on the other the relative advantage that $\mathrm{BERT}$ exhibits over a baseline drops by $53\%$. This suggests that the probes are leveraging statistical patterns in distributional semantics to aide them in the search for syntax. %
According to one of the probes, $\operatorname{\mathrm{GPT-2}}$ falls behind a simple baseline, but in some cases the lead remains substantial, e.g.\ $20.4$ $\mathrm{UUAS}$ points in the case of $\mathrm{BERT}$.
We use these results not to draw conclusions about any $\mathrm{BERT}$oids' syntactic knowledge, but instead to urge caution when drawing conclusions from probing results.
In our discussion, we contend that evaluating $\mathrm{BERT}$oids' syntactic knowledge requires more nuanced experimentation than simply training a syntactic probe as if it were a parser \cite{hall-maudslay-etal-2020-tale}, and call for the separation of syntax and semantics in future probing work.
\section{Syntax and Semantics}
When investigating whether a particular model encodes syntax, those who have opted for stimuli-analysis have been careful to isolate syntactic phenomena from semantics \citep{marvin-linzen-2018-targeted, gulordava-etal-2018-colorless, goldberg}, but the same cannot be said of most syntactic probing work, which conflates the two.
To see how the two can be separated, consider the famous utterance of \newcite{chomsky1957syntactic}:
\begin{enumerate}[label={(\arabic*)}]
\item Colourless green ideas sleep furiously
\end{enumerate}
\noindent whose dependency parse is given in \cref{fig:colourless}. %
\citeauthor{chomsky1957syntactic}'s point is that (1) is semantically nonsensical, but syntactically well formed.
Syntactic probes are typically evaluated on real-world data, not on \citeauthor{chomsky1957syntactic}-style
sentences of (1)'s ilk. The same is true for parsers, but from a \emph{machine-learning point of view} this is not problematic, %
since the goal of a statistical parser is to parse well the data that one may encounter in the real world.
The probing literature, however, is inherently making an epistemological claim: whether $\mathrm{BERT}$ knows syntax.\footnote{This is not an engineering claim because the NLP engineer is unlikely to care whether $\mathrm{BERT}$'s representations encode syntactic structure---they just care about building reliable models that perform well on real data. An open question, however, is whether representations \emph{require} a notion of syntax to properly generalise; this is not addressed in our work.}
Indeed, we already know that $\mathrm{BERT}$ significantly improves the performance of statistical parsing models on real-world data \citep{zhou-zhao-2019-head}; there is no reason to develop specialist probes to reinforce that claim.
As probing considers a scientific question, it follows that the probing literature needs to consider syntax from \emph{a linguistic point of view} and, thus, it requires a linguistic definition of syntax.
At least in the generative tradition, it is taken as definitional that grammaticality, i.e.\ syntactic well-formedness, is distinct from the meaning of the sentence.
It is this distinction that the nascent syntactic probing literature has overlooked.
\begin{figure}
\centering
\resizebox{0.9\columnwidth}{!}{%
\begin{dependency}[]
\begin{deptext}[column sep=0.1cm]
\word{Colourless} \& \word{green} \& \word{ideas} \& \word{sleep} \& \word{furiously} \\
\end{deptext}
\depedge[edge height=6ex,edge style={thick}]{3}{1}{amod}
\depedge[edge height=3ex,edge style={thick}]{3}{2}{amod}
\depedge[edge height=3ex,edge style={thick}]{4}{3}{nsubj}
\depedge[edge height=3ex,edge style={thick}]{4}{5}{advmod}
\deproot[edge height=7.5ex, edge style={thick}]{4}{root}
\end{dependency}}
\caption{\citeauthor{chomsky1957syntactic}'s classic, albeit with the spelling corrected. %
}
\label{fig:colourless}
\end{figure}
\section{Generating Jabberwocky Sentences}\label{sec:gen_jabb}
To tease apart syntax and semantics when evaluating probes, we construct a new evaluation corpus of syntactically valid English \defn{Jabberwocky sentences}, so called after \citet{jabberwocky} who wrote verse consisting in large part of pseudowords (see \cref{sec:jabberwocky}). In written language, a pseudoword is a sequence of letters which looks like a valid word in a particular language (usually determined by acceptability judgments), but which carries with it no lexical meaning.
For our Jabberwocky corpus, we make use of the ARC Nonword Database, which contains $358,534$ monosyllabic English pseudowords \citep{rastle-etal-2002-ARC}.
We use a subset of these which were filtered and then manually validated for high plausibility by \citet{kharkwal2014taming}.
We conjugate each of these words using hand-written rules assuming they obey the standard English morphology and graphotactics.
This results in $1361$ word types---a total of $2377$ varieties when we annotate these regular forms with several possible fine-grained part-of-speech realisations.
To build sentences, we take the test portion of the English EWT Universal Dependency \citep[UD;][]{nivre-etal-2016-universal} treebank and substitute words (randomly) with our pseudowords whenever we have one available with matching fine-grained part-of-speech annotation.\footnote{More specifically, for nouns we treat elements annotated (in UD notation) with \texttt{Number=Sing} or \texttt{Number=Plur}; for verbs we treat \texttt{VerbForm=Inf}, \texttt{VerbForm=Fin | Mood=Ind | Number=Sing | Person=3 | Tense=Pres}, \texttt{VerbForm=Fin | Mood=Ind | Tense=Pres}, or \texttt{VerbForm=Part | Tense=Pres}; for adjectives and adverbs we treat \texttt{Degree=Cmp} or \texttt{Degree=Sup}, along with unmarked. These cases cover all regular forms in the EWT treebank.} Our method closely resembles \citet{kasai-frank-2019-jabberwocky}, except they do so to analyse parsers in place of syntactic probes. %
An example of one of our Jabberwocky sentences is shown in \cref{fig:jabberwocky}, along with its unlabeled undirected parse (used by the probes) which is taken from the vanilla sentence's annotation in the treebank.
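A simplified sketch of this substitution procedure is given below; the construction of the pseudoword lexicon from the ARC database is omitted, and all names are illustrative.
\begin{verbatim}
import random

# Swap each word for a random pseudoword with the same fine-grained POS,
# whenever the lexicon offers one; otherwise keep the original word.
# `lexicon` maps (upos, feats) pairs to lists of conjugated pseudowords.
def jabberwockify(sentence, lexicon, rng=random.Random(0)):
    out = []
    for form, upos, feats in sentence:    # one UD token per triple
        candidates = lexicon.get((upos, feats), [])
        out.append(rng.choice(candidates) if candidates else form)
    return out
\end{verbatim}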
\begin{figure}
\centering
\pgfkeys{%
/depgraph/reserved/edge style/.style = {%
-, >=stealth, %
},%
}
\resizebox{0.95\columnwidth}{!}{%
\begin{dependency}[hide label, arc edge]
\begin{deptext}[column sep=0.1cm]
\word{I} \& \word{\red{povicated}} \& \word{your} \& \word{\red{briticists}} \& \word{very} \& \word{much} \\
\& {\small \word{enjoyed}} \& %
\& {\small \word{presentations}} \& %
\& %
\\
\end{deptext}
\depedge[edge height=3ex,edge style={thick}]{1}{2}{}
\depedge[edge height=3ex,edge style={thick}]{4}{3}{}
\depedge[edge height=4.5ex,edge style={thick}]{2}{4}{}
\depedge[edge height=3ex,edge style={thick}]{6}{5}{}
\depedge[edge height=6ex,edge style={thick}]{2}{6}{}
\end{dependency}}
\caption{An unlabeled undirected parse from the EWT treebank, with Jabberwocky substitutions in red.}
\label{fig:jabberwocky}
\end{figure}
\section{Two Syntactic Probes}\label{sec:two_probes}
A \defn{syntactic probe} is a supervised model trained to predict the syntactic structure of a sentence using representations produced by another model. %
The main distinction between syntactic probes and dependency parsers is
one of researcher intent---probes are not meant to best the state of the art, but are a visualisation method \citep{hupkes2018visualisation}. As such, probes are typically minimally parameterised so they do not ``dig'' for information \citep[but see][]{pimentel-etal-2020-information}. If a syntactic probe performs well using a model's representations, it is argued that that model implicitly encodes syntax.
Here we briefly introduce two syntactic probes, each designed to learn the \defn{syntactic distance} between a pair of words in a sentence, which is the number of steps between them in an undirected parse tree (example in \cref{fig:jabberwocky}).
\newcite{hewitt-manning-2019-structural} first introduced syntactic distance, %
and proposed %
the \defn{structural probe} as a means of identifying it; it takes a pair of embeddings %
and learns to predict the syntactic distance between them. %
An alternative to the structural probe which learns parameters for the same function is a structured perceptron dependency parser, originally introduced in \newcite{mcdonald-etal-2005-non}, and first applied to probing in \citet{hall-maudslay-etal-2020-tale}. Here we call this the \defn{perceptron probe}. Rather than learning syntactic distance directly, the perceptron probe instead learns to predict syntactic distances such that the minimum spanning tree that results from a sentence's predictions matches the gold standard parse tree. The difference between these probes is subtle, but they optimise for different metrics---this is reflected in our evaluation in \cref{sec:eval}.
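For concreteness, a minimal sketch of the shared parameterisation is given below, following the squared-distance form of \citet{hewitt-manning-2019-structural}; dimensions are illustrative, and the training losses (absolute-error regression for the structural probe, a margin-based tree objective for the perceptron probe) are omitted.
\begin{verbatim}
import torch

# Both probes model the syntactic distance between words i and j as
# ||B(h_i - h_j)||^2, where B is a learned low-rank projection.
class DistanceProbe(torch.nn.Module):
    def __init__(self, dim=1024, rank=128):
        super().__init__()
        self.B = torch.nn.Parameter(torch.randn(dim, rank) * 0.01)

    def forward(self, H):              # H: (seq_len, dim) representations
        P = H @ self.B                 # project into the probe space
        diff = P[:, None, :] - P[None, :, :]
        return (diff ** 2).sum(-1)     # (seq_len, seq_len) distances
\end{verbatim}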
\section{Hast Thou [Parsed] the Jabberwock?}
\label{sec:eval}
We train the probes on normal UDs, then evaluate them on Jabberwocky sentences; if the probes are really learning to extract syntax, they should perform just as well in the Jabberwocky setting.
\subsection{Experimental Setup} \label{sec:exp_des}
\begin{figure*}[ht]
\centering
\begin{minipage}{\columnwidth}
\centering
\includegraphics[width=\columnwidth,trim=0.8cm 0.6cm 2cm 1.6cm,clip]{./figs/bars_uuas.pdf}
\subcaption[first caption.]{UUAS results from the Perceptron Probe}\label{fig:uuas}
\end{minipage}%
~~~
\begin{minipage}{\columnwidth}
\centering
\includegraphics[width=\columnwidth,trim=0.8cm 0.6cm 2cm 1.6cm,clip]{./figs/bars_distspr.pdf}
\subcaption[first caption.]{DSpr results from the Structural Probe}\label{fig:DSpr}
\end{minipage}
\caption{How the models fare when the probes are evaluated on unchanged sentences vs.\ the Jabberwocky.}
\label{fig:normal_jabb}
\end{figure*}
\paragraph{Models to Probe} We probe three popular Transformer \citep{NIPS2017_3f5ee243} models: $\bm{\mathrm{BERT}}$ \citep{devlin-etal-2019-bert}, $\operatorname{\bm{\mathrm{GPT-2}}}$ \citep{gpt2}, and $\bm{\mathrm{RoBERTa}}$ \citep{roberta}. For all three we use the `large' version. We train probes on the representations at multiple layers, and choose whichever layers result in the best performance on the development set. For each Transformer model, we also train probes on the layer $0$ embeddings; we can treat these layer $0$ embeddings as baselines since they are uncontextualised, with knowledge only of a single word and where it sits in a sentence, but no knowledge of the other words. As an additional baseline representation to probe, we use $\mathrm{FastText}$ embeddings \citep{bojanowski-etal-2017-enriching} appended with $\mathrm{BERT}$ position embeddings ($\bm{\mathrm{Fast{+}Pos}}$). We emphasise that none of these baselines can be said to encode anything about syntax (in a linguistic sense), since they are uncontextualised. Training details of these models and baselines can be found in \cref{sec:training}.
\paragraph{Additional Simple Baselines} In addition to the baseline representations which we probe, we compute two even simpler baselines, which ignore the lexical items completely. The first simply connects each word to the word next to it in a sentence ($\bm{\mathrm{Path}}$). %
The second returns, for a given sentence length, the tree which contains the edges occurring most frequently in the training data ($\bm{\mathrm{Majority}}$), which is computed as follows: first, we subdivide the training data into bins based on sentence length. For each sentence length $n$, we create an undirected graph $G_n$ with $n$ nodes, each corresponding to a different position in the sentence. The edges are weighted according to the number of times they occur in %
the training data bin which contains sentences of length $n$.
The `majority tree' of sentence length $n$ is then computed by calculating the maximum spanning tree over $G_n$, which can be done by negating the edge weights, then running \citeauthor{prim}'s algorithm. For $n>40$, we use the $\mathrm{Path}$ baseline's predictions, owing to data sparsity.
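A sketch of this computation is given below (a direct greedy maximisation, equivalent to negating the weights and running \citeauthor{prim}'s algorithm); the input format is an assumption.
\begin{verbatim}
import numpy as np

# trees: list of undirected edge lists, one per training parse of length n.
def majority_tree(trees, n):
    W = np.zeros((n, n))
    for edges in trees:                # accumulate edge counts
        for i, j in edges:
            W[i, j] += 1
            W[j, i] += 1
    in_tree, mst = {0}, []             # Prim, maximising the counts
    while len(in_tree) < n:
        i, j = max(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree), key=lambda e: W[e])
        in_tree.add(j)
        mst.append((i, j))
    return mst
\end{verbatim}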
\paragraph{Metrics} As mentioned in \cref{sec:two_probes}, the probes we experiment with each optimise for subtly different aspects of syntax; %
we evaluate them on different metrics which reflect this. We evaluate the structural probe on $\mathrm{DSpr}$, introduced in \citet{hewitt-manning-2019-structural}---it is the Spearman correlation between the actual and predicted syntactic distances between each pair of words. We evaluate the perceptron probe using the unlabeled undirected attachment score ($\mathrm{UUAS}$), which is the percentage of correctly identified edges. These different metrics reflect differences in the probe designs, which are elaborated in \citet{hall-maudslay-etal-2020-tale}.
\subsection{Results}
\cref{fig:normal_jabb} shows the performance of the probes we trained, when they are evaluated on normal test data (plain) versus our specially constructed Jabberwocky data (hatched). Recall that the test sets have identical sentence--parse structures, and differ only insofar as words in the Jabberwocky test set have been swapped for pseudowords.\footnote{This is why the $\mathrm{Path}$ and $\mathrm{Majority}$ baselines, which do not condition on the lexical items in a sentence,
have identical scores on both datasets.} For each $\mathrm{BERT}$oid, the lower portion of its bars (in white) shows the performance of its layer $0$ embeddings, which are uncontextualised and thus function as additional baselines.
All the probes trained on the $\mathrm{BERT}$oids perform worse on the Jabberwocky data than on normal data, indicating that the probes rely in part on semantic information to make syntactic predictions. This is most pronounced with the perceptron probe: in this setting, the three $\mathrm{BERT}$oids' scores dropped by an average of $15.4$ $\mathrm{UUAS}$ points. Although they all still outperform the baselines under $\mathrm{UUAS}$, their advantage is less pronounced, but in some cases it remains high, e.g.\ for $\mathrm{BERT}$ the lead is $20.4$ points over the $\mathrm{Fast{+}Pos}$ baseline.
With the structural probe, $\mathrm{BERT}$'s lead over the simple $\mathrm{Majority}$ baseline is reduced from $0.078$ to $0.037$ $\mathrm{DSpr}$, and $\mathrm{RoBERTa}$'s from $0.074$ to $0.017$---reductions of $53\%$ and $77\%$, respectively. $\operatorname{\mathrm{GPT-2}}$ falls behind the baselines, and performs worse than even the simple $\mathrm{Path}$ predictions ($0.580$ compared to $0.584$).
\subsection{Discussion}
Is $\mathrm{BERT}$ still the syntactic wunderkind we had all assumed? Or do these reductions mean that these models can no longer be said to encode syntax?
We do not use our results to make either claim.
The reductions we have seen here may reflect a weakness of the syntactic probes %
rather than a weakness of the models themselves, per se. In order to properly give the $\mathrm{BERT}$oids their due, one ought to train the probes on data which controls for semantic cues (e.g.\ more Jabberwocky data) in addition to evaluating them on it.
Here, we wish only to show that existing probes leverage semantic cues to make their syntactic predictions; since they do not properly \textit{isolate} syntax, they should not be cited to support claims \textit{about} syntax.
The high performance of the baselines (which inherently contain \emph{no} syntax) is reason enough to be cautious about claims of these models' syntactic abilities.
In general, single number metrics like these can be misleading: many correctly labeled easy dependencies may well obfuscate the mistakes being made on comparatively few hard ones, which may well be far more revealing \citep[see, for instance,][]{briscoe-carroll-2006-evaluating}.
Even if these syntactic probes achieved near perfect results on Jabberwocky data, beating the baselines by some margin, that alone would not be enough to conclude that the models encoded a deep understanding of syntax. Dependency grammarians generally parse sentences into directed graphs with labels; these probes by comparison only identify undirected unlabeled parse trees (compare Figures \ref{fig:colourless} and \ref{fig:jabberwocky} for the difference). This much-simplified version of syntax has a vastly reduced space of possible syntactic structures. %
Consider a sentence with e.g.\ $n=5$ words, for which there are only $125$ possible unlabeled undirected parse trees (by \citeauthor{cayley}'s formula, $n^{n-2}$). As the high performance of the $\mathrm{Majority}$ baseline indicates, these are not uniformly distributed (some parse trees are more likely than others); a probe might well use these statistical confounds to advance its syntactic predictions.
Although they remain present, biases like these are less easily exploitable in the labeled and directed case, where there are just over one billion possible parse trees to choose from.\footnote{$n\cdot n^{n-2} \cdot k^{n-1}$ where $k$ is the number of possible labels, and $k=36$ in the case of UDs \citep{nivre-etal-2016-universal}.}
Syntax is an incredibly rich phenomenon---far more so than when it is reduced to syntactic distance.
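These counts are easy to verify directly:
\begin{verbatim}
n, k = 5, 36
print(n ** (n - 2))                     # 125 unlabeled undirected trees
print(n * n ** (n - 2) * k ** (n - 1))  # 1049760000 labeled directed trees
\end{verbatim}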
We refrain from making any comment contrasting the performance of the individual models. Our experiments do not control for such a comparison---each of the models was trained on different data, with a different number of parameters, and so on.
\section{O Frabjous Day! Callooh! Callay!}
In this work, we trained two syntactic probes on a variety of $\mathrm{BERT}$oids, then evaluated them using Jabberwocky sentences, and showed that performance dropped substantially in this setting. This suggests that previous results from the probing literature may have overestimated $\mathrm{BERT}$'s syntactic abilities. %
However, in this context, we do not use the results to make any claims about $\mathrm{BERT}$; we contend that to make such a claim one ought to train the probes on Jabberwocky sentences, which would require more pseudowords than we had available.
Instead, we
advocate for the separation of syntax and semantics in probing. %
Future work could explore the development of artificial treebanks for use specifically for training syntactic probes, which minimise for any confounding statistical biases in the data. %
We make our Jabberwocky evaluation data and code publicly available at \url{https://github.com/rowanhm/jabberwocky-probing}.
\bibliographystyle{acl_natbib}
\setcounter{equation}{0}
The motion of homogeneous incompressible ideal fluid in a domain
$\Omega \subset \Bbb R^n $ is described by the following system of
Euler equations.
\[
\mathrm{ (E)}
\left\{ \aligned
&\frac{\partial v}{\partial t} +(v\cdot \nabla )v =-\nabla p,
\quad (x,t)\in {\Omega}\times (0, \infty) \\
&\textrm{div }\, v =0 , \quad (x,t)\in {\Omega}\times (0,
\infty)\\
&v(x,0)=v_0 (x), \quad x\in \Omega
\endaligned
\right.
\]
where $v=(v^1, v^2, \cdots , v^n )$, $v^j
=v^j (x, t)$, $j=1, 2, \cdots , n,$ is the velocity of the fluid
flow, $p=p(x,t)$ is the scalar pressure, and $v_0 (x)$ is a given
initial velocity field satisfying div $v_0=0$. Here we use the
standard notation of vector calculus, denoting
$$\nabla p =\left(\frac{\partial p}{\partial x_1 },
\frac{\partial p}{\partial x_2 }, \cdots , \frac{\partial
p}{\partial x_n } \right),\quad (v\cdot \nabla )v^j=\sum_{k=1}^n
v^k \frac{\partial v^j}{\partial x_k},\quad \textrm{div }\, v
=\sum_{k=1}^n \frac{\partial v^k}{\partial x_k }.
$$
The first equation of (E) follows from the balance of momentum for
each portion of fluid, while the second equation can be derived from
the conservation of mass of fluid during its motion, combined with
the homogeneity(constant density) assumption on the fluid. The
system (E) was first derived by L. Euler in 1755(\cite{eul}). Unless
otherwise stated, we are concerned with the Cauchy problem of the
system (E) on $\Omega= \Bbb R^n$, but many of the results presented
here are obviously valid also for
$\Omega =\Bbb R^n/\Bbb Z^n$(periodic domain), and even for the bounded domain
with the smooth boundary with the boundary condition $v\cdot \nu
=0$, where $\nu $ is the outward unit normal vector.
We also suppose $n=2$ or $3$ throughout this paper.
{\em In this article our aim is to survey recent results on the
mathematical aspects of the 3D Euler equations closely related to
the problem of spontaneous apparition of singularity starting from
classical solutions having finite energy.}
If we add the dissipation term $\mu \Delta v=\mu \sum_{j=1}^n
\frac{\partial^2 v}{\partial x_j^2} $, where $\mu>0$ is the
viscosity coefficient, to the right hand side of the first equation of (E), then we have
the Navier-Stokes equations, the regularity/singularity question of which is
one of the seven millennium problems in mathematics.
In this article we do not treat the
Navier-Stokes equations. For details of mathematical
studies on the Navier-Stokes equations
see e.g. \cite{tem3, con10,lio, gald, lad, maj3, lem}.
We also omit other important topics such as existence and uniqueness
questions of the weak solutions of the 2D Euler equations, and
the related vortex patch problems, vortex sheet problems, and so
on. These are well treated in other
papers and monographs(\cite{maj3, cha19a, che2, lio, sch, shn1, yud1, yud2, vis2, tad})
and the references therein.
For surveys related to the stability
question see for example \cite{fri1} and references therein. For the
results on the regularity of the Euler equations with uniformly rotating
external force we refer to \cite{bab}, while for the numerical
studies on the blow-up problem of the Euler equations there are
many articles including \cite{ker1, ker2, hou3, bra, fri2, caf1, gra1,gra2,
gre, pel}. For various mathematical and physical aspects of the Euler equations
there are many excellent books, review articles including
\cite{arn, bren, che2, con1,con3, fri1, gib2, maj2, maj3, mar, cha12a, yu}.
Obviously, the references are not complete, mainly due to the
author's ignorance.
\subsection{Basic properties}
In the study of the Euler equations the notion of vorticity,
$\omega=$curl $v$, plays an important role. We can reformulate the
Euler system in terms of the vorticity fields only as follows. We
first consider the 3D case. Let us
first rewrite the first equation of (E) as
\begin{equation}\label{bas1}
\frac{\partial v}{\partial t} -v\times \mathrm{curl}\, v=-\nabla
(p+\frac12 |v|^2 ).
\end{equation}
Then, taking curl of (\ref{bas1}), and using elementary vector
identities, we obtain the following vorticity formulation:
\begin{equation}\label{bas2}
\frac{\partial \o }{\partial t} +(v\cdot \nabla )\o =\o \cdot \nabla
v,
\end{equation}
\begin{equation}\label{bas3}
\textrm{div }\, v =0 , \quad \textrm{curl }\, v =\o ,
\end{equation}
\begin{equation}\label{bas4}
\o(x,0)=\o_0 (x).
\end{equation}
The linear elliptic system (\ref{bas3}) for $v$
can be solved explicitly in terms of $\o$, assuming $\o$ decays sufficiently fast
near spatial infinity, to provide us with the Biot-Savart law,
\begin{equation}\label{bas5}
v(x,t)=\frac{1}{4\pi}\int_{\Bbb R^3}
\frac{(x-y)\times \o (y,t) }{|x-y|^3}dy.
\end{equation}
Substituting this $v$ into (\ref{bas2}), we obtain an
integro-differential system for $\o$. The term in the right hand side of
(\ref{bas2}) is called the vortex stretching term, and is
regarded as the main source of difficulties in the mathematical
theory of the 3D Euler equations.
Let us introduce the deformation matrix
$S(x,t)=(S_{ij}(x,t))_{i,j=1}^3$ defined as the symmetric part of
the velocity gradient matrix,
$$S_{ij} =\frac12 \left(\frac{\partial v_j}{\partial x_{i}}
+\frac{\partial v_i}{\partial x_{j}}\right).
$$
From the Biot-Savart law in (\ref{bas5}) we can explicitly compute
\begin{equation}\label{bas5a}
S(x,t)=\frac{3}{8\pi} p.v. \int_{\Bbb R^3} \frac{ [ (y\times \o (x+y,t) )
\otimes y +y\otimes
(y\times
\o (x+y,t))]}{ |y|^5}dy
\end{equation}
(see e.g. \cite{maj3} for the details on the computation).
The kernel in the convolution integral of (\ref{bas5a}) defines a
singular integral operator of the Calderon-Zygmund type(see e.g.
\cite{ste1, ste2} for more details).
Since the vortex stretching term can be written as $ (\o \cdot \nabla )v = S \o
$, we see that the singular integral operator and related
harmonic analysis results could play important roles in the
study of the Euler equations.
In the two dimensional case
we take the vorticity as the scalar, $\o =\frac{\partial v^2}{\partial x_1}
-\frac{\partial v^1}{\partial x_2}$, and the evolution equation
of $\o$ becomes
\begin{equation}\label{bas6}
\frac{\partial \o }{\partial t} +(v\cdot \nabla )\o =0,
\end{equation}
where the velocity is represented in terms of the vorticity by the
2D Biot-Savart law,
\begin{equation}\label{bas7}
v(x,t)=\frac{1}{2\pi} \int_{\Bbb R^2} \frac{(-y_2+x_2,
y_1 -x_1
)}{|x-y|^2} \o (y,t )dy.
\end{equation}
Observe that there is no vortex stretching term in (\ref{bas6}),
which makes the proof of global regularity for the 2D Euler equations readily accessible.
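For numerical purposes the 2D Biot-Savart law is often evaluated spectrally; on a periodic box (\ref{bas7}) is replaced by $v=\nabla^{\perp}\Delta^{-1}\o$ with $\nabla^\perp =(-\partial_{x_2}, \partial_{x_1})$, which the following sketch implements for a $2\pi$-periodic box (grid and names are illustrative).
\begin{verbatim}
import numpy as np

# v = grad_perp(psi), Laplacian(psi) = omega, on a 2*pi-periodic box,
# so that the wavenumbers are integers.
def biot_savart_2d(omega):
    N = omega.shape[0]
    k = np.fft.fftfreq(N, d=1.0 / N)                # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                  # avoid division by zero
    psi_hat = -np.fft.fft2(omega) / k2              # psi = inverse Laplacian
    v1 = np.real(np.fft.ifft2(-1j * ky * psi_hat))  # v1 = -d(psi)/dx2
    v2 = np.real(np.fft.ifft2( 1j * kx * psi_hat))  # v2 =  d(psi)/dx1
    return v1, v2
\end{verbatim}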
In many studies of the Euler equations it is convenient to introduce
the notion of `particle trajectory mapping', $X (\cdot ,t)$ defined
by
\begin{equation}\label{bas8}
\frac{\partial X (a, t )}{\partial t} =v (X (a , t
),t), \quad X(a , 0)=a , \quad a \in \Omega.
\end{equation}
The mapping $X(\cdot, t)$ transforms the initial location of a fluid particle
to its location at time $t$, and the parameter $a$ is called the Lagrangian
particle marker.
If we denote the Jacobian of the transformation, det$(\nabla
_a X (a ,t ))=J(a ,t)$, then we can show easily(see e.g.
\cite{maj3} for the proof)
that
$$\frac{\partial J}{\partial t} =(\mathrm{div}\, v) J,$$
which implies that the velocity field $v$ satisfies the
incompressibility, div $v=0$ if and only if the mapping $X (\cdot
,t)$ is volume preserving. At this moment we note that, although
the Euler equations are originally derived by applying the physical
principles of mass conservation and the momentum balance, we could
also derive them by applying the least action principle to the
action defined by
$$
\mathcal{I} (A)=\frac12 \int_{t_1}^{t_2} \int_{\Omega}
\left|\frac{\partial X (x,t)}{\partial t}\right|^2 dxdt.
$$
Here, $X(\cdot, t): \Omega \to \Omega\subset \Bbb R^n$ is a
parameterized family of volume preserving diffeomorphisms. This
variational approach to the Euler equations implies that we can view
solutions of the Euler equations as a geodesic curve in the
$L^2(\Omega)$ metric on the infinite dimensional manifold of volume
preserving diffeomorphisms(see e.g. \cite{arn, bren, ebi} and
references therein for more details on the geometric approaches to
the Euler equations).
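As an illustration of (\ref{bas8}), the particle trajectory map can be approximated by any standard ODE integrator; the sketch below uses fourth-order Runge-Kutta steps, where $v$ is a user-supplied velocity field and all names are illustrative. For a divergence-free field the resulting map should preserve volume, which can be monitored by finite-differencing neighbouring trajectories.
\begin{verbatim}
import numpy as np

# RK4 approximation of dX/dt = v(X, t), X(t0) = a.
def flow_map(v, a, t0, t1, steps=100):
    x, t, h = np.array(a, float), t0, (t1 - t0) / steps
    for _ in range(steps):
        k1 = v(x, t)
        k2 = v(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = v(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = v(x + h * k3, t + h)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# e.g. steady 2D shear v = (x2, 0):
# flow_map(lambda x, t: np.array([x[1], 0.0]), [0.0, 1.0], 0.0, 1.0)
# returns [1., 1.], as expected.
\end{verbatim}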
The 3D Euler equations have many conserved quantities. We list some
important ones below.
\begin{itemize}
\item[(i)] Energy,
$$
E(t)=\frac12 \int_{\Omega} |v(x,t)|^2 dx.
$$
\item[(ii)] Helicity,
$$
H(t)=\int_{\Omega} v (x,t)\cdot \o (x,t)
dx.
$$
\item[(iii)] Circulation,
$$
\Gamma _{\mathcal{C}(t)}= \oint_{\mathcal{C}(t)} v\cdot
dl,
$$
where $\mathcal{C}(t)=\{ X (a , t) \,| \, a \in \mathcal{C}\}$ is
a curve moving along with the fluid.
\item[(iv)] Impulse,
$$
I(t)=\frac12\int_{\Omega} x\times \o \,dx.
$$
\item[(v)] Moment of Impulse,
$$
M(t)= \frac13 \int_{\Omega} x\times (x\times \o ) \,dx.
$$
\end{itemize}
The proof of conservation of the above quantities for classical solutions
can be done without difficulty using elementary vector calculus(for details
see e.g. \cite{maj3, mar}). The helicity, in particular,
represents the degree of knottedness of the vortex lines in the
fluid, where the vortex lines are the integral curves of the
vorticity fields. In \cite{arn} there are detailed discussions on these aspects
and other topological implications of the helicity conservation.
For the 2D Euler equations there is no analogue
of helicity, while the circulation conservation is replaced by
the vorticity flux integral,
$$
\int_{D(t)} \o (x,t) dx,
$$
where $D(t)=\{ X (a , t) \, | \, a \in
D\subset \Omega\}$ is a planar region moving along the fluid in
$\Omega$.
The impulse and the moment of impulse integrals in the 2D Euler equations are replaced by
$$
\frac12\int_{\Omega} (x_2, -x_1 )\o dx \quad \mbox{and}\quad
-\frac13 \int_{\Omega} |x|^2 \o dx\quad \mbox{respectively}.
$$
In the 2D Euler equations we have extra conserved
quantities; namely for any continuous function $f$ the integral
$$
\int_\Omega f(\o (x,t)) dx
$$
is conserved.
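In numerical experiments the conservation of these quantities is often used as a consistency check; assuming periodic fields sampled on a uniform grid (names and the simple quadrature are illustrative), one may evaluate, e.g.,
\begin{verbatim}
import numpy as np

def energy(v, dV):                # v: array of shape (3, N, N, N)
    return 0.5 * np.sum(v * v) * dV

def helicity(v, w, dV):           # w = curl v, same shape as v
    return np.sum(v * w) * dV

def casimir(w2d, f, dA):          # 2D: integral of f(omega) is conserved
    return np.sum(f(w2d)) * dA
\end{verbatim}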
There are also many known explicit solutions to the Euler equations,
for which we just refer to \cite{lam, maj3}. In the remaining part of
this subsection we introduce some notations to be used later for 3D
Euler equations. Given velocity $v(x,t)$, and pressure $p(x,t)$, we
set the $3\times 3$ matrices,
$$
V_{ij}=\frac{\partial v_j}{\partial x_i},\quad
S_{ij}=\frac{V_{ij}+V_{ji}}{2},\quad A_{ij}=\frac{V_{ij}-V_{ji}}{2},
\quad P_{ij}=\frac{\partial ^2 p}{\partial x_i \partial x_j},
$$
with $i,j=1,2,3$. We have the decomposition $V=(V_{ij})=S+A$, where
the symmetric part $S=(S_{ij})$ represents the deformation tensor of
the fluid introduced above, while the antisymmetric part
$A=(A_{ij})$ is related to the vorticity $\o$
by the formula,
\begin{equation}\label{basfor}
A_{ij}= \frac12 \sum_{k=1}^3 \varepsilon
_{ijk} \o_k,\qquad \o_i = \sum_{j,k=1}^3\varepsilon_{ijk}A_{jk},
\end{equation}
where $\varepsilon_{ijk}$ is the skewsymmetric tensor with the normalization
$\varepsilon_{123}=1$. Note that $P=(P_{ij})$ is the hessian of the
pressure. We also frequently use the
notation for the vorticity direction field,
$$ \xi (x,t)=\frac{\o (x,t)}{|\o(x,t)|}, $$
defined whenever $\o (x,t)\neq 0$.
Computing partial derivatives $\partial/
\partial x_k$ of the first equation of (E), we obtain the matrix equation
\begin{equation}\label{basfor1}
\frac{D V}{Dt}=-V^2 -P, \quad \frac{D}{Dt}=
\frac{\partial}{\partial t}+(v\cdot \nabla ).
\end{equation} Taking symmetric part of this, we obtain
$$ \frac{D
S}{Dt}=-S^2-A^2-P,
$$
from which, using the formula (\ref{basfor}),
we have
\begin{equation}\label{basfor2}
\frac{D S_{ij}}{Dt} = - \sum_{k=1}^3 S_{ik}S_{kj}
+\frac14 (|\o|^2 \delta_{ij} -\o_i\o_j )- P_{ij},
\end{equation}
where $\delta_{ij}=1$ if $i=j$, and $\delta_{ij}=0$ if $i\neq j$.
The antisymmetric part of (\ref{basfor1}), on the other hand, is
$$
\frac{DA}{Dt}=-SA -AS,
$$
which, using the formula (\ref{basfor}) again, we obtain easily
\begin{equation}\label{basfor2a}
\frac{D \o}{Dt} = S \o,
\end{equation}
which is the vorticity evolution equation (\ref{bas2}). Taking dot
product (\ref{basfor2a}) with $\o$,
we immediately have
\begin{equation}\label{basfor3}
\frac{D |\o |}{Dt} =\a |\o|,
\end{equation}
where we set
$$\a (x,t)= \left\{ \aligned &\sum_{i,j=1}^3
\xi_i(x,t) S_{ij}(x,t) \xi_j (x,t)
& \mbox{if $\o (x,t)\neq 0$}\\
&0 & \mbox{if $\o (x,t)=0$}.
\endaligned
\right. $$
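In numerical studies $\a$ is easily evaluated from a gridded velocity field; the following sketch uses spectral differentiation on a $2\pi$-periodic box (grid, normalisation and names are illustrative).
\begin{verbatim}
import numpy as np

# alpha = xi . S xi from a velocity field v of shape (3, N, N, N).
def stretching_factor(v):
    N = v.shape[1]
    k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers
    K = np.meshgrid(k, k, k, indexing="ij")
    vh = np.fft.fftn(v, axes=(1, 2, 3))
    # velocity gradient V[i, j] = d v_j / d x_i
    V = np.real(np.array([[np.fft.ifftn(1j * K[i] * vh[j])
                           for j in range(3)] for i in range(3)]))
    S = 0.5 * (V + V.transpose(1, 0, 2, 3, 4))    # deformation tensor
    w = np.array([V[1, 2] - V[2, 1],              # vorticity = curl v
                  V[2, 0] - V[0, 2],
                  V[0, 1] - V[1, 0]])
    xi = w / np.maximum(np.linalg.norm(w, axis=0), 1e-30)
    return np.einsum("i...,ij...,j...->...", xi, S, xi)
\end{verbatim}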
\subsection{Preliminaries}
Here we introduce some notations and function spaces to be used in
the later sections.
Given $p\in [1, \infty]$, the Lebesgue space $L^p (\Bbb R^n )$
is the Banach space defined
by the norm
$$\|f\|_{L^p}:=\left\{
\aligned &\left(\int_{\Bbb R^n} |f(x)|^p dx \right)^{\frac{1}{p}}, \quad
p\in [1, \infty )\\
&\mathrm{ess.} \sup_{x\in \Bbb R^n} |f(x)|, \quad p=\infty .
\endaligned \right.
$$
For $j=1, \cdots, n$ the Riesz transform $R_j$ of $f$ is given by
$$
R_j (f)(x)=\frac{\Gamma (\frac{n+1}{2} )}{\pi ^{\frac{n+1}{2}}} p.v.
\int_{\Bbb R^n} \frac{x_j -y_j}{|x-y|^{n+1}} f(y)dy
$$
whenever the right hand side makes sense. The Hardy space
$\mathcal{H}^1 (\Bbb R^n)\subset L^1 (\Bbb R^n)$ is defined by
$$
f\in \mathcal{H}^1 (\Bbb R^n) \quad \mbox{if and only if}\quad
\|f\|_{\mathcal{H}^1}:= \|f\|_{L^1} +\sum_{j=1}^n \|R_j f\|_{L^1}
<\infty.
$$
The space $BMO(\Bbb R^n)$ denotes the space of functions of
bounded mean oscillations, defined by
$$
f\in BMO (\Bbb R^n )\quad \mbox{if and only if}\quad
\|f\|_{BMO}:=\sup_{Q\subset \Bbb R^n} \frac{1}{\mathrm{Vol} (Q)}
\int_Q |f-f_Q |dx <\infty,
$$
where $f_Q =\frac{1}{\mathrm{Vol} (Q)}\int_Q f dx$. For more details
on the Hardy space and BMO we refer \cite{ste1, ste2}.
Let us set the
multi-index $\alpha :=(\alpha_1 , \alpha_2 , \cdots , \alpha_n )\in
(\Bbb
Z_+ \cup \{ 0\} )^n $ with $|\alpha |=\alpha_1 +\alpha_2 +\cdots
+\alpha_n$. Then, $D^\alpha :=D^{\alpha_1}_1 D^{\alpha_2}_2 \cdots
D^{\alpha_n}_n$, where $D_j =\partial/\partial x_j$, $j=1,2,\cdots,
n$.
Given $k\in \Bbb Z$ and $p\in [1, \infty)$ the Sobolev space,
$W^{k,p} (\Bbb R^n )$ is the Banach space
consisting of functions $f\in L^p (\Bbb R^n )$
such that
$$\|f\|_{W^{k, p}}:=\left(\sum_{|\alpha|\leq k}\int_{\Bbb R^n} |D^\alpha f(x)|^p dx
\right)^{\frac{1}{p}} <\infty,$$
where the derivatives are in the sense of distributions. For
$p=\infty$ we replace the $L^p (\Bbb R^n )$ norm by the
$L^\infty (\Bbb R^n)$ norm. In particular, we denote $H^m (\Bbb R^n) =W^{m,2} (\Bbb R^n )$.
In order to handle the functions having fractional derivatives of order $s\in \Bbb R$,
we use the
Bessel potential space $L^{s}_p (\Bbb R^n) $ defined by the Banach space norm,
$$\|f\|_{L^{s}_p}:=\|(1-\Delta )^{\frac{s}{2}} f\|_{L^p},
$$
where $(1-\Delta )^{\frac{s}{2}} f=\mathcal{F}^{-1} [ (1+|\xi |^2
)^{\frac{s}{2}} \mathcal{F} (f)(\xi )]$. Here $\mathcal{F} (\cdot)$ and
$\mathcal{F}^{-1}(\cdot)$ denoting the Fourier transform and its
inverse, defined by
$$
\mathcal{F} (f)(\xi)=\hat{f} (\xi)
=\frac{1}{(2\pi )^{n/2}}\int_{\Bbb R^n} e^{-ix\cdot \xi }
f(x)dx,
$$
and
$$
\mathcal{F}^{-1} (f)(x)=\check{f} (x)
=\frac{1}{(2\pi )^{n/2}}\int_{\Bbb R^n} e^{ix\cdot \xi }
f(\xi)d\xi ,
$$
whenever the integrals make sense.
Next we
introduce the Besov spaces. We follow \cite{tri}(see also \cite{tay,
lem, che2, run}). Let $\mathfrak{S}$ be the Schwartz class of
rapidly decreasing
functions.
We consider $\varphi \in \mathfrak{S}$ satisfying
$\textrm{Supp}\, \hat{\varphi} \subset
\{\xi \in {\mathbb R}^n \, |\,\frac12 \leq |\xi|\leq
2\}$,
and $\hat{\varphi} (\xi)>0 $ if $\frac12 <|\xi|<2$.
Setting $\hat{\varphi_j } =\hat{\varphi } (2^{-j} \xi )$ (In other words,
$\varphi_j (x)=2^{jn} \varphi (2^j x )$.), we can adjust the
normalization constant in front of $\hat{\varphi}$ so that
$$
\sum_{j\in \mathbb{Z}} \hat{\varphi}_j (\xi )=1\quad \forall \xi \in
{\mathbb R^n}\setminus \{ 0\}.
$$
Let $s\in \mathbb R$, $p,q
\in [1, \infty]$. Given $f\in \mathfrak{S}'$, we denote
$\Delta_j f=\varphi_j* f$.
Then the homogeneous Besov semi-norm $\|
f\|_{\dot{B}^{s}_{p,q}}$ is defined by \[ \|
f\|_{\dot{B}^s_{p,q}}=\left\{ \aligned &\left( \sum_{j\in \Bbb Z}
2^{jqs} \| \varphi_j \ast f \|_{L^p}^q
\right)^{\frac{1}{q}}&\mbox{ if } q\in [1, \infty) \\
&\sup_{j\in \Bbb Z} \left(2^{js} \| \varphi_j \ast f \|_{L^p}
\right)&\mbox{ if }q=\infty.
\endaligned \right.
\]
For $(s, p,q)\in [0, \infty) \times [1, \infty]\times [1, \infty]$
the homogeneous Besov space $\dot{B}^{s}_{p,q}$ is a quasi-normed
space with the quasi-norm given by $\| \cdot \|_{\dot{B}^s_{p,q}}$.
For $s>0$ we define the inhomogeneous Besov space norm $\|
f\|_{{B}^s_{p,q}}$ of $f\in \mathfrak{S}'$ as
$\|f\|_{{B}^s_{p,q}}=\| f\|_{L^p}+\| f\|_{\dot{B}^s_{p,q}}$.
Similarly, for $(s, p,q)\in [0, \infty) \times [1, \infty)\times [1,
\infty]$, the homogeneous Triebel-Lizorkin semi-norm
$\|f\|_{\dot{F}^s_{p,q}}$ is defined by
$$
\|f\|_{\dot{F}^s_{p,q}} =\left\{ \aligned &\left\| \left(\sum_{j\in
\mathbb{Z}} 2^{jqs}|\varphi_j \ast
f(\cdot)|^q\right)^{\frac1q}\right\|_{L^p}
& \mbox{if $q\in[1, \infty)$}\\
&\left\|\sup_{j\in \mathbb{Z}}\left( 2^{js} |\varphi_j \ast f (\cdot
)|\right) \right\|_{L^p} & \mbox{if $q=\infty$}
\endaligned \right. .
$$
The homogeneous Triebel-Lizorkin space $\dot{F}^s_{p,q}$ is a quasi-normed
space with the quasi-norm given by $\|\cdot \|_{\dot{F}^s_{p,q}}$.
For $s>0$, $(p,q)\in [1, \infty)\times [1, \infty)$
we define the inhomogeneous Triebel-Lizorkin space
norm by
$$
\|f\|_{{F}^s_{p,q}} =\|f\|_{L^p} +\|f\|_{\dot{F}^s_{p,q}}.
$$
The inhomogeneous Triebel-Lizorkin space is a Banach space
equipped with the norm, $\|\cdot\|_{{F}^s_{p,q}}$.
We observe that $B^s_{p,p}(\Bbb R^n)=F^s_{p,p}(\Bbb R^n) $. The
Triebel-Lizorkin space is a generalization of many classical
function spaces. Indeed, the followings are well established(see e.g.
\cite{tri})
$$F^0_{p, 2}(\Bbb R^n ) =\dot{F}^0_{p, 2}(\Bbb R^n)=L^p (\Bbb R^n ), \quad (1<p<\infty).$$
$$\dot{F}^0_{1,2}(\Bbb R^n)=\mathcal{H}^1 (\Bbb R^n )\quad \mbox{and}
\quad \dot{F}^0_{\infty,2}=BMO(\Bbb R^n). $$
$$F^s_{p,2}(\Bbb R^n )=L^{s,p} (\Bbb R^n ).$$
We also note the sequence of continuous embeddings for the spaces close
to $L^\infty (\Bbb R^n)$(\cite{tri, jaw}).
\begin{equation}\label{emb}
\dot{B}^{\frac{n}{p}}_{p,1} (\Bbb R^n)\hookrightarrow
\dot{B}^{0}_{\infty,1}(\Bbb R^n)
\hookrightarrow L^\infty (\Bbb R^n) \hookrightarrow BMO (\Bbb R^n)\hookrightarrow
\dot{B}^{0}_{\infty,\infty}(\Bbb R^n).
\end{equation}
Given $0<s <1$, $1\leq p \leq \infty, 1\leq q \leq \infty$, we
introduce another function spaces $\mathcal{\dot{F}}^s_{p,q} $
defined by the
seminorm,
$$ \|f\|_{\mathcal{\dot{F}}^s_{p,q} } =\left\{
\aligned \left\|
\left(\int_{\Bbb R^n}\frac{|f(x)-f(x-y)|^q}{|y|^{n+s q}} dy\right)^{
\frac{1}{q}}\right\|_{L^p (\Bbb R^n , dx)}
& \mbox{if $1\leq p \leq \infty , 1\leq q <\infty$}\\
\left\| ess\sup_{|y|>0 } \frac{|f(x) -f(x-y)|}{|y|^s } \right\|_{
L^p (\Bbb R^n , dx)}
& \mbox{if $1\leq p \leq \infty, q=\infty$}
\endaligned \right. . $$
On the other hand, the space $\mathcal{\dot{B}}^s_{p,q} $ is
defined by the
seminorm,
$$ \|f\|_{\mathcal{\dot{B}}^s_{p,q} } =\left\{
\aligned
\left( \int_{\Bbb R^n} \frac{\|f(\cdot)-f(\cdot-y)\|^q_{L^p}}{|y|^{n+s q}}
dy\right)^{
\frac{1}{q}}
& \quad\mbox{if $1\leq p \leq \infty , 1\leq q <\infty$}\\
ess\sup_{|y|>0 } \frac{\|f(\cdot) -f(\cdot-y)\|_{L^p }}{|y|^s }
& \quad\mbox{if $1\leq p \leq \infty, q=\infty$}
\endaligned \right. . $$
Observe that, in particular,
$\mathcal{\dot{F}}^{s}_{\infty,\infty}=
\mathcal{\dot{B}}^{s}_{\infty,\infty}= C^s$, which
is the usual H\"{o}lder
seminormed space for $s\in \Bbb R_+\setminus \Bbb Z$. We also note that if $q=\infty$,
$\mathcal{\dot{B}}^{s}_{p,\infty}=
\mathcal{\dot{N}}^s_p$, which is the Nikolskii space.
The inhomogeneous version of those spaces, $\mathcal{{F}}^s_{p,q}
$ and $\mathcal{{B}}^s_{p,q} $
are defined by their
norms,
$$ \|f\|_{\mathcal{{F}}^s_{p,q} }= \|f\|_{L^p }
+\|f\|_{\mathcal{\dot{F}}^s_{p,q}}, \quad
\|f\|_{\mathcal{{B}}^s_{p,q} }= \|f\|_{L^p }
+\|f\|_{\mathcal{\dot{B}}^s_{p,q}}, $$ respectively.
We note that for $0<s <1$, $ 2\leq p< \infty $, $q=2$,
$\mathcal{{F}}^s_{p,2}
\cong
L^{s}_p (\Bbb R^n )$,
introduced above(see pp. 163,\cite{ste1}).
If $\frac{n}{\min\{ p,q\} } <s <
1$, $n<p<\infty$ and $n<q\leq \infty$, then
$\mathcal{{F}}^s_{p,q} $ coincides with the Triebel-Lizorkin
space $F^s_{p,q} (\Bbb R^n ) $ defined above(see pp. 101,
\cite{tri}). On the other hand, for wider range of parameters, $0<s<1$, $0<p\leq \infty$,
$0<q\leq \infty$, $\mathcal{{B}}^s_{p,q} $ coincides with the Besov
space $B^s_{p,q} (\Bbb R^n )$ defined above.
\section{Local well-posedness and blow-up criteria}
\setcounter{equation}{0}
\subsection{Kato's local existence and the BKM criterion}
We review briefly the key elements in the classical local existence
proof of solutions in the Sobolev space $H^m (\Bbb R^n)$, $m>n/2+1$,
essentially obtained by Kato in \cite{kat2}(see also \cite{maj3}).
After that we derive the celebrated Beale, Kato and Majda's
criterion on finite time blow-up of the local solution in $H^m (\Bbb
R^n)$, $m>n/2+1$ in \cite{bea}. Taking derivatives $D^\alpha $ on
the first equation of (E)
and then taking the $L^2$ inner
product of it with $D^\alpha v$, and summing over the multi-indices
$\alpha $ with $|\alpha
|\leq m$, we obtain
\begin{eqnarray*}
\lefteqn{\frac12 \frac{d}{dt} \|v \|_{H^m} ^2
=-\sum_{|\alpha|\leq m} (D^\alpha (v\cdot \nabla )v
-(v\cdot \nabla )D^\alpha v , D^\alpha v)_{L^2}}\hspace{.1in}\\
&&-\sum_{|\alpha|\leq m}((v\cdot \nabla )
D^\alpha v, D^\alpha v )_{L^2}-\sum_{|\alpha|\leq m} (D^\alpha \nabla\, p
, D^\alpha v )_{L^2}\\
&&\quad=I+II+III.
\end{eqnarray*}
Integrating by parts, we obtain
$$
III=\sum_{|\alpha|\leq m}(D^\alpha p
, D^\alpha \mathrm{div} \, v)_{L^2} =0.
$$
Integrating by parts again, and using the
fact div $v=0$, we have
$$
II=-\frac12 \sum_{|\alpha|\leq m} \int_{\Bbb R^n} (v\cdot \nabla
)|D^\alpha v |^2 dx
=\frac12 \sum_{|\alpha|\leq m}\int_{\Bbb R^n} \mbox{div}\, v |D^\alpha
v |^2 dx =0
.
$$
We now use the so called {\it commutator type of
estimate}(\cite{kla}),
$$
\sum_{|\alpha|\leq m}\|D^{\alpha} (fg)-fD^\alpha g\|_{L^2}
\leq C (\|\nabla f\|_{L^\infty}
\|g\|_{H^{m-1}} + \|f\|_{H^m} \|g\|_{L^\infty} ),
$$
and obtain
$$
I\leq
\sum_{|\alpha|\leq m} \|D^\alpha (v\cdot \nabla )v
-(v\cdot \nabla )D^\alpha v\|_{L^2} \|v\|_{H^{m}}
\leq C\|\nabla v \|_{L^\infty} \|v \|_{H^{m}}^2 .
$$
Summarizing the above estimates, I,II,III, we have
\begin{equation}\label{sum}
\frac{d}{dt} \|v \|_{H^{m}} ^2
\leq C \|\nabla v\|_{L^\infty} \|v \|_{H^{m}} ^2.
\end{equation}
A further estimate, using the {\it Sobolev inequality}, $\|\nabla
v\|_{L^\infty} \leq C \|v\|_{H^m} $ for $m>n/2+1$, gives
$$
\frac{d}{dt} \|v \|_{H^{m}} ^2
\leq C \|v \|_{H^{m}} ^3.
$$
Thanks to Gr\"{o}nwall's lemma we have the local in time
uniform estimate
\begin{equation} \label{ap1}
\|v(t)\|_{H^m} \leq \frac{\|v_0\|_{H^m}}{1-Ct \|v_0\|_{H^m}
}
\leq 2 \|v_0\|_{H^m}
\end{equation}
for all $t\in [0, T ]$, where $T=\frac{1}{2C\|v_0\|_{H^m}}$.
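In more detail, setting $y(t)=\|v(t)\|_{H^m}$, the differential inequality above reads $\frac{d}{dt}y^2 \leq C y^3$, i.e.\ $y' \leq \frac{C}{2} y^2$, which integrates as
\[
-\frac{d}{dt}\left(\frac{1}{y}\right)\leq \frac{C}{2}
\quad\Longrightarrow\quad
\frac{1}{y(t)}\geq \frac{1}{\|v_0\|_{H^m}}-\frac{C}{2}\,t
\quad\Longrightarrow\quad
y(t)\leq \frac{\|v_0\|_{H^m}}{1-\frac{C}{2}\,t\,\|v_0\|_{H^m}},
\]
which is (\ref{ap1}) after renaming the constant $C/2$ as $C$.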
Using this estimate we can also deduce the estimate
\begin{equation}\label{ap2} \sup_{0\leq t\leq T} \left\| \frac{\partial
v}{\partial t
}\right\|_{H^{m-1}} \leq C (\|v_0 \|_{H^m} )
\end{equation}
directly from (E).
The estimates (\ref{ap1}) and (\ref{ap2}) are the two key a priori
estimates for the construction of the local
solutions. For actual elaboration of the proof we approximate the Euler
system by mollification, Galerkin projection, or
iteration of successive linear systems, and construct a sequence of
smooth approximate solutions to (E), say $\{ v_k (\cdot ,t)\}_{k\in \Bbb N}$, corresponding to
the initial data $ \{v_{0,k}\}_{k\in \Bbb N}$ respectively, with $v_{0,k} \to v_0 $
in $H^m (\Bbb R^n)$.
The estimates for the approximate solution sequence provide us with
the uniform estimates of $\{ v_k\}$ in $L^\infty ([0, T]; H^m (\Bbb
R^n )) \cap Lip ([0, T]; H^{m-1}(\Bbb R^n) )$. Then, applying the
standard Aubin--Lions compactness lemma, we can pass to the limit
$k\to \infty$ in the equations for the approximate solutions, and
can show that the limit $v=v_\infty$ is a solution of (E) in
$L^\infty ([0, T]; H^m (\Bbb R^n ))$. By a further argument we can
actually show that the limit $v$ belongs to $ C([0, T]; H^m (\Bbb
R^n ))\cap AC([0, T]; H^{m-1} (\Bbb R^n ))$, where $AC([0,T]; X )$
denotes the space of $X$-valued absolutely continuous functions on
$[0,T]$. The general scheme of such an existence proof is standard, and
is described in detail in \cite{maj1} for general hyperbolic
conservation laws. The approximation of the Euler system
by mollification was used for the construction of local solutions of
the Euler (and the
Navier-Stokes) equations in \cite{maj3}. \\
Regarding the question of finite time blow-up of the local classical
solution in $H^m (\Bbb R^n )$, $m>n/2 +1$, constructed above, the
celebrated Beale-Kato-Majda theorem (called the BKM criterion) states
that \begin{equation}\label{bkm}
\lim\sup_{t\nearrow T_*} \|v(t)\|_{H^m }=\infty \quad\mbox{if and only
if}\quad
\int_0 ^{T_*} \|\omega (s)\|_{L^\infty} ds =\infty.
\end{equation}
We outline the proof of this theorem below (for more details see \cite{bea, maj3}).
We first recall the Beale-Kato-Majda version of the {\it
logarithmic
Sobolev inequality},
\begin{equation}\label{lo}
\|\nabla v \|_{L^\infty}
\leq C\|\o \|_{L^\infty} (1+\log (1+\|v
\|_{H^{m}} ))+C \|\o\|_{L^2}
\end{equation}
for $m>n/2 +1$.
Now suppose $\int_0 ^{T_*} \|\o (t)\|_{L^\infty } dt := M(T_*)< \infty .$
Taking the $L^2$ inner product of the vorticity equation with $\o$, after
integration by parts we obtain
$$
\frac12\frac{d}{dt} \|\o \|_{L^2} ^2 =((\o\cdot \nabla )v ,\o
)_{L^2}\leq \|\o\|_{L^\infty} \|\nabla v \|_{L^2} \|\o \|_{L^2}
= \|\o\|_{L^\infty}\|\o \|_{L^2}^2,
$$
where we used the identity $\|\nabla v\|_{L^2}= \|\o\|_{L^2}$.
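For divergence-free $v$ decaying at infinity this identity follows by
integration by parts; we record the computation in the three dimensional
case: since $\mathrm{curl}\,\mathrm{curl}\, v=\nabla \mathrm{div}\, v -\Delta v
=-\Delta v$,
$$
\int_{\Bbb R^3} |\o |^2 dx =\int_{\Bbb R^3} v\cdot \mathrm{curl}\,\mathrm{curl}\, v\, dx
=-\int_{\Bbb R^3} v\cdot \Delta v\, dx =\int_{\Bbb R^3} |\nabla v|^2 dx .
$$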
Applying Gr\"{o}nwall's lemma, we obtain
\begin{equation}\label{ineq1} \|\o (t)\|_{L^2} \leq \|\o_0 \|_{L^2}
\exp\left( \int_0 ^{T_*}
\|\o (s)\|_{L^\infty} ds \right)= \|\o_0 \|_{L^2}\exp[{M(T_*)}]
\end{equation}
for all $t\in [0, T_*]$.
Substituting (\ref{ineq1}) into (\ref{lo}), and combining this with
(\ref{sum}), we have
\[
\frac{d}{dt} \|v \|_{H^{m}} ^2
\leq C \left[ 1+\|\o \|_{L^\infty} \left(1+\log (1+\|v
\|_{H^{m}} )\right)\right] \|v \|_{H^{m}} ^2 .
\]
Applying Gr\"{o}nwall's lemma we deduce
\begin{equation}\label{fini}
\|v (t)\|_{H^{m}} \leq \|v_0 \|_{H^{m}}
\exp\left[ C_1 \exp\left(C_2 \int_0 ^{T_*}
\|\o (\tau)\|_{L^\infty} d\tau\right)\right]
\end{equation}
for all $t\in [0,T_*]$ and for some constants $C_1$ and $C_2$ depending on $M(T_*)$.
The inequality (\ref{fini}) provides us with the necessity part of
(\ref{bkm}).
The sufficiency part is an easy consequence of the Sobolev
inequality,
$$ \int_0 ^{T_*} \|\omega (s)\|_{L^\infty} ds \leq
T_* \sup_{0\leq t\leq T_*} \|\nabla v(t)\|_{L^\infty} \\
\leq C T_* \sup_{0\leq t\leq T_*}\|v(t)\|_{H^m}
$$
for $m>n/2+1$.
There are many other results of local well-posedness in various
function spaces (see \cite{cha1, cha2, cha3, cha6, che1,
che2,kat1,kat3,kat4, lic, tem1, tem2, vis1, vis2, yud1}). For the
local existence proved in terms of a geometric formulation see
\cite{ebi}. For the BKM criterion for solutions in the H\"{o}lder
space see \cite{bah}. Immediately after the BKM result appeared,
Ponce derived a similar criterion in terms of the deformation
tensor (\cite{pon}). Recently, Constantin proved local well-posedness
and a blow-up criterion in terms of the active vector
formulation (\cite{con5}).
\subsection{Refinements of the BKM criterion}
The first refinement of the BKM criterion was done by Kozono and
Taniuchi in \cite{koz1}, where they proved
\begin{theorem}
Let $s>n/p+1$. A solution $v$ of the Euler equations belonging to
$C([0,T_*);W^{s,p}(\Bbb R^n)) \cap C^1 ([0, T_* ); W^{s-2, p}(\Bbb
R^n ))$ blows up at $T_*$ in $W^{s,p}(\Bbb R^n)$, namely
$$
\lim\sup_{t\nearrow T_*}\|v(t)\|_{W^{s,p}} =\infty \quad \mbox{if
and only if}\quad \int_0 ^{T_*} \|\o (t)\|_{BMO}\, dt =\infty.
$$
\end{theorem}
The proof is based on the following version of the logarithmic
Sobolev inequality for $f\in W^{s,p} (\Bbb R^n)$, $s>n/p$,
$1<p<\infty$,
$$
\|f\|_{L^\infty} \leq C \left(1+\|f\|_{BMO} (1+\log^+
\|f\|_{W^{s,p}} ) \right).
$$
(see \cite{koz1} for details of the proof). We recall now the
embedding relations (\ref{emb}). Further refinement of the above
theorem is the following (see \cite{cha1, cha6}).
\begin{theorem}
\begin{description}
\item[(i)] {\rm{ (super-critical case)}} Let $s>n/p+1, p\in (1, \infty), q\in [1,
\infty]$.
Then, the local in time solution $v\in C([0, T_*); B^s_{p, q} (\Bbb R^n))$
blows up at $T_* $ in $B^s_{p, q} (\Bbb R^n)$, namely
$$
\lim\sup_{t\nearrow T_*} \|v(t)\|_{B^s_{p,q}} =\infty
\quad\mbox{
if and only if}
\quad
\int_0 ^{T_*} \|\omega (t)\|_{\dot{B}^0_{\infty ,\infty}}dt
=\infty .
$$
\item[(ii)] {\rm{(critical case)}} Let $p\in (1, \infty)$.
Then, the local in time solution $v\in C([0, T_*); B^{n/p+1}_{p,1}
(\Bbb R^n))$ blows up at $T_* $ in $ B^{n/p+1}_{p,1} (\Bbb R^n)$,
namely
$$
\lim\sup_{t\nearrow T_*} \|v(t)\|_{ B^{n/p+1}_{p,1}} =\infty
\quad\mbox{
if and only if
}\quad
\int_0 ^{T_*} \|\omega (t)\|_{\dot{B}^0_{\infty ,1}}dt
=\infty.
$$
\end{description}
\end{theorem}
The proof of (i) is based on the following version of the
logarithmic Sobolev inequality for $f\in B^s_{p, q} (\Bbb R^n)$ with
$s> n/p$, $p\in (1, \infty )$, $q\in [1, \infty]$:
$$
\|f\|_{L^\infty}
\leq C(1+\|f\|_{\dot{B}^0_{\infty, \infty} }(\log^+ \|f\|_{B^s_{p,q}} +1)).
$$
In \cite{koz2} Kozono, Ogawa and Taniuchi obtained similar results
to
(i) above independently.\\
\ \\
In all of the above criteria, including the BKM theorem, we need to
control all of the three components of the vorticity vector to
obtain regularity. The following theorem proved in \cite{cha7}
states that actually we only need to control two components of the
vorticity in a slightly stronger norm than the $L^\infty$
norm (recall again the embedding (\ref{emb})).
\begin{theorem} Let $m>5/2$.
Suppose $v\in C([0,T_*);H^m (\Bbb R^3))$ is the local classical
solution of (E) for some $T_* >0$, corresponding to the initial
data $v_0 \in H^m (\Bbb R^3)$, and $\o=$ curl $v$ is its vorticity.
We decompose
$\o=\tilde{\o}+\o^3 e_3$, where $\tilde{\o}=\o^1 e_1 +\o^2 e_2$,
and $\{e_1, e_2, e_3\}$ is the canonical basis of $\Bbb R^3$. Then,
$$
\lim\sup_{t\nearrow T_*} \|v(t)\|_{H^m} =\infty \quad \mbox{if
and only if} \quad \int_0 ^{T_*}\|\tilde{\o}
(t)\|_{\dot{B}^0_{\infty, 1}} ^2 dt =\infty.
$$
\end{theorem}
Note that $\tilde{\o}$ could be the projected component of $\o$
onto any plane in $\Bbb R^3$. For the solution $v=(v^1, v^2, 0)$ of
the Euler equations on the $x_1-x_2$ plane, the vorticity is $\o
=\o^3 e_3$ with $\o^3=\partial_{x_1} v^2 -\partial_{x_2} v^1$, and
$\tilde{\o}\equiv 0$. Hence, as a trivial application of the above
theorem we reproduce the well-known global in time regularity
for the 2D Euler equations.\\
\ \\
Next we present recent results on the blow up criterion in terms of
the hessian of the pressure. As in the introduction we use $P=(P_{ij})$,
$S=(S_{ij})$ and $\xi$ to denote the hessian of the pressure, the
deformation tensor and the vorticity direction field respectively,
introduced in section 1. We also introduce the notations
$$ \frac{S\xi}{|S\xi |} =\zeta, \quad \zeta \cdot P \xi = \mu.
$$
The following is proved in \cite{cha15}.
\begin{theorem}
If the solution $v(x,t)$ of the 3D Euler
system with $v_0 \in H^m (\Bbb R^3 )$, $m>5/2$, blows up at $T_*$,
namely $\lim\sup_{t\nearrow T_*} \|v(t)\|_{H^m} =\infty$, then
necessarily,
$$
\int_0 ^{T_*} \exp\left(\int_0 ^\tau \|\mu (s)\|_{L^\infty}ds\right)
d\tau =\infty.
$$
\end{theorem}
A similar criterion in terms of the hessian of the pressure, but with
a different detailed geometric configuration from the above theorem,
was obtained by Gibbon, Holm, Kerr and Roulstone in \cite{gib3}.
Below we denote $\xi_p=\xi \times P \xi$.
\begin{theorem}
Let $m \geq 3$ and $\Bbb T^3 =\Bbb R^3/\Bbb Z^3$ be a periodic box.
Then, there exists a global solution of the Euler equations $v\in
C([0, \infty ); H^m (\Bbb T^3 ))\cap C^1 ([0, \infty); H^{m-1} (\Bbb
T^3 ))$ if
$$\int_0 ^t \|\xi_p (\tau )\|_{L^\infty} d\tau <\infty, \quad \forall t\in (0,
T),
$$
excepting the case where $\xi$ becomes collinear with an
eigenvector of $P$ at $T$.
\end{theorem}
Next, we consider axisymmetric solutions of the Euler equations,
by which we mean a velocity field $v(r, x_3, t)$ solving the Euler
equations and having the representation
$$
v(r, x_3, t)=v^r (r,x_3,t)e_r +v^\theta (r,x_3,t)e_\theta +v^3
(r,x_3,t)e_3
$$
in the cylindrical coordinate system, where
$$e_r = (\frac{x_1}{r}, \frac{x_2}{r}, 0), \quad
e_\theta = (-\frac{x_2}{r}, \frac{x_1}{r}, 0),\quad e_3=(0,0,1),\quad
r=\sqrt{x_1^2 +x_2^2}.
$$
In this case also the question of finite time blow-up of solutions
is wide open (see e.g. \cite{gra1, gra2, caf1} for studies of this
case). The vorticity $\o =$ curl $\,v$ is computed as
\[ \omega = \omega^r e_r+ \omega^{\t}e_{\t} + \omega^3 e_3 , \]
where
\[ \omega^r = -\partial_{x_3} v^{\t},
\quad \omega^{\t} = \partial_{x_3} v^r - \partial_r v^3,
\quad \omega ^3 = \frac{1}{r}\partial_r(rv^{\t}). \]
We denote
\[ \tilde{v} = v^r e_r + v^3 e_3 ,\qquad \tilde{\omega} = \omega^r e_r + \omega^3 e_3 .\]
Hence, $ \omega = \tilde{\omega}+\vec{\o}_\t $, where $\vec{\o}_\t
=\omega^\theta e_\theta$. The Euler equations for the axisymmetric
solution are
$$
\left\{ \aligned
&\frac{\partial v^r}{\partial t} +(\tilde{v}\cdot \tilde{\nabla} )v^r
=\frac{(v^\t )^2}{r}-\frac{\partial p}{\partial r} ,\\
&
\frac{\partial v^\theta}{\partial t} +(\tilde{v}\cdot \tilde{\nabla}
)v^\t
=-\frac{v^r v^\t}{ r} ,\\
&
\frac{\partial v^3}{\partial t} +(\tilde{v}\cdot \tilde{\nabla} )v^3
=-\frac{\partial p}{\partial x_3} ,\\
&
\textrm{div }\, \tilde{v} =0 ,\\
&
v(r,x_3,0)=v_0 (r,x_3),
\endaligned \right.
$$
where $\tilde{\nabla} =e_r \frac{\partial}{\partial r}
+e_3 \frac{\partial}{\partial x_3}.$
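We note in passing an elementary consequence of the second equation (we
record the computation for convenience): since
$(\tilde{v}\cdot \tilde{\nabla}) r =v^r$, the swirl quantity $rv^\t$ is
transported along the trajectories,
$$
\left(\frac{\partial}{\partial t}+\tilde{v}\cdot \tilde{\nabla}\right)(rv^\t )
= v^r v^\t +r\left(\frac{\partial}{\partial t}+\tilde{v}\cdot \tilde{\nabla}\right) v^\t
= v^r v^\t -r\,\frac{v^r v^\t}{r}=0 .
$$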
In the axisymmetric Euler equations the vorticity formulation
becomes
$$
\left\{\aligned &\frac{\partial\omega^r}{\partial t} +(\tilde{v} \cdot
\tilde{\nabla} )\omega^r
=(\tilde{\omega} \cdot \tilde{\nabla}) v^r \\
& \frac{\partial\omega^3}{\partial t} +(\tilde{v} \cdot
\tilde{\nabla} )\omega^3 =(\tilde{\omega} \cdot \tilde{\nabla}) v^3 \\
&
\left[\frac{\partial}{\partial t} + \tilde{v} \cdot
\tilde{\nabla} \right] \left(\frac{\omega ^{\t}}{r}\right)
= (\tilde{\omega} \cdot \tilde{\nabla}) \left(\frac{v^{\t}}{r}\right)
\\
&\textrm{div }\, \tilde{v} = 0 , \quad \textrm{curl }\, \tilde{v}
=\vec{\o}_\t.
\endaligned \right.
$$
In the case of axisymmetry we need to control just one
component of the vorticity (the angular component) to get the regularity
of the solution. The following theorem is proved in \cite{cha21}.
\begin{theorem} Let $v\in C([0,T_*);H^m (\Bbb R^3))$, $m>5/2$, be the
local classical axisymmetric solution of (E), corresponding to an
axisymmetric initial data $v_0 \in H^m (\Bbb R^3 )$. Then, the
solution blows up in $H^m (\Bbb R^3 )$ at $T_*$ if and only if for
all $(\gamma, p)\in (0,1)\times [1, \infty]$ we have
\begin{eqnarray}\label{axiscri}
\lefteqn{\int_0 ^{T_*}
\|\o_\t (t)\|_{L^\infty} dt +\int_0 ^{T_*} \exp\Big[ \int_0 ^t
\left\{ \|\o_\t (s)\|_{L^\infty} (1+\log^+ (\|\o _\t
(s)\|_{C^{\gamma}}
\|\o_\t (s)\|_{L^p} )) \right. }\hspace{2.in}\nonumber \\
&&\left. +\|\o_\t (s)\log^+ r \|_{L^\infty}\right\}ds\Big] dt
=\infty.
\end{eqnarray}
\end{theorem}
We observe that although we need to control only $\omega_\t$ to get the
regularity, the norm involved, the $C^\gamma$ norm, is stronger than the
$L^\infty$ norm used in the BKM criterion. If we use the `critical'
Besov space $\dot{B}^0_{\infty,1}(\Bbb R^3)$ we can derive a slightly
sharper criterion than Theorem 2.6 as follows (see \cite{cha7} for
the proof).
\begin{theorem} Let $v\in C([0,T_*);H^m (\Bbb R^3))$, $m>5/2$, be the
local classical axisymmetric solution of (E), corresponding to an
axisymmetric initial data $v_0 \in H^m (\Bbb R^3 )$. Then,
\begin{equation}\label{axiscri1}
\lim\sup_{t\nearrow T_*} \|v(t)\|_{H^m} =\infty \quad
\mbox{if and only if} \quad \int_0 ^{T_*}\|\vec{\o}_\theta
(t)\|_{\dot{B}^0_{\infty, 1}} dt =\infty.
\end{equation}
\end{theorem}
We observe that contrary to (\ref{axiscri}) we do not need to
control the high regularity norm, the $C^{\gamma}$ norm of vorticity
in (\ref{axiscri1}). We can also have the regularity of the
axisymmetric Euler equation by controlling only one component of the
velocity, the swirl velocity $v^\theta$, as in the following theorem proved in
\cite{cha22}.
\begin{theorem} Let $v\in C([0,T_*);H^m (\Bbb R^3))$, $m>5/2$, be the
local classical axisymmetric solution of (E), corresponding to an
axisymmetric initial data $v_0 \in H^m (\Bbb R^3 )$. Then, the
solution blows up in $H^m (\Bbb R^3 )$ at $T_*$ if and only if
$$ \int_0 ^{T_*}
\left( \|\tilde{\nabla}v^\t \|_{L^\infty}
+\left\|\frac{\partial v^\t}{\partial r} \right\|_{L^\infty}
\left\| \frac{1}{r}\frac{\partial v^\t}{\partial
x_3}\right\|_{L^\infty}\right) dt =\infty.
$$
\end{theorem}
\subsection{Constantin-Fefferman-Majda's and other related results}
In order to study the regularity problem of the 3D Navier-Stokes
equations Constantin and Fefferman investigated the geometric
structure of the integral kernel in the vortex stretching term more
carefully, and discovered the phenomenon of the `depletion effect' hidden
in the integration (\cite{con8}, see also \cite{con2} for a detailed
exposition related to this fact). Later a similar geometric structure
of the vortex stretching term was studied extensively also in the blow-up
problem of the 3D Euler equations by Constantin, Fefferman and
Majda (\cite{con9}).
Here we first present their results in detail, and results in
\cite{cha10}, where the BKM criterion and the
Constantin-Fefferman-Majda criterion are interpolated in some
sense. Besides those results presented in this subsection we also
mention that there are other interesting geometric approaches to the
Euler equations such as the quaternion formulation by
Gibbon (\cite{gib1, gib2, gib3}). We begin with a definition in
\cite{con9}. Given a set $W\subset \Bbb R^3$ and $r>0$ we use the
notation $B_r(W)=\{ y\in B_r (x)\, ;\, x\in W\}$.
\begin{definition}
A set $W_0 \subset \Bbb R^3$ is called smoothly directed if there
exist $\rho>0$ and $r$, $0<r\leq \rho/2$, such that the following
three conditions are satisfied.
\end{definition}
\begin{itemize}
\item[(i)] For every $a\in W^*_0
=\{ q\in W_0 \, ; \, |\o_0 (q)|\neq 0\},$ and all $t\in [0,T)$, the
vorticity direction field $\xi (\cdot ,t)$ has a Lipschitz
extension (denoted by the same letter) to the Euclidean ball of
radius $4\rho$ centered at $X(a,t)$ and
$$
M=\lim_{t\to T} \sup_{a\in W^*_0} \int_0 ^t \|\nabla \xi (\cdot
,s)\|_{L^\infty (B_{4\rho} (X(a,s)))} ds <\infty.
$$
\item[(ii)] The inequality
$$ \sup_{B_{3r} (W_t )}|\o (x,t)|\leq m
\sup_{B_r (W_t )} |\o (x,t)|$$ holds for all $t\in [0,T)$ with
$m\geq 0$ constant.
\item[(iii)] The inequality
$$ \sup_{B_{4\rho} (W_t )}|v (x,t)| \leq U$$
holds for all $t\in [0, T)$.
\end{itemize}
The assumption (i) means that the direction of vorticity is well
behaved in a neighborhood of a bunch of trajectories. The assumption
(ii) states that this neighborhood is large enough to capture the
local intensification of $\o$. Under these assumptions the
following theorem is proved in \cite{con9}.
\begin{theorem}
Assume $W_0$ is smoothly directed. Then there exists $\tau >0$ and
$\Gamma$ such that
$$ \sup_{B_r (W_t )} |\o (x,t)|\leq \Gamma
\sup_{B_\rho (W_{t_0})} |\o (x,t_0 )| $$ holds for any $0\leq t_0
<T$ and $0\leq t-t_0 \leq \tau$.
\end{theorem}
They also introduced the notion of a regularly directed set, closely
related to the geometric structure of the kernel defining the vortex
stretching term.
\begin{definition}
We say that a set $W_0$ is regularly directed if
there exists $\rho >0$ such that
$$ \sup_{a\in W^*_0} \int_0 ^T K_{\rho} (X(a,t))dt <\infty$$
where
$$ K_{\rho} (x)=\int_{|y|\leq \rho} |D(\hat{y},
\xi (x+y), \xi (x)) | |\o (x+y)|\frac{dy}{|y|^3}
$$
and
$$
D(\hat{y}, \xi (x+y) ,\xi (x))=(\hat{y}\cdot \xi (x)) \mathrm{Det}
(\hat{y}, \xi (x+y), \xi (x)).
$$
\end{definition}
Under the above assumption on the regularly directed sets the
following is proved also in \cite{con9}.
\begin{theorem}
Assume $W_0$ is regularly directed. Then there exists a constant
$\Gamma $ such that
$$ \sup_{a \in W_0} |\o (X(a,t),t)| \leq \Gamma \sup_{a \in W_0}
|\o_0 (a)|
$$
holds for all $t\in [0,T]$.
\end{theorem}
The original studies by Constantin and Fefferman in \cite{con8}
about the Navier-Stokes equations, which motivated the above
theorems, are concerned mainly with the regularity of solutions in
terms of the vorticity direction fields $\xi$. We recall, on the
other hand, that the BKM type of criterion controls the magnitude of
vorticity to obtain regularity. Incorporation of both the direction
and the magnitude of vorticity to obtain regularity for the 3D
Navier-Stokes equations was first initiated by Beir\~{a}o da Veiga
and Berselli in \cite{bei2}, and developed further by Beir\~{a}o da
Veiga in \cite{bei1}, and finally refined in an `optimal' form in
\cite{cha18} (see also \cite{cha20} for a localized version). We now
present the Euler equation version of the result in \cite{cha18}.
Below we use the notion of the particle trajectory $X(a ,t)$, which is
generated by the classical solution $v(x,t)$ of (E). Let us denote
$$
\Omega_0 =\{ x\in \Bbb R^3 \, |\, \o_0 (x) \neq 0\},\quad
\Omega_t = X(\O_0 ,t).
$$
We note that the direction field of the vorticity,
$\xi (x,t)=\o (x,t)/ |\o (x,t)| $, is well-defined if
$x\in \O_t$ for $v_0 \in C^1 (\Bbb R^3 )$ with $\O_0 \neq
\emptyset$.
The following is the
main theorem proved in \cite{cha10}.
\begin{theorem}
Let $v(x,t)$ be the local classical solution to $(E)$ with initial
data $v_0 \in H^m (\Bbb R^3)$, $m>5/2$, and $\o(x,t)=$curl $v(x,t)$.
We assume $\O_0 \neq\emptyset$.
Then, the solution can be continued up to $T +\delta$ as the
classical solution, namely
$v(t)\in C([0,T+\delta];{H^m} (\Bbb R^3 ))$ for some $\delta >0$,
if there exist
$p,p', q,q', s, r_1,r_2,r_3$ satisfying the following
conditions,
\begin{equation}\label{th1}
\frac{1}{p} +\frac{1}{p'} =1, \qquad \frac{1}{q} +\frac{1}{q'} =1,
\end{equation}
and
\begin{equation}\label{th2}
\frac{1}{r_1}+\frac{p'}{r_2}\left(1-\frac{sq'}{3} \right)+
\frac{1}{r_3} \left\{ 1-p' \left( 1-\frac{sq'}{3} \right)\right\}
=1
\end{equation}
with
\begin{equation}\label{th3}
0<s<1, \qquad 1\leq \frac{3}{sq'} <p \leq \infty, \qquad \quad 1\leq
q\leq\infty,
\end{equation}
and
\begin{equation}\label{th4}
r_1\in [1, \infty],\, r_2 \in \left[p'\left(1-\frac{sq'}{3}
\right),\infty\right],\, r_3 \in \left[1-p'\left(1-\frac{sq'}{3}
\right), \infty\right]
\end{equation}
such that for the direction field $\xi (x,t)$ and the magnitude of
vorticity $|\o (x,t)|$ the following hold:
\begin{equation}\label{reg}
\int_0 ^{T} \|\xi (t )\|^{r_1}_{ \mathcal{\dot{F}}^s_{\infty,q} (\O_t) } dt <\infty,
\end{equation}
and
\begin{equation}\label{int}
\int_0 ^{T}\|\o(t) \|^{r_2}_{L^{pq'}(\O_t)} dt +
\int_0 ^{T}\|\o(t) \|^{r_3}_{L^{q'}(\O_t)} dt <\infty.
\end{equation}
\end{theorem}
\ \\
In order to get the insight implied by the above theorem let us consider
the special case of $p=\infty, q=1$. In this case the conditions
(\ref{reg})-(\ref{int}) are satisfied if
\begin{equation}\label{remm1}
\xi (x,t )\in L^{r_1}(0,T; C^{ s} (\Bbb R^3 )),
\end{equation}
\begin{equation}\label{remm2}
\o
(x,t) \in L^{r_2} (0, T; L^{\infty} (\Bbb R^3 ))\cap L^{r_3} (0, T;
L^{\infty} (\Bbb R^3 ))
\end{equation}
with
\begin{equation}\label{remm3}
\frac{1}{r_1}+\frac{1}{r_2}\left(1-\frac{s}{3} \right)+
\frac{s}{3r_3}
=1.
\end{equation}
If we {\em formally} pass $s\to 0$
in (\ref{remm1}) and (\ref{remm3}),
and choose $r_1=\infty$ and $ r_2
=r_3=1$, then we find that the conditions (\ref{remm1})-(\ref{remm2}) reduce to the
BKM condition,
since the condition $\xi (x,t )\in L^{\infty}(0,T; C^{0} (\Bbb R^3
))\cong L^{\infty}((0,T)\times \Bbb R^3)$
is obviously satisfied due to the fact that $|\xi (x,t)|\equiv 1$.\\
The other case of interest is $q'=3/s$, where
(\ref{reg})-(\ref{int}) are satisfied if
\begin{equation}\label{remm4}
\xi (x,t )\in L^{r_1}(0,T;
\mathcal{\dot{F}}^s_{\infty,\frac{3}{3-s}}(\Bbb R^3)
),\qquad
|\o
(x,t)| \in L^{r_2} (0, T; L^{\frac{3}{s}} (\Bbb R^3 ))
\end{equation}
with $1/r_1+1/r_2=1.$ The condition (\ref{remm4}) shows explicitly the
mutual compensation between the regularity of the
direction field and the integrability of the vorticity
magnitude in order to control regularity/singularity of solutions of the
Euler equations.\\
\ \\
Next we review the results on non-blow-up conditions due to Deng,
Hou and Yu \cite{den1,den2}.
We consider a time $t$ and a vortex line segment $L_t$ such that
the maximum of vorticity over the whole domain is comparable to the
maximum of vorticity over $L_t$, namely
$$ \Omega (t):=\sup_{x\in \Bbb R^3}|\o (x,t)|\sim \max_{x\in L_t} |\o (x,t)|.$$
We denote $ L(t):=\mbox{arc length of $L_t$}$; $\xi$, $\mathbf{n}$
and $\kappa$ denote
the unit tangent vector to $L_t$, the unit normal vector to $L_t$ and
the curvature of $L_t$ respectively. We also use the notations
\begin{eqnarray*}
U_\xi (t)&:=&\max_{x,y\in L_t} |(v\cdot \xi)(x,t)-(v\cdot \xi
)(y,t)|,\\
U_n (t)&:=&\max_{x\in L_t} |(v\cdot \mathbf{n})(x,t)|,\\
M(t)&:=&\max_{x\in L_t} |(\nabla \cdot \xi)(x,t)|,\\
K(t)&:=&\max_{x\in L_t} \kappa (x,t).
\end{eqnarray*}
We denote by $X(A, s,t)$ the image under the trajectory map at time
$t>s$ of the fluid particles occupying $A$ at time $s$. Then, the following is
proved in \cite{den2}.
\begin{theorem}
Assume that there is a family of vortex line segments $L_t$ and $T_0
\in [0, T^*)$, such that $X(L_{t_1} , t_1 ,t_2 ) \supseteq L_{t_2}$
for all $T_0 <t_1 <t_2 <T^*$. Also assume that $\Omega (t)$ is
monotonically increasing and $\max_{x\in L_t} |\o (x,t)| \geq c_0
\Omega (t)$ for some $c_0$ when $t$ is sufficiently close to $T^*$.
Furthermore, we assume there are constants $C_U ,C_0 ,c_L$ such
that
\begin{enumerate}
\item $[U_\xi (t)+U_n (t) K(t) L(t)] \leq C_U (T^* -t)^{-A}$ for
some constant $A\in (0,1)$,
\item $M(t)L(t), K(t)L(t) \leq C_0$,
\item $L(t) \geq c_L (T^* -t)^B$ for some constant $B\in (0, 1)$.
\end{enumerate}
Then there will be no blow-up in the 3D incompressible Euler flow
up to time $T^*$, as long as $B< 1-A$.
\end{theorem}
In the endpoint case of $B=1-A$ they deduced the following
theorem(\cite{den1}).
\begin{theorem}
Under the same assumption as in Theorem 2.10, there will be no
blow-up in the Euler system up to time $T^*$ in the case $B=1-A$,
as long as the following condition is satisfied:
$$ R^3 K< y_1 \left( R^{A-1} (1-A)^{1-A} /(2-A)^{2-A} \right),
$$
where $R=e^{C_0} /c_0, K:=\frac{C_U c_0}{c_L (1-A)}$, and $y_1 (m)$
denotes the smallest positive $y$ such that $m=y/(1+y)^{2-A}$.
\end{theorem}
We refer to \cite{den1,den2} for discussions on the various connections of Theorems 2.10 and 2.11
with numerical computations.
\section{Blow-up scenarios}
\setcounter{equation}{0}
\subsection{Vortex tube collapse}
We recall that a vortex line is an integral curve of the vorticity,
and a vortex tube is a tubular neighborhood in $\Bbb R^3$ foliated
by vortex lines. Numerical simulations (see e.g. \cite{cho}) show
that vortex tubes grow longer and thinner (stretching), and fold before
a singularity happens. We review here the result by C\'{o}rdoba and
Fefferman \cite{cor4} excluding a type of vortex tube collapse.
Let $Q=I_1 \times I_2 \times I_3\subset \Bbb R^3$ be a closed
rectangular box, and let $T>0$ be given. A regular tube is a
relatively open set $\Omega_t \subset Q$ parameterized by time $t\in
[0, T)$, having the form $\Omega_t =\{ (x_1, x_2 , x_3 )\in Q \, :\,
\theta (x_1,x_2,x_3, t)<0\}$ with $\theta \in C^1 (Q\times [0, T))$,
and satisfying the following properties:
$$ |\nabla _{x_1,x_2} \theta |\neq 0\, \mbox{whenever}\,
(x_1,x_2,x_3,t)\in Q\times [0, T)\;\mbox{and}\; \theta (x_1,x_2,x_3,t) =0;
$$
$$\Omega_t (x_3):=\{ (x_1,x_2 )\in I_1 \times I_2 \, :\,
(x_1,x_2,x_3 )\in \Omega _t \}\, \mbox{is non-empty},
$$
for all $x_3 \in I_3, t\in [0, T)$;
$$ \mbox{closure} (\Omega_t (x_3 )) \subset \mbox{interior}
(I_1\times I_2 )
$$
for all $x_3 \in I_3, t\in [0, T)$.
Let $u(x,t)=(u_1 (x,t),u_2 (x,t),u_3 (x,t))$ be a $C^1$ velocity
field defined on $Q\times [0,T)$. We say that the regular tube
$\Omega_t $ moves with the velocity field $u$, if we have
$$
\left(\frac{\partial}{\partial t} +u\cdot \nabla_x \right)\theta =0
\, \mbox{whenever} \, (x,t)\in Q\times [0,T), \theta (x,t)=0.
$$
By the Helmholtz theorem we know that a vortex tube arising from a
3D Euler solution moves with the fluid velocity. The following
theorem proved by Cordoba and Fefferman(\cite{cor4}) says for the 3D
Euler equations that a vortex tube cannot reach zero thickness in
finite time, unless it bends and twists so violently that no part of
it forms a regular tube.
\begin{theorem}
Let $\Omega_t \subset Q (t\in [0,T))$ be a regular tube that moves
with $C^1$, divergence free velocity field $u(x,t)$.
$$ \mbox{If}\quad \int_0 ^T \sup_{x\in Q} |u(x,t)|dt <\infty,
\quad \mbox{then}\quad \lim\inf_{t\to T_-} \mbox{Vol}(\Omega_t )>0.
$$
\end{theorem}
\subsection{Squirt singularity}
The theorem excluding regular vortex
tube collapse was generalized by C\'{o}rdoba, Fefferman and de la
Llave (\cite{cor7}), which we review here. We first recall their
definition of squirt singularities. Let $\Omega \subset \Bbb R^n$ be
an open set. We denote $X_t (a)=X(a,t)$, which is a particle
trajectory generated by a $C^1$ vector field $u:\Omega \times
[0,T)\to \Bbb R^n$ such that div $u=0$. We also set $X_{t,s} (a)$ as
the position at time $t$ of the trajectory which at time $s$ is at
$a$. We have the obvious relations
$$ X_t (a)=X_{t,0} (a), \quad X_{t,s} = X_t\circ X_s^{-1}, \quad
X_{t,s}\circ X_{s, s_1} =X_{t,s_1}.
$$
For $\mathcal{S} \subset \Omega$, we denote by
$$
X_{t,s}^\Omega \mathcal{S} =\{ x\in \Omega \, |\, x=X_t (a), \, a
\in \mathcal{S},\, X_\tau (a)\in \Omega ,\, 0\leq \tau\leq t\}.
$$
In other words, $X_{t,s}^\Omega \mathcal{S}$ is the evolution of the
set $\mathcal{S}$, starting at time $s$, after we eliminate the
trajectories which step out of $\Omega$ at some time. By the
incompressibility condition on $u$, we have
that Vol($X_{t,s} \mathcal{S})$ is independent of $t$, while the
function $t\mapsto$Vol($X^{\Omega}_{t,s} \mathcal{S})$ is nonincreasing.
\begin{definition}
Let $\Omega_-,\Omega_+$ be open and bounded sets with
$\overline{\Omega_-} \subset \Omega_+$, so that dist $(\Omega_- ,
\Bbb R^n -\Omega_+ )\geq r >0$. We say that $u$ experiences a
squirt singularity in $\Omega_-$, at time $T>0$, when for every
$0\leq s <T$, we can find a set $\mathcal{S}_s \subset \Omega_+$
such that
\begin{itemize}
\item[(i)] $\mathcal{S}_s \cap \Omega_-$ has positive measure, $0\leq
s<T$,
\item[(ii)] $\lim_{t\to T}\mathrm{Vol}(X_{t,s}^{\Omega^+}\mathcal{S}_s
)=0$.
\end{itemize}
\end{definition}
The physical intuition behind the above definition is that there
is a region of positive volume so that all the fluid occupying it
gets ejected from a slightly bigger region in finite time.
Besides the vortex tube collapse singularity introduced in the
previous subsection the potato chip singularity and the saddle
collapse singularity, which will be defined below, are also special
examples of the squirt singularity, connected with real fluid
mechanics phenomena.
\begin{definition}[potato chip singularity]
We say that $u$ experiences a potato chip singularity when we can
find continuous functions
$$f_\pm :\Bbb R^{n-1}\times [0, T) \to \Bbb R
$$
such that
\begin{eqnarray*}
f_+ (x_1 ,\cdots ,x_{n-1}, t)\geq f_-(x_1 ,\cdots ,x_{n-1}, t),
\, t\in [0, T], x_1, \cdots , x_{n-1} \in B_{2r} (\Pi x^0 ),\\
f_+ (x_1 ,\cdots ,x_{n-1},0 )> f_-(x_1 ,\cdots ,x_{n-1},0),
\quad x_1, \cdots , x_{n-1} \in B_{r} (\Pi x^0 ),\\
\lim_{t\to T_-} [f_+ (x_1 ,\cdots ,x_{n-1}, t)- f_-(x_1 ,\cdots
,x_{n-1}, t)]=0 \quad \forall x_1, \cdots , x_{n-1} \in B_{2r} (\Pi x^0 )
\end{eqnarray*}
and such that the surfaces
$$ \Sigma _{\pm, t} =\{ x_n = f_\pm(x_1 ,\cdots ,x_{n-1}, t)
\}\subset \Omega$$
are transformed into each other by the flow
$$X (\Sigma _{\pm, 0}, t) \supset \Sigma _{\pm, t}. $$
In the above $\Pi $ is the projection onto the first $n-1$ coordinates.
\end{definition}
Previous to \cite{cor7}, potato chip singularities were considered
for 2D and 3D flows by C\'{o}rdoba and
Fefferman (\cite{cor6} and \cite{cor4a} respectively)
under the name of `sharp front'. In particular the exclusion of a sharp
front in the 2D quasi-geostrophic equation is proved in
\cite{cor6}. The following notion of saddle collapse singularity is relevant only
for 2D flows.
\begin{definition}[saddle collapse singularity] We consider a
foliation of a neighborhood of the origin (with coordinates $x_1,
x_2$) whose leaves are given by equations of the form
$$
\rho:= (y_1 \beta(t) +y_2 )\cdot (y_1 \delta (t) +y_2 )=\mathrm{const}
$$
and $(y_1 ,y_2 )=F_t (x_1 ,x_2 )$, where $\beta , \delta : [0, T)
\to \Bbb R^+$ are $C^1$ functions, $F$ is a $C^2$ function of
$x$ and $t$, and, for each fixed $t$, $F_t$ is an orientation preserving
diffeomorphism.
We say that the foliation experiences a saddle collapse when
$$\lim\inf_{t\to T} \beta (t) +\delta (t) =0.
$$
If the leaves of the foliation are transported by a vector field
$u$, we say that the vector field $u$ experiences a saddle
collapse.
\end{definition}
The exclusion of the saddle collapse singularity in the 2D
quasi-geostrophic equation (see Section 4.3 below) was proved by
C\'{o}rdoba in \cite{cor2}. The following `unified' theorem is
proved in \cite{cor7}.
\begin{theorem}
If $u$ has a squirt singularity at $T$, then $\int_s ^T \sup_x
|u(x,t)|dt =\infty$ for all $ s\in (0, T)$. Moreover, if $u$ has a
potato chip singularity, then
$$
\int_s ^T \sup_x |\Pi u (x,t)|dt =\infty.
$$
\end{theorem}
\subsection{Self-similar blow-up}
In this subsection we review the scenario of self-similar
singularity studied in \cite{cha13}. We first observe that the Euler
system (E) has scaling property that
if $(v, p)$ is a
solution of the system (E), then for any $\lambda >0$ and $\alpha
\in \Bbb R $ the functions
\begin{equation}
\label{self}
v^{\lambda, \alpha}(x,t)=\lambda ^\alpha v (\lambda x, \l^{\a +1}
t),\quad p^{\l, \a}(x,t)=\l^{2\a}p(\l x, \l^{\a+1} t )
\end{equation}
are also solutions of (E) with the initial data
$ v^{\lambda, \alpha}_0(x)=\lambda ^\alpha v_0
(\lambda x)$.
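That these are again solutions is checked by direct substitution (we record
the computation, since it fixes the exponents): each of the three terms
scales with the common factor $\l^{2\a +1}$, namely
$$
\frac{\partial v^{\l ,\a}}{\partial t} +(v^{\l ,\a}\cdot \nabla ) v^{\l ,\a}
+\nabla p^{\l ,\a}
=\l^{2\a +1}\left[ \frac{\partial v}{\partial t} +(v\cdot \nabla )v
+\nabla p\right](\l x, \l^{\a +1} t)=0 .
$$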
In view of the scaling
properties in (\ref{self}), the self-similar blowing up
solution $v(x,t)$ of (E), if it exists, should be of the form,
\begin{eqnarray}
\label{vel}
v(x, t)&=&\frac{1}{(T_*-t)^{\frac{\a}{\a+1}}}
V\left(\frac{x}{(T_*-t)^{\frac{1}{\a+1}}}\right)
\end{eqnarray}
for $\a \neq -1$ and $t$ sufficiently
close to $T_*$.
If we assume that initial vorticity $\omega_0$ has compact support,
then the nonexistence of self-similar blow-up of the form given by
(\ref{vel}) is rather immediate from the well-known formula, $
\omega (X(a,t),t)=\nabla_a X(a,t) \omega_0 (a)$. We want to
generalize this to a nontrivial case. Substituting (\ref{vel}) into
(E), we find that $V$ should be
a solution of the system
\[
(SE)
\left\{
\begin{aligned}
\frac{\a}{\a +1} V+&\frac{1}{\a+1} (x\cdot\nabla ) V
+(V\cdot \nabla )V =-\nabla P,
\\
\textrm{div }\, V& =0
\end{aligned}
\right.
\]
for some scalar function $P$,
which could be regarded as the Euler version of the Leray
equations introduced in \cite{ler}. The question of existence of
a nontrivial solution to (SE) is equivalent to that of existence of
a nontrivial self-similar finite time blowing up
solution to the Euler system of the form (\ref{vel}).
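For the reader's convenience we display the elementary computation behind
this reduction. Writing $y=x/(T_*-t)^{\frac{1}{\a+1}}$ and substituting
(\ref{vel}) into (E), we find
$$
\frac{\partial v}{\partial t}=\frac{1}{(T_*-t)^{\frac{2\a +1}{\a+1}}}
\left[\frac{\a}{\a+1} V(y) +\frac{1}{\a +1}(y\cdot \nabla )V(y)\right],
\qquad
(v\cdot \nabla )v =\frac{1}{(T_*-t)^{\frac{2\a +1}{\a+1}}} (V\cdot \nabla )V(y),
$$
so that all the terms balance with a pressure of the form
$p(x,t)=(T_*-t)^{-\frac{2\a}{\a+1}} P(y)$, and (SE) follows.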
A similar question for the
3D Navier-Stokes equations was raised by Leray in \cite{ler}, and
answered negatively by Ne\v{c}as, R\r{u}\v{z}i\v{c}ka and \v{S}ver\'{a}k (\cite{nec}); the result was
refined later by Tsai in \cite{tsa} (see also \cite{mil} for a generalization).
Combining the energy conservation with a simple scaling argument,
the author of this article showed that if there exists a nontrivial self-similar finite time
blowing up solution, then its helicity should be zero (\cite{cha4}).
Mainly due to the lack of the Laplacian term
on the right hand side of the first equation of (SE), we cannot
expect a maximum principle, which was crucial in the works in
\cite{nec} and \cite{tsa} for the 3D Navier-Stokes equations.
Using a completely different argument from those previous ones,
in \cite{cha13} it is proved that there cannot be a self-similar blowing up solution to (E) of the form
(\ref{vel}), if the vorticity decays sufficiently fast near infinity.
Given a smooth velocity field $v(x,t)$, we consider
the particle trajectory mapping $a\mapsto X(a,t)$ generated by $v$.
The inverse $A(x,t):=X^{-1} (x ,t)$ is called the back to label
map, which satisfies $
A (X(a,t),t)=a,$ and $ X(A
(x,t),t)=x$.
The existence of the back-to-label
map $ A(\cdot, t)$ for our smooth velocity $v(x,t)$ for
$t\in (0, T_*)$, is guaranteed
if we assume a uniform decay of $v(x,t)$ near infinity, independent of the
decay rate (see \cite{con5}). The following is proved in
\cite{cha13}.
\begin{theorem}
There exists no finite time blowing up self-similar solution
$v(x,t)$ to the 3D Euler equations of the form
(\ref{vel}) for $t \in (0 ,T_*)$ with $\a \neq -1$, if $v$ and $V$
satisfy the following conditions:
\begin{itemize}
\item[(i)]For all $t\in (0, T_*)$ the particle trajectory mapping $X(\cdot ,t)$ generated
by the classical solution $v\in C([0,T_*);C^1(\Bbb R^3;\Bbb R^3 ))$ is a
$C^1$ diffeomorphism from $\Bbb R^3$ onto itself.
\item[(ii)] The vorticity satisfies $\Omega=$curl $V\neq 0$, and there exists $p_1 >0$ such that
$\Omega \in L^p (\Bbb R^3
)$ for all $p\in (0, p_1)$.
\end{itemize}
\end{theorem}
We note that the condition (ii) is satisfied, for example, if $
\Omega\in L^{1}_{loc}(\Bbb R^3)$ and there exist constants $R, K$
and $\varepsilon_1, \varepsilon_2
>0$ such that $ |\Omega(x)|\leq K e^{-\varepsilon_1 |x|^{\varepsilon_2}}$ for $|x|>R$;
in that case we have
$\Omega\in L^p (\Bbb R^3;\Bbb R^3)$ for all $p\in (0, 1)$.
Indeed,
for all $p\in (0, 1)$, we have
\begin{eqnarray*}
\int_{\Bbb R^3} |\Omega(x)|^p dx
&=& \int_{|x|\leq R} |\Omega(x)|^p dx +\int_{|x|>R} |\Omega (x)|^p \,dx\\
&\leq&|B_R |^{1-p}\left(\int_{|x|\leq R} |\Omega(x)| dx\right)^{p} +
K^p \int_{\Bbb R^3} e^{-p\varepsilon_1|x|^{\varepsilon_2}}dx <\infty ,
\end{eqnarray*}
where $|B_R|$ is the volume of the ball $B_R$ of radius $R$.\\
In the zero vorticity case $\Omega =0$, from div $V=0$ and curl
$V=0$,
we have $V=\nabla h$, where $h(x)$ is a harmonic function in $\Bbb R^3$. Hence,
we have an easy example of self-similar blow-up,
$$v(x,t)=\frac{1}{(T_*-t)^{\frac{\a}{\a+1}}}\nabla
h\left(\frac{x}{(T_*-t)^{\frac{1}{\a+1}}}\right),$$
in $\Bbb R^3$, which is also the case for the 3D Navier-Stokes with $\alpha=1$.
We do not consider this case in the theorem.\\
The above theorem is actually a corollary of the following more
general theorem.
\begin{theorem}
Let $v\in C([0,T);C^1 (\Bbb R^3 ;\Bbb R^3))$ be a classical solution
to the 3D Euler equations generating the particle trajectory mapping $X(\cdot ,t)$ which is a
$C^1$ diffeomorphism from $\Bbb R^3$
onto itself for all $t\in (0,T)$. Suppose the vorticity
of the solution has the representation
\begin{equation}\label{thm13}
\o (x,t) = \Psi(t)\Omega (\Phi(t)x) \qquad \forall t\in [0, T)
\end{equation}
where $\Psi(\cdot)\in C([0, T );(0, \infty))$,
$\Phi(\cdot)\in C([0, T );\Bbb R^{3\times 3})$
with $\mathrm{det}(\Phi(t))\neq 0$ on $[0, T)$; $\Omega = \mathrm{curl}\, V$ for some
$V$, and there exists $p_1 >0$ such that
$|\Omega |$ belongs to $ L^p
(\Bbb R^3)$ for all $p\in (0, p_1 )$. Then,
necessarily
either $\mathrm{det}(\Phi(t))\equiv \mathrm{det}(\Phi(0))$ on $[0, T)$, or
$\Omega=0$.
\end{theorem}
For the detailed proofs of Theorems 3.3 and 3.4 we refer to \cite{cha13}.
\subsection{Asymptotic self-similar blow-up}
In this subsection we consider the possibility of a more refined
scenario of self-similar singularity than in the previous
subsection, called the asymptotic self-similar singularity. This
means that the local in time smooth solution evolves into a
self-similar profile as the possible singularity time is approached.
A similar notion was considered previously by Giga and Kohn in
their study of the semilinear heat equation (\cite{gig}). Their sense of
convergence of the solution to the self-similar profile is the
pointwise sense with a time difference weight to make it scaling
invariant, and cannot be applied directly to the case of the Euler system. It
is found in \cite{cha14} that if we make the sense of convergence
strong enough, then we can apply the notion of asymptotic
self-similar singularity to the Euler and the Navier-Stokes
equations. The following theorem is proved in \cite{cha14}.
\begin{theorem}
Let $v\in C([0, T);B^{\frac{3}{p}+1}_{p, 1}(\Bbb R^3))$ be a
classical solution to the 3D Euler equations.
Suppose there exist $p_1>0$, $\a > -1$, $\bar{V}\in C^1 (\Bbb R^3)$
with $ \lim_{R\to \infty} \sup_{|x|=R} |\bar{V}(x)|=0$
such
that $\bar{\Omega}=$curl $\bar{V}\in L^q(\Bbb R^3)$ for all $q\in
(0,p_1)$, and the following convergence holds true:
$$
\lim_{t\nearrow T}
(T-t)^{\frac{\a-3}{\a+1}}\left\|v(\cdot, t)
-\frac{1}{(T-t)^{\frac{\a}{\a+1}}} \bar{V}
\left(\frac{\cdot}{(T-t)^{\frac{1}{\a+1}}} \right)\right\|_{L^1 }
=0,
$$
and
$$
\sup_{t\in (0,T)}
(T-t)\left\|\o(\cdot, t) -\frac{1}{T-t} \bar{\O}
\left(\frac{\cdot}{(T-t)^{\frac{1}{\a+1}}}
\right)\right\|_{\dot{B}^0_{\infty, 1} }<\infty.
$$
Then, $\bar{\O}=0$, and $v(x,t)$ can be extended to a solution of the 3D Euler system in $[0, T+\delta]\times \Bbb R^3$,
and belongs to $C([0, T+\delta]; B^{\frac{3}{p}+1}_{p, 1}(\Bbb R^3))$ for some $\delta
>0$.
\end{theorem}
We note that the above theorem still
does not exclude the possibility that the sense of vorticity convergence to the
asymptotically self-similar singularity is weaker than the $L^\infty$
sense. Namely, a self-similar vorticity profile could be approached
from a local classical solution in the pointwise sense in space,
or in the $L^p(\Bbb R^3)$ sense for
some $p$ with $1\leq p<\infty$. In \cite{cha14} we also proved
nonexistence of
asymptotic self-similar solutions to the 3D Navier-Stokes equations with an appropriate
change of the functional setting (see also \cite{hou2}
for related results).\\
\ \\
The proof of the above theorem follows without difficulty from the
following blow-up rate estimate(\cite{cha14}), which is interesting
in itself.
\begin{theorem}
Let $p\in [1, \infty)$ and $v\in C([0, T);B^{\frac{3}{p}+1}_{p,
1}(\Bbb R^3))$ be a classical solution to the 3D Euler equations.
There exists an absolute constant $\eta >0$ such that if
\begin{equation}\label{th11}
\inf_{0\leq t<T} (T-t) \|\o (t)\|_{\dot{B}^0_{\infty, 1} } <\eta
,
\end{equation}
then $v(x,t)$ can be extended to a solution of the 3D Euler system in $[0, T+\delta]\times \Bbb R^3$,
and belongs to $C([0, T+\delta]; B^{\frac{3}{p}+1}_{p,
1}(\Bbb R^3))$ for some $\delta
>0$.
\end{theorem}
We note that the proof of the local existence for $v_0 \in
B^{\frac{3}{p}+1}_{p, 1}(\Bbb R^3)$ is
done in \cite{cha1, cha6} (see also \cite{vis1}).
The above theorem implies that
if $T_*$ is the first time of singularity, then we have
the lower estimate of the blow-up rate,
\begin{equation}\label{blow}
\|\o (t)\|_{\dot{B}^0_{\infty, 1} } \geq \frac{C}{T_*-t}\quad
\forall t\in [0, T_*)
\end{equation}
for an absolute constant $C$. The estimate (\ref{blow}) was actually
derived previously by a different argument in \cite{cha4}.
We observe that (\ref{blow}) is consistent both with the
BKM criterion (\cite{bea}) and
with Kerr's numerical calculation in \cite{ker1}.\\
The above continuation principle
for local solutions in $B^{\frac{3}{p}+1}_{p,
1}(\Bbb R^3)$ has obvious applications to the solutions belonging
to more conventional function spaces, due to the embeddings,
$$ H^m (\Bbb R^3 )\hookrightarrow C^{1, \gamma} (\Bbb R^3)\hookrightarrow
B^{\frac{3}{p}+1}_{p,
1}(\Bbb R^3) $$
for $m>5/2$ and $\gamma = m-5/2$.
For example the local solution $v\in C([0, T); H^m (\Bbb R^3 ))$ can
be continued to be $v\in C([0, T+\delta]; H^m (\Bbb R^3 ))$ for
some $\delta>0$, if
(\ref{th11}) is satisfied. Regarding another implication of the above theorem for the self-similar
blowing up solutions of the 3D Euler equations, we have the following
corollary (see \cite{cha14} for the proof).
\begin{cor}
Let $v\in C([0, T_*);B^{\frac{3}{p}+1}_{p, 1}(\Bbb R^3))$ be a
classical solution to the 3D Euler equations. There exists $\eta
>0$ such that if we have representation for the
velocity by (\ref{vel}), and $\bar{\O}$=curl $\bar{V}$ satisfies
$\|\bar{\O}\|_{\dot{B}^0_{\infty ,1}} <\eta$, then $\bar{\Omega}=0$, and
$v(x,t)$ can be extended to a solution of the 3D Euler system in $[0, T_*+\delta]\times \Bbb R^3$,
and belongs to $C([0, T_*+\delta]; B^{\frac{3}{p}+1}_{p,
1}(\Bbb R^3))$ for some $\delta
>0$.
\end{cor}
\section{Model problems}
\setcounter{equation}{0}
Since the blow-up problem of the 3D Euler equations looks beyond the
capability of current analysis, various simplified model
equations have been proposed to gain insight into the original problem.
In this section we review some of them.
Besides those results presented in the following subsections there
are also studies on other model problems.
In \cite{din} Dinaburg, Posvyanskii and Sinai analyzed a quasi-linear
approximation of the infinite system of ODEs arising when we write the
Euler equations in Fourier modes. Friedlander and
Pavlovi\'{c} (\cite{fri2}) considered a vector model, and
Katz and Pavlovi\'{c} (\cite{katz}) studied a dyadic model, both of
which result from the representation of the Euler equations
in the wave number space.
model equations in \cite{oka}, and a `dual' system to the Euler
equations was considered in \cite{cha6a}.
\subsection{Distortions of the Euler equations}
Taking the trace of the matrix equation (\ref{basfor1}) for $V$, we
obtain
$\Delta p=-tr V^2$, and hence the hessian of the pressure is given
by
$$ P_{ij}=- \partial_i \partial _j (\Delta )^{-1} tr V^2 =-R_i R_j tr
V^2,$$
where $R_j$ denotes the Riesz transform (see Section 1).
Hence we can rewrite the Euler equations as
\begin{equation}\label{matrixeuler}
\frac{D V}{Dt}=-V^2 -R[tr V^2], \quad R[\cdot]=(R_iR_j[\cdot]).
\end{equation}
In \cite{con1} Constantin studied a distorted version of the above
system,
\begin{equation}\label{coneq}
\frac{\partial V}{\partial t}
=-V^2 -R[tr V^2], \quad R[\cdot]=(R_iR_j[\cdot]),
\end{equation}
where the convection term of the original Euler equations is
deleted, and he showed that solutions indeed blow up in finite time.
Note that the incompressibility condition, tr$V=0$, is
respected in the system (\ref{coneq}).
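Indeed, since $\sum_{i=1}^n R_i ^2 =-I$, taking the trace of (\ref{coneq})
gives
$$
\frac{\partial}{\partial t}\, tr V = -tr V^2 -\sum_{i=1}^n R_iR_i [tr V^2 ]
= -tr V^2 +tr V^2 =0 ,
$$
so that the constraint tr$V=0$ is preserved by the evolution.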
Thus we find that the convection term should have a significant role
in the study of the blow-up problem of the original Euler
equations.\\
On the other hand, in \cite{liu} Liu and Tadmor studied another
distorted version of (\ref{matrixeuler}), called the restricted Euler
equations,
\begin{equation}\label{liueq}
\frac{D V}{Dt}=-V^2 +\frac{1}{n} (tr V^2 ) I.
\end{equation}
We observe that in (\ref{liueq}) the convection term is kept, but
the non-local operator $R_iR_j(\cdot)$ is changed into the
local one $-\frac{1}{n}\delta_{ij}$, where the numerical factor $-\frac{1}{n}$ is chosen to keep the
incompressibility condition.
Analyzing the dynamics of the eigenvalues of the matrix $V$, they
showed that the system (\ref{liueq}) actually blows up in finite
time (\cite{liu}).
\subsection{The Constantin-Lax-Majda equation}
In 1985 Constantin, Lax and Majda constructed a one dimensional
model of the vorticity formulation of the 3D Euler equations, which
preserves the feature of nonlocality in the vortex stretching term.
Remarkably enough this model equation has an explicit solution for
general initial data (\cite{con11}). In this subsection we briefly
review their result.
We first observe from Section 1 that the vorticity formulation of the Euler equations is
$ \frac{D \omega }{Dt} = S \omega ,$ where
$S=\mathcal{P} (\omega )$ defines a singular integral operator of
the Calder\'{o}n-Zygmund type on $\o$.
Let us replace
${ \omega (x,t)\Rightarrow \theta (x,t)}$,
${ \frac{D}{Dt}\Rightarrow\frac{\partial}{\partial t}}$,
${\mathcal{P}(\cdot
)\Rightarrow H (\cdot )}$, where
$\theta (x,t)$ is a scalar
function on $\Bbb R \times \Bbb R_+$, and
$H(\cdot)$ is the \textit{Hilbert transform} defined by
$$Hf (x)=\frac{1}{\pi} p.v. \int_{\Bbb R}
\frac{f(y)}{x-y}dy.
$$
Then we obtain the following 1D scalar equation from the 3D Euler
equations:
$$ (CLM): \frac{\partial \theta}{\partial t}=\theta H\theta .$$
This model preserves the feature of {nonlocality} of the Euler
system (E), in contrast to the more traditional one dimensional
model, the inviscid Burgers equation.
We recall the identities for the Hilbert transform:
\begin{equation}\label{hilbert}
H(Hf)=-f,\quad
H(fHg+gHf)=(Hf)(Hg)-fg,
\end{equation}
which imply $ H(\theta H\theta )=\frac12 [ (H\theta)^2 -\theta^2
]$.
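These identities are also easy to check numerically; the following minimal
sketch (our own illustration, not from \cite{con11}; the periodic Hilbert
transform is realized as the Fourier multiplier $-i\,\mathrm{sgn}(k)$, and
the test function is an arbitrarily chosen mean-zero one) verifies the
second identity with $f=g=\theta$ to machine precision.
\begin{verbatim}
import numpy as np

def hilbert(f):
    # periodic Hilbert transform via the Fourier multiplier -i*sgn(k)
    k = np.fft.fftfreq(f.size)
    return np.real(np.fft.ifft(-1j*np.sign(k)*np.fft.fft(f)))

x = np.linspace(0.0, 2.0*np.pi, 512, endpoint=False)
th = np.cos(x) + 0.3*np.sin(2.0*x)      # mean-zero test function
lhs = hilbert(th*hilbert(th))
rhs = 0.5*(hilbert(th)**2 - th**2)
print(np.abs(lhs - rhs).max())          # ~ 1e-16
\end{verbatim}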
Applying $H$ to both sides of (CLM), and using the formulas
(\ref{hilbert}), we
obtain
$$
(CLM)^* : (H\theta )_{t}+\frac{1}{2}((H\theta )^{2}-(\theta
)^{2})=0.
$$
We introduce the complex valued function
$$ { z(x,t)=H\theta
(x,t)+i\theta (x,t)}.
$$
Then, (CLM) and (CLM)$^*$ are the imaginary and the real parts
of the complex Riccati equation
$$
z_{t} (x,t) =\frac12 z^2 (x,t) .
$$
The explicit solution to the complex equation is
$$
z(x,t)=\frac{z_0}{1-\frac12 t z_0 (x)} .
$$
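This can be checked at once: along each fixed $x$,
$$
\frac{d}{dt}\left(\frac{1}{z}\right)=-\frac{z_t}{z^2}=-\frac12 ,
\qquad\mbox{so that}\qquad \frac{1}{z(x,t)}=\frac{1}{z_0 (x)}-\frac{t}{2} .
$$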
Taking the imaginary part, we obtain
$$
\theta (x,t) = \frac{4\theta_0 (x)}{(2-tH\theta_0 (x))^2 +t^2
\theta_0 ^2(x)}.
$$
The finite time blow-up occurs if and only if
$$
{Z=\{ x \, | \, \theta_0 (x)=0 \, \mbox{and}\,
H\theta_0(x) >0 \}\neq \varnothing }.
$$
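Indeed, for $x\in Z$ the denominator $(2-tH\theta_0 (x))^2 +t^2\theta_0 ^2 (x)$
vanishes at $t=2/H\theta_0 (x)$, so the blow-up time is
$T_* =2/\sup_{x\in Z} H\theta_0 (x)$. The explicit formula also invites a
quick numerical illustration (our own minimal sketch, not from \cite{con11}):
for $\theta_0 (x)=\cos x$ we have $H\theta_0 (x)=\sin x$, $Z=\{\pi /2\}$ and
$T_*=2$, and one observes the growth $\sup_x |\theta (x,t)|\sim (2-t)^{-1}$
as $t\nearrow 2$.
\begin{verbatim}
import numpy as np

# evaluate the explicit CLM solution for theta_0(x) = cos(x),
# whose Hilbert transform is H theta_0(x) = sin(x)
x = np.linspace(-np.pi, np.pi, 200001)
th0, Hth0 = np.cos(x), np.sin(x)
for t in [1.0, 1.5, 1.9, 1.99, 1.999]:
    th = 4.0*th0/((2.0 - t*Hth0)**2 + (t*th0)**2)
    print(t, np.abs(th).max())   # grows like 1/(2 - t)
\end{verbatim}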
In \cite{scho} Schochet found that even if we add a viscosity term to
(CLM) there is still finite time blow-up. See also \cite{sak1, sak2} for
studies of other variations of (CLM).
\subsection{The 2D quasi-geostrophic equation and its 1D model}
The 2D quasi-geostrophic
equation (QG) models the dynamics of the mixture of cold and hot air
and the fronts between them:
$$
(QG) \left\{\aligned
& \theta _{t}+(v\cdot \nabla )\theta =0, \\
&v=-\nabla ^{\bot }(-\Delta )^{-\frac{1}{2}}\theta ,\\
&\theta (x,0)=\theta _{0}(x),
\endaligned
\right.
$$
where $\nabla ^{\bot }=(-\partial _{2},\partial _{1})$.
Here, $\theta (x,t)$ represents the temperature of the air at
$(x,t)\in \Bbb R^2 \times \Bbb R_+$. Besides its direct physical
significance (QG) has another important feature that it has very
similar structure to the 3D Euler equations. Indeed, applying $\nabla
^\bot$ to (QG), we obtain
$$
\left(\frac{\partial }{\partial t} +v\cdot \nabla\right){ \nabla
^\bot \theta}
=({\nabla ^\bot \theta }\cdot
\nabla ) v,
$$
where
$$
v(x,t)=\int_{\Bbb R^2} \frac{ \nabla ^\bot \theta (y,t)}{|x-y|} dy
.
$$
This is exactly the vorticity formulation of the 3D Euler
equations if we identify
$$ \nabla^\bot \theta \Longleftrightarrow \omega .
$$
After the first observation and pioneering analysis of these features by
Constantin, Majda and Tabak (\cite{con12}) there have been many
research papers devoted to the study of this equation (also of the
equation with the viscosity term, $-(-\Delta )^{\alpha}\theta$,
$\alpha
>0$, added) (\cite{con2,con6a, con12a, con12b, cor2, cor1, cor3, cor5,
cor6, cor7, cor8, wu1,wu2,wu3, cha2a, cha10, cha10a, cha24a, ohk,
din, kis, caff}). We briefly review some of them
here, concentrating on the inviscid equation (QG). \\
The local existence can be proved easily by standard methods. The BKM
type of blow-up criterion proved by Constantin, Majda and Tabak in
\cite{con12} is
\begin{equation}\label{cmtcri}
\lim\sup_{t\nearrow T_*} \|\theta(t)\|_{H^s }=\infty
\quad
\mbox{if and only if}\quad
\int_0 ^{T_*} \|{ \nabla ^\bot \theta (s)}\|_{L^\infty} ds =\infty.
\end{equation}
This criterion has been refined, using the Triebel-Lizorkin spaces,
in \cite{cha2a}.
The question of finite time singularity/global regularity is still
open. Similarly to the Euler equations case we also have the following
geometric type of condition for the regularity. We define the
direction field $\xi =\nabla^\bot \t /|\nabla^\bot \t|$ whenever $|\nabla^\bot \theta
(x,t)|\neq 0$.
\begin{definition}
We say that
a set $\Omega_0$ is smoothly directed if there exists $\rho >0$ such
that
$$
\sup_{q\in \Omega_0} \int_0 ^T |v(X(q,t),t)|^2 dt < \infty,
$$
$$
\sup_{q\in \Omega^*_0} \int_0 ^T \|\nabla \xi (\cdot ,
t)\|_{L^\infty (B_\rho (X(q,t)))} ^2 dt <\infty,
$$
where $B_\rho (X)$ is the ball of radius $\rho$ centered at $X$ and
$$ \Omega ^*_0 =\{ q \in \Omega_0 \, ;\, |\nabla \theta_0 (q)| \neq
0\}.$$
\end{definition}
We denote $ \mathfrak{O}_T(\Omega_0 )=\{ (x,t) \, |\, x\in
X(\Omega_0 ,t), 0\leq t\leq T\}$. Then, the following theorem is
proved (\cite{con12}).
\begin{theorem}
Assume that $\Omega_0$ is smoothly directed. Then
$$ \sup_{(x,t) \in \mathfrak{O}_T(\Omega_0 )} |\nabla \theta (x,t)|<\infty,$$
and no singularity occurs in $\mathfrak{O}_T(\Omega_0 )$.
\end{theorem}
Next we present an `interpolated' result between the criterion
(\ref{cmtcri}) and Theorem 4.1, obtained in \cite{cha10}. Let us
denote below
$$
D_0 =\{ x\in \Bbb R^2 \, |\, |\nabla^\bot\th_0 (x)| \neq 0\},\quad
D_t = X(D_0 ,t).
$$
The following theorem (\cite{cha10}) could also be considered as the
(QG) version of Theorem 2.9.
\begin{theorem}
Let $\th (x,t)$ be the local classical solution to $(QG)$ with
initial data $\theta_0 \in H^m (\Bbb R^2)$,
$m>3/2$, for which $D_0
\neq\emptyset$. Let $\xi (x,t)=\nabla^\bot\th (x,t)/ |\nabla^\bot\th (x,t)| $ be
the direction field defined for $x\in D_t $.
Then, the solution can be continued up to $T <\infty$ as the classical solution, namely
$\th (t)\in C([0,T];{H^m} (\Bbb R^2 ))$,
if there exist parameters
$p,p', q,q', s, r_1,r_2,r_3$ satisfying the following
conditions,
\begin{equation}\label{31}
\frac{1}{p} +\frac{1}{p'} =1, \qquad \frac{1}{q} +\frac{1}{q'} =1,
\end{equation}
and
\begin{equation}\label{32}
\frac{1}{r_1}+\frac{p'}{r_2}\left(1-\frac{sq'}{2} \right)+
\frac{1}{r_3} \left\{ 1-p' \left( 1-\frac{sq'}{2} \right)\right\}
=1
\end{equation}
with
\begin{equation}\label{33}
0<s<1, \qquad 1\leq \frac{2}{sq'} <p \leq \infty, \qquad \quad 1\leq
q\leq\infty,
\end{equation}
and
\begin{equation}\label{34}
r_1\in [1, \infty],\, r_2 \in \left[p'\left(1-\frac{sq'}{2}
\right),\infty\right],\, r_3 \in \left[1-p'\left(1-\frac{sq'}{2}
\right), \infty\right]
\end{equation}
such that the following hold:
\begin{equation}\label{35}
\int_0 ^{T} \|\xi (t )\|^{r_1}_{ \mathcal{\dot{F}}^s_{\infty,q} (D_t) } dt <\infty,
\end{equation}
and
\begin{equation}\label{36}
\int_0 ^{T}\|\nabla^\bot \th(t) \|^{r_2}_{L^{pq'}(D_t)} dt +
\int_0 ^{T}\|\nabla^\bot\th (t) \|^{r_3}_{L^{q'}(D_t)} dt <\infty.
\end{equation}
\end{theorem}
In order to compare this theorem with the Constantin-Majda-Tabak
criterion (\ref{cmtcri}), let
us consider the case of $p=\infty, q=1$. In this case the conditions
(\ref{35})-(\ref{36}) are satisfied if
\begin{equation}\label{37}
\xi (x,t )\in L^{r_1}(0,T; C^{ s} (\Bbb R^2 )),
\end{equation}
\begin{equation}\label{38}
|\nabla^\bot\th
(x,t)| \in L^{r_2} (0, T; L^{\infty} (\Bbb R^2 ))\cap L^{r_3} (0, T;
L^{\infty} (\Bbb R^2 )).
\end{equation}
with
$$
\frac{1}{r_1}+\frac{1}{r_2}\left(1-\frac{s}{2} \right)+
\frac{s}{2r_3}
=1.
$$
If we {\em formally} pass $s\to 0$, and choose
$r_1=\infty, r_2=r_3
=1$, we find that the conditions (\ref{37})-(\ref{38})
are satisfied if the
Constantin-Majda-Tabak condition in (\ref{cmtcri}) holds, since the condition
$$\xi (x,t )
\in L^{\infty}(0,T; C^{0} (\Bbb
R^2))\cong L^\infty ((0,T)\times \Bbb R^2 )$$
is automatically satisfied.
The other case of interest is $q'=2/s$, where
(\ref{35})-(\ref{36}) are satisfied if
\begin{equation}
\xi (x,t )\in L^{r_1}(0,T;
\mathcal{\dot{F}}^s_{\infty,\frac{2}{2-s}}(\Bbb R^2 )
),\quad
|\nabla^\bot\th
(x,t)| \in L^{r_2} (0, T; L^{\frac{2}{s}} (\Bbb R^2 ))
\end{equation}
with $1/r_1+1/r_2=1,$ which shows mutual compensation of the
regularity of the direction field $\xi (x,t)$ and the
integrability of the
magnitude of gradient $|\nabla^\bot \theta (x,t)|$ to obtain smoothness
of $\theta (x,t)$.\\
\ \\
There had been a conjectured scenario of singularity in (QG) in the
form of hyperbolic saddle collapse of level curves of $\theta
(x,t)$ (see Definition 3.3). This was excluded by C\'{o}rdoba in
1998 (\cite{cor2}, see also Section 3.2 of this article). Another
scenario of singularity, the sharp front singularity,
which is a two dimensional version of the
potato chip singularity (see Definition 3.2 with $n=2$),
was excluded by C\'{o}rdoba and Fefferman in \cite{cor6} under
the assumption of suitable velocity control (see Section 3.2).\\
\ \\
We can also consider the possibility of self-similar singularity for
(QG). We first note that (QG) has the scaling property that
if $\t$ is a
solution of the system, then for any $\lambda >0$ and $\alpha \in
\Bbb R $ the functions
\begin{equation}
\label{self3a}
\t^{\lambda, \alpha}(x,t)=\lambda ^\alpha \t(\lambda x, \l^{\a +1}
t)
\end{equation}
are also solutions of (QG) with the initial data
$ \t ^{\lambda, \alpha}_0(x)=\lambda ^\alpha \t_0
(\lambda x)$.
Hence, the self-similar blowing up solution should be of the
form,
\begin{equation}
\label{self3b}
\t(x, t)=\frac{1}{(T_*-t)^{\frac{\a}{\a+1}}}
\Theta\left(\frac{x}{(T_*-t)^{\frac{1}{\a+1}}}\right)
\end{equation}
for $t$ sufficiently close to $T_*$ and $\a \neq -1$.
The following theorem is proved in \cite{cha13}.
\begin{theorem}
Let $v $ generate a particle trajectory which is a $C^1$
diffeomorphism from $\Bbb R^2$ onto itself for all $t\in (0, T_*)$.
There exists no nontrivial solution $ \t $ to the system $(QG)$ of
the form (\ref{self3b}), if there exist $p_1, p_2 \in (0, \infty
]$, $p_1 < p_2$, such that $\Theta\in L^{p_1} (\Bbb R^2 ) \cap
L^{p_2} (\Bbb R^2 )$.
\end{theorem}
We note that the integrability condition on the self-similar
representation function $\Theta$ in the above theorem is `milder'
than in the case of the exclusion of self-similar solutions of the Euler equations in
Theorem 3.3, in the sense that the decay condition, imposed here on $\Theta$
(not on $\nabla^\bot \Theta$) near infinity, is weaker than the one on
$\Omega=$curl $V$.\\
\ \\
In the remaining part of
this subsection we discuss a 1D model of the 2D quasi-geostrophic
equation studied in \cite{cha19} (see \cite{mor} for related
results). The construction of the one dimensional model can be done
similarly to the Constantin-Lax-Majda equation introduced in section
4.2. We
first note that
$$
v=-R^{\bot }\theta =(-R_2 \theta , R_1 \theta ),
$$
where $R_{j}$, $j=1,2$, are the two dimensional Riesz transforms (see
Section 1).
We can rewrite the dynamical equation of (QG) as
$$
\theta _{t}+ \rm{div }\, [(R^{\bot }\theta )\theta ]=0,
$$
since $\rm{div} (R^{\bot }\theta )=0$.
To construct the one dimensional model we replace:
$${ R^{\bot
}(\cdot ) \Rightarrow H(\cdot ), \qquad \rm{div} (\cdot)
\Rightarrow
\partial_x}
$$
to obtain
$$\theta _{t}+(H(\theta )\theta )_{x}=0. $$
Defining the complex valued function
${ z(x,t)=H\theta (x,t)
+i\theta (x,t)}$, and following Constantin-Lax-Majda (\cite{con11}), we
find that our equation is the imaginary part of
$$z_t +zz_x =0, $$
which is the complex Burgers
equation. The method of characteristics does not work here.
Even so, we can show that finite time blow-up occurs
for generic initial data as follows.
\begin{theorem}
Given a periodic non-constant initial data $\theta_0\in
C^1([-\pi,\pi])$ such that $\int^{\pi}_{-\pi}\theta_0(x) dx =0$,
there is no $C^1([-\pi,\pi]\times[0,\infty))$ periodic solution to
the model equation.
\end{theorem}
For the proof we refer to \cite{cha19}. Here we give a brief outline of
the construction of an explicit blowing up solution. We
begin with the complex Burgers equation:
$$
{z_{t}+zz_{x}=0, \quad z=u+i\theta}
$$
with $u(x,t) \equiv H\theta(x,t)$. Expanding it into real and
imaginary parts, we obtain the system:
$$
\left\{ \aligned
u_{t}+uu_{x}-\theta \theta _{x} &=0, \\
\theta _{t}+u\theta _{x}+\theta u_{x} &=0
\endaligned
\right.
$$
In order to perform the \textit{hodograph transform} we consider ${
x(u,\theta )}$ and ${ t(u,\theta )}$.
We have
\begin{eqnarray*}
u_{x} &=&Jt_{\theta }\;, \quad
\theta _{x} =-Jt_{u}\;, \\
u_{t} &=&-Jx_{\theta }\;, \quad \theta _{t} =Jx_{u}\;,
\end{eqnarray*} where $J=(x_{u}t_{\theta }-x_{\theta
}t_{u})^{-1}$.
By direct substitution we obtain,
$$
\left\{ \aligned -x_{\theta }+ut_{\theta }+\theta t_{u} &=0\;, \\
x_{u}-ut_{u}+\theta t_{\theta } &=0\; \endaligned \right. $$
as long
as $J^{-1}\neq 0$. This system can be written more compactly in the
form:
\begin{eqnarray*}
-(x-tu)_{\theta }+(t\theta )_{u} &=&0\;, \\
(x-tu)_{u}+(t\theta )_{\theta } &=&0\;,
\end{eqnarray*}
which leads to the
following Cauchy-Riemann system,
$$
\xi _{u} =\eta _{\theta },\quad \xi _{\theta } =-\eta _{u},
$$
where we set
$
\eta (u,\theta ):=
x(u,\theta )-t(u,\theta )u, \quad \xi (u,\theta ):= t(u,\theta
)\theta . $
Hence, $f(z)=\xi (u,\theta )+i\eta (u,\theta )$ with
$z=u+i\theta $ is an analytic function. Choosing $f(z)=\log\, z, $
we find,
\begin{equation}\label{1drel}
t\theta =\log \sqrt{u^{2}+\theta ^{2}} , \quad
x-tu =\arctan \frac{\theta }{u},
\end{equation}
which corresponds to the initial data $z(x,0)=\cos x+i\sin x$.
The relation (\ref{1drel}) defines implicitly the real and imaginary
parts $(u(x,t),\theta (x,t))$ of the solution.
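As a quick consistency check (added here): setting $t=0$ in
(\ref{1drel}) gives $\log \sqrt{u^{2}+\theta ^{2}}=0$ and $x=\arctan
(\theta /u)$, i.e. the point $(u,\theta )$ lies on the unit circle with
polar angle $x$; hence $u(x,0)=\cos x$ and $\theta (x,0)=\sin x$, as
claimed.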
Removing $\theta$ from the system, we obtain
$$
tu\tan (x-tu)=\log\left| \frac{u}{\cos (x-tu)}\right|,
$$
which defines $u(x,t)$ implicitly.
By elementary computations we find that both $u_x$ and $\theta_x$ blow
up at $t=e^{-1}$.
\subsection{The 2D Boussinesq system and Moffatt's problem}
The 2D Boussinesq system for the incompressible fluid flows in $\Bbb
R^2$ is
$$
(B)_{\nu, \kappa }\left\{
\aligned
&\frac{\partial v}{\partial t} +(v\cdot \nabla )v =-\nabla p +\nu \Delta v+
\theta {e_2} ,
\\
&\frac{\partial \theta}{\partial t} +(v\cdot \nabla )\theta =\kappa \Delta \theta\\
&\textrm{div }\, v =0 ,\\
&v(x,0)=v_0 (x), \qquad \theta(x,0)=\theta_0 (x),
\endaligned
\right.
$$
where $v=(v_1, v_2)$, $v_j =v_j (x,t), j=1,2$, $(x,t)\in \Bbb R^2
\times (0, \infty)$, is the velocity vector field, $p=p(x,t)$ is the
scalar pressure, $\theta(x,t)$ is the scalar temperature, $\nu \geq
0$ is the viscosity, and $\kappa \geq 0$ is the thermal diffusivity,
and ${e_2} =(0,1)$. The Boussinesq system plays an important role in the
atmospheric sciences(see e.g. \cite{maj4}). The global in time
regularity of $(B)_{\nu, \kappa }$ with $ \nu >0$ and $\kappa
>0$ is well-known(see e.g. \cite{can}). On the other hand, the
regularity/singularity question for the fully inviscid case
$(B)_{0,0}$ is an outstanding open problem in mathematical fluid
mechanics. It is well-known that the inviscid 2D Boussinesq system has
exactly the same structure as the axisymmetric 3D Euler system {\em off}
the axis of symmetry(see e.g. \cite{maj2} for this observation).
This is why the inviscid 2D Boussinesq system can be considered as a
model equation for the 3D Euler system. The BKM type of blow-up criterion,
however, can be obtained without difficulty(see \cite{cha22, cha23,
e, tan} for various forms of blow-up criteria for the Boussinesq
system). We first consider the partially viscous cases, i.e. either
the zero diffusivity case, $\kappa =0$ and $\nu>0$, or the zero
viscosity case, $\kappa >0$ and $\nu=0$. Even the regularity
problem for the partial viscosity cases had been open until recently.
Actually, in an
article that appeared in 2001, H. K. Moffatt
raised the question of finite time singularity in the case $\kappa =0,
\nu>0$ and its possible development in the limit $\kappa \to 0$ as
one of the 21st century problems(see the Problem no. 3 in
\cite{mof}). For this problem Cordoba, Fefferman and De La
Llave(\cite{cor7}) proved that a special type of singularity, called
a `squirt singularity', is absent. In \cite{cha11} the author
considered both of the two partial viscosity cases, and proved the
global in time regularity for both of them. Furthermore it is
proved that as the diffusivity(viscosity) goes to zero the solutions of
$(B)_{\nu, \kappa}$ converge strongly to those of the zero
diffusivity(viscosity) equations\cite{cha11}. In particular the
Problem no. 3 in \cite{mof} is solved. More precise statements of
these results are given in the two theorems below.
\begin{theorem}
Let $\nu >0$ be fixed, and div $v_0=0$.
Let $m >2$ be an integer, and
$(v_0 , \theta_0 )\in H^m( \Bbb R^2
)$. Then, there
exists a unique solution $(v, \th )$ with $\theta \in C([0, \infty );H^m (\Bbb R^2
))$ and $ v\in C([0, \infty );H^m (\Bbb R^2 ))\cap
L^2(0,T;H^{m+1}(\Bbb R^2 ))$ of the system $(B)_{\nu ,0}$.
Moreover, for
each $s<m$, the solutions $(v, \theta )$ of $(B)_{\nu ,\kappa}$
converge to the corresponding solutions of $(B)_{\nu ,0}$ in $C
([0,T];H^{s} (\Bbb R^2 ))$ as $\kappa \to 0$.
\end{theorem}
We note that Hou and Li also obtained the existence part of the
above theorem independently in \cite{hou1}. The following theorem is
concerned with zero viscosity problem with fixed positive
diffusivity.
\begin{theorem}
Let $\kappa >0$ be fixed, and div $v_0=0$.
Let $m >2$ be an integer, and
$(v_0 , \theta_0 )\in H^m( \Bbb R^2
)$. Then, there
exists a unique solution $(v, \th)$ with $v\in C([0, \infty );H^m
(\Bbb R^2 ))$ and $ \theta\in C([0, \infty );H^m (\Bbb R^2 ))\cap
L^2(0,T;H^{m+1}(\Bbb R^2 ))$ of the system $(B)_{0 ,\kappa}$.
Moreover, for
each $s<m$, the solutions $(v, \theta )$ of $(B)_{\nu,\kappa}$
converge to the corresponding solutions of $(B)_{0,\kappa}$ in $C
([0,T];H^{s} (\Bbb R^2 ))$ as $\nu \to 0$.
\end{theorem}
The proof of the above two theorems in \cite{cha11} crucially uses
the Brezis-Wainger inequality in \cite{bre, eng}. Below we consider
the fully inviscid Boussinesq system, and show that there are no
self-similar singularities under a milder decay condition near
infinity than in the case of the 3D Euler system. The inviscid Boussinesq
system (B)$=(B)_{0 ,0}$ has the scaling property that
if $(v, \t , p)$ is a
solution of the system (B), then for any $\lambda >0$ and $\alpha
\in \Bbb R$ the functions
\begin{equation}
\label{self2a}
v^{\l, \a }(x,t)=\lambda ^\alpha v(\lambda x, \l^{\a +1}
t),\quad
\t ^{\l, \a } (x,t)= \l ^{2\a +1} \t (\lambda x, \l^{\a +1}
t),
\end{equation}
\begin{equation}
p^{\l, \a }(x,t)= \lambda ^{2\alpha}p (\lambda x,
\l^{\a +1}
t)
\end{equation}
are also solutions of (B) with the initial data
$$ v_0^{\l,\a}(x)=\lambda ^\alpha v_0
(\lambda x),\quad \t_0^{\l,\a}(x)=
\l ^{2\a +1} \t_0(\lambda x ).
$$
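In particular (an added computation, parallel to the (QG) case above):
writing $s=T_*-t$ for a blow-up instant $T_*$ and choosing $\l
=s^{-\frac{1}{\a +1}}$ in (\ref{self2a}), the amplitude of $v$ scales
like $s^{-\frac{\a}{\a +1}}$, while that of $\t$ scales like
$s^{-\frac{2\a +1}{\a +1}}$; this fixes the exponents in the ansatz
below.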
In view of the scaling
properties in (\ref{self2a}), the
self-similar blowing-up
solution $(v(x, t), \t (x,t))$ of (B) should be of the form,
\begin{eqnarray}
\label{self2b1}
v(x, t)&=&\frac{1}{(T_*-t)^{\frac{\a}{\a+1}}}
V\left(\frac{x}{(T_*-t)^{\frac{1}{\a+1}}}\right),\\
\label{self2b2}
\t (x,t)&=&\frac{1}{(T_*-t)^{\frac{2\a+1}{\a+1}}}
\Theta \left(\frac{x}{(T_*-t)^{\frac{1}{\a+1}}}\right),
\end{eqnarray}
where $\alpha
\neq -1 $. We have the following nonexistence result for solutions of this
type(see \cite{cha13}).
\begin{theorem}
Let $v$ generate a particle trajectory, which is a $C^1$
diffeomorphism from $\Bbb R^2$ onto itself for all $t\in (0, T_*)$.
There exists no nontrivial solution $(v, \t )$ of the system $(B)$
of the form (\ref{self2b1})-(\ref{self2b2}), if there exists $p_1,
p_2 \in (0, \infty]$, $p_1 < p_2$, such that $\Theta\in L^{p_1}
(\Bbb R^2 ) \cap L^{p_2} (\Bbb R^2 )$, and $V\in H^m (\Bbb R^2 )$,
$m>2$.
\end{theorem}
Recalling the fact that the system $(B)$ has the similar form as
the axisymmetric 3D Euler system, we can also deduce the
nonexistence of self-similar blowing up solution to the axisymmetric
3D Euler equations of the form (\ref{vel}), if $\Theta = rV^\theta$
satisfies the condition of Theorem 4.7, and curl $V=\Omega \in H^m
(\Bbb R^3 )$, $m>5/2$, where $r=\sqrt{x_1^2 +x_2 ^2}$, and
$V^\theta$ is the angular component of $V$. Note that in this case
we do not need to assume strong decay of $\Omega$ as in Theorem 3.3.
See \cite{cha13} for more details.
\subsection{Deformations of the Euler equations}
Let us consider the following system considered in \cite{cha16}.
\[
\mathrm{ (P_1)}
\left\{ \aligned
\frac{\partial u}{\partial t} +(u\cdot \nabla )u &=-\nabla q
+(1+\varepsilon )\|\nabla
u (t)\|_{L^\infty}u,
\\
\textrm{div }\, u =0 , &\\
u(x,0)=u_0 (x), &
\endaligned
\right.
\]
where $u=(u_1, \cdots, u_n )$, $u_j =u_j (x, t)$, $j=1, \cdots , n$,
is the unknown vector field, $q=q(x,t)$ is the scalar pressure, and $u_0 $
is the given initial vector field satisfying div $u_0 =0$. The
constant $\varepsilon >0$ is fixed. Below we denote curl
$u=\o$ for the `vorticity' associated with the `velocity' $u$.
We first note that the system of $(P_1)$ has the similar nonlocal structure
to the Euler system (E), which is implicit in the pressure term combined
with the divergence free condition.
Moreover it has the same scaling properties
as the original Euler system in $(E)$. Namely, if $u(x,t)$,
$q(x,t)$ is a pair of solutions to $(P_1)$ with initial data $u_0 (x)$,
then for any $\lambda >0$ and $\alpha \in \Bbb R$
$$ u^\lambda (x,t) =\lambda ^\alpha u (\lambda x, \lambda
^{\alpha +1} t ), \quad q^\lambda (x,t) =
\lambda ^{2\alpha} q (\lambda x, \lambda
^{\alpha +1} t )
$$
is also a pair of solutions to $(P_1)$ with initial data $u_0 ^\lambda
(x)=\lambda ^\alpha u_0 (x)$. As will be seen below, we can have the local well-posedness in the
Sobolev space, $H^m (\Bbb R^n ), m>n/2+2$, as well as the BKM type
of blow-up criterion for $(P_1)$, similarly to the Euler system
(E). Furthermore, we can prove actual finite time blow-up for
smooth initial data if $\o_0 \neq 0$. This is rather surprising from
the viewpoint that people working on the Euler system often
speculate that the divergence free condition might play a
`desingularizing' role, and might make the singularity disappear.
Obviously this is not the case for the system $(P_1)$.
Furthermore, there is a canonical
functional relation between the solution of $(P_1)$ and that of
the Euler system (E); hence the word `deformation'. Using this relation we can translate the
blow-up condition of the Euler system in terms of the solution of
$(P_1)$. The precise contents of the above results on $(P_1)$ are stated
in the
following theorem.
\begin{theorem}
Given $u_0 \in H^m (\Bbb R^n )$ with div $u_0 =0$, where $m>
\frac{n}{2}+2$, the following statements hold true for $(P_1)$.
\begin{itemize}
\item[(i)] There exists a local in time unique solution $u(t)\in
C([0,T]:H^m (\Bbb R^n ))$ with $T=T (\|u_0 \|_{H^m })$.
\item[(ii)]
The solution $u(x,t)$ blows-up at $t=t_*$, namely
$$
\lim\sup_{t\to t_*}\|u(t)\|_{H^m} =\infty\quad \mbox{ if and
only if}\quad \int_0 ^{t_*} \|\o (t)\|_{L^\infty} dt =\infty,
$$
where
$\o=$ curl $u$. Moreover, if the solution $u(x,t)$ blows up at
$t_*$, then necessarily,
$$
\int_0 ^{t_*} \exp\left[ (2+\varepsilon)\int_0 ^\tau \|\nabla
u (s)\|_{L^\infty}ds\right] d\tau =\infty
$$
for $n=3$, while
$$
\int_0 ^{t_*} \exp\left[ (1+\varepsilon)\int_0 ^\tau \|\nabla
u (s)\|_{L^\infty}ds\right] d\tau=\infty
$$
for $n=2$.
\item[(iii)] If $\|\o_0
\|_{L^\infty }\neq 0$, then there exists time $ t_* \leq
\frac{1}{\varepsilon \|\o_0 \|_{L^\infty }}$ such that the solution $u(x,t)$
of $(P_1)$ actually blows up at $t_*$. Moreover, at such $t_*$ we
have
$$
\int_0
^{t_*} \exp\left[ (1+\varepsilon)\int_0 ^\tau \|\nabla
u (s)\|_{L^\infty}ds\right] d\tau=\infty .
$$
\item[(iv)] The functional
relation between the solution $u(x,t)$ of $(P_1)$ and the solution
$v(x,t)$ of the Euler system (E) is given by
$$
u(x,t)=\varphi'(t) v(x, \varphi (t)),
$$
where
$$
\varphi (t)=\lambda\int_0 ^t\exp\left[ (1+\varepsilon )\int_0 ^\tau \|\nabla
u(s)\|_{L^\infty}ds\right]d\tau.
$$
(The relation between the two initial data is
$u_0 (x)=\lambda v_0 (x)$.)
\item[(v)]
The solution $v(x,t)$ of the Euler system (E) blows up at
$T_* <\infty$ if and only if for $t_*:=\varphi^{-1} (T_*) < \frac{1}{\varepsilon \|\o_0 \|_{L^\infty}}$
both of the followings hold true.
$$
\int_0 ^{t_*}\exp\left[ (1+\varepsilon )\int_0 ^\tau \|\nabla
u(s)\|_{L^\infty}ds\right]d\tau <\infty,
$$
and
$$
\int_0 ^{t_*}\exp\left[ (2+\varepsilon )\int_0 ^\tau \|\nabla
u(s)\|_{L^\infty}ds\right]d\tau =\infty.
$$
\end{itemize}
\end{theorem}
For the proof we refer to \cite{cha16}. In the above theorem the result
(ii) combined with (v) shows indirectly that there is no finite time
blow-up in 2D Euler equations, consistent with the well-known
result. Following the argument on p. 542 of \cite{cha4}, the
following fact can be
verified without difficulty:\\
We set
\begin{equation}\label{aoft}
a(t)=\exp \left( \int_0 ^t (1+\varepsilon) \|\nabla
u(s)\|_{L^\infty} ds \right).
\end{equation}
Then, the solution $(u, q)$ of
$(P_1)$ is given by
$$ u(x,t)=a(t) U(x,t), \quad q(x,t)=a(t)P(x,t), $$
where $(U, P)$ is a solution of the following system,
$$
(aE)\left\{ \aligned
&\frac{\partial U}{\partial t} +a(t) (U\cdot \nabla )U =-\nabla P,\\
&\qquad \mathrm{div}\, U=0,\\
&U(x,0)=U_0 (x)
\endaligned
\right.
$$
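This can be verified directly (a short added check): with $u=a(t)U$,
$q=a(t)P$, and $a'(t)=(1+\varepsilon )\|\nabla u(t)\|_{L^\infty}a(t)$,
which follows from (\ref{aoft}), we compute
$$
u_t +(u\cdot \nabla )u = a'U+aU_t +a^2 (U\cdot \nabla )U
=aU_t +a^2 (U\cdot \nabla )U+(1+\varepsilon )\|\nabla
u\|_{L^\infty}u,
$$
so the last term cancels the forcing term in $(P_1)$, and dividing the
remaining equation by $a(t)$ yields exactly $(aE)$.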
The system $(aE)$ was studied in \cite{cha4}, when $a(t)$ is a
prescribed function of $t$, in which case the proof of local
existence of $(aE)$ in \cite{cha4} is exactly the same as in the case of
$(E)$. In the current case, however, we need an extra proof of
local existence, as is done in the next section, since the function
$a(t)$ defined by (\ref{aoft}) depends on the solution $u(x,t)$
itself. As an application of the above theorem we can prove the following
lower estimate of the possible blow-up time(see \cite{cha16} for the
detailed proof).
\begin{theorem}
Let $p\in (1, \infty)$ be fixed.
Let $v(t)$ be the local classical solution of the 3D Euler
equations with initial data $v_0\in H^m (\Bbb R^3 )$, $m>7/2$. If
$T_*$ is the first blow-up time, then
\begin{equation}\label{loweres}
T_* -t\geq \frac{1}{ C_0\|\o (t) \|_{\dot{B}^{\frac{3}{p}}_{p,1}}},
\quad \forall t\in (0, T_*)
\end{equation}
where $C_0$ is the absolute constant in $(Q_2)$.
\end{theorem}
In \cite{cha4} the following form of lower estimate for the blow-up
rate is derived.
\begin{equation}\label{loweres1}
T_*-t \geq \frac{1}{ \tilde{C}_0\|\o (t)\|_{\dot{B}^{0}_{\infty,1}}},
\end{equation}
where $\tilde{C}_0$ is another absolute constant(see also the remarks after Theorem 3.6).
Although there
is the (continuous) embedding relation,
$\dot{B}^{\frac{3}{p}}_{p,1}(\Bbb R^3) \hookrightarrow
\dot{B}^{0}_{\infty,1}(\Bbb R^3)$ for $p\in [1, \infty]$(see Section
1), it is difficult to compare the two estimates (\ref{loweres}) and
(\ref{loweres1}) and decide which one is sharper, since the precise
evaluation of the optimal constants
$C_0, \tilde{C}_0$ in those inequalities could be a very difficult problem.\\
\ \\
Next, given $\varepsilon \geq 0$, we consider the following problem.
\[
\mathrm{ (P_2)}
\left\{ \aligned
\frac{\partial u}{\partial t} +(u\cdot \nabla )u &=-\nabla q
-(1+\varepsilon )\|\nabla
u (t)\|_{L^\infty}u ,
\\
\textrm{div }\, u =0 , &\\
u(x,0)=u_0 (x), &
\endaligned
\right.
\]
Although the system $(P_2)$ also has the same nonlocal structure and scaling
properties as the Euler system and $(P_1)$, we have the global regularity
result stated in the following theorem(see \cite{cha16} for the proof).
\begin{theorem}
Given $u_0 \in H^m (\Bbb R^n)$ with div $u_0 =0$, where $m>
\frac{n}{2}+2$, then the solution $u(x,t)$ of $(P_2)$ belongs to
$C([0, \infty ):H^m (\Bbb R^n ))$. Moreover, we have the following
decay estimate for the vorticity,
$$
\| \o (t)\|_{L^\infty }
\leq \frac{\|\o _0 \|_{L^\infty}}{1+\varepsilon\|\o_0 \|_{L^\infty} t} \qquad \forall
t\in [0, \infty ).
$$
\end{theorem}
We also note that the solution of the system $(P_2)$ has a functional
relation with that of the Euler system similar to the one given in (iv)
of the theorem above, as will be clear in the proof of Theorem 1.1 in the
next section.\\
\ \\
Next, given $\varepsilon >0 $, we consider the following perturbed systems
of (E).
\[
(E)^\varepsilon _\pm\left\{ \aligned
&\frac{\partial u}{\partial t} +(u\cdot \nabla )u =-\nabla q
\pm \varepsilon \|\nabla u\|_{L^\infty}^{1+\varepsilon}u,
\\
&\textrm{div}\, u =0 , \\
& u(x,0)=u_0 (x).
\endaligned
\right.
\]
If we set $\varepsilon=0$ in the above, then the system $(E)^0 _\pm$
becomes $(E)$. For $\varepsilon >0$ we have finite time blow-up for the
system $(E)^\varepsilon_+$ with certain initial data, while we have the
global regularity for $(E)^\varepsilon_-$ with all solenoidal initial data
in $H^m (\Bbb R^3 )$, $m>5/2$. More precisely we have the following
theorem(see \cite{cha17} for the proof).
\begin{theorem}
\begin{itemize}
\item[(i)] Given $\varepsilon>0$, suppose
$u_0 =u_0^\varepsilon\in H^m (\Bbb R^3 )$ with div $u_0=0$ satisfies
$\|\o_0 \|_{L^\infty} > (2/\varepsilon )^{1/\varepsilon}$, then
there exists $T_* $ such that the
solution $u(x,t)$ to $(E)^\varepsilon _+$ blows up at $T_*$, namely
$$
\lim\sup _{t\nearrow T_*} \|u(t)\|_{H^m} =\infty.
$$
\item[(ii)] Given $\varepsilon >0$ and $u_0 \in H^m (\Bbb R^3)$ with div $u_0=0$, there exists
unique global in time classical solution $u(t)\in
C([0, \infty ); H^m (\Bbb R^3 ))$ to
$(E)^\varepsilon _-$. Moreover, we have the global in time vorticity
estimate for the solution of $(E)^\varepsilon _-$,
$$
\|\o(t)\|_{L^\infty} \leq
\max\left\{ \|\o_0\|_{L^\infty}, \left(\frac{1}{\varepsilon}\right)^{\frac{1}{\varepsilon}}\right\}
\qquad \forall t\geq 0.
$$
\end{itemize}
\end{theorem}
The following theorem relates the finite time blow-up/global
regularity of the Euler system with those of the system
$(E)^\varepsilon_\pm$.
\begin{theorem}
Given $\varepsilon >0$, let $u^\varepsilon_\pm$ denote the solutions of
$(E)^\varepsilon _\pm$ respectively with the same initial data $u_0 \in
H^m (\Bbb R^3 )$, $m>5/2$. We define
$$
\varphi ^\varepsilon_\pm (t, u_0):=\int_0 ^t \exp \left[ \pm \varepsilon \int_0
^\tau \|\nabla u^\varepsilon_\pm (s)\|_{L^\infty}^{1+\varepsilon} ds\right]
d\tau.
$$
\begin{itemize}
\item[(i)] If $\varphi ^\varepsilon_- (\infty , u_0 ) =\infty$, then the
solution of the Euler system with initial data $u_0$ is regular
globally in time.
\item[(ii)] Let $t_*$
be the first blow-up time for a solution $u^\varepsilon_+$ of
$(E)^\varepsilon_+$ with initial data $u_0$ such that
$$\int_0 ^{t_*} \|\o ^\varepsilon _+(t)\|_{L^\infty} dt =\infty, \qquad \mbox{where}\quad
\o ^\varepsilon _+ =\textrm{curl}\, \,\, u^\varepsilon_+ .$$
If
$\varphi ^\varepsilon_+ (t_* , u_0 )<\infty$, then the solution of the
Euler system blows up at the finite time $T_*=\varphi ^\varepsilon_+ (t_*
, u_0 )$.
\end{itemize}
\end{theorem}
We refer to \cite{cha17} for the proof of the above theorem.
\section{Dichotomy: singularity or global regular dynamics?}
\setcounter{equation}{0}
In this section we review results in \cite{cha12}. Below $S$, $P$
and $\xi (x,t)$ are the deformation tensor, the Hessian of the
pressure, and the vorticity direction field associated with the
flow $v$, respectively, as introduced in Section 1.
Let $\{ (\lambda _k , \eta^k)\}_{k=1}^3$ be
the eigenvalues and the corresponding normalized eigenvectors of $S$.
We set $\lambda =(\lambda_1, \lambda_2 ,\lambda_3 )$, and
$$|\lambda|= \left(\sum_{k=1}^3 \lambda_k ^2 \right)^{\frac12},\quad
\rho_k =\eta^k \cdot P \eta^k \quad\mbox{ for $k=1,2,3$}.
$$
We also denote
$$\eta^k (x,0)=\eta^k_0 (x),\quad
\lambda_k (x,0)=\lambda_{k,0}(x),\quad \lambda (x,0)=\lambda_0 (x),
\quad \rho_k (x,0)=\rho_{k,0}(x)
$$
for the quantities at $t=0$.
Let $\o (x,t)\neq 0$. At such a point $(x,t)$ we define the scalar fields
$$
\a = \xi\cdot S\xi,\quad
\rho = \xi \cdot P\xi.
$$
At
the points where $\o (x,t)=0$ we define $\a(x,t)=\rho(x,t)=0$. We
denote $\a_0 (x)=\a (x,0)$, $\rho_0 (x)=\rho (x,0 )$.
Below we denote $ f(X(a,t),t)'=\frac{Df}{Dt} (X(a,t),t)$
for simplicity.
Now, suppose that there is no blow-up of the solution on $[0, T_*]$,
and
the inequality
\begin{equation}\label{contra1}
\a(X(a,t),t) |\o (X(a,t),t)| \geq \varepsilon |\o (X(a,t),t)|^2
\end{equation}
persists on $[0,T_*]$. We will see that this leads to a
contradiction. Combining (\ref{contra1}) with (\ref{basfor3}), we have
$$
|\o |' \geq \varepsilon |\o|^2.
$$
Hence, by Gronwall's lemma, we obtain
$$
|\o (X(a,t),t)|\geq \frac{|\o_0 (a)|}{1-\varepsilon |\o_0 (a)| t},
$$
which implies that
$$ \lim\sup_{t\nearrow T_*}|\o (X(a,t),t)|=\infty.$$
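For completeness (an added elementary step): the differential
inequality $|\o |'\geq \varepsilon |\o |^2$ implies
$\frac{d}{dt}\left( -|\o |^{-1}\right) \geq \varepsilon$, hence
$|\o (X(a,t),t)|^{-1}\leq |\o_0 (a)|^{-1}-\varepsilon t$; this is the
lower bound used above, and it degenerates exactly at
$t=1/(\varepsilon |\o_0 (a)|)$.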
Thus we are led to the following lemma.
\begin{lemma}
Suppose $\a_0 (a) > 0$, and there exists $\varepsilon>0$ such that
\begin{equation}\label{210}
\a _0 (a) |\o_0 (a)|\geq \varepsilon |\o_0 (a)|^2.
\end{equation}
Let us set
\begin{equation}
T_* =\frac{1}{\varepsilon \a_0 (a)}.
\end{equation}
Then, either the vorticity blows up no later than $T_*$, or there
exists $t \in (0, T_*)$ such that
\begin{equation}\label{th11} \a(X(a,t),t) |\o (X(a,t),t)| < \varepsilon |\o (X(a,t),t)|^2.
\end{equation}
\end{lemma}
From this lemma we can derive the following:
\begin{theorem}[vortex dynamics]
Let $v_0 \in H^m (\Omega )$, $m>5/2$, be given. We define
$$\Phi_1(a,t)= \frac{\a (X(a,t),t)}{|\o (X(a,t),t)|}$$
and
$$\Sigma_1 (t)=\{ a\in \Omega \,|\, \a (X(a,t),t)> 0\}$$
associated with the classical solution $v(x,t)$.
Suppose $a\in
\Sigma_1 (0)$ and $\o_0 (a) \neq 0$. Then one of the following holds
true.
\begin{itemize}
\item[(i)] {\rm{(finite time singularity)}}
The solution of the Euler equations blows-up in finite time along
the trajectory $\{ X(a,t)\}$.
\item[(ii)] {\rm{(regular dynamics)}} One of the following holds
true:
\begin{description}
\item[(a)] {\rm{(finite time extinction of $\a$)}}
There exists $t_1 \in (0, \infty)$ such that $\a (X(a, t_1),t_1)=0$.
\item[(b)] {\rm{(long time behavior of $\Phi_1$)}} There exists an infinite sequence $\{
t_j\}_{j=1}^\infty$ with $t_1<t_2<\cdots <t_j <t_{j+1} \to \infty$
as $j\to \infty$ such that for all $j=1,2,\cdots$ we have $\Phi_1
(a,0)> \Phi_1 (a,t_1)>\cdots>\Phi_1 (a, t_j)>\Phi_1 (a,t_{j+1})>0$
and $\Phi_1 (a,t)\geq \Phi _1 (a,t_j ) >0$ for all $t\in [0,t_j]$.
\end{description}
\end{itemize}
\end{theorem}
As an illustration of the proofs of Theorems 5.2 and 5.3 below, we
give an outline of the proof of the above theorem.
Let us first observe that the formula
$$ |\o (X(a,t),t)|=\exp\left[ \int_0 ^t \a (X(a,s),s) ds\right]
|\o_0 (a)|,
$$
which is obtained from (\ref{basfor3}) immediately shows that $\o (X(a,t),t)\neq
0$ if and only if $\o_0 (a)\neq 0$ for the particle trajectory $\{ X(a,t)\}$ of the classical
solution $v(x,t)$ of the Euler equations. Choosing
$\varepsilon = \a _0 (a)/|\o_0 (a)|$ in Lemma 4.1, we see that
either the vorticity blows up no later than $T_* =1/\a_0 (a)$,
or there exists $t_1\in (0, T_* )$ such that
$$
\Phi_1 (a,t_1)=\frac{\a (X(a,t_1),t_1)}{|\o (X(a,t_1),t_1 )|} < \frac{\a_0 (a)}{|\o_0
(a)|}=\Phi_1 (a,0).
$$
Under the hypothesis that (i) and (ii)-(a) do not hold true, we may
assume $a\in \Sigma_1 (t_1)$ and repeat the above argument
to find $t_2 >t_1$ such that $\Phi_1(a, t_2 )<\Phi_1 (a, t_1 )$,
and also $a \in \Sigma_1 (t_2)$. Iterating the argument, we find
a monotone increasing sequence $\{ t_j\}_{j=1}^\infty$ such that $\Phi _1 (a,t_{j})>\Phi
_1 (a,t_{j+1})$ for all $j=1,2,3,\cdots$. In particular we can
choose each $t_j$ so that $\Phi_1 (a, t)\geq \Phi_1 (a, t_j )$
for all $t\in (t_{j-1}, t_j]$. If $t_j \to t_\infty <\infty$ as $j\to \infty$,
then we can proceed further to have $t_* > t_\infty$ such that
$\Phi_1 (a, t_\infty )>\Phi_1 (a, t_* )$. Hence, we may set
$t_\infty =\infty$, which finishes the proof.\\
The above argument can be extended to prove the following theorems.
\begin{theorem}[dynamics of $\a$]
Let $v_0 \in H^m (\Omega )$, $m>5/2$, be given. In case
$\a(X(a,t),t)\neq 0$ we define
$$\Phi_2(a,t)= \frac{|\xi\times S\xi
|^2(X(a,t),t)-\rho (X(a,t),t)}{\a
^2(X(a,t),t)},
$$
and
$$\Sigma_2 ^+(t)=\{ a\in \Omega \,|\, \a (X(a,t),t)> 0, \Phi _2
(X(a,t),t)>1\},$$
$$\Sigma_2 ^-(t)=\{ a\in \Omega \,|\, \a (X(a,t),t)<0, \Phi _2
(X(a,t),t)<1\},$$
associated with $v(x,t)$.
Suppose $a\in \Sigma_2 ^+(0)\cup \Sigma_2 ^-(0)$.
Then one of the following
holds true.
\begin{itemize}
\item[(i)] {\rm{(finite time singularity)}}
The solution of the Euler equations blows-up in finite time along
the trajectory $\{ X(a,t)\}$.
\item[(ii)] {\rm{(regular dynamics)}} One of the following holds
true:
\begin{description}
\item[(a)] {\rm{(finite time extinction of $\a$)}} There exists $t_1 \in (0, \infty)$ such that $\a (X(a, t_1),t_1)=0$.
\item[(b)] {\rm{(long time behaviors of $\Phi_2$)}} Either there exists $T_1\in (0, \infty)$ such that
$ \Phi_2 (a,T_1) =1$, or there exists an infinite sequence $\{
t_j\}_{j=1}^\infty$ with $t_1<t_2<\cdots <t_j <t_{j+1} \to \infty$
as $j\to \infty$ such that one of the following holds:
\begin{description}
\item[(b.1)] In the case $a\in \Sigma_2 ^+(0)$, for
all $j=1,2,\cdots$ we have $\Phi_2 (a,0)> \Phi_2
(a,t_1)>\cdots>\Phi_2 (a, t_j)>\Phi_2 (a,t_{j+1})
>1$ and $\Phi_2 (a,t)\geq \Phi_2
(a,t_j )>1 $ for all $t\in [0,t_j]$.
\item[(b.2)] In the case $a\in \Sigma_2 ^-(0)$, for all $j=1,2,\cdots$ we
have $\Phi_2 (a,0)<\Phi_2 (a,t_1)<\cdots <\Phi_2 (a, t_j)< \Phi_2
(a,t_{j+1}) <1$ and $\Phi_2 (a,t)\leq \Phi_2 (a,t_j )<1 $ for all
$t\in [0,t_j]$.
\end{description}
\end{description}
\end{itemize}
\end{theorem}
\begin{theorem}[spectral dynamics]
Let $v_0 \in H^m (\Omega )$, $m>5/2$, be given. In case
$\lambda(X(a,t),t)\neq 0$ we define
$$
\Phi_3(a,t)= \frac{\sum_{k=1}^3\left[ -\lambda_k ^3 +\frac14 |\eta_k
\times \o |^2\lambda_k - \rho_k \lambda_k \right](X(a,t),t)}{
|\lambda (X(a,t),t)|^3},
$$
and
$$\Sigma_3 (t)=\{ a\in \Omega \,|\, \lambda (X(a,t),t)\neq 0, \Phi _3
(X(a,t),t)>0\}$$
associated with $v(x,t)$.
Suppose $a\in \Sigma_3 (0)$.
Then one of the following
holds true:
\begin{itemize}
\item[(i)] {\rm{(finite time singularity)}}
The solution of the Euler equations blows-up in finite time along
the trajectory $\{ X(a,t)\}$.
\item[(ii)]{\rm{(regular dynamics)}} One of the following holds
true:
\begin{description}
\item[(a)] {\rm{(finite time extinction of $\lambda$)}}
There exists $t_1 \in (0, \infty)$ such that $\lambda (X(a, t_1),t_1)=0$.
\item[(b)]{\rm{(long time behavior of $\Phi_3 $)}} Either there exists $T_1\in (0, \infty)$ such that
$ \Phi_3 (a,T_1) =0$, or there exists an infinite sequence $\{
t_j\}_{j=1}^\infty$ with $t_1<t_2<\cdots <t_j <t_{j+1} \to \infty$
as $j\to \infty$ such that for all $j=1,2,\cdots$ we have $\Phi_3
(a,0)> \Phi_3 (a,t_1)>\cdots>\Phi_3 (a, t_j)>\Phi_3(a,t_{j+1})
>0$ and $\Phi_3 (a,t)\geq \Phi_3
(a,t_j )>0 $ for all $t\in [0,t_j]$.
\end{description}
\end{itemize}
\end{theorem}
For the details of the proof of Theorem 5.2 and Theorem 5.3 we refer to \cite{cha12}.
Here we present a refinement of Theorem 2.1 of \cite{cha15}, which is proved in
\cite{cha12}.
\begin{theorem} Let $v_0 \in H^m (\Omega )$, $m>5/2$, be given.
For such $v_0$ let us define a set $\Sigma \subset \Omega$ by
\begin{eqnarray*}
\lefteqn{ \Sigma=\{ a\in
\Omega \,|\, \a_0 (a) > 0, \o_0 (a) \neq 0, \exists \,\,\varepsilon \in
(0, 1) \,\, \mbox{such
that}}\hspace{.3in}\\
&& \rho_0(a) + 2\a ^2_0 (a)-|\xi_0 \times S_0\xi_0 |^2(a)\leq
(1-\varepsilon)^2 \a^2 _0 (a)\}.
\end{eqnarray*}
Let us set
\begin{equation}
T_* =\frac{1}{\varepsilon \a_0 (a)}.
\end{equation}
Then, either the solution blows up no later than $T_*$, or there
exists $t\in (0, T_*)$ such that
\begin{equation}
\rho(X(a,t),t) + 2\a ^2(X(a,t),t) -|\xi \times S\xi |^2(X(a,t),t)
> (1-\varepsilon)^2 \a ^2(X(a,t),t).
\end{equation}
\end{theorem}
We note that if we ignore the term $|\xi_0 \times S_0\xi_0
|^2(a)$, then we have the condition,
$$ \rho_0(a) + \a ^2_0 (a)\leq (-2\varepsilon +\varepsilon^2) \a^2 _0 (a) < 0,
$$
since $\varepsilon \in (0, 1)$.
Thus $\Sigma \subset \mathcal{S}$, where $\mathcal{S}$ is the
set defined in Theorem 2.1 of \cite{cha15}. One can verify without
difficulty that $\Sigma =\emptyset$ for the 2D Euler flows.
Regarding the question whether $\Sigma \neq\emptyset$ or not for 3D Euler flows, we
have the following proposition(see \cite{cha15} for more details).
\begin{pro}
Let us consider the system on the domain $\Omega =[0,2\pi ]^3$ with
the periodic boundary condition. In $\Omega$ we consider the
Taylor-Green vortex flow defined by
\begin{equation}\label{ta}
u(x_1,x_2,x_3 )=(\sin x_1 \cos x_2 \cos x_3 ,
-\cos x_1 \sin x_2 \cos x_3 , 0).
\end{equation}
Then, the set
$$\mathcal{S}_0 =\left\{(0,
\frac{\pi}{4} , \frac{7\pi}{4} ),(0, \frac{7\pi}{4} , \frac{\pi}{4}
)\right\} $$
is included in $\Sigma$ of Theorem 4.4. Moreover, for $x\in
\mathcal{S}_0$ we have the explicit values of $\alpha$ and $\rho$,
$$
\alpha(x)=\frac12,\quad
\rho(x)=-\frac12.
$$
\end{pro}
We recall that the Taylor-Green vortex was the first candidate
proposed for a finite time singularity of the 3D Euler equations,
and there have been many numerical calculations of solutions of (E)
with the initial data given by it(see e.g. \cite{bra}).
\section{Spectral dynamics approach}
\setcounter{equation}{0}
The spectral dynamics approach in fluid mechanics was initiated by
Liu and Tadmor(\cite{liu}). They analyzed the restricted Euler
system (\ref{liueq}) in terms of the (pointwise) dynamics of the
eigenvalues of the velocity gradient matrix $V$. More specifically,
multiplying (\ref{liueq}) by the
left and right eigenvectors of $V$, they derived
$$
\frac{D \lambda_j}{Dt}=-\lambda_j^2
+\frac{1}{n} \sum_{k=1}^n \lambda_k ^2 ,\quad j=1,2,\cdots, n,
$$
where $\lambda_j$, $j=1,2,\cdots ,n$, are the eigenvalues of $V$, which are not necessarily
real.
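As a quick illustration of this system (a remark added here, not taken
from \cite{liu}): for $n=2$ with real eigenvalues the trace-free
constraint $\lambda_1 +\lambda_2 =0$ gives
$$
\frac{D\lambda_1}{Dt}=-\lambda_1 ^2 +\frac12 \left(\lambda_1 ^2 +\lambda_2
^2 \right)=0,
$$
so the spectrum is frozen along the trajectories; nontrivial spectral
dynamics requires either $n\geq 3$ or complex eigenvalues.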
In this model system they proved finite time blow-up for suitable
initial data. In this section we review the results in
\cite{cha8}, where the full Euler system is concerned.
Moreover, here we work on the dynamics of the eigenvalues of the
deformation tensor $S$(hence real valued), not of the velocity gradient
matrix. We note that there was also an application of the spectral
dynamics of the deformation tensor to the study of the regularity
problem of the Navier-Stokes equations by Neustupa and
Penel(\cite{neu}).
In this section for
simplicity we consider the 3D Euler system (E) in the periodic
domain, $\Omega =\Bbb T^3(=\Bbb R^3/\Bbb Z^3)$.
Below we denote ${ \lambda_1 ,\lambda_2 , \lambda_3 }$ for the eigenvalues of
the deformation tensor
$S=( S_{ij})$ for the velocity fields of the 3D Euler system.
We will first establish the following formula,
\begin{equation}\label{spec1}
\frac{d}{dt} \int_{\Bbb T^3} (\lambda_1^2 +\lambda_2^2 +\lambda_3^2 )dx=
-4\int_{\Bbb T^3} \lambda_1\lambda_2 \lambda_3 \, dx,
\end{equation}
which has important implications(Theorems 6.1-6.3 below).
Indeed, using (\ref{basfor2}), we can compute
\begin{eqnarray*}
\lefteqn{\frac12\frac{d}{dt}\int_{\Bbb T^3} S_{ij}S_{ij}dx=\int_{\Bbb T^3} S_{ij}
\frac{D S_{ij}}{Dt}dx}\hspace{.0in} \\
&&=-\int_{\Bbb T^3} S_{ik}S_{kj}S_{ij} dx -\frac14 \int_{\Bbb T^3} \o_i S_{ij}\o_j \,dx
+\frac14\int_{\Bbb T^3} |\o |^2 S_{ii} dx +\int_{\Bbb T^3} P_{ij} S_{ij} dx\\
&&=-\int_{\Bbb T^3} S_{ik}S_{kj}S_{ij} dx-\frac18\frac{d}{dt}\int_{\Bbb T^3} |\o|^2dx,
\end{eqnarray*}
where we used the summation
convention for the repeated indices, and used the
$L^2$-version of the vorticity equation,
\begin{equation}\label{spec2}
\frac12\frac{d}{dt}\int_{\Bbb T^3} |\o
|^2dx=\int_{\Bbb T^3} \o_i S_{ij}\o_j \,dx,
\end{equation}
which is immediate from (\ref{basfor2a}).
We note
\begin{eqnarray*}
\lefteqn{{ \int_{\Bbb T^3} |\o |^2 dx} = \int_{\Bbb T^3} |\nabla v|^2 dx=\int_{\Bbb T^3} V_{ij}
V_{ij}dx= \int_{\Bbb T^3} (S_{ij}+A_{ij})(S_{ij}+A_{ij})dx}\hspace{.3in}\\
&&=\int_{\Bbb T^3} S_{ij}S_{ij} dx +\int_{\Bbb T^3} A_{ij}A_{ij}dx
=\int_{\Bbb T^3} S_{ij}S_{ij} dx+\frac12 { \int_{\Bbb T^3} |\o|^2dx}.
\end{eqnarray*}
Hence,
$$\int_{\Bbb T^3} S_{ij}S_{ij} dx =\frac12 \int_{\Bbb T^3} |\o |^2 dx.$$
Substituting this identity into the above computation, we obtain
$$
\frac{d}{dt}\int_{\Bbb T^3} S_{ij}S_{ij}dx=-\frac43\int_{\Bbb T^3} S_{ik}S_{kj}S_{ij}
dx,
$$
which, in terms of the spectrum of $S$, can be written as
\begin{equation}\label{spec3}
\frac{d}{dt} \int_{\Bbb T^3} (\lambda_1^2 +\lambda_2^2 +\lambda_3^2 )dx=-\frac43\int_{\Bbb T^3} (\lambda_1^3
+ \lambda_2^3 + \lambda_3^3 ) dx.
\end{equation}
We observe from the divergence free condition, $0=$div $v=Tr
S=\lambda_1+\lambda_2+\lambda_3$,
\begin{eqnarray*}
0&=&(\lambda_1+\lambda_2 +\lambda_3 )^3\\
&=&\lambda_1^3 +\lambda_2^3 +\lambda_3^3 +3\lambda_1^2 (\lambda_2+\lambda_3
)+3\lambda_2^2 (\lambda_1 +\lambda_3 ) +3\lambda_3 (\lambda_1+\lambda_2 )+ 6\lambda_1\lambda_2\lambda_3\\
&=&\lambda_1^3 +\lambda_2^3 +\lambda_3^3 -3(\lambda_1^3 +\lambda_2^3 +\lambda_3^3)+ 6\lambda_1\lambda_2\lambda_3.
\end{eqnarray*}
Hence, $\lambda_1^3 + \lambda_2^3 + \lambda_3^3=3\lambda_1\lambda_2\lambda_3$. Substituting this into
(\ref{spec3}), we complete the proof of (\ref{spec1}).\\
\ \\
Using the formula (\ref{spec1}), we can first prove the following
new a priori estimate for the $L^2$ norm of vorticity for the 3D
incompressible Euler equations(see \cite{cha8} for the proof). We
denote
$$ \mathbb{H}^m _\sigma =\{ v\in [H^m (\Bbb T^3 )]^3\, |\, \mathrm{div
}\, v=0\}.$$
\begin{theorem}
Let $v(t)\in C([0,T); \mathbb{H}^m _\sigma )$, $m>5/2$ be the local
classical solution of the 3D Euler equations with initial data $v_0
\in \mathbb{H}^m_\sigma $ with $\o_0 \neq 0$. Let $\lambda_1 (x,t)\geq \lambda_2
(x,t)\geq \lambda_3 (x,t)$ be the eigenvalues of the deformation tensor
$S_{ij}(v)=\frac12 ( \frac{\partial v_j}{\partial x_i}
+\frac{\partial v_i}{\partial x_j})$. We denote $\lambda_2^+ (x,t)=\max\{
\lambda_2 (x,t), 0\}$, and $\lambda_2^- (x,t)=\min\{ \lambda_2 (x,t), 0\}$. Then, the
following estimates hold.
\begin{eqnarray*}
\lefteqn{ \exp\left[
\int_0 ^t \left(\frac12 \inf_{x\in \Bbb T^3}\lambda_2^+ (x,t)-\sup_{x\in \Bbb T^3}
|\lambda_2^-(x,t)|\right)dt\right]\leq
\frac{\|\o (t)\|_{L^2}}{\|\o_0\|_{L^2}}}\hspace{1.in}\nonumber \\
&&\leq \exp\left[
\int_0 ^t \left( \sup_{x\in \Bbb T^3}\lambda_2^+ (x,t)-\frac12 \inf_{x\in \Bbb T^3}
|\lambda_2^- (x,t)|\right)dt\right]\nonumber \\
\end{eqnarray*}
for all $t\in (0,T)$.
\end{theorem}
The above estimate says, for example, that if we have the following
compatibility conditions,
$$\sup_{x\in \Bbb T^3}\lambda_2^+ (x,t)\simeq \inf_{x\in \Bbb T^3}|\lambda_2^- (x,t)|\simeq g(t)$$
for some time interval $[0,T]$, then
$$ \|\o (t)\|_{L^2}
\lesssim O\left(\exp\left[C \int_0 ^t g(s)ds\right]\right) \qquad
\forall t\in [0,T]
$$
for some constant $C$. On the other hand, we note the following
connection of the above result to the previous one. From the
equation
$$ \frac{D |\o |}{Dt}=\alpha |\o |, \qquad \alpha (x,t)=\frac{\o \cdot S\o
}{|\o |^2}$$ we immediately have
\begin{eqnarray*}
\|\o (t)\|_{L^2}&\leq& \|\o_0 \|_{L^2}
\exp \left(\int_0 ^t \sup_{x\in \Bbb T^3} \alpha (x,s)
ds \right)\\
&\leq&\|\o_0 \|_{L^2} \exp \left(\int_0 ^t \sup_{x\in \Bbb T^3}
\lambda_1(x,s )
d\tau \right),
\end{eqnarray*}
where we used the fact $\lambda_3 \leq \alpha \leq \lambda_1$, the well-known
estimate for the Rayleigh quotient. We note that
$\lambda_2^+(x,t)>0$ implies we have stretching of infinitesimal fluid
volume in two directions and compression in the remaining
direction(planar stretching) at $(x,t)$, while $|\lambda_2^-(x,t)|>0$
implies stretching in one direction and compression in two
directions(linear stretching). The above estimate says that the
dominance competition between planar stretching and linear
stretching is an important mechanism controlling the growth/decay
in time of the $L^2$ norm of vorticity.
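To spell out this dichotomy (an added remark): since $\lambda_1 \geq
\lambda_2 \geq \lambda_3$ and $\lambda_1 +\lambda_2 +\lambda_3 =0$, the condition $\lambda_2 (x,t)>0$
forces $\lambda_1 \geq \lambda_2 >0>\lambda_3$ (two stretching directions and one
compressing), while $\lambda_2 (x,t)<0$ forces $\lambda_1 >0>\lambda_2 \geq \lambda_3$ (one
stretching direction and two compressing), in agreement with the
description above.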
In order to state our next
theorem we introduce some definitions. Given a differentiable
vector field $f=(f_1 ,f_2 ,f_3 )$ on $\Bbb T^3$, we denote by the
scalar field $\l_i (f)$, $i=1,2,3$, the eigenvalues of the deformation
tensor associated with $f$. Below we always assume the ordering, $
\lambda_1 (f)\geq \lambda_2 (f)\geq \lambda_3 (f). $ We also fix $m>5/2$ below. We
recall that if $f\in \mathbb{H}^m _\sigma $, then $\lambda_1 (f)+\lambda_2 (f)+\lambda_3
(f)=0$, which is another representation of div $f=0$.
Let us begin with the introduction of the admissible classes
$\mathcal{A}_\pm$ defined by
$$\mathcal{A}_+=\{ f\in \mathbb{H}^m _\sigma (\Bbb T^3)\, | \, \inf_{x\in \Bbb T^3}\lambda_2 (f)(x)
>0 \,\},$$
and
$$\mathcal{A}_-=\{ f\in \mathbb{H}^m_\sigma (\Bbb T^3)\, |
\sup_{x\in \Bbb T^3}\lambda_2 (f)(x)<0 \, \}.$$
Physically $\mathcal{A}_+$ consists of solenoidal vector fields
with planar stretching everywhere, while $\mathcal{A}_-$ consists
of everywhere linear stretching vector fields. Although they do not represent
real physical flows, they might be useful in the search for
initial data leading to a finite time singularity for the 3D Euler
equations. Given $v_0 \in \mathbb{H}_\sigma ^m$, let $T_*(v_0)$ be the maximal time
of unique existence of solution in $\mathbb{H}_\sigma ^m$ for the system
(E).
Let $S_t : \mathbb{H}^m_\sigma \to
\mathbb{H}^m_\sigma$ be the solution operator, mapping from initial data
to the
solution $v(t)$.
Given $f\in \mathcal{A}_+$, we define the first zero touching time
of $\lambda_2 (f)$ as
$$ T(f)=
\inf\{ t\in (0, T_* (v_0)) \, | \, \mbox{$\exists x\in \Bbb T^3 $
such that $\lambda_2 (S_t f ) (x)<0 $}\}.
$$
Similarly for $f\in \mathcal{A}_-$, we define
$$ T(f)=
\inf\{ t\in (0, T_* (v_0)) \, | \, \mbox{$\exists x\in \Bbb T^3 $
such that $\lambda_2 (S_t f ) (x)>0 $}\}.
$$
The following theorem is actually an immediate corollary of Theorem
6.1, combined with the above definition of $\mathcal{A}_\pm$ and
$T(f)$. We just observe that for $v_0 \in \mathcal{A}_+ $(resp.
$\mathcal{A}_- $) we have $\lambda_2^-=0, \lambda_2^+=\lambda_2$(resp. $\lambda_2^+ =0,
\lambda_2^-=\lambda_2$) on $\Bbb T^3 \times (0,T( v_0 ))$.
\begin{theorem}
Let $v_0\in \mathcal{A}_\pm$ be given. We set $\lambda_1(x,t)\geq
\lambda_2(x,t)\geq \lambda_3(x,t)$ as the eigenvalues of the deformation tensor
associated with $v(x,t)=(S_t v_0)(x)$ defined for $t\in (0, T(v_0 ))$.
Then, for all $t\in (0, T(v_0 ))$ we have the following
estimates:\\
(i) If $v_0 \in \mathcal{A}_+$, then
$$
\exp\left( \frac12\int_0 ^t
\inf_{x\in\Bbb T^3} |\lambda_2 (x,s)| ds\right)\leq \frac{\|\o (t)\|_{L^2}}{ \|\o_0\|_{L^2}}
\leq \exp\left( \int_0 ^t
\sup_{x\in\Bbb T^3} |\lambda_2 (x,s)| ds\right) .
$$
(ii) If $v_0 \in \mathcal{A}_-$, then
$$
\exp\left( - \int_0 ^t
\sup_{x\in\Bbb T^3} |\lambda_2 (x,s)| ds\right)\leq \frac{\|\o (t)\|_{L^2}}{ \|\o_0\|_{L^2}}
\leq \exp\left( -\frac12 \int_0 ^t
\inf_{x\in\Bbb T^3} |\lambda_2 (x,s)| ds\right).
$$
\end{theorem}
(see \cite{cha8} for the proof). If we have the compatibility
conditions,
\begin{eqnarray*}
\inf_{x\in\Bbb T^3} |\lambda_2 (x,t)|&\simeq& \sup_{x\in\Bbb T^3} |\lambda_2
(x,t)|\simeq g(t) \quad \forall t\in (0, T(v_0 )), \end{eqnarray*}
which is the case for sufficiently small box $\Bbb T^3$, then we have
\begin{eqnarray*}
\frac{\|\o (t)\|_{L^2}}{ \|\o_0\|_{L^2}} \simeq\left\{
\aligned &\exp\left(\int_0 ^t g(s)ds \right) \quad \mbox{if}\quad v_0 \in
\mathcal{A}_+\\
&\exp\left(-\int_0 ^t g(s)ds \right) \quad \mbox{if}\quad v_0
\in
\mathcal{A}_- \endaligned \right.
\end{eqnarray*}
for $t\in (0, T(v_0 ))$.
In particular, if we could find $v_0 \in \mathcal{A}_+$ such that
$$
\inf_{x\in\Bbb T^3} |\lambda_2 (x,t)| \gtrsim
O\left(\frac{1}{t_*-t}\right)
$$
for time
interval near $t_*$, then such data would lead to singularity at
$t_*$.
As another application of the formula (\ref{spec1}) we have some
decay in time estimates for some ratio of eigenvalues(see
\cite{cha8} for the proof).
\begin{theorem}
Let $v_0 \in \mathcal{A}_\pm$ be given, and we set $\lambda_1(x,t)\geq
\lambda_2(x,t)\geq \lambda_3(x,t)$ as in Theorem 3.1.
We define
$$
\varepsilon (x,t)=\frac{|\lambda_2 (x,t) |}{\l (x,t)}\quad \forall (x,t)\in
\Bbb T^3 \times (0, T(v_0)),
$$ where we set
$$\l (x,t)=\left\{\aligned \l _1 (x,t)
\quad \mbox{ if} \quad v_0 \in \mathcal{A}_+\\
-\l _3 (x,t) \quad \mbox{ if} \quad v_0 \in \mathcal{A}_- .
\endaligned
\right.
$$
Then, there exists a constant $C=C(v_0)$ such that
$$
\inf_{(x,s)\in \Bbb T^3\times (0,t)} \varepsilon (x,s)< \frac{C}{\sqrt{t}}\quad
\forall t\in (0, T(v_0)).
$$
\end{theorem}
Regarding the problem of searching for a finite time
blowing up solution, the proof of
the above theorem suggests the following:\\
Given $\delta >0$, let us suppose we could find $v_0 \in
\mathcal{A}_+$ such that for the associated solution $v(x,t)=(S_t v_0)(x)$
the estimate
\begin{equation}\label{spec3.3}
\inf_{(x,s)\in \Bbb T^3 \times (0,t)}\varepsilon (x,s)\gtrsim
O\left(\frac{1}{t^{\frac12+\delta}}\right),
\end{equation}
holds true, for sufficiently
large time $t$.
Then such $v_0$ would lead to a finite time singularity. In order
to check the behavior (\ref{spec3.3}) for a given solution we need
a sharper and/or localized
version of the equation (\ref{spec1}) for the dynamics of eigenvalues
of the deformation tensor.
\section{Conservation laws for singular solutions }
\setcounter{equation}{0}
For the smooth solutions of the Euler equations there are many
conserved quantities as described in Section 1 of this article. One
of the most important conserved quantities is the total kinetic
energy. For nonsmooth(weak) solutions it is not at all obvious that
we still have energy conservation. Thus, there arises the very
interesting question of how much smoothness we need to assume for
the solution to have the energy conservation property. Regarding this
question L. Onsager conjectured that a H\"{o}lder continuous weak
solution with H\"{o}lder exponent larger than $1/3$ preserves the energy,
and that this is sharp(\cite{ons}). Considering Kolmogorov's scaling argument
on the energy correlation in the homogeneous turbulence the exponent
$1/3$ is natural. The sufficiency part of this conjecture was proved
by an ingenious argument due to
Constantin-E-Titi\cite{con7}, using a special Besov type of space
norm, $\mathcal{\dot{B}}^s_{3, \infty}$ with $s>1/3$(more precisely,
the Nikolskii space norm) for the velocity. See also \cite{caf2} for
related results in the magnetohydrodynamics. Remarkably enough,
Shnirelman\cite{shn2} later constructed an example of a weak solution
of the 3D Euler equations which does not preserve energy. The problem
of finding optimal regularity condition for a weak solution to have
conservation property can also be considered for the helicity. Since
the helicity is closely related to the topological invariants, e.g.
the knottedness of vortex tubes, the non-conservation of helicity is
directly related to the spontaneous apparition of singularity from
local smooth solutions, which is the main theme of this article. In
\cite{cha5} the author of this article obtained a sufficient
regularity condition for the helicity conservation, using the
function space $\mathcal{\dot{B}}^s_{\frac95, \infty}$, $s>1/3$, for
the vorticity. These results on the energy and the helicity are
recently refined in \cite{cha9}, using the Triebel-Lizorkin type of
spaces, $\mathcal{\dot{F}}^s_{p,q}$, and the Besov spaces
$\mathcal{\dot{B}}^s_{p,q}$(see Section 1 for the definitions) with
similar values for $s, p$, but allowing full range of values for
$q\in [1, \infty]$.
By a weak solution of $(E)$ in $\Bbb R^n \times (0, T)$ with initial
data $v_0$ we mean a vector field $v\in C([0, T); L^2_{loc} (\Bbb
R^n))$ satisfying the integral identity:
\begin{eqnarray}
\lefteqn{-\int_0 ^T \int_{\Bbb R^n} v(x,t)\cdot \frac{\partial\phi (x,t)}{\partial
t} dx dt -\int_{\Bbb R^n} v_0(x)\cdot \phi (x, 0 )dx}\hspace{.0in}\nonumber \\
&&\quad -\int_0 ^T \int_{\Bbb R^n} v(x,t)\otimes v(x,t) : \nabla \phi (x,t)
dxdt\nonumber \\
&&\qquad -\int_0 ^T \int_{\Bbb R^n} \mathrm{div }\, \phi (x,t ) p(x,t )dxdt
=0, \end{eqnarray} \begin{equation}
\int_0 ^T \int_{\Bbb R^n} v(x,t)\cdot \nabla \psi (x,t)dxdt=0
\end{equation}
for every vector test function $\phi =(\phi_1 , \cdots
,\phi_n )\in C_0 ^\infty (\Bbb R^n\times [0, T))$, and for every scalar
test function $\psi \in C_0 ^\infty (\Bbb R^n \times [0, T))$. Here we
used the notation $(u\otimes v)_{ij}= u_i v_j$, and
$A:B=\sum_{i,j=1} ^n A_{ij} B_{ij}$ for $n\times n$ matrices $A$ and $ B$.
In the case when we discuss the helicity conservation of the weak
solution we impose further regularity for the
vorticity, $\o (\cdot ,t)\in L^{\frac32}
(\Bbb R^3)$ for almost every $t\in [0,T]$ in order to define the
helicity for such a weak solution. {\em Hereafter, we use the notation
${\dot{X}}^s_{p,q}$(resp. ${X}^s_{p,q}$) to represent
$\mathcal{\dot{F}}^s_{p,q}$(resp. $\mathcal{{F}}^s_{p,q}$) or
$\mathcal{\dot{B}}^s_{p,q}$(resp. $\mathcal{{B}}^s_{p,q}$)}. The
following is proved in \cite{cha9}.
\begin{theorem}
Let $s>1/3$ and $q\in [2, \infty]$ be given. Suppose $ v$ is a weak
solution of the $n-$dimensional Euler equations
with $v\in C([0, T]; L^{2} (\Bbb R^n))\cap L^3 (0, T;
\dot{X}^s_{3,q}(\Bbb R^n ))$. Then,
the energy is preserved in time, namely
\begin{equation}
\int_{\Bbb R^n} |v(x,t)|^2 dx =\int_{\Bbb R^n} |v_0 (x)|^2 dx
\end{equation}
for all $t\in [0, T)$.
\end{theorem}
When we restrict to $q=\infty$, the above theorem
reduces to the one in \cite{con7}. On the other hand, the
results for the Triebel-Lizorkin type of spaces are completely new.
\begin{theorem}
Let $s>1/3$, $q\in [2, \infty]$, and $r_1\in [2, \infty], r_2\in [1, \infty]$ be given, satisfying
$2/r_1 +1/r_2 =1$. Suppose $ v$ is a weak
solution of the 3-D Euler equations
with $v\in C([0, T]; L^{2} (\Bbb R^3 ))\cap L^{r_1} (0, T;
\dot{ X}^s_{\frac92,q} (\Bbb R^3 ))$ and $\o \in L^{r_2} (0, T;
{\dot{X}}^s_{\frac95,q}(\Bbb R^3 ))$,
where the curl operation is in the sense of distribution. Then,
the helicity is preserved in time, namely
\begin{equation}\label{helicityy}
\int_{\Bbb R^3} v(x,t)\cdot \o (x,t )dx =\int_{\Bbb R^3} v_0 (x)\cdot \o _0 (x)dx
\end{equation}
for all $t\in [0, T)$.
\end{theorem}
Similarly to the case of Theorem 7.1, when we restrict to $q=\infty$, the above theorem
reduces to the one in \cite{cha5}. The
results for the case of the Triebel-Lizorkin type of spaces, however,
are new in \cite{cha9}.
As an application of the above theorem we have the following
estimate from below of the vorticity by a constant depending on
the initial data
for the weak solutions of the 3D Euler equations.
We estimate the helicity,
\begin{eqnarray*}
\lefteqn{\int_{\Bbb R^3} v(x,t )\cdot \o (x,t )dx
\leq \|v (\cdot , t)\|_{L^3} \|\o (\cdot , t
)\|_{L^{\frac32}}}\hspace{.2in} \\
&\leq &C \|\nabla v (\cdot , t)\|_{L^{\frac32}} \|\o (\cdot , t )\|_{L^{\frac32}}
\leq C\|\o (\cdot , t )\|_{L^{\frac32}}^2,
\end{eqnarray*}
where we used the Sobolev inequality and the Calderon-Zygmund
inequality. Combining this estimate with (\ref{helicityy}), we obtain
the following:
\begin{cor}
Suppose $ v$ is a weak
solution of the 3D Euler equations satisfying the conditions of
Theorem 7.2. Then,
we have the following estimate:
$$
\|\o (\cdot ,t)\|_{L^{\frac32}}^2 \geq C H_0, \quad \forall
t\in [0, T)
$$
where $H_0 =\int_{\Bbb R^3} v_0(x)\cdot \o _0(x) dx$ is the initial
helicity, and $C$ is an absolute constant.
\end{cor}
Next we are concerned with the $L^p$-norm conservation for the weak
solutions of (QG). Let $p\in [2, \infty)$. By a weak solution of
$(QG)$ in $\Bbb R^2 \times (0, T)$ with initial data $\theta_0$ we mean a scalar
field $\theta \in C([0, T); L^p (\Bbb R^2)\cap L^{\frac{p}{p-1}}
(\Bbb R^2))$ satisfying the integral identity:
\begin{equation}\label{qgg1}
-\int_0 ^T \int_{\Bbb R^2} \theta (x,t)\left[\frac
{\partial}{\partial
t} +v \cdot \nabla\right] \phi (x,t)dx dt -\int_{\Bbb R^2} \th_0(x) \phi (x, 0
)dx=0
\end{equation}
\begin{equation}
\label{qgg2}
v(x,t) =-\nabla^\bot\int_{\Bbb R^2}
\frac{ \th (y,t)}{|x-y|} dy
\end{equation}
for every test function $\phi \in C_0 ^\infty (\Bbb R^2\times [0,
T))$, where $\nabla^\bot$ in (\ref{qgg2}) is in the sense of
distribution. We note that contrary to the case of 3D Euler
equations there is a global existence result for the weak
solutions of (QG) for $p=2$ due to Resnick(\cite{res}). The following is proved in
\cite{cha9}.
\begin{theorem}
Let $s>1/3$, $p\in [2, \infty)$, $q\in [1, \infty]$, and
$r_1\in [p, \infty], r_2\in [1, \infty]$
be given, satisfying
$p/r_1 +1/r_2 =1$. Suppose $ \th $ is a weak
solution of (QG)
with $\theta \in C([0, T]; L^{p} (\Bbb R^2)\cap L^{\frac{p}{p-1}}(\Bbb R^2))\cap L^{r_1} (0, T;
X^s_{p+1,q}(\Bbb R^2))$ and $v \in L^{r_2} (0, T;
{\dot{X}}^s_{p+1,q} (\Bbb R^2))$. Then,
the $L^p$ norm of $\theta(\cdot,t)$ is preserved, namely
\begin{equation}
\|\theta (t)\|_{L^p}=\|\theta_0\|_{L^p}
\end{equation}
for all $t\in [0, T]$.
\end{theorem}
|
1,116,691,497,397 | arxiv | \section{Introduction}
\label{S1}
Ultracold atomic gases have been widely explored from both experimental and
theoretical point of view due to their ability to emulate many effects from
condensed-matter physics and create novel states of quantum matter. Various
results obtained in this area have been reviewed in many publications --
see, in particular, Refs. \cite%
{Bongs04,Jaksch05,Giorgini08,Bloch08,Spielman12,Goldman14,Zhai15,Malomed18}.
Important experimental tools, the application of which opens ways to the
observation of diverse novel phenomena in the quantum gases, are, \textit{inter
alia}, optical-lattice (OL) potentials, the use of the Feshbach resonance
(FR)\ to control the strength of interactions between atoms, and the
implementation of the effective spin-orbit coupling \cite%
{Snoek2011a,Frohlich2011,BoseSOC1,BoseSOC2,FermiSOC}.
The effective spatial dimension of the setting in which quantum gases are
created strongly affects the ensuing physics. The use of confining
potentials makes it possible to reduce the dimension from 3D to 2D and 1D.
In particular, the dimensional reduction of confined Bose gases can be
approximated by means of the variational method \cite%
{Salasnich02a,Salasnich02b,Salasnich2D,Salasnich10}. Recently, similar
approaches for ultracold Fermi gases in confining potentials have been
elaborated in Refs. \cite{Adhikari2006,Adhikari2007,Malomed2009,Diaz12}.
These reductions make it possible to study the complex dynamics and pattern
formation in ultracold gases in 2D and 1D settings. In this context, the
study of dark solitons in ultracold gases was reported in Bose-Einstein
condensates (BECs) \cite{Burger99}, and further developed later \cite%
{Becker08,Weller08}. For dark solitons in Fermi gases, several works have
reported theoretical and experimental results \cite%
{Antezza07,Scott11,Liao11,Yefsah13,Ku16,Syrwid18,Alphen18}. The reduced 1D
equation for Fermi gases was used for studies of interactions between dark
solitons \cite{Diaz12}.
The earliest experimental studies of Bose-Fermi mixtures (BFMs) were
performed with lithium isotopes \cite{Partridge01,Schreck01}, as well as in $%
^{174}$Yb-$^{6}$Li \cite{Hansen11} and $^{87}$Rb-$^{40}$K \cite{Heinze11}
settings. Much interest has been also drawn to heavy-atom mixtures, such as $%
^{87}$Sr-$^{84}$Sr \cite{Tey10}. These isotopes, which are characterized by
a large nuclear spin, have been proposed for the design of prototype
quantum-information processing devices. The use of FRs in the mixtures plays
a major role, as it allows one to control nonlinear interactions between the
species. For the $^{87}$Rb-$^{40}$K mixture, the FR has been observed in
Refs. \cite{Best09,Cumby13}, and a giant FR effect was reported in the $^{85}$%
Rb-$^{6}$Li system \cite{Deh10}. Further, in the $^{6} $Li-$^{133}$Cs
mixture five FRs have been observed \cite{Tung13}, and over 30 resonances
are expected to be available in the $^{23}$Na-$^{40}$K system \cite{Park12}.
Multiple heteronuclear FRs were reported in the triply quantum-degenerate
mixture of bosonic $^{41}$K and two fermionic species, $^{40}$K and $^{6}$Li
\cite{Wu11}. In a recently elaborated framework, the BFM is transformed into
a strongly interacting isotopic mixture immersed into a Fermi sea, with the
help of a wide $s$-wave resonance for the $^{41}$K-$^{40}$K combination.
Many theoretical works have addressed the dynamics of BFMs under various
conditions \cite%
{Lelas09,Watanabe08,Kain11,Mering11,Jun11,Ludwig11,Bertaina13}. To describe
the ground state (GS) of the mixture, the quasi-mean-field theory may be a
useful approach \cite{Adhikari08,Maruyama09,Iskin09,Snoek11,Nishida06}. In
this framework, the use of FRs was studied in $^{23}$Na-$^{6}$Li, $^{87}$Rb-$%
^{40}$K, $^{87}$Rb-$^{6}$Li, $^{3}$He-$^{4}$He, $^{173}$Yb-$^{174}$Yb, and $%
^{87}$Sr-$^{84}$Sr mixtures \cite{Salasnich07a,Gautam11}. Recently,
effective 1D and 2D nonlinear Schr\"{o}dinger equations have been derived
for BFMs in cigar-shaped and disc-shaped configurations \cite{Diaz15}, using
the variational approximation (VA) along the lines previously developed in
Refs. \cite{Salasnich02a,Diaz12}. In addition, dark solitons in BFMs have
been analyzed in Ref. \cite{Tylutki16}. Here, we address, in particular,
dark solitons in the $^{7}$Li-$^{6}$Li BFM, using the effective
low-dimensional equations derived in Ref. \cite{Diaz15}.
The general aim of the present article is to present a brief review of the
spatial reduction for Fermi gases and BFMs, based on the VA. In particular,
we outline the procedure for implementing the 2D and 1D reduction, starting
from the full 3D equations of motions. To test the accuracy of the
approximations, we present a comparison of the results with full 3D
numerical simulations. Using the corresponding effective equations, we
address various dynamical settings, such as dark solitons and their
interactions. In the case of BFMs, we consider the construction of GSs,
varying the interaction strength. Finally, for the 1D situation, we address
the formation of dark solitons in the mixture, and compare the corresponding
1D solution to results of the full numerical simulations, observing good
agreement between them. The presentation is arranged as follows: the Fermi
gases and BFMs are considered, respectively, in Secs. \ref{S2} and \ref{S3}, and
the paper is concluded in Sec. \ref{S4}.
\section{The Fermi Gas}
\label{S2}
We consider a dilute superfluid formed by $N$ fermionic atoms of mass $m_{%
\mathrm{F}}$ and spin $s_{\mathrm{F}}$, loaded into an optical trap at zero
temperature. We apply the local density approximation \cite{Bloch08} to the
description of this setting. The corresponding dynamical equations can be
derived from the action functional
\begin{equation}
\mathcal{S}=\int {dtd{\mathbf{r}}\mathcal{L}}, \label{Eq1}
\end{equation}%
where the Lagrangian density is
\begin{equation}
{\mathcal{L}}=\frac{{i\hbar }}{2\lambda _{1}}\left( {\Psi ^{\ast }\frac{{%
\partial {\Psi }}}{{\partial t}}-{\Psi }\frac{{\partial \Psi ^{\ast }}}{{%
\partial t}}}\right) -\frac{{\hbar ^{2}}}{{2\lambda _{2}{m_{\mathrm{F}}}}}{%
\left\vert {\nabla {\Psi }}\right\vert ^{2}}-{U(\mathbf{r})}{\left\vert {%
\Psi }\right\vert ^{2}}-\frac{3}{5}\beta \frac{{\hbar ^{2}}}{{2{m_{\mathrm{F}%
}}}}{C_{\mathrm{F}}}{\left\vert {\Psi }\right\vert ^{10/3}}-\frac{{{g_{%
\mathrm{F}}}}}{2}{\left\vert {\Psi }\right\vert ^{4}}, \label{Eq2}
\end{equation}%
$\Psi \left( \mathbf{r},t\right) $ being a complex order parameter, whose
norm is equal to the number of particles. Here ${C_{\mathrm{F}}}={\left[ {6{%
\pi ^{2}}/\left( {2{s_{\mathrm{F}}}+1}\right) }\right] ^{2/3}}$ is a
constant that depends on spin $s_{\mathrm{F}}$, ${g_{\mathrm{F}}}=4\pi {%
\hbar ^{2}}({a_{\mathrm{F}}}/{m_{\mathrm{F}}})[2s_{\mathrm{F}}/(2s_{\mathrm{F%
}}+1)]$ with scattering length $a_{\mathrm{F}}$, which determines
interactions of fermions belonging to different spin states
(the interactions which are not forbidden by the Pauli principle) \cite%
{Giorgini08}, and ${U(\mathbf{r})}$ is an external potential applied to the
fermions.
\begin{table}[tbh]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Regime & $\lambda_1$ & $\lambda_2$ & $\beta$ & $s_\mathrm{F}$ \\ \hline
Polarized & 1 & 1 & 1 & 0 \\
BCS & 2 & 4 & 1 & 1/2 \\
Unitary & 2 & 4 & 0.44 & 1/2 \\ \hline
\end{tabular}%
\end{center}
\caption{$\protect\lambda _{1}$, $\protect\lambda _{2}$, $\protect\beta $,
and $s_{\mathrm{F}}$ for three different regimes in the Fermi-gas dynamics
\protect\cite{Manini05,Salasnich09,Ancilotto09,Ancilotto12}.}
\label{TT1}
\end{table}
Parameters $\lambda _{1}$, $\lambda _{2}$, $\beta $, and $s_{\mathrm{F}}$ in
Eq. (\ref{Eq2}) correspond to three different regimes addressed in this
article, which are listed in Table \ref{TT1}. It is relevant
to mention that the spin polarization may affect some parameters, such as
coefficient ${C_{\mathrm{F}}}${\ \cite{Andreev18}}.
Lagrangian density (\ref{Eq2}) gives rise to the following
Euler-Lagrange equation,
\begin{equation}
\frac{i\hbar }{\lambda _{1}}\frac{{\partial {\Psi }}}{{\partial t}}=\left[ -%
\frac{{\hbar ^{2}}}{{2{\lambda _{2}}{m_{\mathrm{F}}}}}{\nabla ^{2}}+{U}+g_{%
\mathrm{F}}{{\left\vert {\Psi }\right\vert }^{2}}\right. \left. +\frac{{%
\hbar ^{2}}}{{2{m_{\mathrm{F}}}}}\beta {C_{\mathrm{F}}}{{\left\vert {\Psi }%
\right\vert }^{4/3}}\right] {\Psi }, \label{Eq3}
\end{equation}%
which is an effective quasi-mean-field equation for the Fermi gas under
consideration; note that it may be rewritten in the form of
hydrodynamic equations \cite{Kim2004,Adhikari2006b}. More details on the
derivation of this equation are given in Appendix A. Below, we focus on the
BCS (Bardeen-Cooper-Schrieffer) setting, referring to atoms of $^{6}$Li with
mass $6$ a.u.
In numerical simulations we use the fourth-order Runge-Kutta
method in time, and the centered second-order finite-difference method for
handling the spatial discretization. In the next two subsections we reduce
the full 3D equation to the corresponding 2D and 1D effective equations,
using the VA proposed in Ref. \cite{Diaz12}.
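To make the scheme concrete, a minimal sketch of a single update for a 1D
version of Eq. (\ref{Eq3}) is given below (in Python, in units with $\hbar
=m_{\mathrm{F}}=1$; this is an illustration written for this review, with
periodic boundaries implied by the stencil, not the production code used for
the reported results):
\begin{verbatim}
import numpy as np

def rhs(psi, dz, lam1, lam2, U, gF, beta, CF):
    # (i*hbar/lam1) dpsi/dt = H psi  =>  dpsi/dt = -1j*lam1*H psi (hbar = 1),
    # with the Laplacian discretized by the centered second-order stencil.
    lap = (np.roll(psi, -1) - 2.0*psi + np.roll(psi, 1)) / dz**2
    H_psi = (-lap / (2.0*lam2)
             + (U + gF*np.abs(psi)**2
                + 0.5*beta*CF*np.abs(psi)**(4.0/3.0)) * psi)
    return -1j * lam1 * H_psi

def rk4_step(psi, dt, *args):
    # Standard fourth-order Runge-Kutta step in time.
    k1 = rhs(psi, *args)
    k2 = rhs(psi + 0.5*dt*k1, *args)
    k3 = rhs(psi + 0.5*dt*k2, *args)
    k4 = rhs(psi + dt*k3, *args)
    return psi + (dt/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)
\end{verbatim}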
\subsection{The two-dimensional reduction}
We derive effective 2D equations, applying the VA to the Fermi gas in the
disk-shaped trap. For this purpose, we consider an external potential
composed of two terms: the parabolic (harmonic-oscillator) one accounting
for the confinement in the $z$ direction, transverse to the disk's plane,
and the in-plane potential, $U_{\mathrm{2D}}$:
\begin{equation}
U\left( {\mathbf{r}}\right) =\frac{1}{2}m_{\mathrm{F}}\omega _{z}^{2}{z^{2}+}%
U_{\mathrm{2D}}\left( {\mathbf{r}_{\bot },t}\right) . \label{Eq4}
\end{equation}%
The initial ansatz assumes, as usual, the factorization of the 3D wave
function into a product of functions of $z$ and $\mathbf{r}_{\bot }$, the
former one being the Gaussian ground state of the harmonic-oscillator
potential \cite{Salasnich02a}:
\begin{equation}
\Psi \left( {{\mathbf{r}},t}\right) =\frac{1}{{{\pi ^{1/4}}}\sqrt{{\xi (%
\mathbf{r}_{\bot },t)}}}{\exp }\left( -\frac{z^{2}}{2(\xi (\mathbf{r}_{\bot
},t))^{2}}\right) \phi \left( {\mathbf{r}_{\bot },t}\right) . \label{Eq5}
\end{equation}%
The Gaussian is subject to the unit normalization, with transverse width $%
\xi $ considered as a variational parameter, while the 2D wave function, $%
\phi $, is normalized to the number of atoms. Therefore, the reduction from
3D to 2D implies that the system of equations should be derived for the pair
of functions $\phi \left( \mathbf{r}_{\bot },t\right) $ and $\xi \left(
\mathbf{r}_{\bot },t\right) $, using the reduced action functional, which is
obtained by integrating the 3D action over the $z$-coordinate:
\begin{equation}
\mathcal{S}{_{\mathrm{2D}}}=\int {dtdxdy{\mathcal{L}}_{\mathrm{2D}}},
\label{Eq6}
\end{equation}%
where the respective Lagrangian density is
\begin{eqnarray}
{\mathcal{L}_{{\mathrm{2D}}}} &=&i\frac{\hbar }{2\lambda _{1}}\left( {{\phi
^{\ast }}{\partial _{t}}\phi -\phi {\partial _{t}}{\phi ^{\ast }}}\right) -%
\frac{{\hbar ^{2}}}{{2\lambda _{2}m_{\mathrm{F}}}}{\left\vert {{\nabla
_{\bot }}\phi }\right\vert ^{2}}-{U_{{\mathrm{2D}}}}{\left\vert {\phi }%
\right\vert ^{2}}-\frac{{\hbar ^{2}}}{{2m_{\mathrm{F}}}}\frac{{3\beta C_{2D}}%
}{{5{\xi ^{2/3}}}}{\left\vert {\phi }\right\vert ^{10/3}}-\frac{g_{\mathrm{F}%
}}{{2{{\left( {2\pi }\right) }^{1/2}}\xi }}{\left\vert {\phi }\right\vert
^{4}} \notag \\
&&-\frac{{\hbar ^{2}}}{{4m_{\mathrm{F}}\lambda _{2}{\xi ^{2}}}}{\left\vert {%
\phi }\right\vert ^{2}}-\frac{1}{4}{{m_{\mathrm{F}}\omega _{z}^{2}{\xi ^{2}}}%
}{\left\vert {\phi }\right\vert ^{2}}, \label{Eq7}
\end{eqnarray}%
$C_{\mathrm{2D}}\equiv {(3/5)^{1/2}}{(6/(2s_{\mathrm{F}}+1))^{2/3}}\pi $,
the last two terms being produced by the reduction to 2D, the penultimate
term corresponding to the spread in the confined dimension. Hence, the
Euler-Lagrange equations, derived by varying the 2D action, which is
generated by Lagrangian (\ref{Eq7}), with respect to $\phi $ and $\xi $ take
the form of
\begin{eqnarray}
i\frac{\hbar }{\lambda _{1}}{\partial _{t}}{\phi } &=&\left[ {-\frac{\hbar
^{2}}{{2{\lambda _{2}}{m_{\mathrm{F}}}}}\nabla _{\bot }^{2}+{U_{2\mathrm{D}}}%
+\frac{g_{\mathrm{F}}}{{\sqrt{2\pi }{\xi }}}{\left\vert {\phi }\right\vert
^{2}}}\right. +\frac{{\hbar ^{2}}}{2m_{\mathrm{F}}}\frac{\beta }{{\xi ^{2/3}}%
}{C_{2\mathrm{D}}}{\left\vert {\phi }\right\vert ^{4/3}}+\frac{{{\hbar ^{2}}}%
}{{4\lambda _{2}{m_{\mathrm{F}}}\xi ^{2}}} \notag \\
&&+\left. {\frac{1}{4}}{m_{\mathrm{F}}}\omega _{z}^{2}\xi ^{2}\right] {\phi }%
, \label{Eq8}
\end{eqnarray}%
\begin{equation}
m_{\mathrm{F}}\omega _{z}^{2}\xi ^{4}-\frac{{2{\hbar ^{2}}}}{5m_{\mathrm{F}}}%
\beta {C_{2\mathrm{D}}}{\left\vert {\phi }\right\vert ^{4/3}}\xi ^{4/3}-%
\frac{g_{\mathrm{F}}}{\sqrt{2\pi }}{\left\vert {\phi }\right\vert ^{2}}{\xi }%
-\frac{{{\hbar ^{2}}}}{\lambda _{2}m_{\mathrm{F}}}=0. \label{Eq9}
\end{equation}%
Algebraic equation (\ref{Eq9}) for $\xi $ cannot be solved analytically,
therefore we used Newton's method to solve it numerically. The necessity
to find $\xi $ at each step of the integration is a numerical complication
of a minimal cost compared to the 3D integration of the underlying equation (%
\ref{Eq3}). Note that a further simplification can be achieved
by assuming in Eq. (\ref{Eq5}) that the Gaussian width is a constant $\xi (%
\mathbf{r}_{\bot },t)=\xi _{0}$. In this case $\xi $, naturally, does not
depend on $\phi $. Then, the solution of Eq. (\ref{Eq9}) with the density
tending to zero can be calculated analytically and it is given by $\xi
_{0}=\lambda _{2}^{-1/4}\sqrt{\hbar /m_{\mathrm{F}}\omega _{z}}$.
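A minimal sketch of this pointwise Newton iteration for Eq. (\ref{Eq9}) (an
illustration in Python, vectorized over the grid, in units with $\hbar =1$;
the variable names are ours) could read:
\begin{verbatim}
import numpy as np

def solve_xi(n, m, wz, beta, C2D, gF, lam2, xi0=1.0, tol=1e-12, itmax=50):
    # Solve, at every grid point, F(xi) = m*wz^2*xi^4
    #   - (2/(5*m))*beta*C2D*n**(2/3)*xi**(4/3)
    #   - gF/sqrt(2*pi)*n*xi - 1/(lam2*m) = 0,  where n = |phi|^2.
    xi = np.full_like(n, xi0, dtype=float)
    for _ in range(itmax):
        F = (m*wz**2*xi**4
             - (2.0/(5.0*m))*beta*C2D*n**(2.0/3.0)*xi**(4.0/3.0)
             - gF/np.sqrt(2.0*np.pi)*n*xi - 1.0/(lam2*m))
        dF = (4.0*m*wz**2*xi**3
              - (8.0/(15.0*m))*beta*C2D*n**(2.0/3.0)*xi**(1.0/3.0)
              - gF/np.sqrt(2.0*np.pi)*n)
        step = F / dF
        xi = xi - step
        if np.max(np.abs(step)) < tol:
            break
    return xi
\end{verbatim}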
\begin{figure}[tbp]
\begin{centering}
\begin{tabular}{lll}
(a) & (b) & \\
\includegraphics[width=0.45\textwidth]{Figure1arev-eps-converted-to.pdf} &
\includegraphics[width=0.45\textwidth]{Figure1brev-eps-converted-to.pdf} &
\end{tabular}%
\caption{(a) The 2D radial density, $n_{\mathrm{2D}}(r)$, as obtained from the full 3D equation and from the 2D reduction derived with the help of the VA. (b) The same comparison, with the VA assuming a constant Gaussian width, $\xi_0 = \sqrt{\hbar/2m_{\mathrm{F}}\omega _{z}}$. Different curves correspond to the indicated values of $a_{s}=(0,-50,-100)~\mathrm{nm}$. The other parameters are $N=1000$, $\omega_{x}=\omega_{y}=1050~\mathrm{Hz}$, $\omega_{z}=21~\mathrm{kHz}$ and $A=0$. Panel (a) is taken from Ref. \cite{Diaz12}.}
\label{Fig1}
\end{centering}
\end{figure}
We consider a 2D potential consisting of the axisymmetric parabolic
potential and the superposition of two triangular OLs:
\begin{equation}
{U_{\mathrm{2D}}}=A\sum\limits_{b=1}^{2}{\sum\limits_{a=1}^{3}{{{\sin }^{2}}%
\left( \frac{2\pi }{\lambda }\;({{\mathbf{k}_{a,b}}\cdot {\mathbf{r}_{\bot }}%
})\right) }}+\frac{1}{2}\omega _{x}^{2}{x^{2}}+\frac{1}{2}\omega _{y}^{2}{%
y^{2}}, \label{Eq10}
\end{equation}%
where $\{\mathbf{k}_{a,1}\}$ and $\{\mathbf{k}_{a,2}\}$ are triplets of
unit vectors of both triangular lattices, which are separated by a
specific angle $\theta $. Here $A$ denotes the lattice's amplitude, and $%
(\omega _{x},\omega _{y})$ are frequencies of the magnetic-optical trapping
potential. In the absence of the OLs ($A=0$), we have verified the accuracy
of the 2D reduction by comparing results generated by this approximation to
those obtained by integrating the underlying 3D equation (\ref{Eq3}). The
respective GS was found by means of the imaginary-time integration based on
the fourth-order Runge-Kutta algorithm with $\Delta t=0.5$ $\mu $s. The
spatial discretization for the simulations was performed with $\Delta x=0.25$
$\mu $m and $\Delta y=0.25$ $\mu $m. The comparison is displayed in
panel (a) of Figure \ref{Fig1}, where the radial-density
profiles are plotted. We can observe excellent agreement between the reduced
2D and full 3D descriptions. This result suggests using Eqs. (\ref%
{Eq8}) and (\ref{Eq9}) for studying 2D patterns. Panel (b)
of Figure \ref{Fig1} shows a comparison of 3D full numerical simulations versus
the VA, assuming a constant width $\xi _{0}$. One can observe that
the latter approximation produces less accurate results, which is at least
ten times worse than the VA with a density-dependent width.
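For reference, potential (\ref{Eq10}) is straightforward to assemble on a
grid. The short Python sketch below is an illustration; the base angles
$(0^{\circ},60^{\circ},120^{\circ})$ of the first triplet of unit vectors,
and the explicit mass factor in the harmonic part, are our assumptions rather
than prescriptions taken from the original works:
\begin{verbatim}
import numpy as np

def triangular_triplet(rot):
    # Three unit vectors of a triangular lattice, rotated by "rot" (radians).
    angles = np.deg2rad([0.0, 60.0, 120.0]) + rot
    return [(np.cos(a), np.sin(a)) for a in angles]

def U2D(X, Y, A, lam, theta_deg, m, wx, wy):
    # Harmonic trap plus the superposition of two triangular lattices
    # rotated with respect to each other by theta_deg.
    U = 0.5*m*(wx**2*X**2 + wy**2*Y**2)
    for rot in (0.0, np.deg2rad(theta_deg)):      # b = 1, 2
        for kx, ky in triangular_triplet(rot):    # a = 1, 2, 3
            U += A*np.sin(2.0*np.pi/lam*(kx*X + ky*Y))**2
    return U

x = np.linspace(-20.0, 20.0, 256)
X, Y = np.meshgrid(x, x)
U = U2D(X, Y, A=1.0, lam=10.0, theta_deg=5.0, m=1.0, wx=0.0, wy=0.0)
\end{verbatim}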
\begin{figure}[tbp]
\begin{centering}
\begin{tabular}{lll}
(a) & (b) & \\
\includegraphics[width=0.3\textwidth]{Figure2arev-eps-converted-to.pdf} & %
\includegraphics[width=0.3\textwidth]{Figure2brev-eps-converted-to.pdf} & \\
& & \\
(c) & (d) & \\
\includegraphics[width=0.3\textwidth]{Figure2crev-eps-converted-to.pdf} & %
\includegraphics[width=0.3\textwidth]{Figure2drev-eps-converted-to.pdf} & \\
& & \\
\end{tabular}%
\caption{Density $n_{\mathrm{2D}}$ as a function of coordinates $x$ and $y$ for four different angles between the triangular OLs, $\theta=(5^{\circ},10^{\circ},15^{\circ},20^{\circ})$. The fixed parameters are $N=1000$, $\omega_x=\omega_y=1050\mathrm{Hz}$, $\omega_z=52.5\mathrm{kHz}$, $A=1.74\times10^{-29}\mathrm{J}$, $\lambda=10\mu\mathrm{m}$ and $a_s=200\mathrm{nm}$.}
\label{Fig2R}
\end{centering}
\end{figure}
Figure \ref{Fig2R} shows the density as a function of coordinates $x$ and $y$
when the OLs are taken into account. We observe that this particular
combination of the OLs (a superlattice) produces a pattern in the form of
the superstructure, with the number of density peaks varying when the angle
between the unit vectors increases. Note that the multitude of different
coexisting robust multi-peak patterns suggests that this setting has
potential for use as a data-storage system.
\subsection{The one-dimensional reduction}
Next, we consider the system confined in one dimension, which implies a
cigar-shaped configuration, elongated in the $z$ direction. In this case,
the trapping potential acting in the transverse plane is taken as the
harmonic oscillator:
\begin{equation}
U\left( {{\mathbf{r}}}\right) =U_{1\mathrm{D}}\left( {z}\right) +\frac{1}{2}%
m_{\mathrm{F}}\omega _{t}^{2}r^{2}, \label{Eq11}
\end{equation}%
where $r^{2}={x^{2}}+{y^{2}}$. It is assumed that the potential in the
transverse direction is much stronger than the axial one. The simplest
option is to adopt a Gaussian shape in the transverse plane, which
represents the ground state of the 2D harmonic oscillator, similar to
what is adopted above in the case of the 2D reduction. As a result, the
variable-separation assumption can be applied, defining the 3D wave function
as \cite{Salasnich02a,Salasnich02b,Salasnich2D}
\begin{equation}
\Psi \left( {{\mathbf{r}},t}\right) =\frac{1}{{{\pi ^{1/2}}{\sigma }\left( {%
z,t}\right) }}{\exp {\ \left( -\frac{r^{2}}{{2{\sigma }{{\left( {z,t}\right)
}^{2}}}}\right) }}f\left( {z,t}\right) , \label{Eq12}
\end{equation}%
where $f$ is normalized to $N$, such that the 1D density is ${n_{1\mathrm{D}}%
}={\left\vert f\right\vert ^{2}}$. Here $\sigma $ is the Gaussian width,
which is a function of $z$ and time. After some algebra, similar to that
performed above, one derives the Euler-Lagrange equations:
\begin{equation}
i\frac{\hbar }{\lambda _{1}}{\partial _{t}}{f}=\left[ {\ -\frac{\hbar ^{2}}{{%
2{\lambda _{2}}{m_{\mathrm{F}}}}}\partial _{z}^{2}+U_{1\mathrm{D}}+\frac{g_{%
\mathrm{F}}}{{2\pi \sigma ^{2}}}{{\left\vert {f}\right\vert }^{2}}}+\frac{%
\hbar ^{2}}{{2{m_{\mathrm{F}}}}}\beta \frac{C_{1\mathrm{D}}}{{\sigma ^{4/3}}}%
{{\left\vert {f}\right\vert }^{4/3}}+\frac{\hbar ^{2}}{2{m_{\mathrm{F}}}%
\lambda _{2}\sigma ^{2}}+\frac{1}{2}{m_{\mathrm{F}}}\omega _{t}^{2}\sigma ^{2}%
\right] {f,} \label{Eq13}
\end{equation}%
\begin{equation}
m_{\mathrm{F}}\omega _{t}^{2}\sigma ^{4}-\frac{2}{5}\frac{\hbar ^{2}}{m_{%
\mathrm{F}}}\beta C_{1\mathrm{D}}|f|^{4/3}\sigma ^{2/3}-\frac{\hbar ^{2}}{%
\lambda _{2}m_{\mathrm{F}}}-\frac{g_{\mathrm{F}}}{{2\pi }}|f|^{2}=0,
\label{Eq14}
\end{equation}%
where $C_{1\mathrm{D}}=(3/5){(6\pi (2{s_{\mathrm{F}}}+1))^{2/3}}$. Similar
to the 2D case, algebraic equation (\ref{Eq14}) is solved using Newton's
method, and here too the quasi-BCS regime is addressed. We set $U_{1\mathrm{D%
}}=0$, $a_{\mathrm{F}}=-5$ nm, and $\omega _{t}=1000$ Hz, these parameters
being in the range of experimental values \cite{Yefsah13}. Since Eqs. (\ref%
{Eq13}) and (\ref{Eq14}) produce results which agree well with the full 3D
simulations \cite{Diaz12}, one can use the effective 1D equations to study
more complex dynamical behavior, such as that of dark solitons \cite%
{Yefsah13,Alphen18}.
\begin{figure}[tbp]
\begin{centering}
\begin{tabular}{lll}
(a) & (b) & \\
\includegraphics[width=0.3\textwidth]{Figure3arev-eps-converted-to.pdf} & %
\includegraphics[width=0.3\textwidth]{Figure3brev-eps-converted-to.pdf} &
\end{tabular}%
\caption{The initial 1D density for one soliton (a) and eighteen dark solitons (b).
In both cases, $\Delta_s=0.8 \mu$m and $n_b=10$ are used. The other fixed parameters
are $a_\mathrm{F}=-5$ nm and $\omega_t=1000$ Hz.}
\label{FIG3R}
\end{centering}
\end{figure}
To generate a dark soliton, it is possible to consider the initial condition
with zero imaginary part, $f_{I}(z,t=0)=0$, while the real part is given by $%
f_{R}(z,t=0)=f_{b}\tanh ((z-z_{s})/\Delta _{s})$, where $f_{b}$ and
$\Delta _{s}$ are the soliton's amplitude and width,
respectively. We have found that values of the squared amplitude and width, $%
n_{b}=10$ particles/$\mu $m and $\Delta _{s}=0.8~\mu $m, respectively, can
be chosen to minimize the background noise. If we consider a set of $N_{s}$
dark solitons, the initial condition for the imaginary part is again zero, $%
f_{I}=0$, while the real part can be cast in the form of
\begin{equation}
f_{R}\left( z,t_{0}\right) =\frac{1}{2}+\frac{1}{2}\sum%
\limits_{j=1}^{N_{s}/2}{\left[ {\left( {-1}\right) }^{1+j}\left( {\tanh
\left( {\frac{{z-{z}}_{j}}{\Delta _{s}}}\right) +1}\right) +{{\left( {-1}%
\right) }^{j}}\left( {\tanh \left( {\frac{{z-{z_{-j}}}}{\Delta _{s}}}\right)
+1}\right) \right] },
\end{equation}%
where the positions of the solitons are $z_{j}$ and $z_{-j}$ on the positive
and negative $z$ half-axes, respectively. Moreover, the widths of the
solitons ($\Delta _{s}$) are considered the same, and the number of
initial solitons, $N_{s}$, is assumed to be even. This initial ansatz was normalized to
secure the correct density of the wave function, $n_{b}=|f_{b}|^{2}$.
Then, the system was simulated with the help of the standard
fourth-order Runge-Kutta method with $\Delta t=0.095$ $\mu $s. The spatial
discretization for the simulations was performed with $\Delta z=0.100$ $\mu $%
m. Figure \ref{FIG3R} shows the shape of the initial conditions for the case
of one and $N_{s}=18$ dark solitons.
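A compact way to prepare such a train numerically is a product of tanh
kinks, each of which flips the sign of the profile at one core position;
this is equivalent in spirit to the sum ansatz written above. The sketch
below (an illustration, not the original code) uses equidistant cores with
spacing $d$, cf. Fig. \ref{FIG5R}:
\begin{verbatim}
import numpy as np

def soliton_train(z, positions, Delta, nb):
    # Real initial profile with a dark soliton at each entry of
    # "positions"; the imaginary part f_I is zero.
    f = np.sqrt(nb)*np.ones_like(z)
    for zj in positions:
        f *= np.tanh((z - zj)/Delta)
    return f + 0j

Ns, d = 18, 4.0                                  # eighteen cores, d = 4 um
positions = (np.arange(Ns) - (Ns - 1)/2.0)*d     # equidistant core positions
z = np.linspace(-60.0, 60.0, 1201)               # dz = 0.1 um, as in the text
f0 = soliton_train(z, positions, Delta=0.8, nb=10.0)
\end{verbatim}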
\begin{figure}[tbp]
\begin{centering}
\begin{tabular}{lll}
(a) & (b) & \\
\includegraphics[width=0.3\textwidth]{Figure4arev-eps-converted-to.pdf} & %
\includegraphics[width=0.3\textwidth]{Figure4brev-eps-converted-to.pdf} & \\
& & \\
\end{tabular}%
\caption{(a) The spatiotemporal diagram for the density, $n_{1D}$, when the initial
core-core separation between the two dark solitons is
$d=4$ $\mu$m. (b) The speed of the solitons at $t=90$ms as a function of $d$. The
other fixed parameters are the same as in Fig. \ref{FIG3R}.}
\label{FIG4R}
\end{centering}
\end{figure}
\begin{figure}[tbp]
\begin{centering}
\begin{tabular}{lll}
(a) & (b) & \\
\includegraphics[width=0.3\textwidth]{Figure5arev-eps-converted-to.pdf} & %
\includegraphics[width=0.3\textwidth]{Figure5brev-eps-converted-to.pdf} & \\
& & \\
(c) & (d) & \\
\includegraphics[width=0.3\textwidth]{Figure5crev-eps-converted-to.pdf} & %
\includegraphics[width=0.3\textwidth]{Figure5drev-eps-converted-to.pdf} & \\
& & \\
\end{tabular}%
\caption{The spatiotemporal diagram for the density, $n_{1D}$ for
different numbers of dark solitons: (a) $N_s=6$, (b) $N_s=10$, (c) $N_s=14$
and (d) $N_s=18$. In all the cases the initial distance between the solitons
is $d=4$ $\mu$m. The other fixed parameters are the same as in Fig. \ref{FIG3R}.}
\label{FIG5R}
\end{centering}
\end{figure}
In the case of two solitons, we have $z_{1}=-z_{-1}=d/2$, where $d$ is the
initial inter-core separation. Frame (a) of Fig. \ref{FIG4R} shows the
spatiotemporal diagram for two solitons at $d=4\mu $m. One clearly observes
that both solitons separate in the course of the evolution. Frame (b) of
Fig. \ref{FIG4R} shows the speed attained after $90$ ms of the evolution as a
function of the initial inter-core separation between the dark
solitons. Naturally, smaller initial separations generate higher speeds. In
fact, at this fixed time the speed follows the law $v_{s}\sim d^{\alpha }$,
with $\alpha =-3.49$. Other features of the two-soliton interaction can be
found in Ref. \cite{Alphen18}.
Figure \ref{FIG5R} shows the spatiotemporal diagrams for the 1D density, $%
n_{1\mathrm{D}}$, for different numbers of dark solitons $N_{s}=(6,10,14,18)$%
. Similar to the case of two solitons, we observe that the solitons interact
repulsively. A measure of the strength of the interaction is provided by the
distance between the central part and the positive-side border of the
soliton gas, $\delta z_{e}=|z_{\mathrm{central}}-z_{\mathrm{border}}|$.
\begin{figure}[tbp]
\begin{centering}
\includegraphics[width=0.4\textwidth]{Figure6rev-eps-converted-to.pdf} %
\caption{The distance between the central part and the edge at $z>0$ of the
dark-soliton gas, $\delta z_e$, as a function of time for different numbers
of the dark solitons, $N_s$. The other fixed parameters are the same as in Fig. \ref{FIG3R}.}
\label{FIG6R}
\end{centering}
\end{figure}
Figure \ref{FIG6R} shows $\delta z_{e}$ as a function of time for different
values of $N_{s}$. We observe that it increases monotonically with time, and
its time derivative (speed) changes as $N_{s}$ increases. Nevertheless, the
speed tends to a limit with the increase of the number of solitons, so that
there is no dramatic difference between $N_{s}=14$ and $N_{s}=18$. This
happens because the interaction between the solitons has an effective range,
as shown in frame (b) of Fig. \ref{FIG4R}, hence the solitons located near
the edges interact more weakly with the central ones.
\begin{figure}[tbp]
\begin{centering}
\begin{tabular}{lll}
(a) & (b) & \\
\includegraphics[width=0.3\textwidth]{Figure7arev-eps-converted-to.pdf} & %
\includegraphics[width=0.3\textwidth]{Figure7brev-eps-converted-to.pdf} & \\
\end{tabular}%
\caption{Speeds of the dark solitons for $N_s=18$.
(a) The distribution of the speed of each soliton at $t=90$ms for different initial distances $d$.
(b) The speed of the dark soliton near the edge at $t=90$ms as a function of $d$.
The other fixed parameters are the same as in Fig. \ref{FIG3R}.}
\label{FIG7R}
\end{centering}
\end{figure}
To analyze the case of a large number of solitons, it is enough to take $%
N_{s}=18$. Frame (a) of Fig. \ref{FIG7R} shows the speed at $t=90$ms of each
soliton as a function of its initial position for different initial
distances $d$. We observe that the central solitons have smaller speeds than
their counterparts placed near the edges, so that the speed is given by $%
v_{s}\simeq \tanh (\gamma _{d}z_{d})$ with $\gamma _{d}=-0.01d+0.077$ in the
range of Fig. \ref{FIG7R}. Frame (b) of Fig. \ref{FIG7R} shows the speed of
the soliton located near the positive edge at $t=90$ms. Similar to the
two-soliton case, the speed decays with the increase of the
initial distance, $v_{s}\sim d^{\alpha }$, with $\alpha =-3.385$.
\begin{figure}[tbp]
\begin{centering}
\begin{tabular}{llll}
(a) & (b) & (c) & \\
\includegraphics[width=0.28\textwidth]{Figure8arev-eps-converted-to.pdf} & %
\includegraphics[width=0.28\textwidth]{Figure8brev-eps-converted-to.pdf} & %
\includegraphics[width=0.28\textwidth]{Figure8crev-eps-converted-to.pdf} & \\
\end{tabular}%
\caption{Spatiotemporal diagrams of the density $n_{1D}$ for $N_s=18$ for
two different initial conditions in the presence of random perturbation $\epsilon$.
Panels (a) and (b) display the results for $\epsilon$ taking values in the ranges
of $[-\epsilon_{\max},\epsilon_{\max}]$, with $\epsilon_{\max}=0.4$ $\mu$m and
$0.8$ $\mu$m, respectively.
(c) $E_{\epsilon}$ normalized to $E_{0}$ as a function of $\epsilon_{\max}$,
i.e., the amplitude of the randomly varying variable.}
\label{FIG8R}
\end{centering}
\end{figure}
Finally, we consider random initial positions of the solitons, with $%
N_{s}=18 $. We define the initial positions as $z_{j,\epsilon
}=z_{j,0}+\epsilon $, where $\epsilon $ is a random fluctuation, and $%
z_{j,0} $ are the soliton positions, with the mean distance between them $%
d=4~\mu $m, like in the symmetric case. Figure \ref{FIG8R} shows the
spatiotemporal diagrams of the 1D density $n_{1\mathrm{D}}$ for two
different random realizations. In particular, we assume that $\epsilon $
takes random values in the ranges $[-\epsilon_{\max},\epsilon_{\max}]=
[-0.4,0.4]~\mu $m and $[-0.8,0.8]~\mu $m, in panels (a) and (b),
respectively. It is observed that the speed of the expansion is higher than
in the absence of the randomness, because the interaction energy generates
higher internal pressure in the gas of solitons. We analyze the influence of
the random-fluctuation magnitude, $\epsilon $, on the dynamics. In
particular, we calculate the sum of the squared velocities at the final
moment of time,
\begin{equation}
E_{\epsilon }=\sum\limits_{j=1}^{N_{s}}v_{j,\epsilon }^{2}.
\end{equation}%
Panel (c) of Fig. \ref{FIG8R} shows $E_{\epsilon }$ normalized to $E_{0}$
(the kinetic energy of the set of dark solitons with equidistant initial
positions) as a function of $\epsilon_{\max}$. We can observe that $%
E_{\epsilon}$ strongly increases with the growth of $%
\epsilon_{\max}$, which naturally means that the gas of solitons expands
faster when the fluctuations are stronger.
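Both ingredients of this diagnostic are elementary; the following fragment
(an illustration, with the soliton velocities $v_{j}$ assumed to be
extracted from core tracking in the simulation) shows the randomized
initialization and the kinetic measure:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def randomized_positions(z0, eps_max):
    # z_{j,eps} = z_{j,0} + eps, eps uniform in [-eps_max, eps_max].
    return z0 + rng.uniform(-eps_max, eps_max, size=z0.shape)

def E_ratio(v_eps, v0):
    # E_eps / E_0, with E = sum_j v_j^2 at the final moment of time.
    return np.sum(v_eps**2)/np.sum(v0**2)
\end{verbatim}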
\section{The Bose - Fermi Mixture}
\label{S3}
In this section we consider a dilute superfluid mixture formed by $N_{%
\mathrm{B}}$ bosonic atoms of mass $m_{\mathrm{B}}$, and $N_{\mathrm{F}}$
fermionic atoms of mass $m_{\mathrm{F}}$ and spin $s_{\mathrm{F}}$. The
atoms interact through the pseudopotential, $\delta (\mathbf{r})$ \cite%
{Bloch08}. We assume that bosons form a BEC, described by the
Gross-Pitaevskii equation \cite{Bloch08}, while the local density
approximation \cite{Bloch08} applies to the description of the weakly
interacting fermionic component. Accordingly, the dynamical
equations can be derived from the functional,
\begin{equation}
\mathcal{S}=\int {dtd{\mathbf{r}}\left( \mathcal{L}_{\mathrm{B}}+\mathcal{L}%
_{\mathrm{F}}+\mathcal{L}_{\mathrm{BF}}\right) }, \label{Eq17}
\end{equation}
where $\mathcal{L}_{B}$ and $\mathcal{L}_{\mathrm{F}}$ are the Lagrangian
densities of the Bose and Fermi components, while $\mathcal{L}_{\mathrm{BF}}$
accounts for the interaction between them \cite{Diaz15}:
\begin{equation}
{\mathcal{L}_{B}}=\frac{{i\hbar }}{2}\left( {\Psi _{\mathrm{B}}^{\ast }\frac{%
{\partial {\Psi _{\mathrm{B}}}}}{{\partial t}}-{\Psi _{\mathrm{B}}}\frac{{%
\partial \Psi _{\mathrm{B}}^{\ast }}}{{\partial t}}}\right) -\frac{{{\hbar
^{2}}}}{{2{m_{\mathrm{B}}}}}{\left\vert {\nabla {\Psi _{\mathrm{B}}}}%
\right\vert ^{2}}-{U_{B}(\mathbf{r})\left\vert {{\Psi _{\mathrm{B}}}}%
\right\vert ^{2}}-\frac{1}{2}{g_{\mathrm{B}}}{\left\vert {{\Psi _{\mathrm{B}}%
}}\right\vert ^{4}},
\end{equation}%
\begin{equation}
{\mathcal{L}_{\mathrm{F}}}=\frac{{i\hbar }}{2\lambda _{1}}\left( {\Psi _{%
\mathrm{F}}^{\ast }\frac{{\partial {\Psi _{\mathrm{F}}}}}{{\partial t}}-{%
\Psi _{\mathrm{F}}}\frac{{\partial \Psi _{\mathrm{F}}^{\ast }}}{{\partial t}}%
}\right) -\frac{{{\hbar ^{2}}}}{{2\lambda _{2}{m_{\mathrm{F}}}}}{\left\vert {%
\nabla {\Psi _{\mathrm{F}}}}\right\vert ^{2}}-{U_{\mathrm{F}}(\mathbf{r})}{%
\left\vert {{\Psi _{\mathrm{F}}}}\right\vert ^{2}}-\frac{1}{2}{{g_{\mathrm{F}%
}}}{\left\vert {{\Psi _{\mathrm{F}}}}\right\vert ^{4}}-\frac{3\beta C_{%
\mathrm{F}}\hbar ^{2}}{10m_{\mathrm{F}}}{\left\vert {{\Psi _{\mathrm{F}}}}%
\right\vert ^{10/3}},
\end{equation}%
\begin{equation}
{\mathcal{L}_{B\mathrm{F}}}=-\frac{1}{2}{g_{\mathrm{BF}}}{\left\vert {{\Psi
_{B}}}\right\vert ^{2}}{\left\vert {{\Psi _{\mathrm{F}}}}\right\vert ^{2}}.
\label{Eq20}
\end{equation}%
Here ${g_{B}}\equiv 4\pi {\hbar ^{2}}{a_{B}}/${$m_{\mathrm{B}}$}, ${g_{%
\mathrm{F}}}\equiv 4\pi {\hbar ^{2}}({a_{\mathrm{F}}}/{m_{\mathrm{F}}})[2s_{%
\mathrm{F}}/(2s_{\mathrm{F}}+1)]$, and ${g_{\mathrm{BF}}}\equiv 4\pi {\hbar
^{2}}${$a_{\mathrm{BF}}$}$/${$m_{\mathrm{BF}}$} are three interaction
parameters of the mixture, with $a_{\mathrm{B}}$, $a_{\mathrm{F}}$ and $a_{%
\mathrm{BF}}$ being the respective scattering lengths; ${m_{\mathrm{BF}}}%
\equiv {m_{B}}{m_{\mathrm{F}}}/({m_{B}}+{m_{\mathrm{F}}})$ is the reduced
mass; and $U_{\mathrm{B}/\mathrm{F}}{(\mathbf{r})}$ are external potentials
acting on bosons/fermions. Complex wave functions $\Psi _{\mathrm{B}/\mathrm{%
F}}\left( \mathbf{r},t\right) $ are normalized to the respective numbers of
particles, $N_{\mathrm{B}/\mathrm{F}}$. The other parameters of the
fermionic Lagrangian density are the same as in Sec. \ref{S2}.
Varying action $\mathcal{S}$ with respect to $\Psi _{\mathrm{B}}^{\ast }$
and to $\Psi _{\mathrm{F}}^{\ast }$, we derive the following system of
nonlinear Schr\"{o}dinger equations for bosons and fermions:
\begin{equation}
i\hbar {\partial _{t}}{\Psi _{B}}=\left[ {-\frac{{{\hbar ^{2}}}}{{2{m_{B}}}}{%
\nabla ^{2}}+{g_{B}}{{\left\vert {{\Psi _{B}}}\right\vert }^{2}}+{g_{\mathrm{%
BF}}}{{\left\vert {{\Psi _{\mathrm{F}}}}\right\vert }^{2}}+{U_{B}}}\right] {%
\Psi _{B},} \label{Eq21}
\end{equation}
\begin{equation}
\frac{{i\hbar }}{{{\lambda _{1}}}}{\partial _{t}}{\Psi _{\mathrm{F}}}=\left[
-\frac{{{\hbar ^{2}}}}{{2{\lambda _{2}}{m_{\mathrm{F}}}}}{\nabla ^{2}}+{{g_{%
\mathrm{F}}}{{\left\vert {{\Psi _{\mathrm{F}}}}\right\vert }^{2}}+{g_{%
\mathrm{BF}}}{{\left\vert {{\Psi _{B}}}\right\vert }^{2}}}+{U_{\mathrm{F}}}+%
\frac{{{\beta C_{\mathrm{F}}\hbar ^{2}}}}{{2{m_{F}}}}{{\left\vert {{\Psi _{%
\mathrm{F}}}}\right\vert }^{4/3}}\right] {\Psi _{\mathrm{F}}}. \label{Eq22}
\end{equation}
We apply the formalism developed below to the ${}^{7}$Li-${}^{6}$Li mixture,
with the same scattering parameter for both species, $a_{\mathrm{B}}=a_{%
\mathrm{F}}=5$ nm. The use of isotopes of the same alkali element is
suggested by the similarity of their electric polarizability, thus implying
similar external potentials induced by an optical trap. Unless specified
otherwise, in what follows below we consider configurations with fully
polarized fermions. Note that the BCS and unitarity regimes involve more
than one spin state of fermions, hence the magnetic trap will split the
respective spin energy levels. For this reason, we assume the presence of
the optical trap, which supports equal energy levels for all the spin
states, making it possible to discriminate different regimes of the
interaction in the BFM. In the BCS and unitarity regimes, we assume balanced
populations of the two spin components.
Our analysis is first presented for the GS and dynamics of perturbations
around it. In particular, for the GS we focus on determining the spatial
correlation $C_{s}$ between the spatial particle densities in both species,
defined as
\begin{equation}
{C_{s}}\left( {{\bar{n}_{B}},{\bar{n}_{\mathrm{F}}}}\right) =\frac{{%
\left\langle {{\bar{n}_{B}}{\bar{n}_{\mathrm{F}}}}\right\rangle }}{\sqrt{%
\left\langle {\bar{n}_{B}^{2}}\right\rangle \left\langle {\bar{n}_{\mathrm{F}%
}^{2}}\right\rangle }}, \label{Eq23}
\end{equation}
where ${{\bar{n}}_{\mathrm{B}/\mathrm{F}}}={n_{\mathrm{B}/\mathrm{F}}}%
-\left\langle {{n_{\mathrm{B}/\mathrm{F}}}}\right\rangle $, $\left\langle
{}\right\rangle $ standing for the spatial average. For dynamical
perturbations around the GS, a spatiotemporal correlation, which is defined
by replacing the spatial average with the spatiotemporal average, is known
as the Pearson coefficient $C_{s-t}$ \cite{Bragard04}. We remark that when $%
C_s=1$ and $C_s=-1$ the mixture is fully synchronized and anti-synchronized,
respectively, whereas the mixture is not synchronized at $C_s=0$.
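In practice, $C_{s}$ is a normalized covariance of the two density profiles
sampled on a common grid; a short Python sketch (ours, for illustration)
reads:
\begin{verbatim}
import numpy as np

def spatial_correlation(nB, nF):
    # Correlate the fluctuations of the two densities around
    # their spatial means (normalized covariance).
    dB = nB - nB.mean()
    dF = nF - nF.mean()
    return (dB*dF).mean()/np.sqrt((dB**2).mean()*(dF**2).mean())
\end{verbatim}
The spatiotemporal version ($C_{s-t}$) is obtained by applying the same
formula to arrays that also include the time dimension.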
While numerical integration of this system in the 3D form is very heavy
computationally, the effective dimension may be reduced to 1D or 2D when the
system is tightly confined by a trapping potential. To this end, the VA is
employed, making use, as above, of the factorization of the 3D wave
function, which includes the Gaussian ansatz in the tightly confined
transverse directions. As mentioned above too, the factorization has been
widely used for Bose and Fermi systems separately, as shown in Refs. \cite%
{Salasnich02a} and \cite{Diaz12}, respectively. In the next two subsections
we reduce the full 3D system to the corresponding 2D and 1D effective
systems, using the VA proposed in Ref. \cite{Diaz15}.
\subsection{The two-dimensional reduction}
Similar to the case of the pure Fermi gas, we derive 2D equations for the
disc-shaped configuration. Accordingly, the structure of the confinement
potential is taken as
\begin{equation}
{U_{\mathrm{B}/\mathrm{F}}}\left( {\mathbf{r}}\right) =\frac{1}{2}{m_{%
\mathrm{B}/\mathrm{F}}}\omega _{z,\mathrm{B}/\mathrm{F}}^{2}{z^{2}}+{U_{2%
\mathrm{D},\mathrm{B}/\mathrm{F}}}\left( \mathbf{r}_{\bot }\right) ,
\label{Eq24}
\end{equation}
where the second term corresponds to the strong harmonic-oscillator trap
acting along the $z$ direction. The corresponding factorized ansatz is
adopted as
\begin{equation}
\Psi _{\mathrm{B}/\mathrm{F}}\left( {{\mathbf{r}},t}\right) =\frac{1}{{{\pi
^{1/4}}\sqrt{\xi _{\mathrm{B}/\mathrm{F}}(\mathbf{r}_{\bot },t)}}}{\exp }%
\left( -\frac{{{z^{2}}}}{{2\left( \xi _{\mathrm{B}/\mathrm{F}}(\mathbf{r}%
_{\bot },t)\right) ^{2}}}\right) \phi _{\mathrm{B}/\mathrm{F}}(\mathbf{r}%
_{\bot },t) \label{Eq25}
\end{equation}
where $\phi _{\mathrm{B}/\mathrm{F}}$ is normalized to $N_{\mathrm{B}/%
\mathrm{F}}$, and ${\xi _{\mathrm{B}/\mathrm{F}}}\left( {x,y,t}\right) $ are
widths of the gas in the confined direction. Substituting the factorized
ansatz (\ref{Eq25}) in action (\ref{Eq17}) and integrating over $z$, we
arrive at the following expression for the effective 2D action:
\begin{equation}
\mathcal{S}=\int {dtdxdy\left( \mathcal{L}_{2D\mathrm{,B}}+\mathcal{L}_{2%
\mathrm{D},\mathrm{F}}+\mathcal{L}_{2\mathrm{D,BF}}\right) }, \label{Eq26}
\end{equation}
where
\begin{equation}
\mathcal{L}_{2\mathrm{D,B}}=i\frac{\hbar }{2}\left( {\phi _{\mathrm{B}%
}^{\ast }{\partial _{t}}{\phi _{\mathrm{B}}}-{\phi _{\mathrm{B}}}{\partial
_{t}}\phi _{\mathrm{B}}^{\ast }}\right) -{U_{2D\mathrm{,B}}}{n_{2\mathrm{D,B}%
}}-{e_{2\mathrm{D,B}}},
\end{equation}
\begin{equation}
{\mathcal{L}_{2\mathrm{D},\mathrm{F}}}=i\frac{\hbar }{{2{\lambda _{1}}}}%
\left( {\phi _{\mathrm{F}}^{\ast }{\partial _{t}}{\phi _{\mathrm{F}}}-{\phi
_{\mathrm{F}}}{\partial _{t}}\phi _{\mathrm{F}}^{\ast }}\right) -{U_{2%
\mathrm{D},\mathrm{F}}}{n_{2\mathrm{D},\mathrm{F}}}-{e_{2\mathrm{D},\mathrm{F%
}}},
\end{equation}%
\begin{equation}
{\mathcal{L}_{2\mathrm{D,BF}}}=-\frac{1}{{{\pi ^{1/2}}}}\frac{{{g_{B\mathrm{F%
}}}}}{\sqrt{\xi _{\mathrm{B}}^{2}+\xi _{\mathrm{F}}^{2}}}{n_{2\mathrm{D},%
\mathrm{B}}}{n_{2\mathrm{D},\mathrm{F}}},
\end{equation}%
so that $n_{2\mathrm{D,B/F}}\equiv \left\vert {\phi _{\mathrm{B}/\mathrm{F}}}%
\left( {x,y}\right) \right\vert ^{2}$ are the 2D particle densities of the
boson and fermion species, and ${e_{2\mathrm{D,B}}}$ and ${e_{2\mathrm{D},%
\mathrm{F}}}$ are their energy densities:
\begin{equation}
{e_{2\mathrm{D,B}}}=\frac{{{\hbar ^{2}}}}{{2{m_{\mathrm{B}}}}}{\left\vert {{%
\nabla _{\bot }}{\phi _{\mathrm{B}}}}\right\vert ^{2}}+\left[ {\frac{{{g_{%
\mathrm{B}}}}}{{{\ \sqrt{8\pi }}{\xi _{\mathrm{B}}}}}{n}_{2\mathrm{D,B}}+%
\frac{{{\hbar ^{2}}}}{{4{m_{\mathrm{B}}}\xi _{\mathrm{B}}^{2}}}}\right.
+\left. {\frac{1}{4}{m_{\mathrm{B}}}\omega _{z,\mathrm{B}}^{2}\xi _{\mathrm{B%
}}^{2}}\right] {n_{2\mathrm{D},\mathrm{B}}},
\end{equation}
\begin{eqnarray}
{e_{2\mathrm{D},\mathrm{F}}} &=&\frac{{{\hbar ^{2}}}}{{2{\lambda _{2}}{m_{%
\mathrm{F}}}}}{\left\vert {{\nabla _{\bot }}{\phi _{\mathrm{F}}}}\right\vert
^{2}}+\left[ {\frac{{{g_{\mathrm{F}}}}}{{{\ \sqrt{8\pi }}{\xi _{\mathrm{F}}}}%
}{n_{2\mathrm{D},\mathrm{F}}}+\frac{{{\hbar ^{2}}}}{{4\lambda _{2}{m_{%
\mathrm{F}}}\xi _{\mathrm{F}}^{2}}}+\frac{1}{4}{m_{\mathrm{F}}}\omega _{z,%
\mathrm{F}}^{2}\xi _{\mathrm{F}}^{2}}\right. \notag \\
&&+\left. {\frac{{{\hbar ^{2}}}}{{2{m_{\mathrm{F}}}}}\beta \frac{3}{{5\xi _{%
\mathrm{F}}^{2/3}}}{C_{2\mathrm{D},\mathrm{F}}}n_{2\mathrm{D},\mathrm{F}%
}^{2/3}}\right] {n_{2\mathrm{D},\mathrm{F}}},
\end{eqnarray}
with $C_{2\mathrm{D},\mathrm{F}}\equiv {(3/5)^{1/2}}{(6/(2s_{\mathrm{F}%
}+1))^{2/3}}\pi $. The field equations for the 2D system are obtained by the
variation of the action $S$ given by Eq. (\ref{Eq26}) with respect to
variables $\phi _{\mathrm{B}}$ and $\phi _{\mathrm{F}}$:
\begin{eqnarray}
i\hbar {\partial _{t}}{\phi _{\mathrm{B}}} &=&\left[ {-\frac{{{\hbar ^{2}}}}{%
{2{m_{B}}}}\nabla _{\bot }^{2}+{U_{2\mathrm{D,B}}}+\frac{1}{{{\pi ^{1/2}}}}%
\frac{{{g_{\mathrm{BF}}}}}{\sqrt{\xi _{\mathrm{B}}^{2}+\xi _{\mathrm{F}}^{2}}%
}{n_{2\mathrm{D},\mathrm{F}}}}+\frac{{{g_{\mathrm{B}}}}}{{\sqrt{2\pi }{\xi _{%
\mathrm{B}}}}}{{\left\vert {{\phi _{\mathrm{B}}}}\right\vert }^{2}}\right.
\notag \\
&&+\left. \frac{{{\hbar ^{2}}}}{{4{m_{\mathrm{B}}}\xi _{\mathrm{B}}^{2}}}+%
\frac{1}{4}{m_{\mathrm{B}}}\omega _{z,\mathrm{B}}^{2}\xi _{\mathrm{B}}^{2}%
\right] {\phi _{\mathrm{B}},} \label{Eq32}
\end{eqnarray}%
\begin{eqnarray}
i\frac{\hbar }{{{\lambda _{1}}}}{\partial _{t}}{\phi _{\mathrm{F}}} &=&\left[
{-\frac{{{\hbar ^{2}}}}{{2{\lambda _{2}}{m_{\mathrm{F}}}}}\nabla _{\bot
}^{2}+{U_{2D,\mathrm{F}}}+\frac{1}{{{\pi ^{1/2}}}}\frac{{{g_{\mathrm{BF}}}}}{%
\sqrt{\xi _{\mathrm{B}}^{2}+\xi _{\mathrm{F}}^{2}}}{n_{2\mathrm{D,B}}}}+%
\frac{{{g_{\mathrm{F}}}}}{{\sqrt{2\pi }{\xi _{\mathrm{F}}}}}{\left\vert {{%
\phi _{\mathrm{F}}}}\right\vert ^{2}}\right. \notag \\
&&+\left. \frac{{{\hbar ^{2}}}}{{2{m_{\mathrm{F}}}}}\beta \frac{1}{{\xi _{%
\mathrm{F}}^{2/3}}}{C_{2\mathrm{D},\mathrm{F}}}{\left\vert {{\phi _{\mathrm{F%
}}}}\right\vert ^{4/3}}+{\frac{{{\hbar ^{2}}}}{{4\lambda _{2}{m_{\mathrm{F}}}%
\xi _{\mathrm{F}}^{2}}}+\frac{1}{4}{m_{\mathrm{F}}}\omega _{z,\mathrm{F}%
}^{2}\xi _{\mathrm{F}}^{2}}\right] {\phi _{\mathrm{F}}.} \label{Eq33}
\end{eqnarray}
Relations between $\xi _{\mathrm{B}/\mathrm{F}}$ and $\phi _{\mathrm{B}/%
\mathrm{F}}$ are produced by the Euler-Lagrange equations associated with $\xi
_{\mathrm{B}/\mathrm{F}}$:
\begin{equation}
{\kappa _{I,B}}\xi _{\mathrm{B}}^{4}-\frac{{{g_{B}}}}{\sqrt{2\pi }}{n_{2%
\mathrm{D,B}}}{\xi _{\mathrm{B}}}-\frac{{{\hbar ^{2}}}}{{{m_{\mathrm{B}}}}}%
=0, \label{Eq34}
\end{equation}%
\begin{equation}
{\kappa _{I,\mathrm{F}}}\xi _{\mathrm{F}}^{4}-\frac{{2{\hbar ^{2}}}}{{5{m_{%
\mathrm{F}}}}}\beta {C_{2\mathrm{D},\mathrm{F}}}n_{2\mathrm{D},\mathrm{F}%
}^{2/3}\xi _{\mathrm{F}}^{4/3}-\frac{g_{\mathrm{F}}}{\sqrt{2\pi }}{n_{2%
\mathrm{D},\mathrm{F}}}{\xi _{\mathrm{F}}}-\frac{{{\hbar ^{2}}}}{\lambda
_{2}m_{\mathrm{F}}}=0, \label{Eq35}
\end{equation}%
where ${\kappa _{I,\mathrm{B}/\mathrm{F}}}\equiv {m_{\mathrm{B}/\mathrm{F}}}\omega _{z,\mathrm{B}/\mathrm{F}}^{2}+2{g_{\mathrm{BF}}}{n_{2\mathrm{D},\mathrm{F}/\mathrm{B}}}/[{{\pi ^{1/2}}{({\xi _{\mathrm{B}}^{2}+\xi _{\mathrm{F}}^{2}})}^{3/2}}]$. Thus, Eqs. (\ref{Eq32})-(\ref%
{Eq35}) constitute a system of four 2D coupled equations produced by the
reduction of the underlying 3D system (\ref{Eq21}). Note also
that when $g_{BF}=0$, the system is decoupled and Eq. (\ref{Eq32})
corresponds to the dimensional reduction of the Gross-Pitaevskii equation.
Equations (\ref{Eq34}) and (\ref{Eq35}) for $\xi _{\mathrm{B}/\mathrm{F}}$
can be solved numerically by dint of Newton's method. The basic external
potential is taken as the harmonic-oscillator one: $U_{2\mathrm{D},\mathrm{B}%
/\mathrm{F}}=m_{\mathrm{B}/\mathrm{F}}\omega _{x,\mathrm{B}/\mathrm{F}}^{2}{%
x^{2}}/2+m_{\mathrm{B}/\mathrm{F}}\omega _{y,\mathrm{B}/\mathrm{F}}^{2}{y^{2}%
}/2$. The simulations were based on the fourth-order Runge-Kutta algorithm
with $\Delta t=4.77$ $\mu $s. The spatial discretization was performed with
$\Delta x=1$ $\mu $m, $\Delta y=1$ $\mu $m and $\Delta z=0.05$ $\mu $m. The
GS was found by means of the imaginary-time integration. We here focus on
the case when the number of bosons is much greater than the number of
fermions, \textit{viz}., $N_{\mathrm{B}}=5\times 10^{4}$ and $N_{\mathrm{F}%
}=2.5\times 10^{3}$.
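Schematically, the imaginary-time method replaces $t\rightarrow -i\tau $, so
that the propagation damps all modes except the lowest-energy one, and the
wave function is re-normalized after every step. A minimal sketch (with a
generic right-hand side \texttt{minus\_H} standing for the reduced
equations; an illustration only) is:
\begin{verbatim}
import numpy as np

def imaginary_time_gs(phi, minus_H, dtau, n_steps, N, dV):
    # minus_H(phi) must return -H[phi]*phi; RK4 in imaginary time,
    # with the norm restored to N atoms after each step.
    for _ in range(n_steps):
        k1 = minus_H(phi)
        k2 = minus_H(phi + 0.5*dtau*k1)
        k3 = minus_H(phi + 0.5*dtau*k2)
        k4 = minus_H(phi + dtau*k3)
        phi = phi + (dtau/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)
        norm = np.sum(np.abs(phi)**2)*dV
        phi = phi*np.sqrt(N/norm)
    return phi
\end{verbatim}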
\begin{figure}[tbp]
\centering
\resizebox{0.8\textwidth}{!}{
\includegraphics{Figure9arev-eps-converted-to.pdf}
\includegraphics{Figure9brev-eps-converted-to.pdf}
}
\resizebox{0.8\textwidth}{!}{
\includegraphics{Figure9crev-eps-converted-to.pdf}
\includegraphics{Figure9drev-eps-converted-to.pdf}
}
\caption{The radial profile of the 2D particle density, and the respective
width for different values of interaction strength $a_{\mathrm{BF}}$. (a) $%
n_{2\mathrm{D},\mathrm{B}}$, (b) $n_{2\mathrm{D,F}}$, (c) $\protect\xi _{%
\mathrm{B}}$, and (d) $\protect\xi _{\mathrm{F}}$. The parameters are $N_{%
\mathrm{B}}=5\times 10^{4}$, $N_{\mathrm{F}}=2.5\times 10^{3}$, $a_{\mathrm{%
B/F}}=5$ nm, $\protect\omega _{z,\mathrm{B/F}}=1000$ Hz, and $\protect\omega %
_{x,\mathrm{B/F}}=\protect\omega _{y,\mathrm{B/F}}=30$ Hz. The inset in
panel (a) shows the difference between the VA and full 3D simulations, by
means of $\Delta n_{\mathrm{2D}}\equiv \bar{n}_{\mathrm{2D}}-n_{\mathrm{2D}}$%
. This figure is taken from Ref. \protect\cite{Diaz15}.}
\label{FIG9R}
\end{figure}
Frames (a) and (b) of Fig. \ref{FIG9R} show the radial profile of both 2D
bosonic and fermionic densities, $n_{2\mathrm{D},\mathrm{B}/\mathrm{F}}$,
respectively. The panels for the bosonic and fermionic components are the
left and right ones, respectively. Each density has been computed using the
VA and the full 3D system. To obtain the 2D profile from the 3D simulations,
Eqs. (\ref{Eq21}) and (\ref{Eq22}) were solved, and the 3D density was
integrated along the $z$ axis, $\bar{n}_{\mathrm{2D,B/F}}=\int_{-\infty
}^{+\infty }\left\vert \Psi _{\mathrm{B/F}}(\mathbf{r})\right\vert ^{2}dz$.
We infer that the repulsive mixture concentrates the bosons at the center,
while the attractive mixture concentrates both species at the center. Panels
(c) and (d) of Figure \ref{FIG9R} show the radial dependence of the width
for both bosonic and fermionic component, respectively. We observe that only
the width of the fermionic density profile varies significantly with the
change of the scattering length of the inter-species interaction, which is a
consequence of a greater number of bosons in comparison with fermions. It is
clearly seen that fermions are more strongly confined when the interaction is
attractive, and their spatial distribution significantly expands when the
interaction is repulsive. Similar results have been reported in Refs. \cite%
{Adhikari08,Maruyama09,Salasnich07a}.
\begin{figure}[tbp]
\centering
\resizebox{0.5\textwidth}{!}{
\includegraphics{Figure10rev-eps-converted-to.pdf}
}
\caption{The 2D overall percentage error of the VA versus the full 3D
system, as a function of $a_{\mathrm{BF}}$ for both species. Parameters are
the same as in Fig.~\protect\ref{FIG9R}. This figure is taken from Ref.
\protect\cite{Diaz15}.}
\label{FIG10R}
\end{figure}
Now, to compare the results obtained from the VA with those produced by the
3D simulations, we note that both profiles are practically identical, except
for the repulsive case in which a discrepancy is observed. The inset in
panel (a) of Fig. \ref{FIG9R} shows that the difference between the two
results is nearly three orders of magnitude smaller than the
density itself. We define the overall percentage error of the VA as $E_{\%,%
\mathrm{2D}}=\int \int \left\vert \bar{n}_{\mathrm{2D}}-n_{\mathrm{2D}%
}\right\vert dxdy$ (for both species). Figure~\ref{FIG10R} shows the error
for both species as a function of interspecies scattering parameter, $a_{%
\mathrm{BF}}$. For bosons it takes values $\sim 0.2\%$, and does not change
much, as shown in the inset to panel (a) of Fig. \ref{FIG9R}. For fermions
the error is greater than for bosons throughout the observed range, but it
is quite small for the attractive mixture. Note that the error increases for
the repulsive mixture, but remains lower than $2\%$. Thus we conclude that
the 2D approximation is very accurate.
\begin{figure}[tbp]
\centering
\resizebox{0.5\textwidth}{!}{\includegraphics{Figure11rev-eps-converted-to.pdf}
}
\caption{(Color online) Spatial correlation $C_{s}$ of the GS of the 2D
mixture as a function of $a_{\mathrm{BF}}$, for three fermionic regimes:
polarized, BCS, and unitarity. The fixed parameters are: $N_{\mathrm{B}%
}=5\times 10^{4}$, $N_{\mathrm{F}}=2.5\times 10^{3}$, $a_{\mathrm{B/F}}=5$
nm, $\protect\omega _{x,\mathrm{B/F}}=\protect\omega _{y,\mathrm{B/F}}=30$
Hz and $\protect\omega _{z,\mathrm{B/F}}=1000$ Hz. This figure is taken from
Ref. \protect\cite{Diaz15}.}
\label{FIG11R}
\end{figure}
Finally, we measure the correlations of the BFM states. To this end, the
spatial correlation, $C_{s}$, in the GS was calculated using the definition
given in Eq. (\ref{Eq23}). Figure \ref{FIG11R} presents the analysis of the
GS synchronization of the mixture as a function of $a_{\mathrm{BF}}$, where
three possible regimes are considered for the fermions: fully polarized,
BCS, and unitarity. Parameters of the Lagrangian density for each fermionic
regime are given in Table \ref{TT1}. When the interaction is attractive,
there is not a large discrepancy between the correlation curves. In fact,
for $a_{\mathrm{BF}} \in (-25, -15)$ nm we find $C_s \gtrsim 0.9$, and
therefore the GS states are synchronized. In the unitarity regime, it is
again observed that the correlation reaches a maximum close to $1$ at $a_{%
\mathrm{BF}}\approx -10$ nm, dropping to negative values when the mixture is
strongly repulsive. Also, we observe that the three curves demonstrate
stronger demixing when $a_{\mathrm{BF}}$ changes from negative to positive
values, and for $a_{\mathrm{BF}} \gtrsim 15$ nm the value
of $C_s$ tends to zero, implying that the GS states are not synchronized.
\subsection{The one-dimensional reduction}
The 1D confinement means, as above, a cigar-shaped configuration elongated
in the direction of $z$. In this case, the corresponding trapping
potentials are written as
\begin{equation}
U_{\mathrm{B}/\mathrm{F}}\left( {{\mathbf{r}},t}\right) =\frac{1}{2}m_{%
\mathrm{B}/\mathrm{F}}\omega _{t,{\mathrm{B}/\mathrm{F}}}^{2}r^{2}+U_{1%
\mathrm{D},{\mathrm{B}/\mathrm{F}}}\left( {z,t}\right) , \label{Eq36}
\end{equation}%
where $U_{1\mathrm{D},{\mathrm{B}/\mathrm{F}}}\left( {z,t}\right) $ are the
axial potentials. Assuming that the transverse trapping potential is strong
enough, the dimensional reduction is carried out by means of the usual
factorized ansatz for the wave functions,
\begin{equation}
\Psi _{\mathrm{B}/\mathrm{F}}\left( {{\mathbf{r}},t}\right) =\frac{1}{{{\pi
^{1/2}}{\sigma _{\mathrm{B}/\mathrm{F}}}\left( {z,t}\right) }}{\exp }\left( -%
\frac{r^{2}}{{2{\left( \sigma _{\mathrm{B}/\mathrm{F}}\left( {z,t}\right)
\right) ^{2}}}}\right) f_{\mathrm{B}/\mathrm{F}}\left( {z,t}\right) ,
\label{Eq37}
\end{equation}%
where $\sigma _{\mathrm{B}/\mathrm{F}}$ are the transverse GS Gaussian
widths. Here, the axial functions, $f_{\mathrm{B}/\mathrm{F}}$, are
normalized to $N_{\mathrm{B}/\mathrm{F}}$. For both species, we define the
axial density as ${n_{1D,{\mathrm{B}/\mathrm{F}}}}\equiv {\left\vert f_{%
\mathrm{B}/\mathrm{F}}\right\vert ^{2}}$. By means of a procedure similar to
the one outlined above for the 2D reduction, we derive the Euler-Lagrange
equations for the BFM in the 1D approximation:
\begin{eqnarray}
i\hbar {\partial _{t}}{f_{\mathrm{B}}} &=&\left[ {-\frac{{{\hbar ^{2}}}}{{2{%
m_{\mathrm{B}}}}}\partial _{Z}^{2}+{U{_{1\mathrm{D,B}}}}+\frac{1}{\pi }\frac{%
{{g_{\mathrm{BF}}}}}{{\sigma _{\mathrm{B}}^{2}+\sigma _{\mathrm{F}}^{2}}}{{%
\left\vert {{f_{\mathrm{F}}}}\right\vert }^{2}}}+{\frac{{{g_{\mathrm{B}}}}}{{%
2\pi \sigma _{\mathrm{B}}^{2}}}{{\left\vert {{f_{\mathrm{B}}}}\right\vert }%
^{2}}}\right. \notag \\
&&+\left. \frac{{{\hbar ^{2}}}}{{2{m_{B}}\sigma _{\mathrm{B}}^{2}}}+\frac{1}{%
2}{m_{B}}\omega _{t,\mathrm{B}}^{2}\sigma _{\mathrm{B}}^{2}\right] {f_{%
\mathrm{B}},} \label{Eq38}
\end{eqnarray}%
\begin{eqnarray}
i\frac{\hbar }{{{\lambda _{1}}}}{\partial _{t}}{f_{\mathrm{F}}} &=&\left[ {-%
\frac{{{\hbar ^{2}}}}{{2{\lambda _{2}}{m_{\mathrm{F}}}}}\partial _{Z}^{2}+{%
U_{1\mathrm{D},\mathrm{F}}}+\frac{1}{\pi }\frac{{{g_{\mathrm{BF}}}}}{{\sigma _{%
\mathrm{B}}^{2}+\sigma _{\mathrm{F}}^{2}}}{{\left\vert {{f_{B}}}\right\vert }%
^{2}}}+\frac{{{g_{\mathrm{F}}}}}{{2\pi \sigma _{\mathrm{F}}^{2}}}{{%
\left\vert {{f_{\mathrm{F}}}}\right\vert }^{2}}\right. \notag \\
&&+\left. \frac{{{\hbar ^{2}}\beta }}{{2{m_{\mathrm{F}}}}}\frac{{{C_{\mathrm{F}%
,1\mathrm{D}}}}}{{\sigma _{\mathrm{F}}^{4/3}}}{{\left\vert {{f_{\mathrm{F}}}}%
\right\vert }^{4/3}}+{\frac{{{\hbar ^{2}}}}{{2{m_{\mathrm{F}}}\lambda
_{2}\sigma _{\mathrm{F}}^{2}}}+\frac{1}{2}{m_{\mathrm{F}}}\omega _{t,\mathrm{%
F}}^{2}\sigma _{\mathrm{F}}^{2}}\right] {f_{\mathrm{F}}}. \label{Eq39}
\end{eqnarray}%
In addition, the algebraic relationships between $\sigma _{\mathrm{B}/%
\mathrm{F}}$ and $f_{\mathrm{B}/\mathrm{F}}$ are:
\begin{equation}
{\chi _{I,\mathrm{B}}}\sigma _{\mathrm{B}}^{4}-\frac{{{\hbar ^{2}}}}{{{m_{%
\mathrm{B}}}}}-\frac{{{g_{B}}}}{{2\pi }}{n_{1\mathrm{D,B}}}=0, \label{Eq40}
\end{equation}%
\begin{equation}
{\chi _{I,\mathrm{F}}}\sigma _{\mathrm{F}}^{4}-\frac{2}{5}\frac{{{\hbar ^{2}}}}{{{m_{%
\mathrm{F}}}}}\beta {C_{\mathrm{F},1\mathrm{D}}}n_{1\mathrm{D},\mathrm{F}%
}^{2/3}\sigma _{\mathrm{F}}^{2/3}-\frac{\hbar ^{2}}{\lambda _{2}m_{\mathrm{F}%
}}-\frac{g_{\mathrm{F}}}{{2\pi }}{n_{1\mathrm{D},\mathrm{F}}}=0,
\label{Eq41}
\end{equation}%
where $\chi _{I,\mathrm{B}/\mathrm{F}}\equiv m_{\mathrm{B}/\mathrm{F}}\omega
_{t,\mathrm{B}/\mathrm{F}}^{2}-2g_{\mathrm{BF}}n_{1\mathrm{D},\mathrm{F}/%
\mathrm{B}}/[\pi (\sigma _{\mathrm{B}}^{2}+\sigma _{\mathrm{F}}^{2})^{2}]$.
Thus, Eqs. (\ref{Eq38})-(\ref{Eq41}) constitute a system of four 1D coupled
equations produced by the reduction of the underlying 3D system (\ref{Eq21})
- (\ref{Eq22}). Simulations of the system were performed with mesh
parameters $\Delta t=0.5$ $\mu $s and $\Delta z=0.25$ $\mu $m. The external
potential is chosen here as the harmonic-oscillator one: $U_{1\mathrm{D},%
\mathrm{B/F}}=m_{\mathrm{B}/\mathrm{F}}\omega _{z,{\mathrm{B}/\mathrm{F}}%
}^{2}z^{2}/2$.
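Since $\chi _{I,\mathrm{B}}$ and $\chi _{I,\mathrm{F}}$ couple the two
widths, Eqs. (\ref{Eq40}) and (\ref{Eq41}) must be solved as a pair at every
grid point. A sketch using a generic two-variable root solver (in units with
$\hbar =1$; an illustration, not the original code) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def residuals(sig, nB, nF, mB, mF, wtB, wtF, gB, gF, gBF, beta, C1D, lam2):
    # Left-hand sides of the two coupled width equations at one grid point.
    sB, sF = sig
    denom = np.pi*(sB**2 + sF**2)**2
    chiB = mB*wtB**2 - 2.0*gBF*nF/denom
    chiF = mF*wtF**2 - 2.0*gBF*nB/denom
    eqB = chiB*sB**4 - 1.0/mB - gB/(2.0*np.pi)*nB
    eqF = (chiF*sF**4
           - 0.4/mF*beta*C1D*nF**(2.0/3.0)*sF**(2.0/3.0)
           - 1.0/(lam2*mF) - gF/(2.0*np.pi)*nF)
    return [eqB, eqF]

# Example call at one grid point (all couplings passed explicitly):
# sB, sF = fsolve(residuals, x0=[1.0, 1.0],
#                 args=(nB, nF, mB, mF, wtB, wtF, gB, gF, gBF, beta,
#                       C1D, lam2))
\end{verbatim}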
\begin{figure}[tbp]
\centering
\resizebox{0.6\textwidth}{!}{
\includegraphics{FigureBFM1Darev-eps-converted-to.pdf}
\includegraphics{FigureBFM1Dbrev-eps-converted-to.pdf}
}
\resizebox{0.6\textwidth}{!}{
\includegraphics{FigureBFM1Dcrev-eps-converted-to.pdf}
\includegraphics{FigureBFM1Ddrev-eps-converted-to.pdf}
}
\caption{Profiles of the particle density and the width in the confined
direction as a function of the $z$-coordinate for different values of the
interaction strength $a_{\mathrm{BF}}$. (a) $n_{1\mathrm{D,B}}$, (b) $n_{1%
\mathrm{D,F}}$, (c) $\protect\sigma _{\mathrm{B}}$, and (d) $\protect\sigma %
_{\mathrm{F}}$. The parameters are $N_{\mathrm{B}}=5\times 10^{4}$, $N_{%
\mathrm{F}}=2.5\times 10^{3}$, $a_{\mathrm{B/F}}=5$ nm, $\protect\omega _{z,%
\mathrm{B/F}}=30$ Hz and $\protect\omega _{t,\mathrm{B/F}}=1000$ Hz. The
inset in panel (a) shows the difference between the VA and full 3D
simulations, by means of $\Delta n_{\mathrm{1D,B}}=\bar{n}_{\mathrm{1D,B}%
}-n_{\mathrm{1D,B}}$. This figure is taken from Ref. \protect\cite{Diaz15}.}
\label{FigBFM1D}
\end{figure}
The effect of the magnitude and sign of the interaction parameter on the
spatial profile of both species, and the accuracy of the VA compared to the
3D solution, can be analyzed by varying the scattering length, $a_{\mathrm{BF%
}}$. In particular, we consider a mixture with more bosons than fermions,
\textit{viz}., $N_{\mathrm{B}}=5\times 10^{4}$, $N_{\mathrm{F}}=2.5\times
10^{3}$. Because of this condition, the bosonic profile is mainly determined
by its self-interaction and the external potential. Frames (a) and (b) of
Fig. \ref{FigBFM1D} show the spatial dependence of $n_{1\mathrm{D,B}}$ and $%
n_{1\mathrm{D},\mathrm{F}}$, respectively. These densities are calculated
using both the reduced equations (\ref{Eq38}) - (\ref{Eq41}) and the full
numerical simulations of Eqs. (\ref{Eq21}) and (\ref{Eq22}). In the latter
case, the densities are calculated as $\bar{n}_{1\mathrm{D},\;\mathrm{j}%
}(z)=\int \int \left\vert \Psi _{\mathrm{j}}(\mathbf{r})\right\vert ^{2}dxdy$
with $j=(F,B)$. We observe that variations of the bosonic density profile
are very small in comparison to the significant changes of the inter-species
scattering length. The situation is opposite for the fermionic species. As
the repulsive scattering length increases, the fermions tend to be pushed to
the periphery of the bosonic-gas density profile. This phenomenon is known
as \textit{demixing} \cite{Adhikari2006,Adhikari08,Salasnich07a,Adhikari2007}%
. On the other hand, for the attraction case, fermions are, naturally,
concentrated in the same region where the bosons are located. Frames (c) and
(d) of Fig. \ref{FigBFM1D} correspond to the profiles of $\sigma _{\mathrm{B}%
}$ and $\sigma _{\mathrm{F}}$. We observe that the width of the bosonic
profile slightly increases while proceeding from the inter-species
attraction to repulsion. A similar trend is observed for fermions, as shown
in panel (d). However, the effect is amplified in the spatial zone of the
interaction with the bosons, where the gas is compressed in the case of the
attraction, and expands in the case of the repulsion. Note that the
fermionic component expands in the confined direction much more than its
bosonic counterpart, and that the fermionic width markedly varies, following
changes in the density.
Further, one can see in the inset of panel (a) of Fig. \ref{FigBFM1D} that the
difference between the density calculated by means of the VA and the full 3D
simulation, $\Delta n_{\mathrm{1D,B}}=\bar{n}_{\mathrm{1D,B}}-n_{\mathrm{1D,B%
}}$, is very small. In fact, the difference between the bosonic profiles
obtained by both methods is $\sim 2\%$ of the maximum density for all cases
(the fact that the error changes very little with variations in $a_{\mathrm{%
BF}}$ is a consequence of the greater number of bosons). Frame (b) of Fig. %
\ref{FigBFM1D} shows that, for the attractive mixture, the variational
profile is very close to the 3D result, in particular for the case of $a_{%
\mathrm{BF}}=-6~$nm. For the repulsive mixture, it is observed that the
error increases, which is a consequence of the lower fermionic density at
the center of the 3D harmonic-oscillator potential, which plays the dominant
role for the bosons, hence a monotonically decreasing function in the
transverse direction, such as the Gaussian, is not a sufficiently good
approximation. We define the global error of the VA as $E_{\%,\mathrm{1D}%
}=\int_{-\infty }^{+\infty }\left\vert \bar{n}_{\mathrm{1D},\;j}-n_{\mathrm{%
1D},\;j}\right\vert dz$ (for both species). We have found that in the range
of $a_{\mathrm{BF}}\in \left( -6,6\right) $ nm the global error for the
bosonic species is around $2\%$ for all the values of $a_{\mathrm{BF}}$. For
the fermionic species, it goes from $0.5\%$ to $5\%$ depending on $a_{%
\mathrm{BF}}$, such that for positive values of $a_{\mathrm{BF}}$ the error
is higher than for negative ones, and the minimum error is attained at $a_{%
\mathrm{BF}}\approx -4$nm. This is a consequence of the fact that, for this
value of $a_{\mathrm{BF}}$, the interspecies interaction practically
compensates the Pauli repulsion, making the dynamics of the fully polarized
Fermi gas close to that governed by the linear Schr\"{o}dinger equation
(recall that the Gaussian is the solution for the ground state). When the
mixture becomes more attractive, the fermionic dynamics is dominated by the
bosons, producing a similar error for both species, while for the repulsive
mixture the Gaussian approximation is not appropriate. For the
non-interacting mixture, the error for the fermions is smaller than for the
bosons, because the fermionic density is very low, making the
self-interaction terms weak in comparison to the external potential,
therefore it is appropriate to use the Gaussian ansatz to approximate the 1D
dynamics. Finally, note that the error is lower in the 2D case in comparison
with 1D, because the reduction to 2D case is closer to the full 3D model.
\begin{figure}[tbp]
\centering
\resizebox{1.\textwidth}{!}{
\includegraphics{FigureBFM1D2arev-eps-converted-to.pdf}
\includegraphics{FigureBFM1D2crev-eps-converted-to.pdf}
\includegraphics{FigureBFM1D2erev-eps-converted-to.pdf}
}
\resizebox{1.\textwidth}{!}{
\includegraphics{FigureBFM1D2brev-eps-converted-to.pdf}
\includegraphics{FigureBFM1D2drev-eps-converted-to.pdf}
\includegraphics{FigureBFM1D2frev-eps-converted-to.pdf}
}
\caption{(Color online) Space-time diagrams of the densities of bosons (top)
and fermions (bottom), for three different values of the interspecies
scattering parameter: (a,b) $a_{\mathrm{BF}}=-18$ nm, (c,d) $a_{\mathrm{BF}%
}=-26$ nm, and (e,f) $a_{\mathrm{BF}}=-34$ nm. The initial conditions are the
same in all the cases, see the text. The other parameters are the same as in
Fig.~\protect\ref{FigBFM1D}.}
\label{FigBFM1D2}
\end{figure}
Next, we address the BFM dynamics, considering a mixture with arbitrary
initial conditions for the 1D fields. To create the initial state, we start
with the GS found in the absence of the inter-species interaction ($a_{%
\mathrm{BF}}=0$). Then, at $t=0$, we switch the interaction on, which may
imply the application of a magnetic field, which gives rise to $a_{\mathrm{%
BF}}\neq 0$ via the Feshbach resonance (FR). Figure~\ref{FigBFM1D2} shows three cases of the
temporal evolution with these initial conditions for $a_{\mathrm{BF}}=-18~%
\mathrm{nm}$, $a_{\mathrm{BF}}=-26~\mathrm{nm}$, and $a_{\mathrm{BF}}=-34~%
\mathrm{nm}$. In the first case (panels (a) and (b) of Fig. \ref{FigBFM1D2}%
), it is observed that the densities converge towards the center of the
potential, as may be expected, creating a pattern of oscillations around the
potential minimum; in addition, the fermions are affected by bosons, as
shown by the mark left by the bosons in the fermionic density. For the
second case (panels (c) and (d) of Fig. \ref{FigBFM1D2}), it is observed
that the increase in the strength of the attractive interaction generates
dark solitons in the fermionic density, some of which show oscillatory
dynamics very close to that observed in Refs. \cite%
{Yefsah13,Scott11,Shomroni09,Cardoso13,Donadello14}. The last case (panels
(e) and (f) of Fig. \ref{FigBFM1D2}) shows that the further increase of the
strength of the interspecies interaction generates a larger number of dark
solitons. In other words, we show that the attractive interaction of
fermions with bosons in a state different from the GS eventually generates a
gas of dark solitons.
\begin{figure}[tbp]
\centering{\
\resizebox{1.\textwidth}{!}{
\includegraphics{cidi3-eps-converted-to.pdf}
\includegraphics{cidi5-eps-converted-to.pdf}
\includegraphics{cidi6-eps-converted-to.pdf}
\includegraphics{cidi7-eps-converted-to.pdf}
}
\resizebox{1.\textwidth}{!}{
\includegraphics{cidi4-eps-converted-to.pdf}
\includegraphics{cidi8-eps-converted-to.pdf}
\includegraphics{cidi9-eps-converted-to.pdf}
\includegraphics{cidi10-eps-converted-to.pdf}
}}
\caption{(Color online) Comparison of the dynamics, as produced by the 1D
VA, and from the 3D simulations. Spatiotemporal diagrams for bosons (a) and
fermions (b) are obtained from the 3D simulations. The other panels show
spatial profiles for: (c)$-$(d) $t=0~$ms, (e)$-$(f) $t=25$ ms, and (g)$-$(h)
$t=50$ ms. Here $a_{\mathrm{BF}}=-10$ nm, the initial conditions and other
fixed parameters being the same as in Fig.~\protect\ref{FigBFM1D}. This
figure is taken from Ref. \protect\cite{Diaz15}. }
\label{FigBFM1D3}
\end{figure}
Finally, we address the accuracy of the VA for the dynamical behavior near
the GS. Figure \ref{FigBFM1D3} displays the spatiotemporal dynamics of the
1D density, as produced by the solution of 3D equations (\ref{Eq21}) and (%
\ref{Eq22}) for $a_{\mathrm{BF}}=-10$ nm. The initial conditions for the 3D
dynamics are given by the ansatz based on Eq. (\ref{Eq25}), with the
Gaussian profile along the $z$ axis. Panels (a) and (b) of Fig. \ref%
{FigBFM1D3} show the spatiotemporal diagrams of the bosonic and fermionic
densities, making the emergence of dark solitons obvious. This result
corroborates that the dark solitons also emerge in the full 3D dynamics, which is
approximated by the present 1D model. The other panels of Fig. \ref%
{FigBFM1D3} show a comparison of the 1D spatial profiles, as obtained from
the 3D simulations, and the 1D VA, for three instants of time: $t=0$ ms ((c)
and (d)), $t=25$ ms ((e) and (f)), and $t=50$ ms ((g) and (h)). The results
demonstrate that the VA profiles are very similar to their counterparts
produced by the 3D simulations, hence the present approximation provides
good accuracy and allows one to study dynamical features of the BFM in a
sufficiently simple form.
\section{Conclusion}
\label{S4}
In this brief review we have summarized results produced by the VA
(variational approximation) for reducing the 3D system to 1D and 2D forms
for the Fermi gas and BFM\ (Bose-Fermi mixture) in the framework of the
quasi-mean-field description \cite{Diaz12,Diaz15}. The method is based on
the Gaussian variational ansatz, which provides very accurate results for
the GSs (ground states) of the gases loaded in the disc- and cigar-shaped
traps. The reduced equations are useful, in particular, for modeling systems
with low atomic densities and large spatiotemporal variations of the
external potential. For the 1D case, the reduced equations provide results
by means of modest computational resources, allowing one to quickly explore
a vast volume of the parameter space. In the 2D case, the required
simulation time is still significantly lower than what is necessary for the
3D simulations. We have shown that, for the Fermi gases and BFMs alike, the
VA produces results with a very good accuracy, the error, in the comparison
to the 3D simulations, being lower than $5\%$, for both the GSs and
dynamical states.
For the Fermi gas case in the 2D approximation, we have considered the
example of the hexagonal superlattice, built as a superposition of two
triangular lattices with a relative angle between them. This possibility
may be relevant for emulating condensed-matter settings, such
as graphene-like superlattices. In addition, we have presented results for
dark solitons, obtained in the framework of the 1D approximation. We have
verified that the interaction between the dark solitons is repulsive \cite{Alphen18}
and strongly depends on the initial distance between them.
Finally, for the BFM trapped in the harmonic-oscillator potential we have
shown that a change in the interaction strength can generate a gas of dark
solitons. The solitons oscillate under the action of the external potential.
\section*{Acknowledgments}
PD acknowledges partial financial support from DIUFRO project under grant
DI18-0066 and CMCC of the Universidad de La Frontera. DL acknowledges partial
financial support from Centers of Excellence with BASAL/CONICYT financing,
Grant FB0807, CEDENNA and CONICYT-ANILLO ACT 1410. PD and DL acknowledge
financial support from FONDECYT 1180905. The work of BAM is supported, in
part, by the joint program in physics between NSF and Binational (US-Israel)
Science Foundation through project No. 2015616, and by the Israel Science
Foundation, through grant No. 1287/17. The authors appreciate a support
provided by the PAI-CONICYT program (Chile), grant No. 80160086, and
hospitality of Instituto de Alta Investigaci\'{o}n, at Universidad de Tarapac%
\'{a} (Arica, Chile).
\section*{Appendix A: Nonlinear Schr\"{o}dinger equation for the fermionic
superfluid}
\label{secA}
Kim and Zubarev in Ref. \cite{Kim2004} proposed an effective hydrodynamic
equation for a Fermi gas, in the regime of the BCS-BEC crossover. The
equation was derived from the time-dependent density-functional theory and
has the form given by:
\begin{equation}
i\hbar {\partial _{t}}\Psi \left( {\mathbf{r},t}\right) =\left[ {\ -\frac{{%
\hbar ^{2}}}{{2m_\mathrm{F}}}{\nabla ^{2}}+U\left( {\mathbf{r}}\right) +\mu
\left( n\left( {\mathbf{r},t}\right) \right) }\right] \Psi \left( {\mathbf{r}%
,t}\right) , \label{E-b0}
\end{equation}
where $\Psi $ is a complex field that represents the superfluid wave
function, $n\left( {\mathbf{r},t}\right) ={\left\vert \Psi \left( {\mathbf{r}%
,t}\right) \right\vert ^{2}}$ is the particle density, and $\mu$ is the
chemical potential. In addition, the relationship between the chemical
potential and the energy density (energy per particle), $\varepsilon \left(
n\right)$, is given by:
\begin{equation}
\mu \left( n\right) =\frac{\partial }{\partial n}\left[ {n\varepsilon \left(
n\right) }\right] \label{mu}
\end{equation}%
For the case of two spin states with balanced populations and a negative
scattering length, ${a_\mathrm{F}}<0$, the BCS limit corresponds to $k_{F}{%
\left\vert {a_\mathrm{F}}\right\vert }\ll 1$, where $k_{F}=(3\pi
^{2}n)^{1/3} $ is the Fermi wavenumber. In this limit $\varepsilon $ is
given by \cite{Huang1957}:
\begin{equation}
\varepsilon \left( n\right) =\frac{3}{5}{\varepsilon _{F}}\left[ {1+\frac{{10%
}}{{9\pi }}{k_{F}}{a_\mathrm{F}}+\frac{{4\left( {11-2\ln \left( 2\right) }%
\right) }}{{21{\pi ^{2}}}}{{\left( {{k_{F}}{a_\mathrm{F}}}\right) }^{2}}%
+\cdots }\right] , \label{E-b1}
\end{equation}%
where ${\varepsilon _{F}}={\hbar ^{2}}k_{F}^{2}/\left( {2m_\mathrm{F}}%
\right) $ is the Fermi energy. Substituting Eq. (\ref{E-b1}) into Eq. (\ref%
{mu}), the chemical potential takes the form
\begin{equation}
\mu \left( n\right) =\frac{{\hbar ^{2}}}{{2m_\mathrm{F}}}{\left( {3{\pi ^{2}}%
}\right) ^{2/3}}{n^{2/3}}+\frac{{2{\hbar ^{2}}\pi {a_\mathrm{F}}}}{m_\mathrm{%
F}}n\left[ {1+{1.893 a_\mathrm{F}}{n^{1/3}}+\cdots }\right] \label{E-b2}
\end{equation}%
where the first term corresponds to the effective Pauli repulsion, and the
following ones to interactions induced by collisions between the fermions
in different spin states. Substituting the latter expression in Eq. (\ref%
{E-b0}), and keeping only the first collisional term, we obtain the known
nonlinear Schr\"{o}dinger equation for the fermionic superfluid \cite%
{Kim2004,Adhikari2006b}
\begin{equation}
i\hbar {\partial _{t}}\Psi =\left[ {\ -\frac{{\hbar ^{2}}}{{2m_\mathrm{F}}}{%
\nabla ^{2}}+U\left( {\mathbf{r}}\right) +\frac{{\hbar ^{2}}}{{2m_\mathrm{F}}%
}{{\left( {3{\pi ^{2}}}\right) }^{2/3}}{n^{2/3}}+\frac{{2\pi {\hbar ^{2}}{a_%
\mathrm{F}}}}{m_\mathrm{F}}n}\right] \Psi , \label{E-b3}
\end{equation}%
where the last term is similar to the one in the Gross-Pitaevskii equation for
bosons, but with an extra factor of $1/2$, as the Pauli exclusion principle
allows only atoms in different spin states to interact via scattering. We
remark that Eq. (\ref{E-b3}) implies equal particle densities and phases of
the wave functions associated with both spin states.
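As a quick numerical illustration of the chemical potential in Eq. (\ref{E-b2}),
the following minimal Python sketch evaluates the truncated expansion; the
numerical values of $\hbar$, $m_\mathrm{F}$ and $a_\mathrm{F}$ are illustrative
placeholders, not taken from any specific experiment:

\begin{verbatim}
import numpy as np

# Minimal sketch: chemical potential of Eq. (E-b2), truncated after the
# first collisional correction.  hbar, mF, aF are illustrative placeholders.
def mu(n, aF, mF=1.0, hbar=1.0):
    pauli = hbar**2 / (2.0 * mF) * (3.0 * np.pi**2)**(2.0 / 3.0) * n**(2.0 / 3.0)
    coll = (2.0 * hbar**2 * np.pi * aF / mF) * n * (1.0 + 1.893 * aF * n**(1.0 / 3.0))
    return pauli + coll

print(mu(n=1.0, aF=-0.05))   # e.g., a weakly attractive (BCS-side) point
\end{verbatim}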
For a system with multiple atomic spin states, $\sigma _{j}$,
associated with the vertical projection of the spin $s_\mathrm{F}$ (with $2s_\mathrm{F}+1$
states), we treat the atoms in each state as a fully polarized Fermi gas. The
term for the interactions by collisions between atoms in different spin
states, with the same scattering length ($a_\mathrm{F}$), corresponds to the
scattering term in the Gross-Pitaevskii equation. The equation of motion for
the atoms in spin state $j$ is given by:
\begin{eqnarray}
i\hbar {\partial _{t}}{\Psi _{j}}\left( {\mathbf{r},t}\right) &=&\left[ {\ -%
\frac{{\hbar ^{2}}}{{2m_\mathrm{F}}}{\nabla ^{2}}+U\left( {\mathbf{r}}%
\right) +\frac{{\hbar ^{2}}}{{2m_\mathrm{F}}}{{\left( {6{\pi ^{2}}}\right) }%
^{2/3}}{{n_{j}\left( {\mathbf{r},t}\right) }^{2/3}}}\right] {\Psi _{j}}%
\left( {\mathbf{r},t}\right) \notag \\
&&+\frac{{4\pi {\hbar ^{2}}{a_\mathrm{F}}}}{m_\mathrm{F}}\sum\limits_{k\neq
j=-(s_\mathrm{F}+1/2)}^{s_\mathrm{F}+1/2}{{{n_{k}\left( {\mathbf{r},t}%
\right) }}{\Psi _{j}}}\left( {\mathbf{r},t}\right) , \label{E-1}
\end{eqnarray}%
where $\Psi _{j}$ is the wave function associated with spin projection $%
\sigma _{j}$, such that $n_{j}\left( {\mathbf{r},t}\right) ={\left\vert {{%
\Psi _{j}}\left( {\mathbf{r},t}\right) }\right\vert ^{2}}$ is the respective
particle density, and $U(\mathbf{r})$ is the external potential, which is
assumed to be identical for all the spin states.
In the case of fully locally balanced populations, the density of particles
is the same in each component, $n_{1}=n_{2}=...=n_{2s_\mathrm{F}+1}$, hence
the total density is $n=(2s_\mathrm{F}+1)\,n_{j}$. Assuming also equal phases
of the wave-function components, we define a single wave function, $\Psi =%
\sqrt{2s_\mathrm{F}+1}\Psi _{j}$, such that Eq. (\ref{E-1}) takes the
form
\begin{equation}
i\hbar {\partial _{t}}{\Psi }=\left[ -\frac{{{\hbar ^{2}}}}{{2{m_\mathrm{F}}}%
}{\nabla ^{2}}+{U\left( {\mathbf{r}}\right)}+\frac{{\hbar ^{2}}}{{2m_\mathrm{%
F}}}{{\left( {\frac{{6{\pi ^{2}}}}{{2s_\mathrm{F}+1}}}\right) }^{2/3}}
\left\vert {\Psi \left( {{\mathbf{r}},t}\right) }\right\vert ^{4/3}+g_%
\mathrm{F} \left\vert {\Psi \left( {{\mathbf{r}},t}\right) }\right\vert ^{2} %
\right] {\Psi }, \label{E-2}
\end{equation}%
where $g_\mathrm{F}\equiv 8s_\mathrm{F}\pi {\hbar ^{2}}{a_\mathrm{F}}/(2s_%
\mathrm{F}+1)m_\mathrm{F}$ is the scattering coefficient. This equation is
the same as Eq. (\ref{Eq3}), without the corrections from the first-principles
calculations given by $\lambda_1$, $\lambda_2$ and $\beta$ \cite%
{Manini05,Salasnich09,Ancilotto09,Ancilotto12}. In particular, the fully
polarized gas, with the interactions between identical fermions suppressed
by the Pauli principle, formally corresponds to $s_\mathrm{F}=0$, hence $g_%
\mathrm{F}=0$, and the last term of Eq. (\ref{E-2}) vanishes.
Finally, the equation (\ref{E-2}) can be derived, as the Euler-Lagrange
equation,
\begin{equation}
\frac{{\delta \mathcal{L}}}{{\delta {\Psi ^{\ast }}}}=\frac{{\partial
\mathcal{L}}}{{\partial {\Psi ^{\ast }}}}-\frac{\partial }{{\partial t}}%
\frac{{\partial \mathcal{L}}}{{\partial \left( {{\partial _{t}}{\Psi ^{\ast }%
}}\right) }}-\nabla \frac{{\partial \mathcal{L}}}{{\partial \left( {\nabla {%
\Psi ^{\ast }}}\right) }}=0, \label{E-5}
\end{equation}
from the corresponding action, $\mathcal{S}=\int {dtd{\mathbf{r}}\mathcal{L}}
$, with the Lagrangian density
\begin{eqnarray}
\mathcal{L} &=&i\frac{\hbar }{2}\left( {{\Psi ^{\ast }}\frac{{\partial \Psi }%
}{{\partial t}}-\Psi \frac{{\partial {\Psi ^{\ast }}}}{{\partial t}}}\right)
-\frac{{{\hbar ^{2}}}}{{2{m_\mathrm{F}}}}{\left\vert {\nabla {\Psi }}%
\right\vert ^{2}}-{U}(\mathbf{r}){\left\vert {\Psi \left( {{\mathbf{r}},t}%
\right) }\right\vert ^{2}}- \notag \\
&&\frac{{{\hbar ^{2}}}}{{2{m_\mathrm{F}}}}\frac{3}{5}{\left( {\frac{{6{\pi
^{2}}}}{{2{s_\mathrm{F}}+1}}}\right) ^{2/3}}\left\vert {\Psi \left( {{%
\mathbf{r}},t}\right) }\right\vert ^{10/3}-\frac{1}{2}{g_\mathrm{F}}%
\left\vert {\Psi \left( {{\mathbf{r}},t}\right) }\right\vert ^{4},
\label{E-4}
\end{eqnarray}
where the asterisk stands for the complex conjugate. Similar Lagrangian
formalisms have been used, in the context of the density-functional theory,
in diverse settings \cite{Adhikari2006b,Adhikari2007,Kim2004a}.
\section*{References}
\section{Introduction}
\label{S1}
Ultracold atomic gases have been widely explored from both experimental and
theoretical point of view due to their ability to emulate many effects from
condensed-matter physics and create novel states of quantum matter. Various
results obtained in this area have been reviewed in many publications --
see, in particular, Refs. \cite%
{Bongs04,Jaksch05,Giorgini08,Bloch08,Spielman12,Goldman14,Zhai15,Malomed18}.
Important experimental tools, the application of which opens ways to the
observation of diverse novel phenomena in the quantum gases, are, \textit{inter
alia}, optical-lattice (OL) potentials, the use of the Feshbach resonance
(FR)\ to control the strength of interactions between atoms, and the
implementation of the effective spin-orbit coupling \cite%
{Snoek2011a,Frohlich2011,BoseSOC1,BoseSOC2,FermiSOC}.
The effective spatial dimension of the setting in which quantum gases are
created strongly affects the ensuing physics. The use of confining
potentials makes it possible to reduce the dimension from 3D to 2D and 1D.
In particular, the dimensional reduction of confined Bose gases can be
approximated by means of the variational method \cite%
{Salasnich02a,Salasnich02b,Salasnich2D,Salasnich10}. Recently, similar
approaches for ultracold Fermi gases in confining potentials have been
elaborated in Refs. \cite{Adhikari2006,Adhikari2007,Malomed2009,Diaz12}.
These reductions make it possible to study the complex dynamics and pattern
formation in ultracold gases in 2D and 1D settings. In this context, the
study of dark solitons in ultracold gases was reported in Bose-Einstein
condensates (BECs) \cite{Burger99}, and further developed later \cite%
{Becker08,Weller08}. For dark solitons in Fermi gases, several works have
reported theoretical and experimental results \cite%
{Antezza07,Scott11,Liao11,Yefsah13,Ku16,Syrwid18,Alphen18}. The reduced 1D
equation for Fermi gases was used for studies of interactions between dark
solitons \cite{Diaz12}.
The earliest experimental studies of Bose-Fermi mixtures (BFMs) were
performed with lithium isotopes \cite{Partridge01,Schreck01}, as well as in $%
^{174}$Yb-$^{6}$Li \cite{Hansen11} and $^{87}$Rb-$^{40}$K \cite{Heinze11}
settings. Much interest has been also drawn to heavy-atom mixtures, such as $%
^{87}$Sr-$^{84}$Sr \cite{Tey10}. These isotopes, which are characterized by
a large nuclear spin, have been proposed for the design of prototype
quantum-information processing devices. The use of FRs in the mixtures plays
a major role, as it allows one to control nonlinear interactions between the
species. For the $^{87}$Rb-$^{40}$K mixture, the FR has been observed in
Ref. \cite{Best09,Cumby13}, and a giant FR effect was reported in the $^{85}$%
Rb-$^{6}$Li system \cite{Deh10}. Further, in the $^{6} $Li-$^{133}$Cs
mixture five FRs have been observed \cite{Tung13}, and over 30 resonances
are expected to be available in the $^{23}$Na-$^{40}$K system \cite{Park12}.
Multiple heteronuclear FRs were reported in the triply quantum-degenerate
mixture of bosonic $^{41}$K and two fermionic species, $^{40}$K and $^{6}$Li
\cite{Wu11}. In a recently elaborated framework, the BFM is transformed into
a strongly interacting isotopic mixture immersed into a Fermi sea, with the
help of a wide $s$-wave resonance for the $^{41}$K-$^{40}$K combination.
Many theoretical works have addressed the dynamics of BFMs under various
conditions \cite%
{Lelas09,Watanabe08,Kain11,Mering11,Jun11,Ludwig11,Bertaina13}. To describe
the ground state (GS) of the mixture, the quasi-mean-field theory may be a
useful approach \cite{Adhikari08,Maruyama09,Iskin09,Snoek11,Nishida06}. In
this framework, the use of FRs was studied in $^{23}$Na-$^{6}$Li, $^{87}$Rb-$%
^{40}$K, $^{87}$Rb-$^{6}$Li, $^{3}$He-$^{4}$He, $^{173}$Yb-$^{174}$Yb, and $%
^{87}$Sr-$^{84}$Sr mixtures \cite{Salasnich07a,Gautam11}. Recently,
effective 1D and 2D nonlinear Schr\"{o}dinger equations have been derived
for BFMs in cigar-shaped and disc-shaped configurations \cite{Diaz15}, using
the variational approximation (VA) along the lines previously developed in
Refs. \cite{Salasnich02a,Diaz12}. In addition, dark solitons in BFMs have
been analyzed in Ref. \cite{Tylutki16}. Here, we address, in particular,
dark solitons in the $^{7}$Li-$^{6}$Li BFM, using the effective
low-dimensional equations derived in Ref. \cite{Diaz15}.
The general aim of the present article is to present a brief review of the
spatial reduction for Fermi gases and BFMs, based on the VA. In particular,
we outline the procedure for implementing the 2D and 1D reduction, starting
from the full 3D equations of motions. To test the accuracy of the
approximations, we present a comparison of the results with full 3D
numerical simulations. Using the corresponding effective equations, we
address various dynamical settings, such as dark solitons and their
interactions. In the case of BFMs, we consider the construction of GSs,
varying the interaction strength. Finally, for the 1D situation, we address
the formation of dark solitons in the mixture, and compare the corresponding
1D solution to results of the full numerical simulations, observing good
agreement between them. The presentation is arranged as follows: the Fermi
gases and BFMs are considered, severally, in Secs. \ref{S2} and \ref{S3}, and
the paper is concluded in Sec. \ref{S4}.
\section{The Fermi Gas}
\label{S2}
We consider a dilute superfluid formed by $N$ fermionic atoms of mass $m_{%
\mathrm{F}}$ and spin $s_{\mathrm{F}}$, loaded into an optical trap at zero
temperature. We apply the local density approximation \cite{Bloch08} to the
description of this setting. The corresponding dynamical equations can be
derived from the action functional
\begin{equation}
\mathcal{S}=\int {dtd{\mathbf{r}}\mathcal{L}}, \label{Eq1}
\end{equation}%
where the Lagrangian density is
\begin{equation}
{\mathcal{L}}=\frac{{i\hbar }}{2\lambda _{1}}\left( {\Psi ^{\ast }\frac{{%
\partial {\Psi }}}{{\partial t}}-{\Psi }\frac{{\partial \Psi ^{\ast }}}{{%
\partial t}}}\right) -\frac{{\hbar ^{2}}}{{2\lambda _{2}{m_{\mathrm{F}}}}}{%
\left\vert {\nabla {\Psi }}\right\vert ^{2}}-{U(\mathbf{r})}{\left\vert {%
\Psi }\right\vert ^{2}}-\frac{3}{5}\beta \frac{{\hbar ^{2}}}{{2{m_{\mathrm{F}%
}}}}{C_{\mathrm{F}}}{\left\vert {\Psi }\right\vert ^{10/3}}-\frac{{{g_{%
\mathrm{F}}}}}{2}{\left\vert {\Psi }\right\vert ^{4}}, \label{Eq2}
\end{equation}%
$\Psi \left( \mathbf{r},t\right) $ being a complex order parameter, whose
norm is equal to the number of particles. Here ${C_{\mathrm{F}}}={\left[ {6{%
\pi ^{2}}/\left( {2{s_{\mathrm{F}}}+1}\right) }\right] ^{2/3}}$ is a
constant that depends on spin $s_{\mathrm{F}}$, ${g_{\mathrm{F}}}=4\pi {%
\hbar ^{2}}({a_{\mathrm{F}}}/{m_{\mathrm{F}}})[2s_{\mathrm{F}}/(2s_{\mathrm{F%
}}+1)]$ with scattering length $a_{\mathrm{F}}$, which determines
interactions of fermions belonging to different spin states
(the interactions which are not forbidden by the Pauli principle) \cite%
{Giorgini08}, and ${U(\mathbf{r})}$ is an external potential applied to
fermions.
\begin{table}[tbh]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Regime & $\lambda_1$ & $\lambda_2$ & $\beta$ & $s_\mathrm{F}$ \\ \hline
Polarized & 1 & 1 & 1 & 0 \\
BCS & 2 & 4 & 1 & 1/2 \\
Unitary & 2 & 4 & 0.44 & 1/2 \\ \hline
\end{tabular}%
\end{center}
\caption{$\protect\lambda _{1}$, $\protect\lambda _{2}$, $\protect\beta $,
and $s_{\mathrm{F}}$ for three different regimes in the Fermi-gas dynamics
\protect\cite{Manini05,Salasnich09,Ancilotto09,Ancilotto12}.}
\label{TT1}
\end{table}
Parameters $\lambda _{1}$, $\lambda _{2}$, $\beta $, and $s_{\mathrm{F}}$ in
Eq. (\ref{Eq2}) correspond to three different regimes addressed in this
article, which are listed in Table \ref{TT1}. It is relevant
to mention that the spin polarization may affect some parameters, such as
coefficient ${C_{\mathrm{F}}}${\ \cite{Andreev18}}.
Lagrangian density (\ref{Eq2}) gives rise to the following
Euler-Lagrange equation,
\begin{equation}
\frac{i\hbar }{\lambda _{1}}\frac{{\partial {\Psi }}}{{\partial t}}=\left[ -%
\frac{{\hbar ^{2}}}{{2{\lambda _{2}}{m_{\mathrm{F}}}}}{\nabla ^{2}}+{U}+g_{%
\mathrm{F}}{{\left\vert {\Psi }\right\vert }^{2}}\right. \left. +\frac{{%
\hbar ^{2}}}{{2{m_{\mathrm{F}}}}}\beta {C_{\mathrm{F}}}{{\left\vert {\Psi }%
\right\vert }^{4/3}}\right] {\Psi }, \label{Eq3}
\end{equation}%
which is an effective quasi-mean-field equation for the Fermi gas under
consideration; note that it may be rewritten in the form of
hydrodynamic equations \cite{Kim2004,Adhikari2006b}. More details on the
derivation of this equation are given in Appendix A. Below, we focus on the
BCS (Bardeen-Cooper-Schrieffer) setting, referring to atoms of $^{6}$Li with
mass $6$ a.u.
In numerical simulations we use the fourth-order Runge-Kutta
method in time, and the centered second-order finite-difference method for
handling the spatial discretization. In the next two subsections we reduce
the full 3D equation to the corresponding 2D and 1D effective equations,
using the VA proposed in Ref. \cite{Diaz12}.
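As a rough illustration of this numerical scheme, a minimal Python sketch for a
1D equation with the structure of Eq. (\ref{Eq3}) may look as follows; all
parameter values are illustrative placeholders, periodic boundary conditions
are assumed via \texttt{np.roll}, and the density-dependent Pauli term is
omitted for brevity:

\begin{verbatim}
import numpy as np

# Minimal sketch of the scheme: RK4 in time, centered second-order finite
# differences in space, for a 1D equation of the form
# i (hbar/l1) df/dt = [-(hbar^2/(2 l2 m)) d2/dz2 + U + g |f|^2] f.
hbar, l1, l2, m, g = 1.0, 2.0, 4.0, 1.0, 0.1
Nz, dz, dt = 512, 0.1, 1.0e-4
z = (np.arange(Nz) - Nz // 2) * dz
U = np.zeros(Nz)

def rhs(f):
    lap = (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dz**2
    Hf = -hbar**2 / (2.0 * l2 * m) * lap + (U + g * np.abs(f)**2) * f
    return -1j * l1 / hbar * Hf

def rk4_step(f):
    k1 = rhs(f)
    k2 = rhs(f + 0.5 * dt * k1)
    k3 = rhs(f + 0.5 * dt * k2)
    k4 = rhs(f + dt * k3)
    return f + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

f = np.tanh(z / 0.8).astype(complex)   # a dark-soliton-like seed
for _ in range(1000):
    f = rk4_step(f)
\end{verbatim}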
\subsection{The two-dimensional reduction}
We derive effective 2D equations, applying the VA to the Fermi gas in the
disk-shaped trap. For this purpose, we consider an external potential
composed of two terms: the parabolic (harmonic-oscillator) one accounting
for the confinement in the $z$ direction, transverse to the disk's plane,
and the in-plane potential, $U_{\mathrm{2D}}$:
\begin{equation}
U\left( {\mathbf{r}}\right) =\frac{1}{2}m_{\mathrm{F}}\omega _{z}^{2}{z^{2}+}%
U_{\mathrm{2D}}\left( {\mathbf{r}_{\bot },t}\right) . \label{Eq4}
\end{equation}%
The initial ansatz assumes, as usual, the factorization of the 3D wave
function into a product of functions of $z$ and $\mathbf{r}_{\bot }$, the
former one being the Gaussian ground state of the harmonic-oscillator
potential \cite{Salasnich02a}:
\begin{equation}
\Psi \left( {{\mathbf{r}},t}\right) =\frac{1}{{{\pi ^{1/4}}}\sqrt{{\xi (%
\mathbf{r}_{\bot },t)}}}{\exp }\left( -\frac{z^{2}}{2(\xi (\mathbf{r}_{\bot
},t))^{2}}\right) \phi \left( {\mathbf{r}_{\bot },t}\right) . \label{Eq5}
\end{equation}%
The Gaussian is subject to the unitary normalization, with transverse width $%
\xi $ considered as a variational parameter, while the 2D wave function, $%
\phi $, is normalized to the number of atoms. Therefore, the reduction from
3D to 2D implies that the system of equations should be derived for the pair
of functions $\phi \left( \mathbf{r}_{\bot },t\right) $ and $\xi \left(
\mathbf{r}_{\bot },t\right) $, using the reduced action functional, which is
obtained by integrating the 3D action over the $z$-coordinate:
\begin{equation}
\mathcal{S}{_{\mathrm{2D}}}=\int {dtdxdy{\mathcal{L}}_{\mathrm{2D}}},
\label{Eq6}
\end{equation}%
where the respective Lagrangian density is
\begin{eqnarray}
{\mathcal{L}_{{\mathrm{2D}}}} &=&i\frac{\hbar }{2\lambda _{1}}\left( {{\phi
^{\ast }}{\partial _{t}}\phi -\phi {\partial _{t}}{\phi ^{\ast }}}\right) -%
\frac{{\hbar ^{2}}}{{2\lambda _{2}m_{\mathrm{F}}}}{\left\vert {{\nabla
_{\bot }}\phi }\right\vert ^{2}}-{U_{{\mathrm{2D}}}}{\left\vert {\phi }%
\right\vert ^{2}}-\frac{{\hbar ^{2}}}{{2m_{\mathrm{F}}}}\frac{{3\beta C_{2D}}%
}{{5{\xi ^{2/3}}}}{\left\vert {\phi }\right\vert ^{10/3}}-\frac{g_{\mathrm{F}%
}}{{2{{\left( {2\pi }\right) }^{1/2}}\xi }}{\left\vert {\phi }\right\vert
^{4}} \notag \\
&&-\frac{{\hbar ^{2}}}{{4m_{\mathrm{F}}\lambda _{2}{\xi ^{2}}}}{\left\vert {%
\phi }\right\vert ^{2}}-\frac{1}{4}{{m_{\mathrm{F}}\omega _{z}^{2}{\xi ^{2}}}%
}{\left\vert {\phi }\right\vert ^{2}}, \label{Eq7}
\end{eqnarray}%
$C_{\mathrm{2D}}\equiv {(3/5)^{1/2}}{(6/(2s_{\mathrm{F}}+1))^{2/3}}\pi $,
the last two terms being produced by the reduction to 2D, the penultimate
term corresponding to the spread in the confined dimension. Hence, the
Euler-Lagrange equations, derived by varying the 2D action, which is
generated by Lagrangian (\ref{Eq7}), with respect to $\phi $ and $\xi $ take
the form of
\begin{eqnarray}
i\frac{\hbar }{\lambda _{1}}{\partial _{t}}{\phi } &=&\left[ {-\frac{\hbar
^{2}}{{2{\lambda _{2}}{m_{\mathrm{F}}}}}\nabla _{\bot }^{2}+{U_{2\mathrm{D}}}%
+\frac{g_{\mathrm{F}}}{{\sqrt{2\pi }{\xi }}}{\left\vert {\phi }\right\vert
^{2}}}\right. +\frac{{\hbar ^{2}}}{2m_{\mathrm{F}}}\frac{\beta }{{\xi ^{2/3}}%
}{C_{2\mathrm{D}}}{\left\vert {\phi }\right\vert ^{4/3}}+\frac{{{\hbar ^{2}}}%
}{{4\lambda _{2}{m_{\mathrm{F}}}\xi ^{2}}} \notag \\
&&+\left. {\frac{1}{4}}{m_{\mathrm{F}}}\omega _{z}^{2}\xi ^{2}\right] {\phi }%
, \label{Eq8}
\end{eqnarray}%
\begin{equation}
m_{\mathrm{F}}\omega _{z}^{2}\xi ^{4}-\frac{{2{\hbar ^{2}}}}{5m_{\mathrm{F}}}%
\beta {C_{2\mathrm{D}}}{\left\vert {\phi }\right\vert ^{4/3}}\xi ^{4/3}-%
\frac{g_{\mathrm{F}}}{\sqrt{2\pi }}{\left\vert {\phi }\right\vert ^{2}}{\xi }%
-\frac{{{\hbar ^{2}}}}{\lambda _{2}m_{\mathrm{F}}}=0. \label{Eq9}
\end{equation}%
Algebraic equation (\ref{Eq9}) for $\xi $ cannot be solved analytically,
therefore we used Newton's method to solve it numerically. The necessity
to find $\xi $ at each step of the integration is a numerical complication
of a minimal cost compared to the 3D integration of the underlying equation (%
\ref{Eq3}). Note that a further simplification can be achieved
by assuming in Eq. (\ref{Eq5}) that the Gaussian width is a constant $\xi (%
\mathbf{r}_{\bot },t)=\xi _{0}$. In this case $\xi $, naturally, does not
depend on $\phi $. Then, the solution of Eq. (\ref{Eq9}) with the density
tending to zero can be calculated analytically and it is given by $\xi
_{0}=\lambda _{2}^{-1/4}\sqrt{\hbar /m_{\mathrm{F}}\omega _{z}}$.
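A minimal sketch of this pointwise Newton iteration for Eq. (\ref{Eq9}), with
the zero-density limit used as the seed, may look as follows (all parameter
values are illustrative placeholders):

\begin{verbatim}
import numpy as np

# Minimal sketch: Newton's method for Eq. (9), solved pointwise for the
# width xi given the local 2D density n = |phi|^2.
hbar, mF, wz = 1.0, 1.0, 1.0
l2, beta, sF, g = 4.0, 1.0, 0.5, -0.1
C2D = np.sqrt(3.0 / 5.0) * (6.0 / (2.0 * sF + 1.0))**(2.0 / 3.0) * np.pi

def F(xi, n):
    return (mF * wz**2 * xi**4
            - 2.0 * hbar**2 / (5.0 * mF) * beta * C2D * n**(2.0 / 3.0) * xi**(4.0 / 3.0)
            - g / np.sqrt(2.0 * np.pi) * n * xi - hbar**2 / (l2 * mF))

def dF(xi, n):
    return (4.0 * mF * wz**2 * xi**3
            - 8.0 * hbar**2 / (15.0 * mF) * beta * C2D * n**(2.0 / 3.0) * xi**(1.0 / 3.0)
            - g / np.sqrt(2.0 * np.pi) * n)

def solve_xi(n, tol=1e-12, itmax=50):
    xi = l2**(-0.25) * np.sqrt(hbar / (mF * wz))   # zero-density seed
    for _ in range(itmax):
        step = F(xi, n) / dF(xi, n)
        xi -= step
        if abs(step) < tol:
            break
    return xi
\end{verbatim}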
\begin{figure}[tbp]
\begin{centering}
\begin{tabular}{lll}
(a) & (b) & \\
\includegraphics[width=0.45\textwidth]{Figure1arev-eps-converted-to.pdf} &
\includegraphics[width=0.45\textwidth]{Figure1brev-eps-converted-to.pdf} &
\end{tabular}%
\caption{(a) The 2D radial density, $\protect n_{\mathrm{2D}}(r)$, as obtained from the full 3D equation, and the 2D reduction derived with the help of the VA. (b) The 2D radial density, $\protect n_{\mathrm{2D}}(r)$, as obtained from the full 3D equation, and the 2D reduction derived with the help of the VA, assuming that the Gaussian width is a constant: $\xi_0 = \sqrt{\hbar/2m_{\mathrm{F}}\omega _{z} }$. Different curves correspond to the indicated values of $a_{s}=(0,-50,-100)\mathrm{nm}$. The other parameters are $N=1000$, $\omega_{x}=\omega_{y}=1050\mathrm{Hz}$, $\omega_{z}=21\mathrm{kHz}$ and $A=0$. The panel (a) is taken from Ref. \cite{Diaz12}.}
\label{Fig1}
\end{centering}
\end{figure}
We consider a 2D potential consisting of the axisymmetric parabolic
potential and the superposition of two triangular OLs:
\begin{equation}
{U_{\mathrm{2D}}}=A\sum\limits_{b=1}^{2}{\sum\limits_{a=1}^{3}{{{\sin }^{2}}%
\left( \frac{2\pi }{\lambda }\;({{\mathbf{k}_{a,b}}\cdot {\mathbf{r}_{\bot }}%
})\right) }}+\frac{1}{2}\omega _{x}^{2}{x^{2}}+\frac{1}{2}\omega _{y}^{2}{%
y^{2}}, \label{Eq10}
\end{equation}%
where $\{\mathbf{k}_{a,1}\}$ and $\{\mathbf{k}_{a,2}\}$ are triplets of
unit vectors of both triangular lattices, which are separated by a
specific angle $\theta $. Here $A$ denotes the lattice's amplitude, and $%
(\omega _{x},\omega _{y})$ are frequencies of the magnetic-optical trapping
potential. In the absence of the OLs ($A=0$), we have verified the accuracy
of the 2D reduction by comparing results generated by this approximation to
those obtained by integrating the underlying 3D equation (\ref{Eq3}). The
respective GS was found by means of the imaginary-time integration based on
the fourth-order Runge-Kutta algorithm with $\Delta t=0.5$ $\mu $s. The
spatial discretization for the simulations was performed with $\Delta x=0.25$
$\mu $m and $\Delta y=0.25$ $\mu $m. The comparison is displayed in
panel (a) of Figure \ref{Fig1}, where the radial-density
profiles are plotted. We can observe excellent agreement between the reduced
2D and full 3D descriptions. This result suggests using Eqs. (\ref%
{Eq8}) and (\ref{Eq9}) for studying 2D patterns. Panel (b)
of Figure \ref{Fig1} shows a comparison of the full 3D numerical simulations versus
the VA, assuming a constant width $\xi _{0}$. One can observe that
the latter approximation produces less accurate results, with errors at least
ten times larger than for the VA with a density-dependent width.
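To make the construction of the superlattice in Eq. (\ref{Eq10}) concrete, the
following minimal Python sketch evaluates the potential on a grid; the
orientation of the two unit-vector triplets (one aligned with the $x$ axis, the
other rotated by $\theta$) is our assumption here, and all values are
illustrative:

\begin{verbatim}
import numpy as np

# Minimal sketch of the superlattice of Eq. (10): two triangular lattices,
# each defined by three unit vectors 120 degrees apart, the second triplet
# rotated by theta.  Orientation and parameter values are illustrative.
A, lam, theta = 1.0, 10.0, np.deg2rad(10.0)
wx = wy = 0.05

x = np.linspace(-20.0, 20.0, 401)
X, Y = np.meshgrid(x, x, indexing="ij")

def triplet(rot):
    ang = rot + np.array([0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0])
    return np.stack([np.cos(ang), np.sin(ang)], axis=1)

U2D = 0.5 * wx**2 * X**2 + 0.5 * wy**2 * Y**2     # magneto-optical trap
for rot in (0.0, theta):
    for kx, ky in triplet(rot):
        U2D += A * np.sin(2.0 * np.pi / lam * (kx * X + ky * Y))**2
\end{verbatim}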
\begin{figure}[tbp]
\begin{centering}
\begin{tabular}{lll}
(a) & (b) & \\
\includegraphics[width=0.3\textwidth]{Figure2arev-eps-converted-to.pdf} & %
\includegraphics[width=0.3\textwidth]{Figure2brev-eps-converted-to.pdf} & \\
& & \\
(c) & (d) & \\
\includegraphics[width=0.3\textwidth]{Figure2crev-eps-converted-to.pdf} & %
\includegraphics[width=0.3\textwidth]{Figure2drev-eps-converted-to.pdf} & \\
& & \\
\end{tabular}%
\caption{Density $\protect n_{2D}$ as a function of coordinates $x$ and $y$ for four different angles between the triangular OLs, $\theta=(5^{\circ},10^{\circ},15^{\circ},20^{\circ})$. The fixed parameters are $N=1000$, $\omega_x=\omega_y=1050\mathrm{Hz}$, $\omega_z=52.5\mathrm{kHz}$, $A=1.74\times10^{-29}\mathrm{J}$, $\lambda=10\mu\mathrm{m}$ and $a_s=200\mathrm{nm}$.}
\label{Fig2R}
\end{centering}
\end{figure}
Figure \ref{Fig2R} shows the density as a function of coordinates $x$ and $y$
when the OLs are taken into account. We observe that this particular
combination of the OLs (a superlattice) produces a pattern in the form of
the superstructure, with the number of density peaks varying when the angle
between the unit vectors increases. Note that the multitude of different
coexisting robust multi-peak patterns suggests that this setting has
potential for use as a data-storage system.
\subsection{The one-dimensional reduction}
Next, we consider the system confined in one dimension, which implies a
cigar-shaped configuration, elongated in the $z$ direction. In this case,
the trapping potential acting in the transverse plane is taken as the harmonic
oscillator:
\begin{equation}
U\left( {{\mathbf{r}}}\right) =U_{1\mathrm{D}}\left( {z}\right) +\frac{1}{2}%
m_{\mathrm{F}}\omega _{t}^{2}r^{2}, \label{Eq11}
\end{equation}%
where $r^{2}={x^{2}}+{y^{2}}$. It is assumed that the potential in the
transverse direction is much stronger than the axial one. The simplest
option is to adopt a Gaussian shape in the transverse plane, which
represents the ground state of the 2D harmonic oscillator, similar to
what is adopted above in the case of the 2D reduction. As a result, the
variable-separation assumption can be applied, defining the 3D wave function
as \cite{Salasnich02a,Salasnich02b,Salasnich2D}
\begin{equation}
\Psi \left( {{\mathbf{r}},t}\right) =\frac{1}{{{\pi ^{1/2}}{\sigma }\left( {%
z,t}\right) }}{\exp {\ \left( -\frac{r^{2}}{{2{\sigma }{{\left( {z,t}\right)
}^{2}}}}\right) }}f\left( {z,t}\right) , \label{Eq12}
\end{equation}%
where $f$ is normalized to $N$, such that the 1D density is ${n_{1\mathrm{D}}%
}={\left\vert f\right\vert ^{2}}$. Here $\sigma $ is the Gaussian width,
which is a function of $z$ and time. After some algebra, similar to that
performed above, one derives the Euler-Lagrange equations:
\begin{equation}
i\frac{\hbar }{\lambda _{1}}{\partial _{t}}{f}=\left[ {\ -\frac{\hbar ^{2}}{{%
2{\lambda _{2}}{m_{\mathrm{F}}}}}\partial _{z}^{2}+U_{1\mathrm{D}}+\frac{g_{%
\mathrm{F}}}{{2\pi \sigma ^{2}}}{{\left\vert {f}\right\vert }^{2}}}+\frac{%
\hbar ^{2}}{{2{m_{\mathrm{F}}}}}\beta \frac{C_{1\mathrm{D}}}{{\sigma ^{4/3}}}%
{{\left\vert {f}\right\vert }^{4/3}}+\frac{\hbar ^{2}}{2{m_{\mathrm{F}}}%
\lambda _{2}\sigma ^{2}}+\frac{1}{2}{m_{\mathrm{F}}}\omega _{t}^{2}\sigma ^{2}%
\right] {f,} \label{Eq13}
\end{equation}%
\begin{equation}
m_{\mathrm{F}}\omega _{t}^{2}\sigma ^{4}-\frac{2}{5}\frac{\hbar ^{2}}{m_{%
\mathrm{F}}}\beta C_{1\mathrm{D}}|f|^{4/3}\sigma ^{2/3}-\frac{\hbar ^{2}}{%
\lambda _{2}m_{\mathrm{F}}}-\frac{g_{\mathrm{F}}}{{2\pi }}|f|^{2}=0,
\label{Eq14}
\end{equation}%
where $C_{1\mathrm{D}}=(3/5){(6\pi (2{s_{\mathrm{F}}}+1))^{2/3}}$. Similar
to the 2D case, algebraic equation (\ref{Eq14}) is solved using Newton's
method, and here too the quasi-BCS regime is addressed. We set $U_{1\mathrm{D%
}}=0$, $a_{\mathrm{F}}=-5$ nm, and $\omega _{t}=1000$ Hz, these parameters
being in the range of experimental values \cite{Yefsah13}. Since Eqs. (\ref%
{Eq13}) and (\ref{Eq14}) produce results which agree well with the full 3D
simulations \cite{Diaz12}, one can use the effective 1D equations to study
more complex dynamical behavior, such as that of dark solitons \cite%
{Yefsah13,Alphen18}.
\begin{figure}[tbp]
\begin{centering}
\begin{tabular}{lll}
(a) & (b) & \\
\includegraphics[width=0.3\textwidth]{Figure3arev-eps-converted-to.pdf} & %
\includegraphics[width=0.3\textwidth]{Figure3brev-eps-converted-to.pdf} &
\end{tabular}%
\caption{The initial 1D density for one soliton (a) and eighteen dark solitons (b).
In both cases, $\Delta_s=0.8 \mu$m and $n_b=10$ are used. The other fixed parameters
are $a_\mathrm{F}=-5$ nm and $\omega_t=1000$ Hz.}
\label{FIG3R}
\end{centering}
\end{figure}
To generate a dark soliton, it is possible to consider the initial condition
with zero imaginary part, $f_{I}(z,t=0)=0$, while the real part is given by $%
f_{R}(z,t=0)=f_{b}\tanh ((z-z_{s})/\Delta _{s})$, where $f_{b}$ and
$\Delta _{s}$ are the soliton's amplitude and width,
respectively. We have found that values of the square amplitude and width, $%
n_{b}=10$ particles/$\mu $m and $\Delta _{s}=0.8~\mu $m, respectively, can
be chosen to minimize the background noise. If we consider a set of $N_{s}$
dark solitons, the initial condition for the imaginary part is again zero, $%
f_{I}=0$, while the real part can be cast in the form of
\begin{equation}
f_{R}\left( z,t_{0}\right) =\frac{1}{2}+\frac{1}{2}\sum%
\limits_{j=1}^{N_{s}/2}{\left[ {\left( {-1}\right) }^{1+j}\left( {\tanh
\left( {\frac{{z-{z}}_{j}}{\Delta _{s}}}\right) +1}\right) +{{\left( {-1}%
\right) }^{j}}\left( {\tanh \left( {\frac{{z-{z_{-j}}}}{\Delta _{s}}}\right)
+1}\right) \right] },
\end{equation}%
where the positions of the solitons are $z_{j}$ and $z_{-j}$ on the positive
and negative $z$ half-axes, respectively. Moreover, the widths of the
solitons ($\Delta _{s}$) are considered to be the same, and the number of
initial solitons $N_{s}$ is even. This initial ansatz was normalized to
secure the correct density of the wave function, $n_{b}=|f_{b}|^{2}$.
Then, the system was simulated with the help of the standard
fourth-order Runge-Kutta method with $\Delta t=0.095$ $\mu $s. The spatial
discretization for the simulations was performed with $\Delta z=0.100$ $\mu $%
m. Figure \ref{FIG3R} shows the shape of the initial conditions for the case
of one and $N_{s}=18$ dark solitons.
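A minimal Python sketch of this initial condition, with the equidistant core
positions $z_{j}=-z_{-j}=(j-1/2)d$ assumed for illustration, and the final
rescaling enforcing the background density $n_{b}$, is:

\begin{verbatim}
import numpy as np

# Minimal sketch of the multi-soliton initial condition: f_I = 0 and f_R
# given by the tanh superposition above.  Equidistant core positions
# z_j = -z_{-j} = (j - 1/2) d are an assumption of this sketch.
Ns, d, Delta_s, n_b = 18, 4.0, 0.8, 10.0
z = np.linspace(-80.0, 80.0, 1601)

f_R = 0.5 * np.ones_like(z)
for j in range(1, Ns // 2 + 1):
    zj = (j - 0.5) * d
    f_R += 0.5 * ((-1)**(1 + j) * (np.tanh((z - zj) / Delta_s) + 1.0)
                  + (-1)**j * (np.tanh((z + zj) / Delta_s) + 1.0))

f = np.sqrt(n_b) / np.abs(f_R).max() * f_R.astype(complex)  # set |f_b|^2 = n_b
\end{verbatim}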
\begin{figure}[tbp]
\begin{centering}
\begin{tabular}{lll}
(a) & (b) & \\
\includegraphics[width=0.3\textwidth]{Figure4arev-eps-converted-to.pdf} & %
\includegraphics[width=0.3\textwidth]{Figure4brev-eps-converted-to.pdf} & \\
& & \\
\end{tabular}%
\caption{(a) The spatiotemporal diagram for the density, $n_{1D}$, when the initial
core-core separation between the two dark solitons is
$d=4$ $\mu$m. (b) The speed of the solitons at $t=90$ms as a function of $d$. The
other fixed parameters are the same as in Fig. \ref{FIG3R}.}
\label{FIG4R}
\end{centering}
\end{figure}
\begin{figure}[tbp]
\begin{centering}
\begin{tabular}{lll}
(a) & (b) & \\
\includegraphics[width=0.3\textwidth]{Figure5arev-eps-converted-to.pdf} & %
\includegraphics[width=0.3\textwidth]{Figure5brev-eps-converted-to.pdf} & \\
& & \\
(c) & (d) & \\
\includegraphics[width=0.3\textwidth]{Figure5crev-eps-converted-to.pdf} & %
\includegraphics[width=0.3\textwidth]{Figure5drev-eps-converted-to.pdf} & \\
& & \\
\end{tabular}%
\caption{The spatiotemporal diagram for the density, $n_{1D}$ for
different numbers of dark solitons: (a) $N_s=6$, (b) $N_s=10$, (c) $N_s=14$
and (d) $N_s=18$. In all the cases the initial distance between the solitons
is $d=4$ $\mu$m. The other fixed parameters are the same as in Fig. \ref{FIG3R}.}
\label{FIG5R}
\end{centering}
\end{figure}
In the case of two solitons, we have $z_{i}=z_{-i}=d/2$, where $d$ is the
initial inter-core separation. Frame (a) of Fig. \ref{FIG4R} shows the
spatiotemporal diagram for two solitons at $d=4\mu $m. One clearly observes
that both solitons separate in the course of the evolution. Frame (b) of
Fig. \ref{FIG4R} shows the speed attained after $90$ ms of the evolution as a
function of different initial inter-core separations between the dark
solitons. Naturally, smaller initial separations generate higher speeds. In
fact, at this fixed time the speed follows the law $v_{s}\sim d^{\alpha }$,
with $\alpha =-3.49$. Other features of the two-soliton interaction can be
found in Ref. \cite{Alphen18}.
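The exponent $\alpha$ quoted above can be extracted by a straight-line fit in
log-log coordinates; a minimal sketch with hypothetical measured pairs
$(d_i,v_i)$ (placeholders, not our simulation data) is:

\begin{verbatim}
import numpy as np

# Minimal sketch: fitting v_s ~ d^alpha by linear regression in log-log
# coordinates.  The (d, v) pairs below are hypothetical placeholders.
d = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
v = np.array([0.90, 0.22, 0.081, 0.038, 0.021])
alpha, log_prefactor = np.polyfit(np.log(d), np.log(v), 1)
\end{verbatim}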
Figure \ref{FIG5R} shows the spatiotemporal diagrams for the 1D density, $%
n_{1\mathrm{D}}$, for different numbers of dark solitons $N_{s}=(6,10,14,18)$%
. Similar to the case of two solitons, we observe that the solitons interact
repulsively. A measure of the strength of the interaction is provided by the
distance between the central part and the positive-side border of the
soliton gas, $\delta z_{e}=|z_{\mathrm{central}}-z_{\mathrm{bond}}|$.
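In practice, $\delta z_{e}$ can be extracted from the simulated density by
locating the soliton cores as local minima of $n_{1\mathrm{D}}(z)$; a minimal
sketch, in which the innermost core at $z>0$ is taken as a proxy for the
central part and the window \texttt{order} is an assumption, is:

\begin{verbatim}
import numpy as np
from scipy.signal import argrelmin

# Minimal sketch: soliton cores as local minima of n_1D(z); delta z_e is
# approximated by the distance between the innermost and outermost cores
# on the z > 0 side.
def delta_ze(z, n1d, order=5):
    cores = z[argrelmin(n1d, order=order)[0]]
    right = cores[cores > 0.0]
    return right.max() - right.min()
\end{verbatim}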
\begin{figure}[tbp]
\begin{centering}
\includegraphics[width=0.4\textwidth]{Figure6rev-eps-converted-to.pdf} %
\caption{The distance between the central part and the edge at $z>0$ of the
dark-soliton gas, $\delta z_e$, as a function of time for different numbers
of the dark solitons, $N_s$. The other fixed parameters are the same as in Fig. \ref{FIG3R}.}
\label{FIG6R}
\end{centering}
\end{figure}
Figure \ref{FIG6R} shows $\delta z_{e}$ as a function of time for different
values of $N_{s}$. We observe that it increases monotonously with time, and
its time derivative (the speed) changes as $N_{s}$ increases. Nevertheless, the
speed tends to a limit with the increase of the number of solitons, so that
there is no dramatic difference between $N_{s}=14$ and $N_{s}=18$. This
happens because the interaction between the solitons has an effective range,
as shown in frame (b) of Fig. \ref{FIG4R}, hence the solitons located near
the edges interact more weakly with the central ones.
\begin{figure}[tbp]
\begin{centering}
\begin{tabular}{lll}
(a) & (b) & \\
\includegraphics[width=0.3\textwidth]{Figure7arev-eps-converted-to.pdf} & %
\includegraphics[width=0.3\textwidth]{Figure7brev-eps-converted-to.pdf} & \\
\end{tabular}%
\caption{Dependence of the speed of the dark solitons for $N_s=18$.
(a) The distribution of the speed of each soliton at $t=90$ms for different initial distances $d$.
(b) The speed of the dark soliton near the edge at $t=90$ms as a function of $d$.
The other fixed parameters are the same as in Fig. \ref{FIG3R}.}
\label{FIG7R}
\end{centering}
\end{figure}
To analyze the case of a large number of solitons, it is enough to take $%
N_{s}=18$. Frame (a) of Fig. \ref{FIG7R} shows the speed at $t=90$ms of each
soliton as a function of its initial position for different initial
distances $d$. We observe that the central solitons have smaller speeds than
their counterparts placed near the edges, so that the speed is given by $%
v_{s}\simeq \tanh (\gamma _{d}z_{d})$ with $\gamma _{d}=-0.01d+0.077$ in the
range of Fig. \ref{FIG7R}. Frame (b) of Fig. \ref{FIG7R} shows the speed of
the soliton located near the positive edge at $t=90$ms. Similar to the
two-soliton case, the speed decays with the increase of the
initial distance, $v_{s}\sim d^{\alpha }$, with $\alpha =-3.385$.
\begin{figure}[tbp]
\begin{centering}
\begin{tabular}{llll}
(a) & (b) & (c) & \\
\includegraphics[width=0.28\textwidth]{Figure8arev-eps-converted-to.pdf} & %
\includegraphics[width=0.28\textwidth]{Figure8brev-eps-converted-to.pdf} & %
\includegraphics[width=0.28\textwidth]{Figure8crev-eps-converted-to.pdf} & \\
\end{tabular}%
\caption{Spatiotemporal diagrams of the density $n_{1D}$ for $N_s=18$ for
two different initial conditions in the presence of random perturbation $\epsilon$.
Panels (a) and (b) display the results for $\epsilon$ taking values in the ranges
of $[-\epsilon_{\max},\epsilon_{\max}]$, with $\epsilon_{\max}=0.4$ $\mu$m and
$0.8$ $\mu$m, respectively.
(c) $E_{\epsilon}$ normalized to $E_{0}$ as a function of $\epsilon_{\max}$,
i.e., the amplitude of the randomly varying variable.}
\label{FIG8R}
\end{centering}
\end{figure}
Finally, we consider random initial positions of the solitons, with $%
N_{s}=18 $. We define the initial positions as $z_{j,\epsilon
}=z_{j,0}+\epsilon $, where $\epsilon $ is a random fluctuation, and $%
z_{j,0} $ are the soliton positions, with the mean distance between them $%
d=4~\mu $m, like in the symmetric case. Figure \ref{FIG8R} shows the
spatiotemporal diagrams of the 1D density $n_{1\mathrm{D}}$ for two
different random realizations. In particular, we assume that $\epsilon $
takes random values in the ranges $[-\epsilon_{\max},\epsilon_{\max}]=
[-0.4,0.4]~\mu $m and $[-0.8,0.8]~\mu $m, in panels (a) and (b),
respectively. It is observed that the speed of the expansion is higher than
in the absence of the randomness, because the interaction energy generates
higher internal pressure in the gas of solitons. We analyze the influence of
the random-fluctuation magnitude, $\epsilon $, on the dynamics. In
particular, we calculate the sum of the squared velocities at the final
moment of time,
\begin{equation}
E_{\epsilon }=\sum\limits_{j=1}^{N_{s}}v_{j,\epsilon }^{2}.
\end{equation}%
Panel (c) of Fig. \ref{FIG8R} shows $E_{\epsilon }$ normalized to $E_{0}$
(the kinetic energy of the set of dark solitons with equidistant initial
positions) as a function of $\epsilon_{\max}$. We can observe that $%
E_{\epsilon}$ strongly increases with the growth of $%
\epsilon_{\max}$, which naturally means that the gas of solitons expands
faster when the fluctuations are stronger.
\section{The Bose - Fermi Mixture}
\label{S3}
In this section we consider a dilute superfluid mixture formed by $N_{%
\mathrm{B}}$ bosonic atoms of mass $m_{\mathrm{B}}$, and $N_{\mathrm{F}}$
fermionic atoms of mass $m_{\mathrm{F}}$ and spin $s_{\mathrm{F}}$. The
atoms interact through the pseudopotential, $\delta (\mathbf{r})$ \cite%
{Bloch08}. We assume that bosons form a BEC, described by the
Gross-Pitaevskii equation \cite{Bloch08}, while the local density
approximation \cite{Bloch08} applies to the description of the weakly
interacting fermionic component. Accordingly, the dynamical
equations can be derived from the functional,
\begin{equation}
\mathcal{S}=\int {dtd{\mathbf{r}}\left( \mathcal{L}_{\mathrm{B}}+\mathcal{L}%
_{\mathrm{F}}+\mathcal{L}_{\mathrm{BF}}\right) }, \label{Eq17}
\end{equation}
where $\mathcal{L}_{B}$ and $\mathcal{L}_{\mathrm{F}}$ are the Lagrangian
densities of the Bose and Fermi components, while $\mathcal{L}_{\mathrm{BF}}$
accounts for the interaction between them \cite{Diaz15}:
\begin{equation}
{\mathcal{L}_{B}}=\frac{{i\hbar }}{2}\left( {\Psi _{\mathrm{B}}^{\ast }\frac{%
{\partial {\Psi _{\mathrm{B}}}}}{{\partial t}}-{\Psi _{\mathrm{B}}}\frac{{%
\partial \Psi _{\mathrm{B}}^{\ast }}}{{\partial t}}}\right) -\frac{{{\hbar
^{2}}}}{{2{m_{\mathrm{B}}}}}{\left\vert {\nabla {\Psi _{\mathrm{B}}}}%
\right\vert ^{2}}-{U_{B}(\mathbf{r})\left\vert {{\Psi _{\mathrm{B}}}}%
\right\vert ^{2}}-\frac{1}{2}{g_{\mathrm{B}}}{\left\vert {{\Psi _{\mathrm{B}}%
}}\right\vert ^{4}},
\end{equation}%
\begin{equation}
{\mathcal{L}_{\mathrm{F}}}=\frac{{i\hbar }}{2\lambda _{1}}\left( {\Psi _{%
\mathrm{F}}^{\ast }\frac{{\partial {\Psi _{\mathrm{F}}}}}{{\partial t}}-{%
\Psi _{\mathrm{F}}}\frac{{\partial \Psi _{\mathrm{F}}^{\ast }}}{{\partial t}}%
}\right) -\frac{{{\hbar ^{2}}}}{{2\lambda _{2}{m_{\mathrm{F}}}}}{\left\vert {%
\nabla {\Psi _{\mathrm{F}}}}\right\vert ^{2}}-{U_{\mathrm{F}}(\mathbf{r})}{%
\left\vert {{\Psi _{\mathrm{F}}}}\right\vert ^{2}}-\frac{1}{2}{{g_{\mathrm{F}%
}}}{\left\vert {{\Psi _{\mathrm{F}}}}\right\vert ^{4}}-\frac{3\beta C_{%
\mathrm{F}}\hbar ^{2}}{10m_{\mathrm{F}}}{\left\vert {{\Psi _{\mathrm{F}}}}%
\right\vert ^{10/3}},
\end{equation}%
\begin{equation}
{\mathcal{L}_{B\mathrm{F}}}=-\frac{1}{2}{g_{\mathrm{BF}}}{\left\vert {{\Psi
_{B}}}\right\vert ^{2}}{\left\vert {{\Psi _{\mathrm{F}}}}\right\vert ^{2}}.
\label{Eq20}
\end{equation}%
Here ${g_{B}}\equiv 4\pi {\hbar ^{2}}{a_{B}}/${$m_{\mathrm{B}}$}, ${g_{%
\mathrm{F}}}\equiv 4\pi {\hbar ^{2}}({a_{\mathrm{F}}}/{m_{\mathrm{F}}})[2s_{%
\mathrm{F}}/(2s_{\mathrm{F}}+1)]$, and ${g_{\mathrm{BF}}}\equiv 4\pi {\hbar
^{2}}${$a_{\mathrm{BF}}$}$/${$m_{\mathrm{BF}}$} are three interaction
parameters of the mixture, with $a_{\mathrm{B}}$, $a_{\mathrm{F}}$ and $a_{%
\mathrm{BF}}$ being the respective scattering lengths; ${m_{\mathrm{BF}}}%
\equiv {m_{B}}{m_{\mathrm{F}}}/({m_{B}}+{m_{\mathrm{F}}})$ is the reduced
mass; and $U_{\mathrm{B}/\mathrm{F}}{(\mathbf{r})}$ are external potentials
acting on bosons/fermions. Complex wave functions $\Psi _{\mathrm{B}/\mathrm{%
F}}\left( \mathbf{r},t\right) $ are normalized to the respective numbers of
particles, $N_{\mathrm{B}/\mathrm{F}}$. The other parameters of the
fermionic Lagrangian density are the same as in Sec. \ref{S2}.
Varying action $\mathcal{S}$ with respect to $\Psi _{\mathrm{B}}^{\ast }$
and to $\Psi _{\mathrm{F}}^{\ast }$, we derive the following system of
nonlinear Schr\"{o}dinger equations for bosons and fermions:
\begin{equation}
i\hbar {\partial _{t}}{\Psi _{B}}=\left[ {-\frac{{{\hbar ^{2}}}}{{2{m_{B}}}}{%
\nabla ^{2}}+{g_{B}}{{\left\vert {{\Psi _{B}}}\right\vert }^{2}}+{g_{\mathrm{%
BF}}}{{\left\vert {{\Psi _{\mathrm{F}}}}\right\vert }^{2}}+{U_{B}}}\right] {%
\Psi _{B},} \label{Eq21}
\end{equation}
\begin{equation}
\frac{{i\hbar }}{{{\lambda _{1}}}}{\partial _{t}}{\Psi _{\mathrm{F}}}=\left[
-\frac{{{\hbar ^{2}}}}{{2{\lambda _{2}}{m_{\mathrm{F}}}}}{\nabla ^{2}}+{{g_{%
\mathrm{F}}}{{\left\vert {{\Psi _{\mathrm{F}}}}\right\vert }^{2}}+{g_{%
\mathrm{BF}}}{{\left\vert {{\Psi _{B}}}\right\vert }^{2}}}+{U_{\mathrm{F}}}+%
\frac{{{\beta C_{\mathrm{F}}\hbar ^{2}}}}{{2{m_{F}}}}{{\left\vert {{\Psi _{%
\mathrm{F}}}}\right\vert }^{4/3}}\right] {\Psi _{\mathrm{F}}}. \label{Eq22}
\end{equation}
We apply the formalism developed below to the ${}^{7}$Li-${}^{6}$Li mixture,
with the same scattering parameter for both species, $a_{\mathrm{B}}=a_{%
\mathrm{F}}=5$nm. The use of isotopes of the same alkali element is
suggested by the similarity of their electric polarizability, thus implying
similar external potentials induced by an optical trap. Unless specified
otherwise, in what follows below we consider configurations with fully
polarized fermions. Note that the BCS and unitarity regimes involve more
than one spin state of fermions, hence the magnetic trap will split the
respective spin energy levels. For this reason, we assume the presence of
the optical trap, which supports equal energy levels for all the spin
states, making it possible to discriminate different regimes of the
interaction in the BFM. In the BCS and unitarity regimes, we assume balanced
populations of the two spin components.
Our analysis is first presented for the GS and dynamics of perturbations
around it. In particular, for the GS we focus on determining the spatial
correlation $C_{s}$ between the spatial particle densities in both species,
defined as
\begin{equation}
{C_{s}}\left( {{\bar{n}_{B}},{\bar{n}_{\mathrm{F}}}}\right) =\frac{{%
\left\langle {{\bar{n}_{B}}{\bar{n}_{\mathrm{F}}}}\right\rangle }}{\sqrt{%
\left\langle {\bar{n}_{B}^{2}}\right\rangle \left\langle {\bar{n}_{\mathrm{F}%
}^{2}}\right\rangle }}, \label{Eq23}
\end{equation}
where ${{\bar{n}}_{\mathrm{B}/\mathrm{F}}}={n_{\mathrm{B}/\mathrm{F}}}%
-\left\langle {{n_{\mathrm{B}/\mathrm{F}}}}\right\rangle $, $\left\langle
{}\right\rangle $ standing for the spatial average. For dynamical
perturbations around the GS, a spatiotemporal correlation, which is defined
by replacing the spatial average with the spatiotemporal average, is known
as the Pearson coefficient $C_{s-t}$ \cite{Bragard04}. We remark that when $%
C_s=1$ and $C_s=-1$ the mixture is fully synchronized and anti-synchronized,
respectively; whereas the mixture is not synchronized at $C_s=0$.
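Numerically, $C_{s}$ of Eq. (\ref{Eq23}) amounts to a normalized
cross-correlation of the de-meaned density arrays; a minimal sketch is:

\begin{verbatim}
import numpy as np

# Minimal sketch of Eq. (23): de-mean both density arrays and take the
# normalized cross-correlation over the grid.  nB, nF are density arrays;
# the Pearson coefficient C_{s-t} follows by feeding spatiotemporal arrays.
def spatial_correlation(nB, nF):
    dB = nB - nB.mean()
    dF = nF - nF.mean()
    return (dB * dF).mean() / np.sqrt((dB**2).mean() * (dF**2).mean())
\end{verbatim}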
While numerical integration of this system in the 3D form is very heavy
computationally, the effective dimension may be reduced to 1D or 2D when the
system is tightly confined by a trapping potential. To this end, the VA is
employed, making use, as above, of the factorization of the 3D wave
function, which includes the Gaussian ansatz in the tightly confined
transverse directions. As mentioned above too, the factorization has been
widely used for Bose and Fermi systems separately, as shown in Refs. \cite%
{Salasnich02a} and \cite{Diaz12}, respectively. In the next two subsections
we reduce the full 3D system to the corresponding 2D and 1D effective
systems, using the VA proposed in Ref. \cite{Diaz15}.
\subsection{The two-dimensional reduction}
Similar to the case of the pure Fermi gas, we derive 2D equations for the
disc-shaped configuration. Accordingly, the structure of the confinement
potential is taken as
\begin{equation}
{U_{\mathrm{B}/\mathrm{F}}}\left( {\mathbf{r}}\right) =\frac{1}{2}{m_{%
\mathrm{B}/\mathrm{F}}}\omega _{z,\mathrm{B}/\mathrm{F}}^{2}{z^{2}}+{U_{2%
\mathrm{D},\mathrm{B}/\mathrm{F}}}\left( \mathbf{r}_{\bot }\right) ,
\label{Eq24}
\end{equation}
where the second term corresponds to the strong harmonic-oscillator trap
acting along the $z$ direction. The corresponding factorized ansatz is
adopted as
\begin{equation}
\Psi _{\mathrm{B}/\mathrm{F}}\left( {{\mathbf{r}},t}\right) =\frac{1}{{{\pi
^{1/4}}\sqrt{\xi _{\mathrm{B}/\mathrm{F}}(\mathbf{r}_{\bot },t)}}}{\exp }%
\left( -\frac{{{z^{2}}}}{{2\left( \xi _{\mathrm{B}/\mathrm{F}}(\mathbf{r}%
_{\bot },t)\right) ^{2}}}\right) \phi _{\mathrm{B}/\mathrm{F}}(\mathbf{r}%
_{\bot },t) \label{Eq25}
\end{equation}
where $\phi _{\mathrm{B}/\mathrm{F}}$ is normalized to $N_{\mathrm{B}/%
\mathrm{F}}$, and ${\xi _{\mathrm{B}/\mathrm{F}}}\left( {x,y,t}\right) $ are
widths of the gas in the confined direction. Substituting the factorized
ansatz (\ref{Eq25}) in action (\ref{Eq17}) and integrating over $z$, we
arrive at the following expression for the effective 2D action:
\begin{equation}
\mathcal{S}=\int {dtdxdy\left( \mathcal{L}_{2D\mathrm{,B}}+\mathcal{L}_{2%
\mathrm{D},\mathrm{F}}+\mathcal{L}_{2\mathrm{D,BF}}\right) }, \label{Eq26}
\end{equation}
where
\begin{equation}
\mathcal{L}_{2\mathrm{D,B}}=i\frac{\hbar }{2}\left( {\phi _{\mathrm{B}%
}^{\ast }{\partial _{t}}{\phi _{\mathrm{B}}}-{\phi _{\mathrm{B}}}{\partial
_{t}}\phi _{\mathrm{B}}^{\ast }}\right) -{U_{2D\mathrm{,B}}}{n_{2\mathrm{D,B}%
}}-{e_{2\mathrm{D,B}}},
\end{equation}
\begin{equation}
{\mathcal{L}_{2\mathrm{D},\mathrm{F}}}=i\frac{\hbar }{{2{\lambda _{1}}}}%
\left( {\phi _{\mathrm{F}}^{\ast }{\partial _{t}}{\phi _{\mathrm{F}}}-{\phi
_{\mathrm{F}}}{\partial _{t}}\phi _{\mathrm{F}}^{\ast }}\right) -{U_{2%
\mathrm{D},\mathrm{F}}}{n_{2\mathrm{D},\mathrm{F}}}-{e_{2\mathrm{D},\mathrm{F%
}}},
\end{equation}%
\begin{equation}
{\mathcal{L}_{2\mathrm{D,BF}}}=-\frac{1}{{{\pi ^{1/2}}}}\frac{{{g_{B\mathrm{F%
}}}}}{\sqrt{\xi _{\mathrm{B}}^{2}+\xi _{\mathrm{F}}^{2}}}{n_{2\mathrm{D},%
\mathrm{B}}}{n_{2\mathrm{D},\mathrm{F}}},
\end{equation}%
so that $n_{2\mathrm{D,B/F}}\equiv \left\vert {\phi _{\mathrm{B}/\mathrm{F}}}%
\left( {x,y}\right) \right\vert ^{2}$ are the 2D particle densities of the
boson and fermion species, and ${e_{2\mathrm{D,B}}}$ and ${e_{2\mathrm{D},%
\mathrm{F}}}$ are their energy densities:
\begin{equation}
{e_{2\mathrm{D,B}}}=\frac{{{\hbar ^{2}}}}{{2{m_{\mathrm{B}}}}}{\left\vert {{%
\nabla _{\bot }}{\phi _{\mathrm{B}}}}\right\vert ^{2}}+\left[ {\frac{{{g_{%
\mathrm{B}}}}}{{{\ \sqrt{8\pi }}{\xi _{\mathrm{B}}}}}{n}_{2\mathrm{D,B}}+%
\frac{{{\hbar ^{2}}}}{{4{m_{\mathrm{B}}}\xi _{\mathrm{B}}^{2}}}}\right.
+\left. {\frac{1}{4}{m_{\mathrm{B}}}\omega _{z,\mathrm{B}}^{2}\xi _{\mathrm{B%
}}^{2}}\right] {n_{2\mathrm{D},\mathrm{B}}},
\end{equation}
\begin{eqnarray}
{e_{2\mathrm{D},\mathrm{F}}} &=&\frac{{{\hbar ^{2}}}}{{2{\lambda _{2}}{m_{%
\mathrm{F}}}}}{\left\vert {{\nabla _{\bot }}{\phi _{\mathrm{F}}}}\right\vert
^{2}}+\left[ {\frac{{{g_{\mathrm{F}}}}}{{{\ \sqrt{8\pi }}{\xi _{\mathrm{F}}}}%
}{n_{2\mathrm{D},\mathrm{F}}}+\frac{{{\hbar ^{2}}}}{{4\lambda _{2}{m_{%
\mathrm{F}}}\xi _{\mathrm{F}}^{2}}}+\frac{1}{4}{m_{\mathrm{F}}}\omega _{z,%
\mathrm{F}}^{2}\xi _{\mathrm{F}}^{2}}\right. \notag \\
&&+\left. {\frac{{{\hbar ^{2}}}}{{2{m_{\mathrm{F}}}}}\beta \frac{3}{{5\xi _{%
\mathrm{F}}^{2/3}}}{C_{2\mathrm{D},\mathrm{F}}}n_{2\mathrm{D},\mathrm{F}%
}^{2/3}}\right] {n_{2\mathrm{D},\mathrm{F}}},
\end{eqnarray}
with $C_{2\mathrm{D},\mathrm{F}}\equiv {(3/5)^{1/2}}{(6/(2s_{\mathrm{F}%
}+1))^{2/3}}\pi $. The field equations for the 2D system are obtained by the
variation of the action $S$ given by Eq. (\ref{Eq26}) with respect to
variables $\phi _{\mathrm{B}}$ and $\phi _{\mathrm{F}}$:
\begin{eqnarray}
i\hbar {\partial _{t}}{\phi _{\mathrm{B}}} &=&\left[ {-\frac{{{\hbar ^{2}}}}{%
{2{m_{B}}}}\nabla _{\bot }^{2}+{U_{2\mathrm{D,B}}}+\frac{1}{{{\pi ^{1/2}}}}%
\frac{{{g_{\mathrm{BF}}}}}{\sqrt{\xi _{\mathrm{B}}^{2}+\xi _{\mathrm{F}}^{2}}%
}{n_{2\mathrm{D},\mathrm{F}}}}+\frac{{{g_{\mathrm{B}}}}}{{\sqrt{2\pi }{\xi _{%
\mathrm{B}}}}}{{\left\vert {{\phi _{\mathrm{B}}}}\right\vert }^{2}}\right.
\notag \\
&&+\left. \frac{{{\hbar ^{2}}}}{{4{m_{\mathrm{B}}}\xi _{\mathrm{B}}^{2}}}+%
\frac{1}{4}{m_{\mathrm{B}}}\omega _{z,\mathrm{B}}^{2}\xi _{\mathrm{B}}^{2}%
\right] {\phi _{\mathrm{B}},} \label{Eq32}
\end{eqnarray}%
\begin{eqnarray}
i\frac{\hbar }{{{\lambda _{1}}}}{\partial _{t}}{\phi _{\mathrm{F}}} &=&\left[
{-\frac{{{\hbar ^{2}}}}{{2{\lambda _{2}}{m_{\mathrm{F}}}}}\nabla _{\bot
}^{2}+{U_{2D,\mathrm{F}}}+\frac{1}{{{\pi ^{1/2}}}}\frac{{{g_{\mathrm{BF}}}}}{%
\sqrt{\xi _{\mathrm{B}}^{2}+\xi _{\mathrm{F}}^{2}}}{n_{2\mathrm{D,B}}}}+%
\frac{{{g_{\mathrm{F}}}}}{{\sqrt{2\pi }{\xi _{\mathrm{F}}}}}{\left\vert {{%
\phi _{\mathrm{F}}}}\right\vert ^{2}}\right. \notag \\
&&+\left. \frac{{{\hbar ^{2}}}}{{2{m_{\mathrm{F}}}}}\beta \frac{1}{{\xi _{%
\mathrm{F}}^{2/3}}}{C_{2\mathrm{D},\mathrm{F}}}{\left\vert {{\phi _{\mathrm{F%
}}}}\right\vert ^{4/3}}+{\frac{{{\hbar ^{2}}}}{{4\lambda _{2}{m_{\mathrm{F}}}%
\xi _{\mathrm{F}}^{2}}}+\frac{1}{4}{m_{\mathrm{F}}}\omega _{z,\mathrm{F}%
}^{2}\xi _{\mathrm{F}}^{2}}\right] {\phi _{\mathrm{F}}.} \label{Eq33}
\end{eqnarray}
Relations between $\xi _{\mathrm{B}/\mathrm{F}}$ and $\phi _{\mathrm{B}/%
\mathrm{F}}$ are produced by the Euler-Lagrange equations associated to $\xi
_{\mathrm{B}/\mathrm{F}}$:
\begin{equation}
{\kappa _{I,B}}\xi _{\mathrm{B}}^{4}-\frac{{{g_{B}}}}{\sqrt{2\pi }}{n_{2%
\mathrm{D,B}}}{\xi _{\mathrm{B}}}-\frac{{{\hbar ^{2}}}}{{{m_{\mathrm{B}}}}}%
=0, \label{Eq34}
\end{equation}%
\begin{equation}
{\kappa _{I,\mathrm{F}}}\xi _{\mathrm{F}}^{4}-\frac{{2{\hbar ^{2}}}}{{5{m_{%
\mathrm{F}}}}}\beta {C_{2\mathrm{D},\mathrm{F}}}n_{2\mathrm{D},\mathrm{F}%
}^{2/3}\xi _{\mathrm{F}}^{4/3}-\frac{g_{\mathrm{F}}}{\sqrt{2\pi }}{n_{2%
\mathrm{D},\mathrm{F}}}{\xi _{\mathrm{F}}}-\frac{{{\hbar ^{2}}}}{\lambda
_{2}m_{\mathrm{F}}}=0, \label{Eq35}
\end{equation}%
where ${\kappa _{I,\mathrm{F}}}\equiv {m_{\mathrm{F}}}\omega _{z,\mathrm{F}%
}^{2}+2{g_{\mathrm{BF}}}{n_{2\mathrm{D,B}}}/[{{\pi ^{1/2}}{{({\xi _{\mathrm{B%
}}^{2}+\xi _{\mathrm{F}}^{2}})}^{3/2}}}]$. Thus, Eqs. (\ref{Eq32})-(\ref%
{Eq35}) constitute a system of four 2D coupled equations produced by the
reduction of the underlying 3D system (\ref{Eq21}). Note also
that when $g_{BF}=0$, the system is decoupled and Eq. (\ref{Eq32})
corresponds to the dimensional reduction of the Gross-Pitaevskii equation.
Equations (\ref{Eq34}) and (\ref{Eq35}) for $\xi _{\mathrm{B}/\mathrm{F}}$
can be solved numerically by dint of Newton's method. The basic external
potential is taken as the harmonic-oscillator one: $U_{2\mathrm{D},\mathrm{B}%
/\mathrm{F}}=m_{\mathrm{B}/\mathrm{F}}\omega _{x,\mathrm{B}/\mathrm{F}}^{2}{%
x^{2}}/2+m_{\mathrm{B}/\mathrm{F}}\omega _{y,\mathrm{B}/\mathrm{F}}^{2}{y^{2}%
}/2$. The simulations were based on the fourth-order Runge-Kutta algorithm
with $\Delta t=4.77$ $\mu $s. The spatial discretization was performed with
$\Delta x=1$ $\mu $m, $\Delta y=1$ $\mu $m and $\Delta z=0.05$ $\mu $m. The
GS was found by means of the imaginary-time integration. We here focus on
the case when the number of bosons is much greater than the number of
fermions, \textit{viz}., $N_{\mathrm{B}}=5\times 10^{4}$ and $N_{\mathrm{F}%
}=2.5\times 10^{3}$.
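A minimal sketch of one imaginary-time step for the coupled system, assuming
user-supplied callables \texttt{H\_B} and \texttt{H\_F} that apply the
bracketed operators of Eqs. (\ref{Eq32})--(\ref{Eq33}) (with the widths
updated from Eqs. (\ref{Eq34})--(\ref{Eq35}) inside them), is:

\begin{verbatim}
import numpy as np

# Minimal sketch: explicit-Euler imaginary-time step for the coupled 2D
# system, followed by re-normalization to N_B and N_F.  H_B, H_F are
# user-supplied callables (assumptions of this sketch) that apply the
# bracketed operators of Eqs. (32)-(33) to the fields.
def imaginary_time_step(phiB, phiF, H_B, H_F, dtau, NB, NF, dx, dy):
    phiB = phiB - dtau * H_B(phiB, phiF)
    phiF = phiF - dtau * H_F(phiB, phiF)
    phiB *= np.sqrt(NB / (np.sum(np.abs(phiB)**2) * dx * dy))
    phiF *= np.sqrt(NF / (np.sum(np.abs(phiF)**2) * dx * dy))
    return phiB, phiF
\end{verbatim}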
\begin{figure}[tbp]
\centering
\resizebox{0.8\textwidth}{!}{
\includegraphics{Figure9arev-eps-converted-to.pdf}
\includegraphics{Figure9brev-eps-converted-to.pdf}
}
\resizebox{0.8\textwidth}{!}{
\includegraphics{Figure9crev-eps-converted-to.pdf}
\includegraphics{Figure9drev-eps-converted-to.pdf}
}
\caption{The radial profile of the 2D particle density, and the respective
width for different values of interaction strength $a_{\mathrm{BF}}$. (a) $%
n_{2\mathrm{D},\mathrm{B}}$, (b) $n_{2\mathrm{D,F}}$, (c) $\protect\xi _{%
\mathrm{B}}$, and (d) $\protect\xi _{\mathrm{F}}$. The parameters are $N_{%
\mathrm{B}}=5\times 10^{4}$, $N_{\mathrm{F}}=2.5\times 10^{3}$, $a_{\mathrm{%
B/F}}=5$ nm, $\protect\omega _{z,\mathsf{B/F}}=1000$ Hz, and $\protect\omega %
_{x,\mathrm{B/F}}=\protect\omega _{y,\mathrm{B/F}}=30$ Hz. The inset in
panel (a) shows the difference between the VA and full 3D simulations, by
means of $\Delta n_{\mathrm{2D}}\equiv \bar{n}_{\mathrm{2D}}-n_{\mathrm{2D}}$%
. This figure is taken from Ref. \protect\cite{Diaz15}.}
\label{FIG9R}
\end{figure}
Frames (a) and (b) of Fig. \ref{FIG9R} show the radial profiles of the 2D
bosonic and fermionic densities, $n_{2\mathrm{D},\mathrm{B}/\mathrm{F}}$,
in the left and right panels, respectively. Each density has been computed using the
VA and the full 3D system. To obtain the 2D profile from the 3D simulations,
Eqs. (\ref{Eq21}) and (\ref{Eq22}) were solved, and the 3D density was
integrated along the $z$ axis, $\bar{n}_{\mathrm{2D,B/F}}=\int_{-\infty
}^{+\infty }\left\vert \Psi _{\mathrm{B/F}}(\mathbf{r})\right\vert ^{2}dz$.
We infer that the repulsive mixture concentrates the bosons at the center,
while the attractive mixture concentrates both species at the center. Panels
(c) and (d) of Figure \ref{FIG9R} show the radial dependence of the width
for both bosonic and fermionic component, respectively. We observe that only
the width of the fermionic density profile varies significantly with the
change of the scattering length of the inter-species interaction, which is a
consequence of a greater number of bosons in comparison with fermions. It is
clearly seen that fermions are more strongly confined when the interaction is
attractive, and their spatial distribution significantly expands when the
interaction is repulsive. Similar results have been reported in Refs. \cite%
{Adhikari08,Maruyama09,Salasnich07a}.
\begin{figure}[tbp]
\centering
\resizebox{0.5\textwidth}{!}{
\includegraphics{Figure10rev-eps-converted-to.pdf}
}
\caption{The 2D overall percentage error of the VA versus the full 3D
system, as a function of $a_{\mathrm{BF}}$ for both species. Parameters are
the same as in Fig.~\protect\ref{FIG9R}. This figure is taken from Ref.
\protect\cite{Diaz15}.}
\label{FIG10R}
\end{figure}
Now, to compare the results obtained from the VA with those produced by the
3D simulations, we note that both profiles are practically identical, except
for the repulsive case in which a discrepancy is observed. The inset in
panel (a) of Fig. \ref{FIG9R} shows that the difference between the two
results is nearly three orders of magnitude smaller than the
density itself. We define the overall percentage error of the VA as $E_{\%,%
\mathrm{2D}}=\int \int \left\vert \bar{n}_{\mathrm{2D}}-n_{\mathrm{2D}%
}\right\vert dxdy$ (for both species). Figure~\ref{FIG10R} shows the error
for both species as a function of interspecies scattering parameter, $a_{%
\mathrm{BF}}$. For bosons it takes values $\sim 0.2\%$, and does not change
much, as shown in the inset to panel (a) of Fig. \ref{FIG9R}. For fermions
the error is greater than for bosons throughout the observed range, but it
is quite small for the attractive mixture. Note that the error increases for
the repulsive mixture, but remains lower than $2\%$. Thus we conclude that
the 2D approximation is very accurate.
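A direct implementation of this error measure could look as follows (our sketch; the normalization by the particle number $N$, which turns the integral into a percentage, is our assumption, since the exact normalization of Ref. \cite{Diaz15} is not restated here):
\begin{verbatim}
import numpy as np

def overall_error_2d(n_va, n_3d, dx, dy, N):
    # E_% = (100 / N) * int |n_va - n_3d| dx dy, densities on a common grid
    return 100.0/N*np.sum(np.abs(n_va - n_3d))*dx*dy
\end{verbatim}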
\begin{figure}[tbp]
\centering
\resizebox{0.5\textwidth}{!}{\includegraphics{Figure11rev-eps-converted-to.pdf}
}
\caption{(Color online) Spatial correlation $C_{s}$ of the GS of the 2D
mixture as a function of $a_{\mathrm{BF}}$, for three fermionic regimes:
polarized, BCS, and unitarity. The fixed parameters are: $N_{\mathrm{B}%
}=5\times 10^{4}$, $N_{\mathrm{F}}=2.5\times 10^{3}$, $a_{\mathrm{B/F}}=5$
nm, $\protect\omega _{x,\mathrm{B/F}}=\protect\omega _{y,\mathrm{B/F}}=30$
Hz and $\protect\omega _{z,\mathrm{B/F}}=1000$ Hz. This figure is taken from
Ref. \protect\cite{Diaz15}.}
\label{FIG11R}
\end{figure}
Finally, we measure the correlations of the BFM states. To this end, the
spatial correlation, $C_{s}$, in the GS was calculated using the definition
given in Eq. (\ref{Eq23}). Figure \ref{FIG11R} presents the analysis of the
GS synchronization of the mixture as a function of $a_{\mathrm{BF}}$, where
three possible regimes are considered for the fermions: fully polarized,
BCS, and unitarity. Parameters of the Lagrangian density for each fermionic
regime are given in Table \ref{TT1}. When the interaction is attractive,
there is not a large discrepancy between the correlation curves. In fact,
for $a_{\mathrm{BF}} \in (-25, -15)$ nm we have $C_s \gtrsim 0.9$, and
therefore the GS states are synchronized. In the unitarity regime, it is
again observed that the correlation reaches a maximum close to $1$ at $a_{%
\mathrm{BF}}\approx -10$ nm, dropping to negative values when the mixture is
strongly repulsive. Also, we observe that the three curves demonstrate
stronger demixing as $a_{\mathrm{BF}}$ changes from negative to positive
values, and for $a_{\mathrm{BF}} \gtrsim 15$ nm the value
of $C_s$ tends to zero, implying that the GS states are not synchronized.
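Since the definition (\ref{Eq23}) is not reproduced in this section, the following sketch (ours) uses a standard normalized cross-correlation of the two density fields as a stand-in; it reproduces the qualitative behavior described above ($C_s$ close to $1$ for overlapping profiles, negative values for strong demixing), while the actual $C_s$ is given by Eq. (\ref{Eq23}).
\begin{verbatim}
import numpy as np

def spatial_correlation(nb, nf):
    # Pearson-type correlation of the bosonic and fermionic densities;
    # +1: synchronized profiles, ~0: uncorrelated, <0: demixed.
    nb0 = nb - nb.mean()
    nf0 = nf - nf.mean()
    return np.sum(nb0*nf0)/np.sqrt(np.sum(nb0**2)*np.sum(nf0**2))
\end{verbatim}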
\subsection{The one-dimensional reduction}
The 1D confinement means, as above, a cigar-shaped configuration elongated
in the direction of $z$. In this case, the corresponding confining
potential is written as
\begin{equation}
U_{\mathrm{B}/\mathrm{F}}\left( {{\mathbf{r}},t}\right) =\frac{1}{2}m_{%
\mathrm{B}/\mathrm{F}}\omega _{t,{\mathrm{B}/\mathrm{F}}}^{2}r^{2}+U_{1%
\mathrm{D},{\mathrm{B}/\mathrm{F}}}\left( {z,t}\right) , \label{Eq36}
\end{equation}%
where $U_{1\mathrm{D},{\mathrm{B}/\mathrm{F}}}\left( {z,t}\right) $ are the
axial potentials. Assuming that the transverse trapping potential is strong
enough, the dimensional reduction is carried out by means of the usual
factorized ansatz for the wave functions,
\begin{equation}
\Psi _{\mathrm{B}/\mathrm{F}}\left( {{\mathbf{r}},t}\right) =\frac{1}{{{\pi
^{1/2}}{\sigma _{\mathrm{B}/\mathrm{F}}}\left( {z,t}\right) }}{\exp }\left( -%
\frac{r^{2}}{{2{\left( \sigma _{\mathrm{B}/\mathrm{F}}\left( {z,t}\right)
\right) ^{2}}}}\right) f_{\mathrm{B}/\mathrm{F}}\left( {z,t}\right) ,
\label{Eq37}
\end{equation}%
where $\sigma _{\mathrm{B}/\mathrm{F}}$ are the transverse Gaussian widths
of the GS. Here, the axial functions, $f_{\mathrm{B}/\mathrm{F}}$, are
normalized to $N_{\mathrm{B}/\mathrm{F}}$. For both species, we define the
axial density as ${n_{1D,{\mathrm{B}/\mathrm{F}}}}\equiv {\left\vert f_{%
\mathrm{B}/\mathrm{F}}\right\vert ^{2}}$. By means of a procedure similar to
the one outlined above for the 2D reduction, we derive the Euler-Lagrange
equations for the BFM in the 1D approximation:
\begin{eqnarray}
i\hbar {\partial _{t}}{f_{\mathrm{B}}} &=&\left[ {-\frac{{{\hbar ^{2}}}}{{2{%
m_{\mathrm{B}}}}}\partial _{z}^{2}+{U{_{1\mathrm{D,B}}}}+\frac{1}{\pi }\frac{%
{{g_{\mathrm{BF}}}}}{{\sigma _{\mathrm{B}}^{2}+\sigma _{\mathrm{F}}^{2}}}{{%
\left\vert {{f_{\mathrm{F}}}}\right\vert }^{2}}}+{\frac{{{g_{\mathrm{B}}}}}{{%
2\pi \sigma _{\mathrm{B}}^{2}}}{{\left\vert {{f_{\mathrm{B}}}}\right\vert }%
^{2}}}\right. \notag \\
&&+\left. \frac{{{\hbar ^{2}}}}{{2{m_{B}}\sigma _{\mathrm{B}}^{2}}}+\frac{1}{%
2}{m_{B}}\omega _{t,\mathrm{B}}^{2}\sigma _{\mathrm{B}}^{2}\right] {f_{%
\mathrm{B}},} \label{Eq38}
\end{eqnarray}%
\begin{eqnarray}
i\frac{\hbar }{{{\lambda _{1}}}}{\partial _{t}}{f_{\mathrm{F}}} &=&\left[ {-%
\frac{{{\hbar ^{2}}}}{{2{\lambda _{2}}{m_{\mathrm{F}}}}}\partial _{z}^{2}+{%
U_{1\mathrm{D},\mathrm{F}}}+\frac{1}{\pi }\frac{{{g_{\mathrm{BF}}}}}{{\sigma _{%
\mathrm{B}}^{2}+\sigma _{\mathrm{F}}^{2}}}{{\left\vert {{f_{B}}}\right\vert }%
^{2}}}+\frac{{{g_{\mathrm{F}}}}}{{2\pi \sigma _{\mathrm{F}}^{2}}}{{%
\left\vert {{f_{\mathrm{F}}}}\right\vert }^{2}}\right. \notag \\
&&+\left. \frac{{{\hbar ^{2}}\xi }}{{2{m_{\mathrm{F}}}}}\frac{{{C_{\mathrm{F}%
,1\mathrm{D}}}}}{{\sigma _{\mathrm{F}}^{4/3}}}{{\left\vert {{f_{\mathrm{F}}}}%
\right\vert }^{4/3}}+{\frac{{{\hbar ^{2}}}}{{2{m_{\mathrm{F}}}\lambda
_{2}\sigma _{\mathrm{F}}^{2}}}+\frac{1}{2}{m_{\mathrm{F}}}\omega _{t,\mathrm{%
F}}^{2}\sigma _{\mathrm{F}}^{2}}\right] {f_{\mathrm{F}}}. \label{Eq39}
\end{eqnarray}%
In addition, the algebraic relationships between $\sigma _{\mathrm{B}/%
\mathrm{F}}$ and $f_{\mathrm{B}/\mathrm{F}}$ are:
\begin{equation}
{\chi _{I,\mathrm{B}}}\sigma _{\mathrm{B}}^{4}-\frac{{{\hbar ^{2}}}}{{{m_{%
\mathrm{B}}}}}-\frac{{{g_{B}}}}{{2\pi }}{n_{1\mathrm{D,B}}}=0, \label{Eq40}
\end{equation}%
\begin{equation}
{\chi _{I,\mathrm{F}}}\sigma _{\mathrm{F}}^{4}-\frac{2}{5}\frac{{{\hbar ^{2}}}}{{{m_{%
\mathrm{F}}}}}\xi {C_{\mathrm{F},1\mathrm{D}}}n_{1\mathrm{D},\mathrm{F}%
}^{2/3}\sigma _{\mathrm{F}}^{2/3}-\frac{\hbar ^{2}}{\lambda _{2}m_{\mathrm{F}%
}}-\frac{g_{\mathrm{F}}}{{2\pi }}{n_{1\mathrm{D},\mathrm{F}}}=0,
\label{Eq41}
\end{equation}%
where $\chi _{I,\mathrm{B}/\mathrm{F}}\equiv m_{\mathrm{B}/\mathrm{F}}\omega
_{t,\mathrm{B}/\mathrm{F}}^{2}-2g_{\mathrm{BF}}n_{1\mathrm{D},\mathrm{F}/%
\mathrm{B}}/[\pi (\sigma _{\mathrm{B}}^{2}+\sigma _{\mathrm{F}}^{2})^{2}]$.
Thus, Eqs. (\ref{Eq38})-(\ref{Eq41}) constitute a system of four 1D coupled
equations produced by the reduction of the underlying 3D system (\ref{Eq21})
- (\ref{Eq22}). Simulations of the system were performed with mesh
parameters $\Delta t=0.5$ $\mu $s and $\Delta z=0.25$ $\mu $m. The external
potential is chosen here as the harmonic-oscillator one: $U_{1\mathrm{D},%
\mathrm{B/F}}=m_{\mathrm{B}/\mathrm{F}}\omega _{z,{\mathrm{B}/\mathrm{F}}%
}^{2}z^{2}/2$.
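At each time step, the algebraic constraints (\ref{Eq40})-(\ref{Eq41}) have to be solved pointwise for the widths. A minimal Newton iteration for the bosonic constraint (\ref{Eq40}) might look as follows (our sketch; the parameter dictionary \texttt{p} and the numerical derivative are illustrative choices, not the production solver of Ref. \cite{Diaz15}):
\begin{verbatim}
import numpy as np

def width_equation(sigma_b, sigma_f, n1d_b, n1d_f, p):
    # Residual of Eq. (40) at one grid point; p holds m_b, omega_t,
    # g_b, g_bf and hbar (our naming).
    chi = (p['m_b']*p['omega_t']**2
           - 2*p['g_bf']*n1d_f/(np.pi*(sigma_b**2 + sigma_f**2)**2))
    return chi*sigma_b**4 - p['hbar']**2/p['m_b'] - p['g_b']*n1d_b/(2*np.pi)

def solve_width(sigma0, *args, tol=1e-12, h=1e-8):
    # Newton iteration with a forward-difference derivative.
    s = sigma0
    for _ in range(50):
        f = width_equation(s, *args)
        df = (width_equation(s + h, *args) - f)/h
        step = f/df
        s -= step
        if abs(step) < tol:
            break
    return s
\end{verbatim}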
\begin{figure}[tbp]
\centering
\resizebox{0.6\textwidth}{!}{
\includegraphics{FigureBFM1Darev-eps-converted-to.pdf}
\includegraphics{FigureBFM1Dbrev-eps-converted-to.pdf}
}
\resizebox{0.6\textwidth}{!}{
\includegraphics{FigureBFM1Dcrev-eps-converted-to.pdf}
\includegraphics{FigureBFM1Ddrev-eps-converted-to.pdf}
}
\caption{Profiles of the particle density and the width in the confined
direction as a function of the $z$-coordinate for different values of the
interaction strength $a_{\mathrm{BF}}$. (a) $n_{1\mathrm{D,B}}$, (b) $n_{1%
\mathrm{D,F}}$, (c) $\protect\sigma _{\mathrm{B}}$, and (d) $\protect\sigma %
_{\mathrm{F}}$. The parameters are $N_{\mathrm{B}}=5\times 10^{4}$, $N_{%
\mathrm{F}}=2.5\times 10^{3}$, $a_{\mathrm{B/F}}=5$ nm, $\protect\omega _{z,%
\mathrm{B/F}}=30$ Hz and $\protect\omega _{t,\mathrm{B/F}}=1000$ Hz. The
inset in panel (a) shows the difference between the VA and full 3D
simulations, by means of $\Delta n_{\mathrm{1D,B}}=\bar{n}_{\mathrm{1D,B}%
}-n_{\mathrm{1D,B}}$. This figure is taken from Ref. \protect\cite{Diaz15}.}
\label{FigBFM1D}
\end{figure}
The effect of the magnitude and sign of the interaction parameter on the
spatial profile of both species, and the accuracy of the VA compared to the
3D solution, can be analyzed by varying the scattering length, $a_{\mathrm{BF%
}}$. In particular, we consider a mixture with more bosons than fermions,
\textit{viz}., $N_{\mathrm{B}}=5\times 10^{4}$, $N_{\mathrm{F}}=2.5\times
10^{3}$. Because of this condition, the bosonic profile is mainly determined
by its self-interaction and the external potential. Frames (a) and (b) of
Fig. \ref{FigBFM1D} show the spatial dependence of $n_{1\mathrm{D,B}}$ and $%
n_{1\mathrm{D},\mathrm{F}}$, respectively. These densities are calculated
using both the reduced equations (\ref{Eq38}) - (\ref{Eq41}) and the full
numerical simulations of Eqs. (\ref{Eq21}) and (\ref{Eq22}). In the latter
case, the densities are calculated as $\bar{n}_{1\mathrm{D},\;\mathrm{j}%
}(z)=\int \int \left\vert \Psi _{\mathrm{j}}(\mathbf{r})\right\vert ^{2}dxdy$
with $j=\mathrm{B,F}$. We observe that the bosonic density profile varies
very little even under significant changes of the inter-species
scattering length. The situation is opposite for the fermionic species. As
the repulsive scattering length increases, the fermions tend to be pushed to
the periphery of the bosonic-gas density profile. This phenomenon is known
as \textit{demixing} \cite{Adhikari2006,Adhikari08,Salasnich07a,Adhikari2007}%
. On the other hand, for the attraction case, fermions are, naturally,
concentrated in the same region where the bosons are located. Frames (c) and
(d) of Fig. \ref{FigBFM1D} correspond to the profiles of $\sigma _{\mathrm{B}%
}$ and $\sigma _{\mathrm{F}}$. We observe that the width of the bosonic
profile slightly increases while proceeding from the inter-species
attraction to repulsion. A similar trend is observed for fermions, as shown
in panel (d). However, the effect is amplified in the spatial zone of the
interaction with the bosons, where the gas is compressed in the case of the
attraction, and expands in the case of the repulsion. Note that the
fermionic component expands in the confined direction much more than its
bosonic counterpart, and that the fermionic width markedly varies, following
changes in the density.
Further, one can see in the inset of panel (a) of Fig. \ref{FigBFM1D} that the
difference between the density calculated by means of the VA and the full 3D
simulation, $\Delta n_{\mathrm{1D,B}}=\bar{n}_{\mathrm{1D,B}}-n_{\mathrm{1D,B%
}}$, is very small. In fact, the difference between the bosonic profiles
obtained by both methods is $\sim 2\%$ of the maximum density for all cases
(the fact that the error changes very little with variations in $a_{\mathrm{%
BF}}$ is a consequence of the greater number of bosons). Frame (b) of Fig. %
\ref{FigBFM1D} shows that, for the attractive mixture, the variational
profile is very close to the 3D result, in particular for the case of $a_{%
\mathrm{BF}}=-6~$nm. For the repulsive mixture, it is observed that the
error increases, which is a consequence of the lower fermionic density at
the center of the 3D harmonic-oscillator potential, which plays the dominant
role for the bosons, hence a monotonically decreasing function in the
transverse direction, such as the Gaussian, is not a sufficiently good
approximation. We define the global error of the VA as $E_{\%,\mathrm{1D}%
}=\int_{-\infty }^{+\infty }\left\vert \bar{n}_{\mathrm{1D},\;j}-n_{\mathrm{%
1D}\;j}\right\vert dz$ (for both species). We have found that in the range
of $a_{\mathrm{BF}}\in \left( -6,6\right) $ nm the global error for the
bosonic species is around $2\%$ for all the values of $a_{\mathrm{BF}}$. For
the fermionic species, it goes from $0.5\%$ to $5\%$ depending on $a_{%
\mathrm{BF}}$, such that for positive value of $a_{\mathrm{BF}}$ the error
is higher than for negative ones, and the minimum error is attained at $a_{%
\mathrm{BF}}\approx -4$ nm. This is a consequence of the fact that, for this
value of $a_{\mathrm{BF}}$, the interspecies interaction practically
compensates the Pauli repulsion, making the dynamics of the fully polarized
Fermi gas close to that governed by the linear Sch\"{o}dinger equation
(recall that the Gaussian is the solution for the ground state). When the
mixture becomes more attractive, the fermionic dynamics is dominated by the
bosons, producing a similar error for both species, while for the repulsive
mixture the Gaussian approximation is not appropriate. For the
non-interacting mixture, the error for the fermions is smaller than for the
bosons, because the fermionic density is very low, making the
self-interaction terms weak in comparison to the external potential,
therefore it is appropriate to use the Gaussian ansatz to approximate the 1D
dynamics. Finally, note that the error is lower in the 2D case in comparison
with 1D, because the 2D reduction is closer to the full 3D model.
\begin{figure}[tbp]
\centering
\resizebox{1.\textwidth}{!}{
\includegraphics{FigureBFM1D2arev-eps-converted-to.pdf}
\includegraphics{FigureBFM1D2crev-eps-converted-to.pdf}
\includegraphics{FigureBFM1D2erev-eps-converted-to.pdf}
}
\resizebox{1.\textwidth}{!}{
\includegraphics{FigureBFM1D2brev-eps-converted-to.pdf}
\includegraphics{FigureBFM1D2drev-eps-converted-to.pdf}
\includegraphics{FigureBFM1D2frev-eps-converted-to.pdf}
}
\caption{(Color online) Space-time diagrams of the densities of bosons (top)
and fermions (bottom), for three different values of the interspecies
scattering parameter: (a,b) $a_{\mathrm{BF}}=-18$ nm, (c,d) $a_{\mathrm{BF}%
}=-26$ nm, and (e,f) $a_{\mathrm{BF}}=-34$ nm. The initial conditions are the
same in all the cases, see the text. The other parameters are the same as in
Fig.~\protect\ref{FigBFM1D}.}
\label{FigBFM1D2}
\end{figure}
Next, we address the BFM dynamics, considering a mixture with arbitrary
initial conditions for the 1D fields. To create the initial state, we start
with the GS found in the absence of the inter-species interaction ($a_{%
\mathrm{BF}}=0$). Then, at $t=0$, we switch the interaction on, which may
be implemented by applying a magnetic field that gives rise to $a_{\mathrm{%
BF}}\neq 0$ via the FR. Figure~\ref{FigBFM1D2} shows three cases of the
temporal evolution with these initial conditions for $a_{\mathrm{BF}}=-18~%
\mathrm{nm}$, $a_{\mathrm{BF}}=-26~\mathrm{nm}$, and $a_{\mathrm{BF}}=-34~%
\mathrm{nm}$. In the first case (panels (a) and (b) of Fig. \ref{FigBFM1D2}%
), it is observed that the densities converge towards the center of the
potential, as may be expected, creating a pattern of oscillations around the
potential minimum; in addition, the fermions are affected by bosons, as
shown by the imprint left by the bosons on the fermionic density. For the
second case (panels (c) and (d) of Fig. \ref{FigBFM1D2}), it is observed
that the increase in the strength of the attractive interaction generates
dark solitons in the fermionic density, some of which show oscillatory
dynamics very close to that observed in Refs. \cite%
{Yefsah13,Scott11,Shomroni09,Cardoso13,Donadello14}. The last case (panels
(e) and (f) of Fig. \ref{FigBFM1D2}) shows that the further increase of the
strength of the interspecies interaction generates a larger number of dark
solitons. In other words, we show that the attractive interaction of
fermions with bosons in a state different from the GS eventually generates a
gas of dark solitons.
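The quench protocol and the resulting emergence of soliton trains can be mimicked already in a single-field toy model. The sketch below (ours; a self-interacting 1D field in dimensionless units, with a static attractive well standing in for the coupling to the bosons switched on at $t=0$, rather than the full system (\ref{Eq38})-(\ref{Eq41})) relaxes the $t<0$ ground state in imaginary time and then propagates in real time by the split-step Fourier method:
\begin{verbatim}
import numpy as np

Nz, L = 512, 40.0
z = np.linspace(-L/2, L/2, Nz, endpoint=False)
dz = z[1] - z[0]
k = 2*np.pi*np.fft.fftfreq(Nz, d=dz)
U_trap = 0.5*(0.03*z)**2                    # weak axial trap
f = np.exp(-z**2/8).astype(complex)

def split_step(f, dt, U):
    # half kinetic step in Fourier space, full potential step, half kinetic
    f = np.fft.ifft(np.exp(-1j*dt*k**2/4)*np.fft.fft(f))
    f = f*np.exp(-1j*dt*(U + np.abs(f)**2))
    return np.fft.ifft(np.exp(-1j*dt*k**2/4)*np.fft.fft(f))

# imaginary time (dt -> -i dtau) for the t < 0 ground state
for _ in range(2000):
    f = split_step(f, -0.005j, U_trap)
    f /= np.sqrt(np.sum(np.abs(f)**2)*dz)
# quench: the attraction is switched on at t = 0
U_quench = U_trap - 5.0*np.exp(-z**2/2)
frames = []
for n in range(4000):
    f = split_step(f, 0.005, U_quench)
    if n % 100 == 0:
        frames.append(np.abs(f)**2)         # space-time diagram data
\end{verbatim}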
\begin{figure}[tbp]
\centering{\
\resizebox{1.\textwidth}{!}{
\includegraphics{cidi3-eps-converted-to.pdf}
\includegraphics{cidi5-eps-converted-to.pdf}
\includegraphics{cidi6-eps-converted-to.pdf}
\includegraphics{cidi7-eps-converted-to.pdf}
}
\resizebox{1.\textwidth}{!}{
\includegraphics{cidi4-eps-converted-to.pdf}
\includegraphics{cidi8-eps-converted-to.pdf}
\includegraphics{cidi9-eps-converted-to.pdf}
\includegraphics{cidi10-eps-converted-to.pdf}
}}
\caption{(Color online) Comparison of the dynamics, as produced by the 1D
VA and by the 3D simulations. Spatiotemporal diagrams for bosons (a) and
fermions (b) are obtained from the 3D simulations. The other panels show
spatial profiles for: (c)$-$(d) $t=0~$ms, (e)$-$(f) $t=25$ ms, and (g)$-$(h)
$t=50$ ms. Here $a_{\mathrm{BF}}=-10$ nm, the initial conditions and other
fixed parameters being the same as in Fig.~\protect\ref{FigBFM1D}. This
figure is taken from Ref. \protect\cite{Diaz15}. }
\label{FigBFM1D3}
\end{figure}
Finally, we address the accuracy of the VA for the dynamical behavior near
the GS. Figure \ref{FigBFM1D3} displays the spatiotemporal dynamics of the
1D density, as produced by the solution of 3D equations (\ref{Eq21}) and (%
\ref{Eq22}) for $a_{\mathrm{BF}}=-10$ nm. The initial conditions for the 3D
dynamics are given by the ansatz based on Eq. (\ref{Eq25}), with the
Gaussian profile along the $z$ axis. Panels (a) and (b) of Fig. \ref%
{FigBFM1D3} show the spatiotemporal diagrams of the bosonic and fermionic
densities, making the emergence of dark solitons obvious. This result
corroborates that dark solitons also emerge in the 3D dynamics, which is
approximated by the present 1D model. The other panels of Fig. \ref%
{FigBFM1D3} show a comparison of the 1D spatial profiles, as obtained from
the 3D simulations, and the 1D VA, for three instants of time: $t=0$ ms ((c)
and (d)), $t=25$ ms ((e) and (f)), and $t=50$ ms ((g) and (h)). The results
demonstrate that the VA profiles are very similar to their counterparts
produced by the 3D simulations, hence the present approximation provides
good accuracy and allows one to study dynamical features of the BFM in a
sufficiently simple form.
\section{Conclusion}
\label{S4}
In this brief review we have summarized results produced by the VA
(variational approximation) for reducing the 3D system to 1D and 2D forms
for the Fermi gas and BFM\ (Bose-Fermi mixture) in the framework of the
quasi-mean-field description \cite{Diaz12,Diaz15}. The method is based on
the Gaussian variational ansatz, which provides very accurate results for
the GSs (ground states) of the gases loaded in the disc- and cigar-shaped
traps. The reduced equations are useful, in particular, for modeling systems
with low atomic densities and large spatiotemporal variations of the
external potential. For the 1D case, the reduced equations provide results
by means of modest computational resources, allowing one to quickly explore
a vast volume of the parameter space. In the 2D case, the required
simulation time is still significantly lower than what is necessary for the
3D simulations. We have shown that, for the Fermi gases and BFMs alike, the
VA produces results with very good accuracy, the error, in comparison
to the 3D simulations, being lower than $5\%$, for both the GSs and
dynamical states.
For the Fermi gas case in the 2D approximation, we have considered the
example of the hexagonal superlattice, built as a superposition of two
triangular lattices with a relative angle between them. This possibility
may be relevant for emulating condensed-matter settings, such
as graphene-like superlattices. In addition, we have presented results for
dark solitons, obtained in the framework of the 1D approximation. We have
verified that the interaction between the dark solitons is repulsive \cite{Alphen18} and strongly
depends on their initial separation.
Finally, for the BFM trapped in the harmonic-oscillator potential we have
shown that a change in the interaction strength can generate a gas of dark
solitons. The solitons oscillate under the action of the external potential.
\section*{Acknowledgments}
PD acknowledges partial financial support from DIUFRO project under grant
DI18-0066 and CMCC of the Universidad de La Frontera. DL acknowledges partial
financial support from Centers of Excellence with BASAL/CONICYT financing,
Grant FB0807, CEDENNA and CONICYT-ANILLO ACT 1410. PD and DL acknowledge
financial support from FONDECYT 1180905. The work of BAM is supported, in
part, by the joint program in physics between NSF and Binational (US-Israel)
Science Foundation through project No. 2015616, and by the Israel Science
Foundation, through grant No. 1287/17. The authors appreciate a support
provided by the PAI-CONICYT program (Chile), grant No. 80160086, and
hospitality of Instituto de Alta Investigaci\'{o}n, at Universidad de Tarapac%
\'{a} (Arica, Chile).
\section*{Appendix A: Nonlinear Schr\"{o}dinger equation for the fermionic
superfluid}
\label{secA}
Kim and Zubarev in Ref. \cite{Kim2004} proposed an effective hydrodynamic
equation for a Fermi gas, in the regime of the BCS-BEC crossover. The
equation was derived from the time-dependent density-functional theory and
has the form given by:
\begin{equation}
i\hbar {\partial _{t}}\Psi \left( {\mathbf{r},t}\right) =\left[ {\ -\frac{{%
\hbar ^{2}}}{{2m_\mathrm{F}}}{\nabla ^{2}}+U\left( {\mathbf{r}}\right) +\mu
\left( n\left( {\mathbf{r},t}\right) \right) }\right] \Psi \left( {\mathbf{r}%
,t}\right) , \label{E-b0}
\end{equation}
where $\Psi $ is a complex field that represents the superfluid wave
function, $n\left( {\mathbf{r},t}\right) ={\left\vert \Psi \left( {\mathbf{r}%
,t}\right) \right\vert ^{2}}$ is the particle density, and $\mu$ is the
chemical potential. In addition, the relationship between the chemical
potential and the energy per particle, $\varepsilon \left(
n\right)$, is given by:
\begin{equation}
\mu \left( n\right) =\frac{\partial }{\partial n}\left[ {n\varepsilon \left(
n\right) }\right] . \label{mu}
\end{equation}%
For the case of two spin states with balanced populations and a negative
scattering length, ${a_\mathrm{F}}<0$, the BCS limit corresponds to $k_{F}{%
\left\vert {a_\mathrm{F}}\right\vert }\ll 1$, where $k_{F}=(3\pi
^{2}n)^{1/3} $ is the Fermi wavenumber. In this limit $\varepsilon $ is
given by \cite{Huang1957}:
\begin{equation}
\varepsilon \left( n\right) =\frac{3}{5}{\varepsilon _{F}}\left[ {1+\frac{{10%
}}{{9\pi }}{k_{F}}{a_\mathrm{F}}+\frac{{4\left( {11-2\ln \left( 2\right) }%
\right) }}{{21{\pi ^{2}}}}{{\left( {{k_{F}}{a_\mathrm{F}}}\right) }^{2}}%
+\cdots }\right] , \label{E-b1}
\end{equation}%
where ${\varepsilon _{F}}={\hbar ^{2}}k_{F}^{2}/\left( {2m_\mathrm{F}}%
\right) $ is the Fermi energy. Substituting Eq. (\ref{E-b1}) into Eq. (\ref%
{mu}), the chemical potential takes the form
\begin{equation}
\mu \left( n\right) =\frac{{\hbar ^{2}}}{{2m_\mathrm{F}}}{\left( {3{\pi ^{2}}%
}\right) ^{2/3}}{n^{2/3}}+\frac{{2{\hbar ^{2}}\pi {a_\mathrm{F}}}}{m_\mathrm{%
F}}n\left[ {1+{1.893 a_\mathrm{F}}{n^{1/3}}+\cdots }\right] , \label{E-b2}
\end{equation}%
where the first term corresponds to the effective Pauli repulsion, and the
following ones to the superfluid corrections due to collisions between fermions
in different spin states. Substituting the latter expression in Eq. (\ref%
{E-b0}), and keeping only the first collisional term, we obtain the known
nonlinear Schr\"{o}dinger equation for the fermionic superfluid \cite%
{Kim2004,Adhikari2006b}
\begin{equation}
i\hbar {\partial _{t}}\Psi =\left[ {\ -\frac{{\hbar ^{2}}}{{2m_\mathrm{F}}}{%
\nabla ^{2}}+U\left( {\mathbf{r}}\right) +\frac{{\hbar ^{2}}}{{2m_\mathrm{F}}%
}{{\left( {3{\pi ^{2}}}\right) }^{2/3}}{n^{2/3}}+\frac{{2\pi {\hbar ^{2}}{a_%
\mathrm{F}}}}{m_\mathrm{F}}n}\right] \Psi , \label{E-b3}
\end{equation}%
where the last term is similar to one in the Gross-Pitaevskii equation for
bosons, but with an extra factor of $1/2$, as the Pauli exclusion principle
allows only atoms in different spin states to interact via the scattering. We
remark that Eq. (\ref{E-b3}) implies equal particle densities and phases of
the wave functions associated with both spin states.
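The derivation of Eq. (\ref{E-b2}) from Eqs. (\ref{mu}) and (\ref{E-b1}) is straightforward to verify symbolically. The following sketch (ours) keeps only the first-order term of the expansion (\ref{E-b1}) and reproduces the two leading terms of the chemical potential:
\begin{verbatim}
import sympy as sp

hbar, m, a, n = sp.symbols('hbar m a_F n', positive=True)
kF = (3*sp.pi**2*n)**sp.Rational(1, 3)      # Fermi wavenumber
eF = hbar**2*kF**2/(2*m)                    # Fermi energy
# BCS expansion (E-b1) truncated at first order in kF*aF
eps = sp.Rational(3, 5)*eF*(1 + sp.Rational(10, 9)/sp.pi*kF*a)
mu = sp.expand(sp.diff(n*eps, n))           # mu = d/dn [n eps(n)], Eq. (mu)
print(mu)
# equivalent to hbar**2*(3*pi**2)**(2/3)*n**(2/3)/(2*m)
#             + 2*pi*hbar**2*a_F*n/m
\end{verbatim}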
When the system has multiple atomic spin states, $\sigma _{j}$,
associated with the vertical projection of the spin $s_\mathrm{F}$ (with $2s_\mathrm{F}+1$
states), we treat the atoms in each state as a fully polarized Fermi gas. The
term describing the collisional interactions between atoms in different spin
states, all with the same scattering length $a_\mathrm{F}$, corresponds to the
scattering term in the Gross-Pitaevskii equation. The equation of motion for
the atoms in spin state $j$ is given by:
\begin{eqnarray}
i\hbar {\partial _{t}}{\Psi _{j}}\left( {\mathbf{r},t}\right) &=&\left[ {\ -%
\frac{{\hbar ^{2}}}{{2m_\mathrm{F}}}{\nabla ^{2}}+U\left( {\mathbf{r}}%
\right) +\frac{{\hbar ^{2}}}{{2m_\mathrm{F}}}{{\left( {6{\pi ^{2}}}\right) }%
^{2/3}}{{n_{j}\left( {\mathbf{r},t}\right) }^{2/3}}}\right] {\Psi _{j}}%
\left( {\mathbf{r},t}\right) \notag \\
&&+\frac{{4\pi {\hbar ^{2}}{a_\mathrm{F}}}}{m_\mathrm{F}}\sum\limits_{k\neq
j=-(s_\mathrm{F}+1/2)}^{s_\mathrm{F}+1/2}{{{n_{k}\left( {\mathbf{r},t}%
\right) }}{\Psi _{j}}}\left( {\mathbf{r},t}\right) , \label{E-1}
\end{eqnarray}%
where $\Psi _{j}$ is the wave function associated with spin projection $%
\sigma _{j}$, such that $n_{j}\left( {\mathbf{r},t}\right) ={\left\vert {{%
\Psi _{j}}\left( {\mathbf{r},t}\right) }\right\vert ^{2}}$ is the respective
particle density, and $U(\mathbf{r})$ is an external potential, which is
assumed to be identical for all the spin states.
In the case of fully locally balanced populations, the density of particles
is the same in each component, $n_{1}=n_{2}=...=n_{2s_\mathrm{F}+1}$, hence
each component carries the density $n_{j}=n/(2s_\mathrm{F}+1)$, where $n$ is the total density. Assuming also equal phases
of the wave-function components, we define a single wave function, $\Psi =%
\sqrt{2s_\mathrm{F}+1}\Psi _{j}$, such that Eq. (\ref{E-1}) takes the
form
\begin{equation}
i\hbar {\partial _{t}}{\Psi }=\left[ -\frac{{{\hbar ^{2}}}}{{2{m_\mathrm{F}}}%
}{\nabla ^{2}}+{U\left( {\mathbf{r}}\right)}+\frac{{\hbar ^{2}}}{{2m_\mathrm{%
F}}}{{\left( {\frac{{6{\pi ^{2}}}}{{2s_\mathrm{F}+1}}}\right) }^{2/3}}
\left\vert {\Psi \left( {{\mathbf{r}},t}\right) }\right\vert ^{4/3}+g_%
\mathrm{F} \left\vert {\Psi \left( {{\mathbf{r}},t}\right) }\right\vert ^{2} %
\right] {\Psi }, \label{E-2}
\end{equation}%
where $g_\mathrm{F}\equiv 8s_\mathrm{F}\pi {\hbar ^{2}}{a_\mathrm{F}}/(2s_%
\mathrm{F}+1)m_\mathrm{F}$ is the scattering coefficient. This equation is
the same as Eq. (\ref{Eq3}) without the corrections of the first-principles
calculations given by $\lambda_1$, $\lambda_2$ and $\beta$ \cite%
{Manini05,Salasnich09,Ancilotto09,Ancilotto12}. In particular, the fully
polarized gas, with the interactions between identical fermions suppressed
by the Pauli principle, formally corresponds to $s_\mathrm{F}=0$, hence $g_%
\mathrm{F}=0$, and the last term of Eq. (\ref{E-2}) vanishes.
Finally, Eq. (\ref{E-2}) can be derived, as the Euler-Lagrange
equation,
\begin{equation}
\frac{{\delta \mathcal{L}}}{{\delta {\Psi ^{\ast }}}}=\frac{{\partial
\mathcal{L}}}{{\partial {\Psi ^{\ast }}}}-\frac{\partial }{{\partial t}}%
\frac{{\partial \mathcal{L}}}{{\partial \left( {{\partial _{t}}{\Psi ^{\ast }%
}}\right) }}-\nabla \frac{{\partial \mathcal{L}}}{{\partial \left( {\nabla {%
\Psi ^{\ast }}}\right) }}=0, \label{E-5}
\end{equation}
from the corresponding action, $\mathcal{S}=\int {dtd{\mathbf{r}}\mathcal{L}}
$, with the Lagrangian density
\begin{eqnarray}
\mathcal{L} &=&i\frac{\hbar }{2}\left( {{\Psi ^{\ast }}\frac{{\partial \Psi }%
}{{\partial t}}-\Psi \frac{{\partial {\Psi ^{\ast }}}}{{\partial t}}}\right)
-\frac{{{\hbar ^{2}}}}{{2{m_\mathrm{F}}}}{\left\vert {\nabla {\Psi }}%
\right\vert ^{2}}-{U}(\mathbf{r}){\left\vert {\Psi \left( {{\mathbf{r}},t}%
\right) }\right\vert ^{2}}- \notag \\
&&\frac{{{\hbar ^{2}}}}{{2{m_\mathrm{F}}}}\frac{3}{5}{\left( {\frac{{6{\pi
^{2}}}}{{2{s_\mathrm{F}}+1}}}\right) ^{2/3}}\left\vert {\Psi \left( {{%
\mathbf{r}},t}\right) }\right\vert ^{10/3}-\frac{1}{2}{g_\mathrm{F}}%
\left\vert {\Psi \left( {{\mathbf{r}},t}\right) }\right\vert ^{4},
\label{E-4}
\end{eqnarray}
where the asterisk stands for the complex conjugate. Similar Lagrangian
formalisms have been used, in the context of the density-functional theory,
in diverse settings \cite{Adhikari2006b,Adhikari2007,Kim2004a}.
\section*{References}
|
1,116,691,497,398 | arxiv | \section{Introduction}\label{S:intro-1}
The study of the separation property for Schr\"odinger operators on $\RR^n$ was initiated through
the work of Everitt and Giertz in \cite{Everitt-Giertz77}. We recall that the expression $-\Delta + V$ in $L^p(\RR^n)$
is \textit{separated} if the following property is satisfied: For all $u \in L^p(\RR^n)$ such that
$(-\Delta + V)u \in L^p(\RR^n)$, we have that $-\Delta u \in L^p(\RR^n)$ and $Vu \in L^p(\RR^n)$.
After the work of Everitt and Giertz, various authors took up the study of separation problems for
(second and higher order) differential operators; see~\cite{Boimatov88,Boimatov97,hdn-12,Okazawa-84} and references therein. The paper \cite{Milatovic06-separation} then studied the separation property of the operator
$\Delta_M + v$ on $L^2(M)$, where $M$ is a non-compact Riemannian manifold,
$\Delta_M$ is the scalar Laplacian, and $v \in C^1(M)$.
The separation problem for the differential expression $\Delta_M + v$
in $L^p(M)$ was first considered, in the bounded geometry setting,
in \cite{Milatovic13-separation}. The work in \cite{Milatovic18-separation} gives another proof of the main theorem in \cite{Milatovic13-separation}, crucially without any bounded geometry hypothesis. For a study of separation in the context of
a perturbation of the (magnetic) Bi-Laplacian on $L^2(M)$, see the papers \cite{AAR,Milatovic18-separation}. A closer look at the works mentioned in this paragraph reveals that the separation property is linked to the self-adjointness in $L^2$ (or $m$-accretivity in $L^p$) of the underlying operator. In the context of a Riemannian manifold $M$, the latter problem has been studied extensively over the past two decades. For recent references see, for instance, the papers~\cite{bs-16,GK,GP,Pran-Ser-Riz-16} and chapter XI in~\cite{Guneysu-2016}.
In this article we consider the separation problem for the differential expression
$\nabla^{\dagger}\nabla+V$, where $\nabla$ is a metric covariant derivative on a Hermitian bundle $E$ over a Riemannian
manifold $M$, $\nabla^{\dagger}$ its formal adjoint, and $V$ is a self-adjoint endomorphism of $E$. We start with the
separation problem on $L^2(E)$, obtaining a result (see Theorem \ref{T:main})
that can be seen as an extension of the work carried out in
\cite{Milatovic06-separation}. The condition~(\ref{E:assumption-A1}) on the endomorphism $V$ that guarantees the separation property is analogous to the one in the scalar case. We then move on to consider the separation problem of the above operator in
$L^p(E)$, $1<p<\infty$, obtaining a result (see Theorem \ref{T:main-2}) that generalizes the work~\cite{Milatovic13-separation}.
We do this (see Proposition~\ref{L:c-e} and Corollary~\ref{C:domination} below for precise statements) by exploiting a coercive estimate for $\Delta_M + v$ in $L^p(M)$ from~\cite{Milatovic13-separation} alongside the following property from~\cite{Guneysu-12}: if $V\geq v$, where $v\geq 0$ is a real-valued function on $M$, then the $L^p$-semigroup corresponding to $\nabla^{\dagger}\nabla+V$ is dominated by the $L^p$-semigroup corresponding to $\Delta_M+v$. In the case $p \neq 2$, we assumed, in addition to geodesic completeness of $M$, that the Ricci curvature of $M$ is bounded from below by a constant. One reason is that, as far as we know, the only available proof of the coercive estimate~(\ref{M:Del_bound}) in the case $p\neq 2$ uses the Kato inequality approach, which leads one to apply, in the
language of section XIII.5 of~\cite{Guneysu-2016}, the $L^{p}$-positivity preservation property of $M$. The latter property,
whose proof is based on the construction of a sequence of Laplacian cut-off functions (see section III.1 in~\cite{Guneysu-2016}
for details), is known to hold under the aforementioned assumption on the Ricci curvature. Actually, as
shown in~\cite{B-S-16}, this hypothesis on Ricci curvature can be further weakened to assume boundedness below
by a (possibly unbounded) non-positive function depending on the distance from a reference point.
We mention in passing that the $L^{p}$-positivity preservation property of $M$ is related to the
so-called BMS-conjecture, the details of which are explained in~\cite{bms} and~\cite{Guneysu-Ulmer}.
Another reason for the hypothesis on Ricci curvature is that this assumption is used (see section~\ref{SS:op-h} below for details) for the $m$-accretivity of $\nabla^{\dagger}\nabla+V$ in $L^p(E)$ in the case $p\geq 3$.
Lastly, we should point out that although the separation property for $\Delta_M + v$ in $L^p(M)$, with $v\geq 0$, was obtained in~\cite{Milatovic18-separation} under the geodesic completeness assumption on $M$ only, it was done so without explicitly establishing~(\ref{M:Del_bound}). Instead, assuming~(\ref{E:assumption-2}) with $v$ replaced by the Yosida approximation $v_{\varepsilon}:=v(1+\varepsilon v)^{-1}$, $\varepsilon>0$, and with a certain condition on the constant $\gamma$, the work~\cite{Milatovic18-separation} establishes an estimate involving the operator $\Delta_{M}$ and the multiplication operator by $v_{\varepsilon}$. Using the abstract framework of~\cite{Okazawa-84}, one concludes the $m$-accretivity of the (operator) sum of ``maximal" operators corresponding to $\Delta_{M}$ and $v$, which, due to the fact (see section~\ref{SS:op-h} below) that the ``maximal" operator corresponding to $\Delta_M + v$ is $m$-accretive in $L^p(M)$, $1<p<\infty$, leads to the separation property. The approach from~\cite{Milatovic18-separation} does not seem to carry over to covariant Schr\"odinger operators.
\section{Main Results}\label{S:main}
\subsection{The setting}\label{SS:setting}
Let $M$ be a smooth connected Riemannian manifold without boundary, with metric $g$, and with Riemannian volume element $d\mu$.
Let $E$ be a vector bundle over $M$ with Hermitian structure $\langle\cdot, \cdot\rangle_{x}$ and the corresponding norms $|\cdot|_{x}$ on fibers $E_{x}$. Throughout the paper, the symbols $C^{\infty}(E)$ and $C_{c}^{\infty}(E)$ denote smooth sections of $E$ and smooth compactly supported sections of $E$, respectively. The notation $L^p(E)$, $1\leq p<\infty$, indicates the space of $p$-integrable sections of $E$ with the norm
\[
\|u\|_{p}:=\left(\int_{M}|u(x)|_{x}^p\,d\mu\right)^{1/p}.
\]
In the special case $p=2$, we have a Hilbert space $L^2(E)$ and we use $(\cdot,\cdot)$ to denote the corresponding inner product. For local Sobolev spaces of sections we use the notation $W^{k,p}_{\operatorname{loc}}(E)$, with $k$ and $p$ indicating the highest order of derivatives and the corresponding $L^{p}$-space, respectively. For $k=0$ we use the simpler notation $L^{p}_{\operatorname{loc}}(E)$. In the case $E=M\times\mathbb{C}$, we denote the corresponding function spaces by $C^{\infty}(M)$, $C_{c}^{\infty}(M)$, $L^p(M)$, $W^{k,p}_{\operatorname{loc}}(M)$, and $L^{p}_{\operatorname{loc}}(M)$.
In the remainder of the paper, $\nabla\colon C^{\infty}(E)\to C^{\infty}(T^*M\otimes E)$ stands for a smooth metric covariant derivative on $E$, and $\nabla^{\dagger}\colon C^{\infty}(T^*M\otimes E)\to C^{\infty}(E)$ indicates the formal adjoint of $\nabla$ with respect to $(\cdot,\cdot)$. The covariant derivative $\nabla$ on $E$ induces the covariant derivative $\nabla^{\operatorname{End}}$ on the bundle of endomorphisms $\operatorname{End} E$, making $\nabla^{\operatorname{End}} V$ a section of the bundle $T^*M\otimes (\operatorname{End} E)$.
We study a covariant Schr\"odinger differential expression
\begin{equation}\label{E:expression-L}
L^{\nabla}_{V}:=\nabla^{\dagger}\nabla+V,
\end{equation}
where $V$ is a linear self-adjoint bundle map $V\in L_{\loc}^{\infty}(\operatorname{End} E)$.
To help us describe the separation property, we define
\begin{equation}\label{E:D-p}
\textrm{D}^{\nabla}_p:=\{u\in L^p(E)\colon L^{\nabla}_{V}u\in L^p(E)\}, \qquad 1<p<\infty,
\end{equation}
where $L^{\nabla}_{V}u$ is understood in the sense of distributions. In the case of a real valued function $v\in L_{\loc}^{\infty}(M)$, trivial bundle $E=M\times\mathbb{C}$ and $\nabla=d$, where $d$ is the standard differential, we will use the notations $L^{d}_{v}:=\Delta_{M}+v$, with $\Delta_{M}:=d^{\dagger} d$ indicating the scalar Laplacian.
In general, it is not true that for all $u\in\textrm{D}^{\nabla}_p$
we have $\nabla^{\dagger}\nabla u\in L^p(E)$ and $Vu\in L^p(E)$ separately. Using the language of Everitt and Giertz (see \cite{Everitt-Giertz77}), we say that the differential
expression $L^{\nabla}_{V}=\nabla^{\dagger}\nabla+V$ is \emph{separated} in $L^p(E)$ when
the following statement holds true: for all $u\in\textrm{D}^{\nabla}_p$ we have $Vu\in L^p(E)$.
\subsection{Statements of the Results} Our first result concerns the separation property for $L^{\nabla}_{V}$ in $L^2(E)$. Before giving its exact statement, we describe the assumptions on $V$.
\noindent\textbf{Assumption (A1).} Assume that
\begin{enumerate}
\item [(i)] $V\in C^1(\operatorname{End} E)$ and $V(x)\geq 0$, for all $x\in M$, where the inequality is
understood in the sense of linear operators $E_{x}\to E_{x}$;
\item[(ii)] $V$ satisfies the inequality
\begin{equation}\label{E:assumption-A1}
|(\nabla^{\operatorname{End}} V)(x)|\leq \beta (\underbar{V}(x))^{3/2},\qquad \textrm{for all }x\in M,
\end{equation}
where $0\leq \beta<1$ is a constant, $|\cdot|$ is the norm of a linear operator $E_{x}\to (T^*M\otimes E)_{x}$, and
$\underbar{V}\colon M\to \mathbb{R}$ is defined by
\[
\underbar{V}(x)=\min(\sigma(V(x))),
\]
where $\sigma(V(x))$ is the spectrum of the operator $V(x)\colon E_{x}\to E_{x}$.
\end{enumerate}
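As a simple illustration of assumption (A1) (our example, not taken from the cited references): let $M=\mathbb{R}^{n}$ with the Euclidean metric, $E=M\times\mathbb{C}$ with $\nabla=d$, and $V(x)=(1+|x|^{2})I$. Then $\underbar{V}(x)=1+|x|^{2}$, $|(\nabla^{\operatorname{End}} V)(x)|=2|x|$, and an elementary computation gives
\[
\sup_{x\in\mathbb{R}^{n}}\frac{2|x|}{(1+|x|^{2})^{3/2}}=\frac{4}{3\sqrt{3}}\approx 0.77,
\]
so~(\ref{E:assumption-A1}) holds with $\beta=4/(3\sqrt{3})<1$.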
We are ready to state the first result.
\begin{thm}\label{T:main} Assume that $(M,g)$ is a smooth geodesically complete
connected Riemannian manifold without boundary. Let $E$ be a Hermitian vector bundle over $M$ with a metric covariant derivative $\nabla$. Assume that $V$ satisfies the assumption (A1). Then
\begin{equation}\label{E:to-prove}
\|\nabla^{\dagger}\nabla u\|_{2}+\|Vu\|_{2}\leq C(\|L^{\nabla}_{V}u\|_{2}+\|u\|_{2}),
\end{equation}
for all $u\in \textrm{D}^{\nabla}_2$, where $C\geq 0$ is a constant (independent of $u$). In particular, $L^{\nabla}_{V}$ is separated in $L^2(E)$.
\end{thm}
The second result concerns the separation for $L^{\nabla}_{V}$ in $L^p(E)$.
\begin{thm}\label{T:main-2} Let $1<p<\infty$. Assume that $(M,g)$ is a
geodesically complete connected Riemannian manifold without boundary. In the case $p\neq 2$, assume additionally that the Ricci curvature of $M$ is bounded from below by a constant. Furthermore, assume that
there exists a function $0\leq v\in C^1(M)$ such that
\begin{equation}\label{E:assumption-1}
v(x)I\leq V(x)\leq \delta v(x)I
\end{equation}
and
\begin{equation}\label{E:assumption-2}
|dv(x)|\leq \gamma v^{3/2}(x),\qquad \textrm{for all }x\in M,
\end{equation}
where $\delta\geq 1$ and $0\leq \gamma<2$ are constants, and $I\colon E_{x}\to E_{x}$ is the identity operator.
Then, the differential
expression $L^{\nabla}_{V}$ is separated in $L^p(E)$.
\end{thm}
\begin{rem}\label{R:rem-infty} \emph{From~(\ref{E:assumption-1}) it follows that
$0\leq V\in L_{\loc}^{\infty}(\operatorname{End} E)$.}
\end{rem}
\section{Preliminaries on Operators}\label{SS:op-h} We start by briefly recalling some abstract terminology concerning $m$-accretive operators on Banach spaces.
A linear operator $T$ on a Banach space $\mathscr{B}$ is called \emph{accretive}
if
\[
\|(\xi+T)u\|_{\mathscr{B}}\geq \xi\|u\|_{\mathscr{B}},
\]
for all $\xi>0$ and all $u\in\operatorname{Dom}(T)$. By Proposition II.3.14 in~\cite{engel-nagel}, a densely defined accretive operator $T$ is closable and its closure $T^{\sim}$ is also accretive. A (densely defined) operator $T$ on $\mathscr{B}$ is called \emph{$m$-accretive} if it is accretive and $\xi+T$ is surjective for all $\xi>0$. A (densely defined) operator $T$ on $\mathscr{B}$ is called \emph{essentially $m$-accretive} if it is accretive and $T^{\sim}$ is $m$-accretive. As the proof of our first result uses the notion of self-adjointness, we recall a link between $m$-accretivity and self-adjointness of operators on Hilbert spaces: $T$ is a self-adjoint and non-negative operator if and only if $T$ is symmetric, closed, and $m$-accretive; see Problem V.3.32 in~\cite{Kato80}.
We now describe some known results on the (essential) $m$-accretivity of operators in $L^p(E)$ used in this paper. With $L^{\nabla}_{V}$ and $L^{d}_{v}$ as in section~\ref{SS:setting} and with $0\leq V\in L_{\loc}^{\infty}(\operatorname{End} E)$ and $0\leq v\in L_{\loc}^{\infty}(M)$,
we define an operator $H^{\nabla}_{p,V}$ as $H^{\nabla}_{p,V} u:=L^{\nabla}_{V}u$ with the domain $\textrm{D}^{\nabla}_{p}$ as in~(\ref{E:D-p}) and an operator
$H^{d}_{p,v}$ as $H^{d}_{p,v} u:=L^{d}_{v}u$ for all $u\in \textrm{D}^{d}_{p}$, where
\[
\textrm{D}^{d}_{p}:=\{u\in L^p(M)\colon L^{d}_{v}u \in L^p(M)\}.
\]
For a geodesically complete manifold $M$ it is known that $(L^{d}_{v}|_{C_{c}^{\infty}(M)})^{\sim}$ in $L^p(M)$, $1<p<\infty$, is $m$-accretive and it coincides with $H^{d}_{p,v}$. Moreover, under the same assumption on $M$ and for $1<p<3$, the operator $(L^{\nabla}_{V}|_{C_{c}^{\infty}(E)})^{\sim}$ in $L^p(E)$ is $m$-accretive and it coincides with $H^{\nabla}_{p,V}$. Both of these statements are proven in~\cite{Strichartz-83} for $V=0$ and $v=0$, but the arguments there work for $0\leq V\in L_{\loc}^{\infty}$ and $0\leq v\in L_{\loc}^{\infty}$ without any change, as the non-negativity assumption makes $V$ and $v$ ``disappear" from the inequalities. It turns out that the $m$-accretivity result holds for $(L^{\nabla}_{V}|_{C_{c}^{\infty}(E)})^{\sim}$ in $L^{p}(E)$ in the case $p\geq 3$ as well if, in addition to geodesic completeness, we assume that the Ricci curvature of $M$ is bounded from below by a constant. The latter statement was proven for manifolds of bounded geometry in Theorem 1.3 of~\cite{Mi-2010}, and it was observed in~\cite{GP-2017} that the statement holds if we just assume that $M$ is geodesically complete and with Ricci curvature bounded from below by a constant. For the explanation of why $(L^{\nabla}_{V}|_{C_{c}^{\infty}(E)})^{\sim}$ coincides with $H^{\nabla}_{p,V}$, we again point the reader to~\cite{GP-2017}. As indicated above, in the case $p=2$, the term ``$m$-accretivity" in the above statements has the same meaning as the term ``self-adjointness."
\section{Proof of Theorem \ref{T:main}} Working in the $L^2$-context only, we find it convenient to indicate by $\|\cdot\|$ and $(\cdot,\cdot)$ the norm and the inner product in the spaces $L^2(E)$ and $L^2(T^*M\otimes E)$.
In subsequent discussion, we adapt the approach from~\cite{Boimatov88, Everitt-Giertz77} to our setting.
\begin{lem}\label{L:lemma-1} Under the hypotheses of Theorem~\ref{T:main}, the following inequalities hold for all $u\inC_{c}^{\infty}(E)$:
\begin{equation}\label{E:inequality-1}
\|\nabla^{\dagger}\nabla u\|+\|Vu\|\leq \widetilde{C}\|L^{\nabla}_{V}u\|
\end{equation}
and
\begin{equation}\label{E:inequality-2}
\|V^{1/2}\nabla u\|\leq \widetilde{C}\|L^{\nabla}_{V}u\|
\end{equation}
where $L^{\nabla}_{V}$ is as in~(\ref{E:expression-L}), the notation $V^{1/2}$ means square root of the operator $V(x)\colon E_{x}\to E_{x}$, and $\widetilde{C}$ is a constant depending on $n=\dim M$, $m=\dim E_x$, and $\beta$.
\end{lem}
\begin{proof}
By the definition of $L^{\nabla}_{V}$, for all $\nu>0$ and all
$u\inC_{c}^{\infty}(E)$ we have
\begin{align}\label{E:equality}
&\|L^{\nabla}_{V}u\|^2 = \|Vu\|^2+ \|\nabla^{\dagger}\nabla u\|^2+2\operatorname{Re}(\nabla^{\dagger}\nabla u,Vu) \nonumber\\
&= \|Vu\|^2+
\nu\|\nabla^{\dagger}\nabla u\|^2+(1-\nu)\|\nabla^{\dagger}\nabla u\|^2
+2\operatorname{Re}(\nabla^{\dagger}\nabla u,Vu) \nonumber\\
&= \|Vu\|^2+
\nu\|\nabla^{\dagger}\nabla u\|^2
+(1-\nu)\operatorname{Re}(\nabla^{\dagger}\nabla u,L^{\nabla}_{V}u-Vu)+2\operatorname{Re}(\nabla^{\dagger}\nabla u,Vu) \nonumber\\
&= \|Vu\|^2+\nu\|\nabla^{\dagger}\nabla u\|^2+(1-\nu)\operatorname{Re}(\nabla^{\dagger}\nabla u,L^{\nabla}_{V}u)+(1+\nu)\operatorname{Re}(\nabla^{\dagger}\nabla u,Vu).
\end{align}
Using integration by parts and the ``product
rule"
\[
\nabla(Vu)= (\nabla^{\operatorname{End}} V)u+V\nabla u,
\]
for all $u\inC_{c}^{\infty}(E)$ we have
\begin{align}\label{E:long}
&\operatorname{Re} (\nabla^{\dagger}\nabla u, Vu)= \operatorname{Re} (\nabla u,\nabla (Vu)) = \operatorname{Re} (\nabla u, (\nabla^{\operatorname{End}} V)u+V\nabla u)\nonumber\\
&= \operatorname{Re} (\nabla u,(\nabla^{\operatorname{End}} V)u)+(\nabla u,V\nabla u)= (\operatorname{Re} Z) + W,
\end{align}
where
\[
Z:=(\nabla u, (\nabla^{\operatorname{End}} V)u)
\]
and
\begin{equation}\nonumber
W:=(\nabla u,V\nabla u)=(V^{1/2}\nabla u,V^{1/2}\nabla u).
\end{equation}
From~(\ref{E:long}) we get
\begin{align}\label{E:long-1+nu}
&(1+\nu)\operatorname{Re}(\nabla^{\dagger}\nabla u,Vu)=(1+\nu)\operatorname{Re} Z+(1+\nu)W \nonumber\\
&\geq -(1+\nu)|Z|+(1+\nu)W.
\end{align}
Using~(\ref{E:assumption-A1}) and
\begin{equation}\label{E:helping}
2ab\leq ka^2+k^{-1}b^2,
\end{equation}
where $a$, $b$ and $k$ are positive real numbers, we obtain
\begin{align}\label{E:long-1}
&|Z| \leq \ (\beta+1)\int_{M}|{\underbar{V}}^{1/2}
\nabla u|_{(T^*M\otimes E)_{x}}|\underbar{V}u|_{E_{x}}\,d\mu \nonumber\\
&\leq \frac{\nu\delta}{2}\|V^{1/2}\nabla u\|^2+\frac{(\beta+1)^2}{2\nu\delta}\|Vu\|^2,
\end{align}
for all $\delta>0$. Using~(\ref{E:helping}) again, we get
\begin{align}\label{E:long-2}
&|\operatorname{Re} (\nabla^{\dagger}\nabla u, L^{\nabla}_{V}u)| \leq |(\nabla^{\dagger}\nabla u, L^{\nabla}_{V}u)| \leq \frac{\alpha}{2}\|\nabla^{\dagger}\nabla u\|^2+\frac{1}{2\alpha}\|L^{\nabla}_{V}u\|^2,
\end{align}
for all $\alpha>0$. Combining~(\ref{E:equality}),~(\ref{E:long-1+nu}),~(\ref{E:long-1})
and~(\ref{E:long-2}) we obtain
\begin{align}\nonumber
&\|L^{\nabla}_{V}u\|^2 \ \geq \|Vu\|^2 + \nu\|\nabla^{\dagger}\nabla u\|^2
-\frac{(1+\nu)\nu\delta}{2}\|V^{1/2}\nabla u\|^2-\frac{(1+\nu)(\beta+1)^2}{2\nu\delta}\|Vu\|^2\nonumber\\
&+(1+\nu)\|V^{1/2}\nabla u\|^2-\frac{|1-\nu|\alpha}{2}\|\nabla^{\dagger}\nabla u\|^2
-\frac{|1-\nu|}{2\alpha}\|L^{\nabla}_{V}u\|^2,\nonumber
\end{align}
which upon rearranging leads to
\begin{align}\nonumber
&\left(1+\frac{|1-\nu|}{2\alpha}\right)\|L^{\nabla}_{V}u\|^2 \geq
\left(1-\frac{(1+\nu)(\beta+1)^2}{2\nu\delta}\right)\|Vu\|^2 \nonumber\\
&+\left(\nu-\frac{|1-\nu|\alpha}{2}\right)\|\nabla^{\dagger}\nabla u\|^2
+
\left((1+\nu)-\frac{(1+\nu)\nu\delta}{2}\right)\|V^{1/2}\nabla u\|^2. \nonumber
\end{align}
Finally, we observe that~(\ref{E:inequality-1}) and~(\ref{E:inequality-2}) will
follow from the last inequality if
\begin{equation}\label{E:constants}
|1-\nu|<\frac{2\nu}{\alpha}, \qquad \nu\delta<2,
\qquad\textrm{and}\qquad (1+\nu)(\beta+1)^2<2\nu\delta.
\end{equation}
Since, by hypothesis, $0\leq\beta<1$, there exist numbers $\nu>0$,
$\alpha>0$ and $\delta>0$ such that the
inequalities~(\ref{E:constants}) hold.
\end{proof}
\bigskip
\noindent\textbf{Continuation of the Proof of Theorem~\ref{T:main}.}
As indicated in section~\ref{SS:op-h}, the operator $L^{\nabla}_{V}|_{C_{c}^{\infty}(E)}$ is essentially self-adjoint and $(L^{\nabla}_{V}|_{C_{c}^{\infty}(E)})^{\sim}=H^{\nabla}_{2,V}$.
We will show that~(\ref{E:inequality-1}) and~(\ref{E:inequality-2}) hold for
all $u\in \textrm{D}^{\nabla}_2=\operatorname{Dom}(H^{\nabla}_{2,V})$, from which~(\ref{E:to-prove}) follows directly.
As $H^{\nabla}_{2,V}$ is a closed operator, there exists a sequence $\{u_k\}$ in
$C_{c}^{\infty}(E)$ such that $u_k\to u$ and $L^{\nabla}_{V}u_k\to H^{\nabla}_{2,V}u$ in $L^2(E)$. By Lemma~\ref{L:lemma-1} the sequence $\{u_k\}$
satisfies~(\ref{E:inequality-1}) and~(\ref{E:inequality-2}); hence, $\{Vu_k\}$, $\{\nabla^{\dagger}\nabla u_k\}$, and $\{V^{1/2}\nabla u_k\}$ are Cauchy sequences in the appropriate $L^2$-space (corresponding to $E$ or $T^*M\otimes E$). Furthermore, $\{\nabla u_k\}$ is a Cauchy sequence in $L^2(T^*M\otimes E)$ because
\[
\|\nabla u_k\|^2=(\nabla u_k, \nabla u_k)=(\nabla^{\dagger}\nabla u_k, u_k)\leq \|\nabla^{\dagger}\nabla u_k\|\|u_k\|.
\]
It remains to show that $Vu_k\to Vu$, $V^{1/2}\nabla u_k\to V^{1/2}\nabla u$, and $\nabla u_k\to \nabla u$ in the appropriate $L^2$-space.
As the proofs of these three convergence relations follow the same pattern, we will only show the details for the third one. We start by observing that from the essential self-adjointness of $\nabla^{\dagger}\nabla|_{C_{c}^{\infty}(E)}$ we get $\nabla^{\dagger}\nabla u_k\to \nabla^{\dagger}\nabla u$ in $L^2(E)$. Since $\{\nabla u_k\}$ is a Cauchy sequence in $L^2(T^*M\otimes E)$, it follows that $\nabla u_k$ converges to some element $\omega\in
L^2(T^*M\otimes E)$. Then, for all $\psi\in C_{c}^{\infty}(T^*M\otimes E)$ we have
\begin{equation}\nonumber
0 = (\nabla u_k,\psi)-(u_k,\nabla^{\dagger}\psi)\to (\omega,\psi)- (u,\nabla^{\dagger}\psi)=(\omega,\psi)-(\nabla u,\psi),
\end{equation}
where in the second equality we used integration by parts (see, for instance, Lemma 8.8 in~\cite{bms}), which is applicable because elliptic regularity tells us that $\operatorname{Dom}(H^{\nabla}_{2,V})\subset W^{2,2}_{\operatorname{loc}}(E)$.
With the three convergence relations at our disposal, taking the limit as $k\to\infty$ in all terms
in~(\ref{E:inequality-1}) and~(\ref{E:inequality-2}) (with $u$ replaced
by $u_k$) shows that~(\ref{E:inequality-1}) and~(\ref{E:inequality-2})
hold for all $u\in \textrm{D}^{\nabla}_2=\operatorname{Dom}(H^{\nabla}_{2,V})$. $\hfill\square$
\section{Proof of Theorem \ref{T:main-2}}
\subsection{Semigroup Representation Formula.}\label{S:semi-group}
Assuming that $M$ is geodesically complete (with Ricci curvature bounded from below by a constant in the case $p\geq 3$) and $0\leq V\in L_{\loc}^{\infty}(\operatorname{End} E)$, the operator $H^{\nabla}_{p,V}$ is $m$-accretive (see section~\ref{SS:op-h}), and its negative, $-H^{\nabla}_{p,V}$, generates a strongly continuous contraction semigroup $S_t$ on $L^p(E)$, $1<p<\infty$; see abstract Theorem II.3.15 in~\cite{engel-nagel}.
Before stating a crucial proposition for the proof of Theorem~\ref{T:main-2}, we describe a probabilistic setting. In the subsequent discussion, we assume that the underlying filtered probability space $(\Omega,\mathscr{F},\mathscr{F}_{*},\mathbb{P})$, where $\mathscr{F}_{*}$ is right-continuous and the pair $(\mathbb{P},\mathscr{F}_{t})$ is complete in measure theoretic sense for all $t\geq 0$, carries a Brownian motion $W$ on $\mathbb{R}^{l}$ sped up by 2, that is, $d[W_{t}^{j},W_{t}^{k}]=2\delta_{jk}\,dt$, where $\delta_{jk}$ is the Kronecker delta and $l\geq n=\dim M$ is sufficiently large. We will also assume $\mathscr{F}_{*}=\mathscr{F}_{*}(W)$. Let $B_{t}(x)\colon \Omega \times [0,\zeta(x))\to M$ be a Brownian motion starting at $x\in M$ with lifetime $\zeta(x)$. It is well known that this process can be constructed as the maximally defined solution of the Stratonovich equation
\[
dB_{t}(x)=\sum_{j=1}^{l}A_{j}(B_{t}(x))\underbar{d}W_{t}^{j},\quad B_{0}(x)=x,
\]
where $A_{j}$ are smooth vector fields on $M$ such that $\displaystyle\sum_{j=1}^{l}A^2_{j}=\Delta_{M}$.
\begin{remark} In the setting of Theorem~\ref{T:main-2}, for $p\neq 2$ our assumptions on $M$ imply that $M$ is stochastically complete; hence, in this case we have $\zeta(x)=\infty$.
\end{remark}
In the sequel, $\slash\slash_{t}^{x}\colon E_{x}\to E_{B_{t}(x)}$ stands for the stochastic parallel transport corresponding to the covariant derivative $\nabla$ on $E$. Additionally, the symbol $\mathscr{V}^{x}_{t}$ stands for the $\operatorname{End} E_{x}$-valued process (with lifetime $\zeta(x)$) defined as the unique pathwise solution to
\[
d\mathscr{V}^{x}_{t}=-\mathscr{V}^{x}_{t}(\slash\slash^{t,-1}_{x}V(B_{t}(x))\slash\slash_{t}^{x})\,dt, \qquad \mathscr{V}^{x}_{0}=I,
\]
where $\slash\slash^{t,-1}_{x}$ is the inverse of $\slash\slash_{t}^{x}$ and $I$ is the identity endomorphism.
We now state the proposition, which in the $p=2$ context is a special case of Theorem 1.3 in~\cite{Guneysu-10}. For an extension to possibly negative $V$, in the case $p=2$, see Theorem 2.11 in~\cite{Guneysu-12}. The proof of Theorem 1.3 in~\cite{Guneysu-10} is almost entirely applicable to the proposition below. Thus, we will only explain those parts in need of small changes to accommodate the general $1<p<\infty$.
\begin{prop}\label{P:FK-rep} Let $1<p<\infty$, let $M$ be a (smooth) connected Riemannian manifold without boundary, and let $E$, $\nabla$ be as in Theorem~\ref{T:main-2}. Assume that $M$ is geodesically complete. In the case $p\geq 3$, assume additionally that the Ricci curvature of $M$ is bounded from below by a constant. Assume that $V\in L_{\loc}^{\infty}(\operatorname{End} E)$ satisfies the inequality $V(x)\geq 0$, for all $x\in M$. Let $S_t$ be the semigroup defined in section~\ref{S:semi-group}. Then, we have the representation
\begin{equation}\label{E:F-K}
(S_{t}f)(x)=\mathbb{E}\left[\mathscr{V}^{x}_{t}\slash\slash^{t,-1}_{x}f(B_{t}(x))1_{\{t<\zeta(x)\}} \right],
\end{equation}
for all $f\in L^p(E)$.
\end{prop}
\begin{proof} We first assume that $0\leq V\in C(\operatorname{End} E)\cap L^{\infty}(\operatorname{End} E)$.
Denoting by $L^{0}(E)$ the space of Borel measurable sections, we define a family of operators $Q_{t}\colon L^{p}(E)\to L^{0}(E)$, $t\geq 0$, as
\[
(Q_{t}h)(x):=\mathbb{E}\left[\mathscr{V}^{x}_{t}\slash\slash^{t,-1}_{x}h(B_{t}(x))1_{\{t<\zeta(x)\}} \right].
\]
We will show that $Q_{t}$ are bounded operators $L^{p}(E)\to L^{p}(E)$. Using H\"{o}lder's inequality, on letting $q$ be the dual exponent to $p$, we can estimate
\begin{align*}
\|Q_th\|_{p}^p &\leq e^{pt\|V\|_{\infty}}\int_{M}\mathbb{E}[|h(B_t(x))|_{B_t(x)}]^p\,d\mu(x) \\
&= e^{pt\|V\|_{\infty}}\int_M\bigg{(} \int_M|h(y)|_y\rho_t(x,y)\,d\mu(y)\bigg{)}^p \,d\mu(x) \\
&\leq e^{pt\|V\|_{\infty}}\int_M\int_M|h(y)|^p_y\rho_t(x,y)\,d\mu(y)
\bigg{(}\int_M\rho_t(x,z)\,d\mu(z) \bigg{)}^{p/q}\,d\mu(x) \\
&\leq e^{pt\|V\|_{\infty}}\|h\|_{p}^p,
\end{align*}
where $\rho_t(x,y)$ denotes the minimal heat kernel of $M$. It follows that $Q_t\colon L^p(E)\to L^p(E)$ are bounded operators for all $t \geq 0$.
As shown in the discussion following equation (17) in \cite{Guneysu-10}, the operator $Q_t$ satisfies
the equation
\begin{equation*}
(Q_t\psi)(x) = \psi(x) - \int_0^tQ_sH^{\nabla}_{p, V}\psi(x)ds
\end{equation*}
for all $\psi \in C_c^{\infty}(E)$.
Therefore $Q_t$ solves the following differential equation
\begin{equation}\nonumber
\frac{dQ_t}{dt}\psi = -Q_tH^{\nabla}_{p, V}\psi,\qquad Q_0\psi = \psi,
\end{equation}
for all $\psi \in C_c^{\infty}(E)$.
On the other hand, by Lemma II.1.3 (ii) in \cite{engel-nagel}, the semigroup $S_t$ satisfies the same equation
\begin{equation}\nonumber
\frac{dS_t}{dt}\psi = -S_tH^{\nabla}_{p, V}\psi, \qquad
S_0\psi = \psi,
\end{equation}
for all $\psi \in C_c^{\infty}(E)$. Hence, $Q_t\psi = S_t\psi$ for all $\psi \in C_c^{\infty}(E)$, and thus $Q_tf = S_tf$ for all $f \in L^p(E)$.
This proves the proposition in the case that $0\leq V\in C(\operatorname{End} E)\cap L^{\infty}(\operatorname{End} E)$.
Now assume $0\leq V \in L^{\infty}(\operatorname{End} E)$. By Lemma 3.1 of \cite{Guneysu-10}, we can find a sequence
$0\leq V_k \in C(\operatorname{End} E)\cap L^{\infty}(\operatorname{End} E)$ such that for all $\psi \in C_c^{\infty}(E)$ we have
\begin{equation*}
\|H^{\nabla}_{p, V_k}\psi - H^{\nabla}_{p, V}\psi\|_p \rightarrow 0
\end{equation*}
as $k \rightarrow \infty$. Denote by $S_t^k$ the (strongly continuous, contractive) semigroup in $L^p(E)$ generated by $-H^{\nabla}_{p, V_k}$. As $C_c^{\infty}(E)$ is a common core for $H^{\nabla}_{p, V_k}$ and $H^{\nabla}_{p, V}$, it follows from the abstract Kato--Trotter
theorem, see Theorem III.4.8 in \cite{engel-nagel}, that $S^k_tf \rightarrow S_tf$ in $L^p(E)$, for all $f \in L^p(E)$, $1 < p < \infty$. From here, the proof of Theorem 1.1 in \cite{Guneysu-10} applies to obtain the formula \eqref{E:F-K} for
$0\leq V \in L^{\infty}(\operatorname{End} E)$.
The case of $0\leq V \in L_{\loc}^{\infty}(\operatorname{End} E)$ proceeds in exactly the same way as Theorem 1.3 in \cite{Guneysu-10}.
\end{proof}
Before stating a corollary concerning the resolvent domination, we introduce the resolvent notations:
\begin{align*}
R^{\nabla}_V &:= (H^{\nabla}_{p,V} + 1)^{-1} : L^p(E) \rightarrow L^p(E), \\
R^{d}_{v} &:= (H^{d}_{p,v} + 1)^{-1} : L^p(M) \rightarrow L^p(M).
\end{align*}
With the formula~(\ref{E:F-K}) and the assumption $V\geq vI$ at our disposal, the proof of the following corollary is the same as that of property (iv) in Theorem 2.13 of~\cite{Guneysu-12}.
\begin{cor}\label{C:domination} Let $M$, $\nabla$, and $E$ be as in Proposition \ref{P:FK-rep}. Assume that $V\in L_{\loc}^{\infty}(\operatorname{End} E)$ satisfies the inequality $V(x)\geq v(x)I$, for all $x\in M$, where $0\leq v\in L_{\loc}^{\infty}(M)$. Then, for all $f \in L^p(E)$, $1<p<\infty$, we have
\begin{equation}\label{E:domination}
| R^{\nabla}_Vf(x) |_{E_{x}} \leq \big(R^{d}_{v} |f|\big)(x).
\end{equation}
\end{cor}
In the next proposition we state a coercive estimate for $L^{d}_{v}$. In the case $p=2$, assuming just geodesic completeness on $M$, the inequality~(\ref{M:Del_bound}) below was proven in Lemma 8 in \cite{Milatovic06-separation}. For the proof of~(\ref{M:Del_bound}) in the case $p\neq 2$ see Theorem 1.2 in~\cite{Milatovic13-separation}. Though stated under a bounded geometry hypothesis on $M$, the proof of the quoted result from~\cite{Milatovic13-separation}, which uses a sequence of second order cut-off functions along with $L^p$-positivity preservation property mentioned in section~\ref{S:intro-1} above, works without change if we assume, in addition to geodesic completeness, that the Ricci curvature of $M$ is bounded from below by a constant. We should also mention that the two cited results from~\cite{Milatovic06-separation, Milatovic13-separation} use the assumption~\eqref{E:assumption-2}.
\begin{prop}\label{L:c-e} Let $M$ be as in the hypotheses of Theorem~\ref{T:main-2}. Assume that $0\leq v\in C^1(M)$ satisfies~(\ref{E:assumption-2}). Then, the following estimate holds for all $u\in D^{d}_p$:
\begin{equation}\label{M:Del_bound}
\|vu\|_{p} \leq C\|L^{d}_{v}u\|_{p} = C\|H^{d}_{p,v} u\|_{p},
\end{equation}
where $C\geq 0$ is a constant.
\end{prop}
\noindent \textbf{Continuation of the Proof of Theorem~\ref{T:main-2}} In the following discussion, $C$ will indicate a non-negative constant, not necessarily the same as the one in~(\ref{M:Del_bound}). Let $v \colon L^p(M) \rightarrow L^p(M)$ denote the maximal multiplication operator corresponding to the function $v$. We first show that the operator $vR^{d}_{v}: L^p(M) \rightarrow L^p(M)$ is bounded. Letting $w\in L^p(M)$ be arbitrary, we have $R^{d}_{v}w \in \operatorname{Dom}(H^{d}_{p,v})= D^{d}_p$. Applying \eqref{M:Del_bound} with $u=R^{d}_{v}w$, we obtain
\begin{align*}
&\|vR^{d}_{v}w\|_{p} \leq C\|H^{d}_{p,v}R^{d}_{v}w\|_{p} \leq C(\|w\|_p + \|R^{d}_{v}w\|_{p}) \leq C\|w\|_{p}.
\end{align*}
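The middle inequality in this chain uses only the resolvent identity and the contractivity of the semigroup generated by $-H^{d}_{p,v}$ (recall $v\geq 0$); a minimal sketch:
\begin{align*}
H^{d}_{p,v}R^{d}_{v}w &= (H^{d}_{p,v}+1)R^{d}_{v}w - R^{d}_{v}w = w - R^{d}_{v}w, \\
\|R^{d}_{v}w\|_{p} &= \Big\|\int_0^{\infty}e^{-s}\,e^{-sH^{d}_{p,v}}w\,ds\Big\|_{p} \leq \int_0^{\infty}e^{-s}\|w\|_{p}\,ds = \|w\|_{p}.
\end{align*}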
This proves $vR^{d}_{v}: L^p(M) \rightarrow L^p(M)$ is a bounded operator.
We then observe that by the boundedness of the operator $vR^{d}_{v}$, the assumption
$v(x)I \leq V(x) \leq \delta v(x)I$, and the
domination inequality~\eqref{E:domination}, we have
\begin{equation}\nonumber
\|V R^{\nabla}_Vf\|_{p} \leq \delta\|vR^{d}_{v}|f|\|_{p} \leq C\|f\|_{p},
\end{equation}
for all $f\in L^p(E)$. This shows that the operator $VR^{\nabla}_V\colon L^p(E)\rightarrow L^p(E)$ is bounded.
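Spelled out pointwise, the chain of estimates behind this observation reads, for $\mu$-a.e. $x\in M$,
\begin{equation*}
|VR^{\nabla}_Vf(x)|_{E_{x}} \leq \delta\, v(x)\,|R^{\nabla}_Vf(x)|_{E_{x}} \leq \delta\, v(x)\,\big(R^{d}_{v}|f|\big)(x),
\end{equation*}
where the first step uses $V(x)\leq \delta v(x)I$ and the second the domination inequality~\eqref{E:domination}; taking $L^p$-norms and using the boundedness of $vR^{d}_{v}$ gives the stated bound.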
Let $h \in D^{\nabla}_p$ be arbitrary and write $Vh = VR^{\nabla}_V(h + L^{\nabla}_{V}h)$. Using the boundedness of the operator $V R^{\nabla}_V$, we obtain
\begin{equation}\nonumber
\|Vh\|_{p} \leq C(\|h\|_{p} + \|L^{\nabla}_{V}h\|_{p}),
\end{equation}
which shows that $L^{\nabla}_{V}$ is separated in $L^p(E)$. $\hfill\square$
\section{Introduction}
\indent Gowdy models are $U(1)\times U(1)$ symmetry reductions with
many interesting applications in cosmology and quantum gravity,
since they provide inhomogeneous systems with local degrees of
freedom and invariance under a certain class of diffeomorphisms
\cite{Gowdy:1971jh}.
\indent The exact quantization of the linearly polarized Gowdy
$\mathbb{T}^3$ model in the vacuum has been extensively analyzed
\cite{Misner,Berger:1973,Mena:1997,Pierri:2000ri,
Corichi:2002vy,Torre:2002xt,Cortez,Corichi:2006xi,BarberoG.:2006zw,Corichi:2006zv,Mena:2007,
Torre:2007zj,Mena:2007schr}. Its gravitational local degrees of
freedom can be interpreted as those corresponding to a massless
scalar field in a fiducial background, so that the usual techniques
of QFT in curved spacetimes can be applied in order to construct the
quantum theory. The fact that the linear symplectic transformations
describing the classical time evolution cannot be unitarily
implemented in the physical Hilbert space when the system is written
in terms of its original variables was initially interpreted as a
serious obstacle for the feasibility of the model
\cite{Corichi:2002vy}. Nevertheless, it is possible to overcome this
problem by a suitable time-dependent redefinition of the field
\cite{Corichi:2006xi}. Furthermore, by demanding the unitarity of
the dynamics and the invariance under an extra $U(1)$ symmetry
generated by a residual global constraint, the existence of a unique
(up to unitary equivalence) Fock representation can be proved for
the system \cite{Corichi:2006zv,Mena:2007}.
\bigskip
\indent The existing literature has been recently extended to the
remaining topologies, $\mathbb{S}^{1}\times\mathbb{S}^{2}$ and
$\mathbb{S}^{3}$, allowing the coupling of gravity to massless
scalar fields (see \cite{BarberoG.:2007} for a rigorous classical
treatment of these models). Here, both gravitational and matter
local degrees of freedom can be encoded by massless scalar fields
evolving in the same fixed background metric. Therefore, they can be
treated in a unified way for the construction of the quantum theory.
A re-scaling of the fields similar to the one defined in the
three-torus case permits also a unitary implementation of the
dynamics \cite{BarberoG.:2007qt}. Concretely, this redefinition is
dictated by the conformal factor $\sin t$ that relates the Gowdy
metrics to the Einstein static (1+2)-universe. For these models, at
variance with the three-torus case, there is no extra constraint, so
that one obtains a family of (in general) unitarily nonequivalent
Fock representations, and in principle there is no symmetry argument
to select a preferred one. However, the uniqueness of the
representation can be recovered in these cases by imposing the
unitarity of the dynamics and the $SO(3)$ invariance of the Fock
construction\footnote{See also \cite{MenaSchr} for an independent
proof of this result. In this reference, some problems concerning
the completeness of the results given in \cite{BarberoG.:2007qt}
were pointed out. Nevertheless, they can be easily solved by
introducing some minor changes that will be taken into account in
the next section.} \cite{BarberoG.:2007qt}. Furthermore, it is
expected that a discussion similar to the one developed in
\cite{Mena:2007} for the vacuum Gowdy $\mathbb{T}^{3}$ model will
lead us to conclude that this redefinition of the fields is the only
reasonable one (up to multiplicative constants) providing unitary
dynamics under the condition of $SO(3)$ invariance.
\bigskip
\indent We will consider the Schr\"{o}dinger representation for the
linearly polarized Gowdy $\mathbb{S}^{1}\times\mathbb{S}^{2}$ and
$\mathbb{S}^{3}$ models coupled to massless scalar fields, where the
states act as functionals on the quantum configuration space
$\overline{\mathscr{C}}$ for a fixed time $t_0$. Here
$\overline{\mathscr{C}}$ is an appropriate distributional extension
of the classical configuration space $\mathscr{C}$, taken in these
cases to be the space of tempered distributions on the 2-sphere. The
Hilbert space then takes the form
$\mathscr{H}_{s}(t_0)=L^{2}(\overline{\mathscr{C}},\mathrm{d}\mu_{t_0})$.
The identification of the Gaussian nature of the measure
$\mu_{t_0}$, the nonstandard representation of the momentum
operator, and the relation between Schr\"{o}dinger and Fock
representations were exhaustively analyzed in \cite{CorichiSchr1} as
a natural extension to the functional description of the Fock
quantization of scalar fields in curved backgrounds \cite{Wald}. In
the QFT context, the Schr\"{o}dinger representation has been
historically pushed into the background in favor of the usual Fock
one because of the difficulty in using it to address sensible
questions regarding physical scattering processes. However, it is
certainly the most natural representation in the context of
canonical quantum gravity, in view of the splitting of spacetime
into spatial sections of constant time. Furthermore, as was pointed
out in \cite{Torre:2007zj} for the vacuum three-torus case, it
provides a better understanding of the properties of the quantized
field, since it is possible to determine the behavior of the typical
field confi\-gurations through the study of the measure support. The
Schr\"{o}dinger representation is also a privileged framework to
probe the existence and properties of semiclassical and squeezed
states for these systems. This paper is thus a necessary first step
toward tackling this issue, which will be considered elsewhere after
the rigorous analysis of this type of quantization
\cite{Preparation}. Note that this is not a trivial question owing
to the nonautonomous nature of the Hamiltonian that governs the
reduced dynamics of the models.
\bigskip
In this paper, we will closely rely on the notation and results of
\cite{BarberoG.:2007,BarberoG.:2007qt}, where the reader can find
the classical and quantum formulations of the Gowdy
$\mathbb{S}^{1}\times\mathbb{S}^{2}$ and $\mathbb{S}^{3}$ models, as
well as on the recent works \cite{Torre:2007zj,Mena:2007schr},
devoted to the Schr\"{o}dinger representation for the vacuum Gowdy
$\mathbb{T}^{3}$ model both for the original and the redefined
scalar fields, respectively. In particular, the need for extending
the results found for the three-torus case to the remaining
topologies, and discussing the differences between them, was already
pointed out in \cite{Torre:2007zj}. In section \ref{FOCK} we will
summarize the features of the Fock construction for the Gowdy
$\mathbb{S}^{1}\times\mathbb{S}^{2}$ and $\mathbb{S}^{3}$ models
corresponding to the re-scaled fields for which the dynamics is
unitary, analyzing in subsection \ref{Self-adjointness} the
implementation of the Hamiltonian as a self-adjoint operator for
each value of the time parameter. We will also discuss here the
possibility of modifying the expression of the Hamiltonian at the
classical level in order to avoid some problems regarding the domain
of its quantum counterpart. In section \ref{SCHROD}, we will proceed
to define the Schr\"{o}dinger representation for these models in
such a way that the construction is unitarily equivalent to the Fock
one. In particular, we will probe the properties and support of the
measure $\mu_{t_0}$ in subsection \ref{METRIC}, as well as the
representation of the canonical commutation relations in subsection
\ref{CCR}. In section \ref{UNITARY}, we will check that, as a
consequence of the unitary implementation of the time evolution, the
representations corresponding to different values of the time
parameter are unitarily equivalent, and also that their associated
measures are mutually absolutely continuous. Finally, in section
\ref{COMMENTS}, we will make some comments and remarks on the
results of the paper, in particular concerning their similarity with
those found for the three-torus case.
\section{Fock representation}\label{FOCK}
\subsection{General framework}
\indent The dynamics of both gravitational and matter local degrees
of freedom\footnote{In what follows we will consider the use of the
redefined scalar field $\xi$ for which the dynamics can be unitarily
implemented \cite{BarberoG.:2007qt}. We will not study the global
modes present in these models \cite{BarberoG.:2007}. They can be
quantized in a straightforward way in terms of standard position and
momentum operators with dense domain in $L^{2}(\mathbb{R})$.} in the
linearly polarized Gowdy $\mathbb{S}^{1}\times\mathbb{S}^{2}$ and
$\mathbb{S}^{3}$ models can be described by the same non\-autonomous
Hamiltonian system $(\mathbf{P},\omega,H(t))$, whose features we
proceed to summarize. Let
$\gamma_{ab}=(\mathrm{d}\theta)_{a}(\mathrm{d}\theta)_{b}+\sin^{2}\theta(\mathrm{d}\sigma)_{a}(\mathrm{d}\sigma)_{b}\,$
be the round metric in the 2-sphere $\mathbb{S}^{2}$, with spherical
coordinates $(\theta,\sigma)\in(0,\pi)\times(0,2\pi)$. $\mathbf{P}$
is the space of smooth and symmetric Cauchy data $(Q,P)\in
C^{\infty}(\mathbb{S}^{2};\mathbb{R})\times
C^{\infty}(\mathbb{S}^{2};\mathbb{R})$, with
$\mathcal{L}_{\sigma}Q=0=\mathcal{L}_{\sigma}P$, where
$\mathcal{L}_{\sigma}$ denotes the Lie derivative with respect to
the vector field $\sigma^{a}=(\partial/\partial\sigma)^{a}$. The
standard (weakly) symplectic structure
$\omega:\mathbf{P}\times\mathbf{P}\rightarrow\mathbb{R}$ is given by
\begin{equation}\label{omega}
\omega((Q_1,P_1),(Q_2,P_2)):=\int_{\mathbb{S}^{2}}|\gamma|^{1/2}(Q_{2}P_{1}-Q_{1}P_{2})\,,\,\,\,(Q_1,P_1),(Q_2,P_2)\in\mathbf{P}\,.
\end{equation}
The symplectic space $(\mathbf{P},\omega)$ is then the
\emph{canonical phase space} of the system. Finally,
$H:(0,\pi)\times\mathbf{P}\rightarrow\mathbb{R}$ is the (indefinite)
nonautonomous Hamiltonian
\begin{equation}\label{ClassHamiltonian}
H(t;Q,P):=\frac{1}{2}\int_{\mathbb{S}^{2}}|\gamma|^{1/2}(P^2+\cot
t\,QP-Q\Delta_{\mathbb{S}^{2}}Q)\,,
\end{equation}
where $\Delta_{\mathbb{S}^{2}}$ denotes the Laplace-Beltrami
operator on the round 2-sphere. Consider now the space of smooth and
symmetric real solutions to the Euler-Lagrange equation derived from
the Hamilton equations\footnote{The dot denotes time derivative.}
\begin{equation}
\mathcal{S}:=\Big\{\xi\in
C^{\infty}((0,\pi)\times\mathbb{S}^{2};\mathbb{R})\,\big|\,-\ddot{\xi}+\Delta_{\mathbb{S}^{2}}\xi=\frac{1}{4}(1+\csc^{2}t)\xi\,,\,\,\,\mathcal{L}_{\sigma}\xi=0\Big\}\,.
\end{equation}
We define the \emph{covariant phase space} of the system as the pair
$(\mathcal{S},\Omega)$, where
$\Omega:\mathcal{S}\times\mathcal{S}\rightarrow\mathbb{R}$ is the
symplectic structure naturally induced by the $\omega$ given in
(\ref{omega}),
\begin{equation}\label{Omega}
\Omega(\xi_1,\xi_2):=\int_{\mathbb{S}^{2}}|\gamma|^{1/2}\iota_{t}^{*}(\xi_2\dot{\xi}_1-\xi_1\dot{\xi}_2)\,,\,\,\,\xi_1,\xi_2\in\mathcal{S}\,,
\end{equation}
with
$\iota_{t}:\mathbb{S}^{2}\rightarrow(0,\pi)\times\mathbb{S}^{2}$
being the embedding of the 2-sphere as a Cauchy surface of constant
time $t$.
\bigskip
\indent In order to obtain the quantum theory for these models, it
is necessary to construct the one-particle Hilbert space of the
system $\mathscr{H}_{\mathcal{P}}$. Consider the Lagrangian subspace
\begin{equation}\label{subspaceP}
\mathcal{P}:=\Big\{Z\in\mathcal{S}_{\mathbb{C}}\,\big|\,Z=\sum_{\ell=0}^{\infty}a_{\ell}z_{\ell}Y_{\ell
0}\,,\,\,a_{\ell}\in\mathbb{C}\Big\}
\end{equation}
of the complexification $\mathcal{S}_{\mathbb{C}}$ of the solution
space $\mathcal{S}$, where $(z_{\ell})_{\ell=0}^{\infty}$ is a
family of complex linearly independent solutions to the equation
\begin{equation}\label{equationz}
\ddot{z}_{\ell}+\left(\frac{1}{4}\big(1+\csc^{2}t\big)+\ell(\ell+1)\right)z_{\ell}=0
\end{equation}
satisfying the normalization condition\footnote{The bar denotes
complex conjugation.}
\begin{equation}\label{norm}
z_{\ell}\dot{\bar{z}}_{\ell}-\bar{z}_{\ell}\dot{z}_{\ell}=i\,,
\end{equation}
and $Y_{\ell 0}$ are the spherical harmonics verifying the
orthogonality conditions $\int_{\mathbb{S}^{2}}|\gamma|^{1/2}Y_{\ell 0}Y_{\ell^{\prime}0}=\delta(\ell,\ell^{\prime})$. The one-particle
Hilbert space $\mathscr{H}_{\mathcal{P}}$ is then the Cauchy
completion of the subspace $\mathcal{P}$ with respect to the inner
product
\begin{equation}
\langle
Z_1\,|\,Z_2\rangle_{\mathcal{P}}:=-i\Omega_{\mathbb{C}}(\bar{Z}_1,Z_2)=\sum_{\ell=0}^{\infty}\bar{a}^{(1)}_{\ell}a^{(2)}_{\ell}\,,\,\,\,Z_{1},Z_{2}\in\mathcal{P}\,,
\end{equation}
where $\Omega_{\mathbb{C}}$ is the extension of the symplectic
structure (\ref{Omega}) to $\mathcal{S}_{\mathbb{C}}$ by linearity.
Finally, the Hilbert space\footnote{The distinction between
kinematical and physical Hilbert spaces is unnecessary in these
models, owing to the absence of extra constraints
\cite{BarberoG.:2007}.} of the models is given by the symmetric Fock
space defined on $\mathscr{H}_{\mathcal{P}}$,
\begin{equation}
\mathscr{F}_{\mathcal{P}}:=\bigoplus_{n=0}^{\infty}\mathscr{H}_{\mathcal{P}}^{\otimes_{s}n}\,,
\end{equation}
where $\mathscr{H}^{\otimes_{s}n}_{\mathcal{P}}$ denotes the
subspace of $\mathscr{H}_{\mathcal{P}}^{\otimes
n}=\otimes_{k=1}^{n}\mathscr{H}_{\mathcal{P}}$ spanned by symmetric
tensor pro\-ducts of $n$ vectors in $\mathscr{H}_{\mathcal{P}}$. The
possible choices of Lagrangian subspaces $\mathcal{P}$ are encoded
in the following two-parameter family of $z_{\ell}$ functions
satisfying (\ref{norm}):
\begin{equation}
z_{\ell}(t)=\sqrt{\frac{\sin
t}{2}}\left(\rho_{\ell}\mathscr{P}_{\ell}(\cos
t)+(\nu_{\ell}+i\rho_{\ell}^{-1})\mathscr{Q}_{\ell}(\cos t)\right),
\end{equation}
with $\rho_{\ell}>0$, $\nu_{\ell}\in\mathbb{R}$, modulo a
multiplicative phase that plays no role in the context of the study
of unitary implementation of dynamics. $\mathscr{P}_{\ell}$ and
$\mathscr{Q}_{\ell}$ denote the Legendre functions of the first and
second kind, respectively.\\
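As a quick consistency check, note that the normalization (\ref{norm}) is compatible with the dynamics: since the frequency appearing in (\ref{equationz}) is real,
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d}t}\big(z_{\ell}\dot{\bar{z}}_{\ell}-\bar{z}_{\ell}\dot{z}_{\ell}\big)=z_{\ell}\ddot{\bar{z}}_{\ell}-\bar{z}_{\ell}\ddot{z}_{\ell}=0\,,
\end{equation*}
so that the condition (\ref{norm}) needs to be imposed only at a single instant of time.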
\indent Every choice of $\mathcal{P}$ is in one-to-one
correspondence with an $\Omega$-compatible $SO(3)$-invariant complex
structure on $\mathcal{S}$,
$J_{\mathcal{P}}:\mathcal{S}\rightarrow\mathcal{S}$,
$J_{\mathcal{P}}^{2}=-\mathrm{Id}_{\mathcal{S}}$ (see
\cite{BarberoG.:2007qt}). Indeed, any solution $\xi\in\mathcal{S}$
can be uniquely decomposed as $\xi=Z+\bar{Z}$, with
$Z\in\mathcal{P}$, in such a way that $J_\mathcal{P}$ is defined as
\begin{equation}\label{Jp}
J_{\mathcal{P}}\xi:=i(Z-\bar{Z})\,.
\end{equation}
As proved in \cite{BarberoG.:2007qt}, the linear symplectic
transformations that describe the time evolution can be unitarily
implemented in the Hilbert space $\mathscr{F}_{\mathcal{P}}$ for all
those $SO(3)$-invariant complex structures $J_{\mathcal{P}}$
characterized by pairs $(\rho_{\ell},\nu_{\ell})_{\ell=0}^{\infty}$
such that\footnote{The expression of the $\rho_{\ell}$ coefficients
appearing in \cite{BarberoG.:2007qt} is incomplete, and needs to be
corrected by taking into account the subdominant term that appears
in (\ref{rhonu}). With more generality, the expression of
$\nu_{\ell}$ given in \cite{BarberoG.:2007qt} must also be replaced
by the one of equation (\ref{rhonu}) in order to explicitly include
nonpolynomial decreasing behaviors. Taking these minor changes into
consideration, we completely characterize the biparametric family of
complex structures for which dynamics is unitary, and not only a
subfamily as in \cite{BarberoG.:2007qt}, solving the problems
pointed out at the end of reference \cite{MenaSchr}. We must remark,
in any case, that these corrections do not affect the main
conclusions achieved in \cite{BarberoG.:2007qt}, in particular,
concerning the proof of uniqueness of the Fock representation, whose
simplicity typifies the usefulness of the formalism developed in
\cite{BarberoG.:2007qt}.}
\begin{equation}\label{rhonu}
\rho_{\ell}=\sqrt{\frac{\pi}{2}}+x_{\ell}>0\,,\,\,\,(x_{\ell})_{\ell=0}^{\infty}\in\ell^{2}(\mathbb{R})\,,\quad\mathrm{and}\quad
(\nu_{\ell})_{\ell=0}^{\infty}\in\ell^{2}(\mathbb{R})\,.
\end{equation}
In addition, all the Fock representations obtained through
(\ref{rhonu}) are unitarily equivalent
\cite{BarberoG.:2007qt,MenaSchr}. In the following, we will
implicitly assume the use of a concrete complex structure
$J_{\mathcal{P}}$ of this type.
\subsection{Self-adjointness of the quantum
Hamiltonian}\label{Self-adjointness}
\indent Note that due to the nonautonomous nature of the classical
Hamiltonian (\ref{ClassHamiltonian}), the dynamics does not define a
one-parameter symplectic group on $(\mathbf{P},\omega)$, so we
cannot apply Stone's theorem to justify the self-adjointness of the
corresponding (one-parameter family of) operators in the quantum
theory. Nevertheless, it is possible to show that the quantum
Hamiltonian is self-adjoint for each value of the time parameter $t$
by analyzing the unitary implementability on
$\mathscr{F}_{\mathcal{P}}$ of the one-parameter symplectic group
generated by the \emph{autonomous} Hamiltonian $H(\tau)$, once a
value $t=\tau\in(0,\pi)$ has been fixed. Here, we will follow the
efficient procedure employed in \cite{Torre:2002xt} for the Gowdy
$\mathbb{T}^{3}$ model, subsequently genera\-lized in
\cite{BarberoG.:2006zw} to discuss the self-adjointness of general
quadratic operators in this context. We start by considering the
auxiliary system $(\mathbf{P},\omega,H(\tau))$, where the dynamics
is governed by the classical autonomous Hamiltonian
\begin{equation}
H(\tau)=\frac{1}{2}\sum_{\ell=0}^{\infty}\big(K_{\ell}(\tau)a_{\ell}^{2}+\bar{K}_{\ell}(\tau)\bar{a}_{\ell}^{2}+2G_{\ell}(\tau)\bar{a}_{\ell}a_{\ell}\big)\,,
\end{equation}
with
\begin{eqnarray}\label{K&G}
K_{\ell}(\tau)&:=&\Big(\dot{z}_{\ell}(\tau)-\frac{1}{2}\cot
\tau\,z_{\ell}(\tau)\Big)^{2}+\ell(\ell+1)z_{\ell}^{2}(\tau)+\cot
\tau\,\Big(\dot{z}_{\ell}(\tau)-\frac{1}{2}\cot
\tau\,z_{\ell}(\tau)\Big)z_{\ell}(\tau)\,,\nonumber\\
G_{\ell}(\tau)&:=&\Big|\dot{z}_{\ell}(\tau)-\frac{1}{2}\cot
\tau\,z_{\ell}(\tau)\Big|^{2}+\ell(\ell+1)|z_{\ell}(\tau)|^{2}\nonumber\\
&&+\frac{1}{2}\cot\tau\,\left(\Big(\dot{z}_{\ell}(\tau)-\frac{1}{2}\cot
\tau\,z_{\ell}(\tau)\Big)\bar{z}_{\ell}(\tau)+\Big(\dot{\bar{z}}_{\ell}(\tau)-\frac{1}{2}\cot
\tau\,\bar{z}_{\ell}(\tau)\Big)z_{\ell}(\tau)\right).
\end{eqnarray}
The modes $a_{\ell},\bar{a}_{\ell}$ are defined through the
relations
$Q_{\ell}:=\int_{\mathbb{S}^{2}}|\gamma|^{1/2}QY_{\ell0}=z_{\ell}(\tau)a_{\ell}+\bar{z}_{\ell}(\tau)\bar{a}_{\ell}$,
$P_{\ell}:=\int_{\mathbb{S}^{2}}|\gamma|^{1/2}PY_{\ell0}=\big(\dot{z}_{\ell}(\tau)-(1/2)\cot\tau\,z_{\ell}(\tau)\big)a_{\ell}+\big(\dot{\bar{z}}_{\ell}(\tau)-(1/2)\cot\tau\,\bar{z}_{\ell}(\tau)\big)\bar{a}_{\ell}$.
Their evolution in a fictitious time parameter $s\in\mathbb{R}$ is
given by the linear equations\footnote{Here $\{\cdot,\cdot\}$
denotes the Poisson bracket defined from (\ref{omega}), with
$\{a_{\ell},\bar{a}_{\ell^{\prime}}\}=-i\delta(\ell,\ell^{\prime})\mathbb{I}$.}
\begin{eqnarray}
\frac{\mathrm{d}a_{\ell}}{\mathrm{d}s}&=&\{a_{\ell},H(\tau)\}=-i\big(G_{\ell}(\tau)a_{\ell}+\bar{K}_{\ell}(\tau)\bar{a}_{\ell}\big)\,,\\
\frac{\mathrm{d}\bar{a}_{\ell}}{\mathrm{d}s}&=&\{\bar{a}_{\ell},H(\tau)\}=i\big(K_{\ell}(\tau)a_{\ell}+G_{\ell}(\tau)\bar{a}_{\ell}\big)\,.\nonumber
\end{eqnarray}
Using the normalization condition (\ref{norm}), we easily obtain the
second-order differential equation
\begin{equation}\label{dderiva}
\frac{\mathrm{d}^{2}a_{\ell}}{\mathrm{d}s^{2}}=-\Big(\ell(\ell+1)-\frac{1}{4}\cot^{2}\tau\Big)a_{\ell}\,,
\end{equation}
whose solutions have a linear dependence on the initial conditions
$a_{\ell}(s_0)$ and $\bar{a}_{\ell}(s_0)$,
\begin{eqnarray}\label{transforma}
a_{\ell}(s)=\alpha_{\ell}(s,s_0)a_{\ell}(s_0)+\beta_{\ell}(s,s_0)\bar{a}_{\ell}(s_0)\,,\,\,\,\bar{a}_{\ell}(s)=\overline{a_{\ell}(s)}\,.
\end{eqnarray}
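Before proceeding, let us spell out the origin of (\ref{dderiva}). In terms of the mode pair $(Q_{\ell},P_{\ell})$, Hamilton's equations for $H(\tau)$ (equivalently, for (\ref{ClassHamiltonian}) at frozen $t=\tau$) read
\begin{equation*}
\frac{\mathrm{d}Q_{\ell}}{\mathrm{d}s}=P_{\ell}+\frac{1}{2}\cot\tau\,Q_{\ell}\,,\qquad
\frac{\mathrm{d}P_{\ell}}{\mathrm{d}s}=-\frac{1}{2}\cot\tau\,P_{\ell}-\ell(\ell+1)Q_{\ell}\,,
\end{equation*}
whence $\mathrm{d}^{2}Q_{\ell}/\mathrm{d}s^{2}=-\big(\ell(\ell+1)-\frac{1}{4}\cot^{2}\tau\big)Q_{\ell}$; the same second-order equation is satisfied by $P_{\ell}$ and, by linearity, by the fixed combinations $a_{\ell}$ and $\bar{a}_{\ell}$.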
This symplectic transformation is unitarily implementable on
$\mathscr{F}_{\mathcal{P}}$ for each $s\in\mathbb{R}$, i.e., there
exists a unitary operator
$\hat{u}(s,s_0):\mathscr{F}_{\mathcal{P}}\rightarrow\mathscr{F}_{\mathcal{P}}$
such that
$\hat{u}(s,s_0)\hat{a}_{\ell}\hat{u}^{-1}(s,s_0)=\alpha_{\ell}(s,s_0)\hat{a}_{\ell}+\beta_{\ell}(s,s_0)\hat{a}_{\ell}^{\dag}$,
$\hat{u}(s,s_0)\hat{a}_{\ell}^{\dag}\hat{u}^{-1}(s,s_0)=\bar{\beta}_{\ell}(s,s_0)\hat{a}_{\ell}+\bar{\alpha}_{\ell}(s,s_0)\hat{a}_{\ell}^{\dag}$,
if and only if the Bogoliubov coefficients $\beta_{\ell}$ are square
summable \cite{Shale},
\begin{equation}\label{betacoeff}
\sum_{\ell=0}^{\infty}|\beta_{\ell}(s,s_0)|^{2}<+\infty\,.
\end{equation}
Note that, for each value of $\tau\in(0,\pi)$, there exists
$\ell_{0}\in\mathbb{N}\cup\{0\}$ such that
\begin{equation*}
\lambda_{\ell}^{2}:=\ell(\ell+1)-\frac{1}{4}\cot^{2}\tau>0\,,\,\,\,\forall\,\ell>\ell_{0}\,.
\end{equation*}
In this situation,
\begin{eqnarray*}
\alpha_{\ell}(s,s_0)&=&\cos\big(\lambda_{\ell}(s-s_0)\big)-i\lambda_{\ell}^{-1}G_{\ell}(\tau)\sin\big(\lambda_{\ell}(s-s_0)\big)\,,\\
\beta_{\ell}(s,s_0)&=&-i\lambda_{\ell}^{-1}\bar{K}_{\ell}(\tau)\sin\big(\lambda_{\ell}(s-s_0)\big)\,.
\end{eqnarray*}
It suffices to consider the modes corresponding to $\ell>\ell_{0}$,
since the convergence of the series (\ref{betacoeff}) depends, in
practice, only on the high-frequency behavior of the $\beta_{\ell}$
coefficients. Taking into account the asymptotic expansions in
$\ell$
\begin{eqnarray}\label{asymp}
&&z_{\ell}(t)=\frac{1}{\sqrt{2\ell}}\exp\left(-i[(\ell+1/2)t-\pi/4]\right)+O(\ell^{-3/2})\,,\\
&&\dot{z}_{\ell}(t)-\frac{1}{2}\cot t\,z_{\ell}(t)=
-i\sqrt{\frac{\ell}{2}}\exp\left(-i[(\ell+1/2)t-\pi/4]\right)+O(\ell^{-1/2})\,,\nonumber
\end{eqnarray}
we have $K_{\ell}(\tau)=O(1)$, so that
$\sum_{\ell>\ell_0}\lambda_{\ell}^{-2}|K_{\ell}(\tau)|^{2}\sin^{2}\big(\lambda_{\ell}(s-s_0)\big)<+\infty$,
$\forall\,s\in\mathbb{R}$, and hence (\ref{betacoeff}) is verified.
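The cancellation behind $K_{\ell}(\tau)=O(1)$ can be made explicit: by (\ref{asymp}), the contribution of $\big(\dot{z}_{\ell}(\tau)-\frac{1}{2}\cot\tau\,z_{\ell}(\tau)\big)^{2}$ is $-\frac{\ell}{2}\,\mathrm{e}^{-2i[(\ell+1/2)\tau-\pi/4]}$ at leading order, that of $\ell(\ell+1)z_{\ell}^{2}(\tau)$ is $\frac{\ell+1}{2}\,\mathrm{e}^{-2i[(\ell+1/2)\tau-\pi/4]}$, and the cross term is already $O(1)$, so that
\begin{equation*}
K_{\ell}(\tau)=\Big(-\frac{\ell}{2}+\frac{\ell+1}{2}\Big)\mathrm{e}^{-2i[(\ell+1/2)\tau-\pi/4]}+O(1)=O(1)\,,
\end{equation*}
and consequently $\lambda_{\ell}^{-2}|K_{\ell}(\tau)|^{2}\sin^{2}\big(\lambda_{\ell}(s-s_0)\big)=O(\ell^{-2})$, which is summable.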
Finally, the transformation (\ref{transforma}) is implementable as a
continuous, unitary, one-parameter group if it verifies the strong
continuity condition in the auxiliary parameter $s$
\begin{equation}
\lim_{s\rightarrow
s_{0}}\sum_{\ell=0}^{\infty}|a_{\ell}(s)-a_{\ell}(s_0)|^{2}=0\,,\,\,\,s_{0}\in\mathbb{R}\,.
\end{equation}
Again, we can restrict ourselves to the modes $\ell>\ell_0$. It is
straightforward to check that this condition holds for the solution
(\ref{transforma}) with square summable initial data $a_{\ell}$ and
$\bar{a}_{\ell}$. Therefore, we have obtained a strongly continuous
and unitary one-parameter group whose generator is self-adjoint
according to Stone's theorem.
\bigskip
The quantum Hamiltonian of the models under consideration can be
explicitly calculated as the strong limit
\begin{equation*}
\mathrm{s-}\!\!\lim_{t^{\prime}\rightarrow
t}\frac{\hat{U}(t,t^{\prime})-\hat{\mathbb{I}}}{t-t^{\prime}}f=-i\hat{H}(t)f\,,\,\,\,
f\in\mathscr{D}_{\hat{H}(t)}\,,
\end{equation*}
where $\hat{U}(t,t^{\prime})$ denotes the quantum evolution operator
on $\mathscr{F}_{\mathcal{P}}$. The previous result ensures the
self-adjointness of the quantum Hamiltonian $\hat{H}(t)$, and hence
the existence of a dense domain
$\mathscr{D}_{\hat{H}(t)}\subset\mathscr{F}_{\mathcal{P}}$, for each
value of the time parameter $t\in(0,\pi)$. Unfortunately, the method
employed does not provide us with a characterization of such
domains, or the concrete expression of the quantum Hamiltonian.
Nevertheless, given the quadratic nature of the classical
Hamiltonian (\ref{ClassHamiltonian}), it is expected that this limit
coincides with the operator directly promoted from the classical
function up to normal ordering. As proved in
\cite{BarberoG.:2007qt}, this operator does not have the Fock vacuum
state $|0\rangle_{\mathcal{P}}:=1\oplus
0\oplus0\oplus\cdots\in\mathscr{F}_{\mathcal{P}}$ in its domain
because the $K_{\ell}$ sequence defined in
(\ref{K&G}) is not square summable. As a consequence, the action of
the operator is not defined either on the dense subspace of states
with a finite number of particles. This difficulty can be overcome
right from the start by describing the classical dynamics through
the (positive definite) Hamiltonian \cite{BarberoG.:2007qt}
\begin{equation}\label{NewClassicHamilt}
H_{0}(Q,P;t):=\frac{1}{2}\int_{\mathbb{S}^{2}}|\gamma|^{1/2}\left(P^{2}+Q\Big[\frac{1}{4}(1+\csc^{2}t)-\Delta_{\mathbb{S}^{2}}\Big]Q\right).
\end{equation}
The Hamiltonians (\ref{ClassHamiltonian}) and
(\ref{NewClassicHamilt}) obviously govern the same classical
evolution, but they are connected by a time-dependent symplectic
transformation that in principle is not unitarily implementable, so
one possibly obtains nonequivalent quantum theories from them. The
corresponding quantum Hamiltonian is given, after normal ordering,
by
\begin{equation}\label{QuantHamiltonian}
\hat{H}_{0}(t)=\frac{1}{2}\sum_{\ell=0}^{\infty}\left(K_{0_{\ell}}(t)\hat{a}_{\ell}^{2}+\bar{K}_{0_\ell}(t){\hat{a}_{\ell}}^{\dag
2}+2G_{0_\ell}(t)\hat{a}_{\ell}^{\dag}\hat{a}_{\ell}\right),
\end{equation}
where
\begin{eqnarray}\label{K0&G0}
K_{0_\ell}(t)&:=&\dot{z}_{\ell}^{2}(t)+\left(\frac{1}{4}\big(1+\csc^{2}t\big)+\ell(\ell+1)\right)z_{\ell}^{2}(t)\,,\\
G_{0_\ell}(t)&:=&\big|\dot{z}_{\ell}(t)\big|^{2}+\left(\frac{1}{4}\big(1+\csc^{2}t\big)+\ell(\ell+1)\right)|z_{\ell}(t)|^{2}\,.\nonumber
\end{eqnarray}
Here, $\hat{a}^{\dag}_{\ell}$ and $\hat{a}_{\ell}$ are the creation
and annihilation operators associated with the modes
$z_{\ell}Y_{\ell0}$, respectively. This new self-adjoint Hamiltonian
has the advantage of including the vacuum state in its domain --in
this case $K_{0_\ell}(t)$ defines a square summable sequence for
each value of $t$--, as well as the fact that the results about the
unitary implementation of the time evolution and the uniqueness of
the Fock representation are also valid in this case. Concretely, the
biparametric family of complex structures for which the dynamics is
unitary is characterized again by the pairs (\ref{rhonu}). In what
follows, we will consider the dynamics of the system to be described
by (\ref{NewClassicHamilt}).
\section{Schr\"{o}dinger representation}\label{SCHROD}
\subsection{Constructing the $L^2$ space}
\indent Let us denote by $\mathscr{S}$ the Schwartz space of smooth
and symmetric test functions on the 2-sphere,
\begin{equation}
\mathscr{S}:=\{f\in
C^{\infty}(\mathbb{S}^{2};\mathbb{R})\,\,|\,\,\mathcal{L}_{\sigma}f=0\}\,,
\end{equation}
endowed with the standard nuclear
topology\footnote{\label{footnote}Every element $f\in\mathscr{S}$
can be expanded as $f(s)=\sum_{\ell=0}^{\infty}f_{\ell}Y_{\ell
0}(s)$, $s\in\mathbb{S}^{2}$, with $(f_\ell)_{\ell=0}^{\infty}$ a
sequence of rapidly decreasing real coefficients, such that
$\lim_{\ell\rightarrow\infty}\ell^{n}f_{\ell}=
0,\,\forall\,n\in\mathbb{N}\cup\{0\}$. We will review the equivalent
description of the topological structure of $\mathscr{S}$ in terms
of the locally convex space of rapidly decreasing sequences in
section \ref{METRIC}. For more details, the reader can consult
\cite{Dubin&Hennings}.}. The quantum configuration space used to
define the Schr\"{o}dinger representation is then the topological
dual $\mathscr{S}^{\prime}$, consisting of conti\-nuous linear
functionals on $\mathscr{S}$. Note that this space includes the
delta functions and their derivatives. Given a time of embedding
$t_0$, the Schr\"{o}dinger representation is introduced by defining
a suitable Hilbert space\footnote{Here, the measure $\mu_{t_0}$ is
implicitly assumed to be defined on the sigma algebra
$\sigma(\mathrm{Cyl}(\mathscr{S}^{\prime}))$ generated by the
cylinder sets.} $L^{2}(\mathscr{S}^{\prime},\mathrm{d}\mu_{t_0})$,
for a certain measure $\mu_{t_0}$, in which the configuration
observables act as \emph{multiplication} operators. As we will see
later, given the Gaussian nature of the measure $\mu_{t_0}$, the
momentum operators will differ from the usual ones in terms of
derivatives by a multiplicative term depending on the configuration
variables.
\bigskip
\indent As a consequence of the linearity of
$\mathbf{P}=\mathscr{S}\times\mathscr{S}$, the set of elementary
classical observables $\mathcal{O}$ can be identified with the
$\mathbb{R}$-vector space generated by linear functionals on
$\mathbf{P}$. Every pair $\lambda:=(-g,f)\in\mathbf{P}$,
$f,g\in\mathscr{S}$, has an associated functional
$F_{\lambda}:\mathbf{P}\rightarrow\mathbb{R}$ such that for all
$X=(Q,P)\in\mathbf{P}$,
\begin{equation}
F_{\lambda}(X):=\omega(\lambda,X)=\int_{\mathbb{S}^{2}}|\gamma|^{1/2}(fQ+gP)\,.
\end{equation}
Therefore,
$\mathcal{O}=\mathrm{Span}\{\mathbb{I},F_{\lambda}\}_{\lambda\in\mathbf{P}}$.
As expected \cite{Ashtekar1}, this set satisfies the condition that
any regular function on $\mathbf{P}$ can be obtained as a (suitable
limit of) sum of products of elements in $\mathcal{O}$, and also
that it is closed under Poisson brackets,
$\{F_{\lambda}(\cdot),F_{\nu}(\cdot)\}=F_{\nu}(\lambda)\mathbb{I}$.
The configuration and momentum observables are objects of this type
defined by the pairs $\lambda=(0,f)$ and $\lambda=(-g,0)$,
respectively:
\begin{eqnarray}
Q(f)&:=&F_{(0,f)}(Q,P)=\int_{\mathbb{S}^{2}}|\gamma|^{1/2}fQ=\sum_{\ell=0}^{\infty}f_{\ell}Q_{\ell}\,,\\
P(g)&:=&F_{(-g,0)}(Q,P)=\int_{\mathbb{S}^{2}}|\gamma|^{1/2}gP=\sum_{\ell=0}^{\infty}g_{\ell}P_{\ell}\,,
\end{eqnarray}
where the symmetric test functions have been expanded as explained
in footnote \ref{footnote}. Here, with the aim of simplifying the
notation, we have used the same symbol to denote the canonical
inclusion $\mathscr{S}\hookrightarrow\mathscr{S}^{\prime}$ of
$\mathscr{S}$ into $\mathscr{S}^{\prime}$. In this way,
$F_{(-g,f)}(Q,P)=Q(f)+P(g)$. The abstract quantum algebra of
observables $\mathcal{A}$ is then given by the usual Weyl
$C^{*}$-algebra generated by the elements
$W(\lambda)=\exp(iF_{\lambda})$, $\lambda\in\mathbf{P}$, satisfying
the conditions
\begin{equation}
\displaystyle W(\lambda)^{*}=W(-\lambda)\,,\quad
W(\lambda_1)W(\lambda_2)=\mathrm{e}^{\frac{i}{2}\omega(\lambda_1,\lambda_2)}W(\lambda_1+\lambda_2)\,,
\end{equation}
containing the information about the canonical commutation
relations.
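In particular, the composition law yields the exchange (Weyl) form of these relations,
\begin{equation*}
W(\lambda_1)W(\lambda_2)=\mathrm{e}^{i\omega(\lambda_1,\lambda_2)}\,W(\lambda_2)W(\lambda_1)\,;
\end{equation*}
for instance, for $\lambda_1=(0,f)$ and $\lambda_2=(-g,0)$ one has $\omega(\lambda_1,\lambda_2)=-\langle f,g\rangle$, which, in any representation, is the exponentiated version of $[\hat{Q}[f],\hat{P}[g]]=i\langle f,g\rangle\,\hat{\mathbb{I}}$.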
\bigskip
\indent Let $\mathfrak{I}_{t_0}:\mathbf{P}\rightarrow\mathcal{S}$,
$t_0\in(0,\pi)$, be the symplectomorphism that defines for each pair
of Cauchy data $(Q,P)\in\mathbf{P}$ the unique solution
$\xi\in\mathcal{S}$ such that, under the evolution given by the
Hamiltonian (\ref{NewClassicHamilt}), it satisfies
$\xi(t_0,s)=Q(s)$, $\dot{\xi}(t_0,s)=P(s)$. That is
\begin{equation}\label{I0QP}
\xi(t,s)=(\mathfrak{I}_{t_0}(Q,P))(t,s)=\sum_{\ell=0}^{\infty}\left(a_{\ell}(t_0)z_{\ell}(t)+\overline{a_{\ell}(t_0)z_{\ell}(t)}\right)Y_{\ell
0}(s)\in\mathcal{S}\,,
\end{equation}
with
\begin{equation}\label{alt0}
a_{\ell}(t_0):=i\bar{z}_{\ell}(t_0)P_{\ell}-i\dot{\bar{z}}_{\ell}(t_0)Q_{\ell}\,.
\end{equation}
This map gives rise to a natural $\omega$-compatible complex
structure on the canonical phase space given by
\begin{equation}
J_{t_0}:=\mathfrak{I}_{t_0}^{-1}\circ
J_{\mathcal{P}}\circ\mathfrak{I}_{t_0}:\mathbf{P}\rightarrow\mathbf{P}\,,
\end{equation}
such that
\begin{equation*}
(Q,P)\in\mathbf{P}\mapsto
J_{t_0}(Q,P)=(A(t_0)Q+B(t_0)P,D(t_0)Q+C(t_0)P)\in\mathbf{P}\,,
\end{equation*}
where
$A(t_0),B(t_0),C(t_0),D(t_0):\mathscr{S}\rightarrow\mathscr{S}$ are
linear operators satisfying, in virtue of the $\omega$-compatibility
\cite{Ashtekar2}, the relations
\begin{eqnarray*}
\langle f,B(t_0)f^{\prime}\rangle=\langle
B(t_0)f,f^{\prime}\rangle\,,\,\,\,\,\langle
g,D(t_0)g^{\prime}\rangle=\langle
D(t_0)g,g^{\prime}\rangle\,,\,\,\,\,\langle
f,A(t_0)g\rangle=-\langle C(t_0)f,g\rangle\,,
\end{eqnarray*}
for all $f,g,f^{\prime},g^{\prime}\in\mathscr{S}$. Here, we have
denoted $\langle f,g\rangle:=\int_{\mathbb{S}^{2}}|\gamma|^{1/2}fg$.
Also, given the condition $J_{t_0}^{2}=-\mathrm{Id}_{\mathbf{P}}$,
and assuming $B(t_0)$ invertible, the $C(t_0)$ and $D(t_0)$
operators can be expressed in terms of the $A(t_0)$ and $B(t_0)$
operators through the relations $C(t_0)=-B^{-1}(t_0)A(t_0)B(t_0)$
and $D(t_0)=-B^{-1}(t_0)(\mathbf{1}+A^{2}(t_0))$, respectively, in
such a way that the complex structure $J_{t_0}$ is completely
characterized by $A(t_0)$ and $B(t_0)$. Using equations (\ref{Jp})
and (\ref{I0QP}), it is straightforward to obtain\footnote{Note that
the zero mode $\ell=0$ has been included into the spherical harmonic
expansion of the test functions. The $B(t_0)$ operator is well
defined even for this mode, ultimately as a consequence of equation
(\ref{equationz}) verified by the $z_{\ell}$ functions, where the
squared frequency is positive definite $\forall\,t\in(0,\pi)$ when
$\ell=0$.}
\begin{eqnarray}\label{A&B}
\big(A(t_0)Q\big)(s)&=&\sum_{\ell=0}^{\infty}\big(\dot{\bar{z}}_{\ell}(t_0)z_{\ell}(t_0)+\dot{z}_{\ell}(t_0)\bar{z}_{\ell}(t_0)\big)Q_{\ell}Y_{\ell 0}(s)\,,\\
\big(B(t_0)P\big)(s)&=&-2\sum_{\ell=0}^{\infty}|z_{\ell}(t_0)|^{2}P_{\ell}Y_{\ell
0}(s)\,.\nonumber
\end{eqnarray}
It is worth noting that, given the rapidly decreasing nature of the
sequences $(Q_{\ell})_{\ell=0}^{\infty}$ and
$(P_{\ell})_{\ell=0}^{\infty}$, as well as the asymptotic behavior
of the $z_{\ell}$ functions given in (\ref{asymp}), the
$A(t_0)$ and $B(t_0)$ operators are well defined on $\mathscr{S}$.
In addition, $B(t_0)$ has an inverse operator
$B^{-1}(t_0):\mathscr{S}\rightarrow\mathscr{S}$ given by
\begin{equation}\label{invB}
\big(B^{-1}(t_0)P\big)(s)=-\frac{1}{2}\sum_{\ell=0}^{\infty}|z_{\ell}(t_0)|^{-2}P_{\ell}Y_{\ell
0}(s)\,.
\end{equation}
\indent By definition, once a time of embedding $t_0$ is fixed, the
states in the Schr\"{o}dinger representation are characterized as
functionals $\Psi:\mathscr{S}^{\prime}\rightarrow\mathbb{C}$
belonging to a certain Hilbert space
$\mathscr{H}_{s}(t_0)=L^{2}(\mathscr{S}^{\prime},\mathrm{d}\mu_{t_0})$.
Due to the infinite dimensionality of the quantum configuration
space, it is not possible to define a Lebesgue-type measure
$\mu_{t_0}$, but rather a probability one\footnote{That is, a
measure satisfying
$\int_{\mathscr{S}^{\prime}}\mathrm{d}\mu_{t_0}=1$.}. This
representation is constructed in such a way that it is associated
with the state $\varpi_{t_0}:\mathcal{A}\rightarrow\mathbb{C}$ on
the Weyl algebra $\mathcal{A}$ whose action on the elementary
observables is given by \cite{CorichiSchr1,Wald}
\begin{equation}\label{statesigma}
\varpi_{t_0}(W(\lambda))=\exp\left(-\frac{1}{4}\omega(J_{t_0}(\lambda),\lambda)\right),\,\,\,\lambda\in\mathbf{P}\,.
\end{equation}
We will check in section \ref{UNITARY} that the Schr\"{o}dinger
representations corresponding to different values of the time
parameter are unitarily equivalent due to the unitary
implementability of the dynamics. We require that the configuration
observables are represented as \emph{multiplication} operators, so
that for $\lambda=(0,f)\in\mathbf{P}$,
\begin{equation}\label{Qoperator}
\displaystyle \pi_{s}(t_0)\cdot
W(\lambda)|_{\lambda=(0,f)}=\exp(i\hat{Q}_{t_0}[f])\,,\quad
\left(\hat{Q}_{t_0}[f]\Psi\right)[\tilde{Q}]=\tilde{Q}(f)\Psi[\tilde{Q}]\,,
\end{equation}
where $\tilde{Q}\in\mathscr{S}^{\prime}$ denotes a generic
distribution of $\mathscr{S}^{\prime}$ and $\tilde{Q}(f)$ gives the
usual pairing between $\mathscr{S}$ and $\mathscr{S}^{\prime}$,
$\Psi\in\mathscr{D}_{\hat{Q}_{t_0}[f]}\subset\mathscr{H}_{s}(t_{0})$
(the self-adjointness of the configuration and momentum operators
will be discussed in subsection \ref{CCR}), and
$\pi_{s}(t_0):\mathcal{A}\rightarrow
\mathscr{L}(\mathscr{H}_{s}(t_0))$ is the map from the Weyl algebra
$\mathcal{A}$ to the collection of bounded linear operators on
$\mathscr{H}_{s}(t_0)$. In this way, the measure $\mu_{t_0}$ is
Gaussian with covariance $\mathcal{C}(t_0):=-B(t_0)/2$, and thus its
Fourier transform is given by\footnote{This equation corresponds to
the expectation value (\ref{statesigma}) evaluated for
$\lambda=(0,f)$ that must coincide with the integral
$\int_{\mathscr{S}^{\prime}}\bar{\Psi}^{(t_0)}_{0}(\exp(i\hat{Q}_{t_0}[f])\Psi^{(t_0)}_{0})\,\mathrm{d}\mu_{t_0}$,
where $\Psi^{(t_0)}_{0}\in\mathscr{H}_{s}(t_0)$ is the normalized
vacuum state.}
\begin{equation}
\int_{\mathscr{S}^{\prime}}\mathrm{e}^{i\tilde{Q}(f)}\,\mathrm{d}\mu_{t_0}[\tilde{Q}]=\exp\left(\frac{1}{4}\langle
f,B(t_0)f\rangle\right).
\end{equation}
The covariance operator
$\check{\mathcal{C}}_{t_0}:\mathscr{S}\times\mathscr{S}\rightarrow\mathbb{R}$ is
defined as $\check{\mathcal{C}}_{t_0}(f,g):=\langle
f,\mathcal{C}(t_0)g\rangle$, $f,g\in\mathscr{S}$. Since
$|z_{\ell}(t_0)|^{2}$ is bounded and positive definite
$\forall\,t_0\in(0,\pi)$ and $\forall\,\ell\in\mathbb{N}\cup\{0\}$, it
follows that, as expected, $\check{\mathcal{C}}_{t_0}$ is a
nondegenerate positive definite and continuous bilinear form on the
topological vector space $\mathscr{S}$.
\subsection{Properties of the measure}\label{METRIC}
\indent In order to easily visualize the nature of the measure
$\mu_{t_0}$, note that upon restriction on any number of coordinate
directions in $\mathscr{S}^{\prime}$, say
$\tilde{Q}_{\ell}=\tilde{Q}(Y_{\ell 0})$, $\ell=0,1,\ldots,n$, we
obtain
\begin{equation}\label{restrict}
\displaystyle
\mathrm{d}\mu_{t_0}|_{(\tilde{Q}_{\ell})_{\ell=0}^{n}}=\prod_{\ell=0}^{n}\,\frac{1}{\sqrt{2\pi}}\,|z_{\ell}(t_0)|^{-1}\exp\left(-\frac{1}{2}|z_{\ell}(t_0)|^{-2}\tilde{Q}_{\ell}^{2}\right)\mathrm{d}\tilde{Q}_{\ell}\,,
\end{equation}
in terms of the Lebesgue measures $\mathrm{d}\tilde{Q}_{\ell}$
\cite{Glimm&Jaffe}.
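As a consistency check, integrating a single mode against this restricted measure reproduces the Fourier transform of $\mu_{t_0}$ computed in the previous subsection:
\begin{equation*}
\int_{\mathbb{R}}\mathrm{e}^{if_{\ell}\tilde{Q}_{\ell}}\,\frac{1}{\sqrt{2\pi}}\,|z_{\ell}(t_0)|^{-1}\exp\left(-\frac{1}{2}|z_{\ell}(t_0)|^{-2}\tilde{Q}_{\ell}^{2}\right)\mathrm{d}\tilde{Q}_{\ell}
=\exp\Big(-\frac{1}{2}|z_{\ell}(t_0)|^{2}f_{\ell}^{2}\Big)
=\exp\Big(\frac{1}{4}\langle f_{\ell}Y_{\ell 0},B(t_0)f_{\ell}Y_{\ell 0}\rangle\Big)\,,
\end{equation*}
since $\langle Y_{\ell 0},B(t_0)Y_{\ell 0}\rangle=-2|z_{\ell}(t_0)|^{2}$.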
\bigskip
\indent Now, we will prove that the support of the measure is
smaller than $\mathscr{S}^{\prime}$. Concretely, it is given by the
topological dual of the subspace of symmetric functions in the
Sobolev space $H^{\epsilon}(\mathbb{S}^{2})$ on the 2-sphere, for
any $\epsilon>0$. With this aim, we will use the Bochner-Minlos
theorem that plays a key role in the characterization of measures on
functional spaces, closely relying on the analysis developed in
\cite{Simon1}. We first point out that the space of test functions
$\mathscr{S}$ is topologically isomorphic to
$\varsigma=\bigcap_{r\in\mathbb{Q}}\varsigma_{r}$, where
\begin{equation}
\varsigma_{r}:=\Big\{f=(f_{\ell})_{\ell=0}^{\infty}\,\,\big|\,\,\|f\|_{r}^{2}:=\sum_{\ell=0}^{\infty}(\ell+1/2)^{2r}f_{\ell}^{2}<+\infty\Big\}\,,
\end{equation}
endowed with the Fr\'{e}chet topology induced by the norms
$(\|\cdot\|_{r})_{r\in\mathbb{Q}}$. As a consequence of the
Bochner-Minlos theorem (see Theorem 2.3 of \cite{Simon1}), if
the covariance $\check{\mathcal{C}}_{t_0}$ is continuous in the norm
associated with some $\varsigma_{r}$, then the Gaussian measure
$\mu_{t_0}$ has support on any set of the form
\begin{equation}\label{setr}
\Big\{f\,\,\big|\,\,\sum_{\ell=0}^{\infty}(\ell+1/2)^{-2r-1-2\epsilon}f_{\ell}^{2}<+\infty\,,\,\,\epsilon>0\Big\}\subset\displaystyle\bigcup_{r\in\mathbb{Q}}\varsigma_{r}=\varsigma^{\prime}\,,
\end{equation}
where $\varsigma^{\prime}$ is the topological dual\footnote{Here,
$g\in\varsigma^{\prime}$ is associated with the linear functional
$L_{g}(f):=\sum_{\ell=0}^{\infty}f_{\ell}g_{\ell}$,
$f\in\varsigma$.} of $\varsigma$. In particular, given the
asymptotic behavior of the $z_{\ell}$ functions, it is
straightforward to check the continuity in the norm corresponding to
$r=-1/2$, i.e.,
\begin{equation}
\langle f,\mathcal{C}(t_0)f\rangle\le
N(t_0)\sum_{\ell=0}^{\infty}(\ell+1/2)^{-1}f_{\ell}^{2}
\end{equation}
for a certain constant $N(t_0)\in\mathbb{R}^{+}$; indeed, $\langle f,\mathcal{C}(t_0)f\rangle=\sum_{\ell=0}^{\infty}|z_{\ell}(t_0)|^{2}f_{\ell}^{2}$ and, by (\ref{asymp}), $|z_{\ell}(t_0)|^{2}=O\big((\ell+1/2)^{-1}\big)$. According to this
result, the measure $\mu_{t_0}$ is concentrated on the set
(\ref{setr}) for $r=-1/2$, which can be identified with the
topological dual $\mathfrak{h}_{\epsilon}^{\prime}$ of the subspace
of symmetric functions in the Sobolev space
$H^{\epsilon}(\mathbb{S}^{2})$, for any $\epsilon>0$,
\begin{equation}
\mathfrak{h}_{\epsilon}:=\Big\{f\in
H^{\epsilon}(\mathbb{S}^{2})\,\,\big|\,\,\mathcal{L}_{\sigma}f=0\,,\,\,\|f\|_{\epsilon}^{2}:=\sum_{\ell=0}^{\infty}(\ell+1/2)^{2\epsilon}f_{\ell}^{2}<+\infty\Big\}\,,\,\,\,\epsilon>0\,,
\end{equation}
where $f_\ell$ are the Fourier coefficients of the function $f$.
Therefore, the typical field configurations are not as singular as
the delta functions or their derivatives. However, the subset
$\mathfrak{b}\subset\mathfrak{h}_{\epsilon}^{\prime}$ of symmetric
$L^{2}(\mathbb{S}^{2})$ functions has also measure zero. Indeed,
consider the characteristic function $\chi_{\mathfrak{b}}$ of the
measurable set $\mathfrak{b}$, defined by
\begin{equation}
\chi_{\mathfrak{b}}[\tilde{Q}]:=\lim_{\alpha\rightarrow
+0}\,\exp\left(-\alpha\sum_{\ell=0}^{\infty}\tilde{Q}_{\ell}^{2}\right),
\end{equation}
so that $\chi_{\mathfrak{b}}[\tilde{Q}]=1$, for $\tilde{Q}\in
\mathfrak{b}$, and vanishes elsewhere. Making use of the
restriction (\ref{restrict}), and applying the Lebesgue monotone
convergence theorem, it is straightforward to obtain
\begin{equation}
\mu_{t_0}(\mathfrak{b})=\int_{\mathscr{S}^{\prime}}\chi_{\mathfrak{b}}[\tilde{Q}]\,\mathrm{d}\mu_{t_0}[\tilde{Q}]=\lim_{\alpha\rightarrow
+0}\,\lim_{n\rightarrow\infty}\,\prod_{\ell=0}^{n}\frac{1}{\sqrt{1+2\alpha|z_{\ell}(t_0)|^{2}}}\,.
\end{equation}
The limit of the product vanishes as $n\rightarrow\infty$ because
the series
$\sum_{\ell=0}^{\infty}\log(1+2\alpha|z_{\ell}(t_0)|^{2})$ diverges
(by (\ref{asymp}), $|z_{\ell}(t_0)|^{2}$ behaves as $1/(2\ell)$ for
large $\ell$, so its terms decay only harmonically), and
hence\footnote{Since $\mathscr{S}\hookrightarrow\mathfrak{b}$, we
have that, as usual for a field theory, the measure $\mu_{t_0}$ is
not supported on the classical configuration space $\mathscr{S}$.
This is precisely the reason why a suitable distributional extension
of $\mathscr{S}$ must be chosen as measure space in order to
construct the $L^{2}$ space for the Schr\"{o}dinger representation.}
$\mu_{t_0}(\mathfrak{b})=0$.
\subsection{Canonical commutation relations}\label{CCR}
\indent By virtue of the interrelation between operator
representation and measures, the representation of the basic
momentum observables is \cite{CorichiSchr1}
\begin{eqnarray}
&&\pi_{s}(t_0)\cdot
W(\lambda)|_{\lambda=(-g,0)}=\exp(i\hat{P}_{t_0}[g])\,,\nonumber\\
&&\left(\hat{P}_{t_0}[g]\Psi\right)[\tilde{Q}]=-i(D_{\tilde{Q}}\Psi)[g]-i\tilde{Q}\left(B^{-1}(t_0)(\mathbf{1}-i
A(t_0))g\right)\Psi[\tilde{Q}]\,,
\end{eqnarray}
where $\tilde{Q}\in\mathscr{S}^{\prime}$,
$\Psi\in\mathscr{D}_{\hat{P}_{t_0}[g]}\subset\mathscr{H}_{s}(t_{0})$,
and $(D_{\tilde{Q}}\Psi)$ denotes the directional derivative of the
functional $\Psi$ in the direction defined by
$\tilde{Q}\in\mathscr{S}^{\prime}$, which will acquire a definite
sense in terms of the modes $\tilde{Q}_{\ell}$. Note the appearance
of the multiplicative term in the momentum operator that depends
both on the measure $\mu_{t_0}$ --uniquely characterized by the
operator $B(t_0)$-- and the operator $A(t_0)$. It guarantees that
the momentum operator is symmetric with respect to the inner
pro\-duct $\langle\cdot|\cdot\rangle_{\mathscr{H}_{s}(t_0)}$.
Indeed, just by using the Gaussian integration by parts formula
$\int_{\mathscr{S}^{\prime}}(D_{\tilde{Q}}\Psi)[f]\,\mathrm{d}\mu_{t_0}[\tilde{Q}]=\int_{\mathscr{S}^{\prime}}\tilde{Q}(\mathcal{C}^{-1}(t_0)f)\Psi[\tilde{Q}]\,\mathrm{d}\mu_{t_0}[\tilde{Q}]$
that can be easily deduced from (\ref{restrict}), we obtain
\begin{eqnarray*}
\big\langle\Phi\,\big|\,\hat{P}_{t_0}[g]\Psi\big\rangle_{\mathscr{H}_{s}(t_0)}&=&i\big\langle
(D_{\tilde{Q}}\Phi)[g]\,\big|\,\Psi\big\rangle_{\mathscr{H}_{s}(t_0)}+i\big\langle\Phi\,\big|\,\tilde{Q}\big(B^{-1}(t_0)(\mathbf{1}+iA(t_0))g\big)\Psi\big\rangle_{\mathscr{H}_{s}(t_0)}\\
&=&i\big\langle(D_{\tilde{Q}}\Phi)[g]+\tilde{Q}\big(B^{-1}(t_0)(\mathbf{1}-iA(t_0))g\big)\Phi\,\big|\,\Psi\big\rangle_{\mathscr{H}_{s}(t_0)}\\
&=&\big\langle\hat{P}_{t_0}[g]\Phi\,\big|\,\Psi\big\rangle_{\mathscr{H}_{s}(t_0)}\,,\,\,\,\forall\,\Phi,\Psi\in\mathscr{D}_{\hat{P}_{t_0}[g]}\,.
\end{eqnarray*}
Let us denote $\hat{Q}_{\ell}(t_{0}):=\hat{Q}_{t_0}[Y_{\ell 0}]$ and
$\hat{P}_{\ell}(t_0):=\hat{P}_{t_0}[Y_{\ell 0}]$, where the
$\hat{Q}_{t_0}[f]$ operator was defined in (\ref{Qoperator}). By
considering the normalization condition (\ref{norm}) and equation
(\ref{invB}), we get
\begin{equation*}
\big(B^{-1}(t_0)(\mathbf{1}-iA(t_0))Y_{\ell
0}\big)(s)=i\frac{\dot{\bar{z}}_{\ell}(t_0)}{\bar{z}_{\ell}(t_0)}Y_{\ell
0}(s)\,,
\end{equation*}
and hence we finally obtain
\begin{equation}\label{QlyPl}
\hat{Q}_{\ell}(t_0)\Psi=\tilde{Q}_{\ell}\Psi\,,\,\,\,\,\hat{P}_{\ell}(t_0)\Psi=-i\frac{\partial\Psi}{\partial
\tilde{Q}_\ell}+\frac{\dot{\bar{z}}_{\ell}(t_0)}{\bar{z}_{\ell}(t_0)}\tilde{Q}_{\ell}\Psi\,,
\end{equation}
where $\Psi$ is a functional of the components $\tilde{Q}_{\ell}$.
The canonical commutation relations
$[\hat{Q}_{\ell}(t_0),\hat{P}_{\ell^{\prime}}(t_0)]=i\delta(\ell,\ell^{\prime})\hat{\mathbb{I}}$
and
$[\hat{Q}_{\ell}(t_0),\hat{Q}_{\ell^{\prime}}(t_0)]=0=[\hat{P}_{\ell}(t_0),\hat{P}_{\ell^{\prime}}(t_0)]$
are obviously satisfied on the appropriate domains.
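For the record, the identity $\big(B^{-1}(t_0)(\mathbf{1}-iA(t_0))Y_{\ell 0}\big)(s)=i\big(\dot{\bar{z}}_{\ell}(t_0)/\bar{z}_{\ell}(t_0)\big)Y_{\ell 0}(s)$ used above is a one-line consequence of (\ref{norm}): by (\ref{A&B}) and (\ref{invB}), the eigenvalue of $B^{-1}(t_0)(\mathbf{1}-iA(t_0))$ on $Y_{\ell 0}$ is
\begin{equation*}
-\frac{1-i\big(\dot{\bar{z}}_{\ell}z_{\ell}+\dot{z}_{\ell}\bar{z}_{\ell}\big)(t_0)}{2|z_{\ell}(t_0)|^{2}}
=-\frac{-2i\big(z_{\ell}\dot{\bar{z}}_{\ell}\big)(t_0)}{2|z_{\ell}(t_0)|^{2}}
=i\,\frac{\dot{\bar{z}}_{\ell}(t_0)}{\bar{z}_{\ell}(t_0)}\,,
\end{equation*}
where in the first step we wrote $1=-i\big(z_{\ell}\dot{\bar{z}}_{\ell}-\bar{z}_{\ell}\dot{z}_{\ell}\big)(t_0)$.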
\bigskip
\indent It is possible to relate the Fock and Schr\"{o}dinger
representations through the action of the annihilation and creation
operators on wave functionals \cite{CorichiSchr1}. Making use of
equations (\ref{alt0}) and (\ref{QlyPl}), we get
\begin{equation}\label{aadag}
\hat{a}_{\ell}(t_0)=\bar{z}_{\ell}(t_0)\frac{\partial}{\partial
\tilde{Q}_{\ell}}\,,\quad\hat{a}_{\ell}^{\dag}(t_0)=-z_{\ell}(t_0)\frac{\partial}{\partial
\tilde{Q}_{\ell}}+\frac{1}{\bar{z}_{\ell}(t_0)}\tilde{Q}_{\ell}\,.
\end{equation}
In particular, the vacuum state is given by the unit constant
functional (up to multiplicative phase)
\begin{equation*}
\Psi^{(t_0)}_{0}[\tilde{Q}]=1\,,\,\,\,\forall\,\tilde{Q}\in\mathscr{S}^{\prime}\,.
\end{equation*}
There exists then a map
$\hat{T}_{t_0}:\mathscr{F}_{\mathcal{P}}\rightarrow\mathscr{H}_{s}(t_0)$
that unitarily connects the creation and annihilation operators of
the Fock and Schr\"{o}dinger representations \cite{Glimm&Jaffe}.
Given the annihilation and creation operators associated with the
modes $z_{\ell}Y_{\ell0}$, $\hat{a}_{\ell}$ and
$\hat{a}_{\ell}^{\dag}$ respectively, the expressions (\ref{aadag})
correspond to
$\hat{T}_{t_0}\circ\hat{a}_{\ell}\circ\hat{T}^{-1}_{t_0}$ and
$\hat{T}_{t_0}\circ\hat{a}_{\ell}^{\dag}\circ\hat{T}^{-1}_{t_0}$,
respectively. These relations, and the action
$\Psi^{(t_0)}_{0}=\hat{T}_{t_0}|0\rangle_{\mathcal{P}}$ on the Fock
vacuum state $|0\rangle_{\mathcal{P}}\in\mathscr{F}_{\mathcal{P}}$,
univocally characterize the unitary transformation $\hat{T}_{t_0}$.
\\
\indent The general procedure that we have followed guarantees the
self-adjointness of the configu\-ration and momentum operators.
Indeed, by the successive action of the creation operator on the
vacuum state $\Psi_{0}^{(t_0)}$, we obtain the $N$-particle states
in the Schr\"{o}dinger representation. These states define, for
$N<\infty$, a common, invariant, dense domain of analytic vectors
for the configuration and momentum operators, so that their
essential self-adjointness is gua\-ranteed, and hence the existence
of unique self-adjoint extensions (see Nelson's analytic vector
theorem in \cite{ReedSimon}).
\bigskip
\indent Finally, the probabilistic interpretation of the models is
given by the usual Born's corres\-pondence rules \cite{Prugovecki}.
Concretely, given $f\in\mathscr{S}$, the theoretical probability
that a measurement carried out in the state $\Psi$ at certain time
to determine the value of $\tilde{Q}(f)$ will yield a result
contained in the Borel set $\Delta\in\mathrm{Bor}(\mathbb{R})$ for
some $\tilde{Q}\in\mathscr{S}^{\prime}$ is given by
\begin{equation}\label{probability}
\mathrm{P}_{\Psi}^{\hat{Q}_{t_0}[f]}(\Delta)=\|\Psi\|_{\mathscr{H}_{s}(t_0)}^{-2}\big\langle\Psi\,\big|\,E^{\hat{Q}_{t_0}[f]}(\Delta)\Psi\big\rangle_{\mathscr{H}_{s}(t_0)}
=\|\Psi\|_{\mathscr{H}_{s}(t_0)}^{-2}\int_{V_{f,\,\Delta}}\big|\Psi[\tilde{Q}]\big|^{2}\,\mathrm{d}\mu_{t_0}[\tilde{Q}]\,,
\end{equation}
where $E^{\hat{Q}_{t_0}[f]}(\Delta)$ is the spectral measure
univocally associated with $\hat{Q}_{t_0}[f]$, defined by
$\big(E^{\hat{Q}_{t_0}[f]}(\Delta)\Psi\big)[\tilde{Q}]=\chi_{V_{f,\,\Delta}}[\tilde{Q}]\,\Psi[\tilde{Q}]$,
with $\chi_{V_{f,\,\Delta}}$ being the characteristic function of
the measu\-rable set
$V_{f,\,\Delta}:=\{\tilde{Q}\in\mathscr{S}^{\prime}\,|\,\tilde{Q}(f)\in\Delta\}\in\sigma(\mathrm{Cyl}(\mathscr{S}^{\prime}))$.
$\|\cdot\|_{\mathscr{H}_{s}(t_0)}$ denotes the norm associated with
the inner product
$\langle\cdot|\cdot\rangle_{\mathscr{H}_{s}(t_0)}$. According to
this, the measure $\mu_{t_0}$ admits the following physical
interpretation: it defines the probability measure
(\ref{probability}) for the vacuum state $\Psi^{(t_0)}_{0}$.
\section{Unitary equivalence of Schr\"{o}dinger representations}\label{UNITARY}
\indent Denote by
$\tau_{(t_0,t_1)}:=\mathfrak{I}_{t_1}^{-1}\circ\mathfrak{I}_{t_0}:\mathbf{P}\rightarrow\mathbf{P}$,
$t_1>t_0$, the symplectomorphism that (i) takes Cauchy data on the
embedding
$\iota_{t_0}(\mathbb{S}^{2})\subset(0,\pi)\times\mathbb{S}^{2}$;
(ii) evolves them to obtain the corres\-ponding solution in
$\mathcal{S}$; and (iii) finally finds the Cauchy data that this
solution induces on the embedding
$\iota_{t_1}(\mathbb{S}^{2})\subset(0,\pi)\times\mathbb{S}^{2}$.
This map implements the classical time evolution from the embedding
$\iota_{t_0}(\mathbb{S}^{2})$ to $\iota_{t_1}(\mathbb{S}^{2})$ on
the canonical phase space, inducing a one-parameter family of states
on the Weyl algebra: Let
$\alpha_{(t_0,t_1)}:\mathcal{A}\rightarrow\mathcal{A}$ be the
$*$-automorphism associated with the symplectic transformation
$\tau_{(t_0,t_1)}$, defined by $\alpha_{(t_0,t_1)}\cdot
W(\lambda):=W(\tau_{(t_0,t_1)}(\lambda))$; the dynamical evolution
of states in the algebraic formulation of the theory is then given
by $\varpi_{t_1}=\varpi_{t_0}\circ\alpha_{(t_0,t_1)}^{-1}$
(Schr\"{o}dinger picture), with $\varpi_{t_0}$ defined in equation
(\ref{statesigma}). The evolved state $\varpi_{t_1}$ acts on the
elementary observables as
$\varpi_{t_1}(W(\lambda))=\exp\big(-\omega(J_{t_1}(\lambda),\lambda)/4\big)$,
where the complex structure
\begin{equation*}
J_{t_{1}}:=\tau_{(t_0,t_1)}\circ
J_{t_0}\circ\tau^{-1}_{(t_0,t_1)}=\mathfrak{I}_{t_1}^{-1}\circ
J_{\mathcal{P}}\circ\mathfrak{I}_{t_1}:\mathbf{P}\rightarrow\mathbf{P}
\end{equation*}
defines a new Schr\"{o}dinger representation\footnote{Here, we will
make a notational abuse and simply denote the triplet
$\big(\mathscr{H}_{s}(t),\pi_{s}(t),\Psi^{(t)}_{0}\big)$ as
$\mathscr{H}_{s}(t)$.} $\mathscr{H}_{s}(t_1)$. Clearly, the
condition of unitary equivalence of the Schr\"{o}dinger
representations corresponding to different values $t_0<t_1$ of the
time parameter amounts to demanding the unitary implementability of
the symplectic transformation $\tau_{(t_0,t_1)}$ in the
$\mathscr{H}_{s}(t_0)$ representation\footnote{In this way,
$J_{t_1}-J_{t_0}$ is a Hilbert-Schmidt operator in the one-particle
Hilbert space constructed from $J_{t_0}$ (or equivalently
$J_{t_1}$).}. In that case, there exists a unitary transformation
$\hat{V}_{(t_0,t_1)}:\mathscr{H}_{s}(t_0)\rightarrow\mathscr{H}_{s}(t_1)$
mapping the configuration and momentum operators from one
representation into the other, in such a way that
\begin{eqnarray}\label{Vt0t1}
\hat{V}_{(t_0,t_1)}\circ\hat{a}_{\ell}(t_0)\circ\hat{V}^{-1}_{(t_0,t_1)}&=&\alpha_{\ell}(t_0,t_1)\hat{a}_{\ell}(t_1)+\beta_{\ell}(t_0,t_1)\hat{a}^{\dag}_{\ell}(t_1)\,,\\
\hat{V}_{(t_0,t_1)}\circ\hat{a}^{\dag}_{\ell}(t_0)\circ\hat{V}^{-1}_{(t_0,t_1)}&=&\bar{\beta}_{\ell}(t_0,t_1)\hat{a}_{\ell}(t_1)+\bar{\alpha}_{\ell}(t_0,t_1)\hat{a}^{\dag}_{\ell}(t_1)\,,\nonumber
\end{eqnarray}
where
\begin{equation}\label{alpha&beta}
\alpha_{\ell}(t_0,t_1):=i\Big(\bar{z}_{\ell}(t_0)\dot{z}_{\ell}(t_1)-z_{\ell}(t_1)\dot{\bar{z}}_{\ell}(t_0)\Big),\,\,\,\,
\beta_{\ell}(t_0,t_1):=i\Big(\bar{z}_{\ell}(t_0)\dot{\bar{z}}_{\ell}(t_1)-\bar{z}_{\ell}(t_1)\dot{\bar{z}}_{\ell}(t_0)\Big).
\end{equation}
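The asymptotics (\ref{asymp}) make the behavior of these coefficients transparent: setting $\theta_{\ell}(t):=(\ell+1/2)t-\pi/4$, the $O(1)$ contributions to $\beta_{\ell}(t_0,t_1)$ cancel, since the leading term is symmetric under the exchange $t_0\leftrightarrow t_1$,
\begin{equation*}
\beta_{\ell}(t_0,t_1)=i\Big(\frac{i}{2}\,\mathrm{e}^{i[\theta_{\ell}(t_0)+\theta_{\ell}(t_1)]}-\frac{i}{2}\,\mathrm{e}^{i[\theta_{\ell}(t_1)+\theta_{\ell}(t_0)]}\Big)+O(\ell^{-1})=O(\ell^{-1})\,,
\end{equation*}
so that $\big(\beta_{\ell}(t_0,t_1)\big)_{\ell=0}^{\infty}\in\ell^{2}(\mathbb{R})$.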
According to the results achieved in \cite{BarberoG.:2007qt}, once
we consider an $SO(3)$ invariant complex structure $J_{\mathcal{P}}$
verifying the conditions (\ref{rhonu}), the quantum dynamics can be
unita\-rily implemented in $\mathscr{F}_{\mathcal{P}}$, i.e., there
exists a unitary operator
$\hat{U}(t,t^{\prime}):\mathscr{F}_{\mathcal{P}}\rightarrow\mathscr{F}_{\mathcal{P}}$
encoding the information about the evolution of the system from time
$t$ to $t^{\prime}$. This condition is precisely ensured by the
square summability of the $\beta_{\ell}$ coefficients appearing in
the Bogoliubov transformation (\ref{Vt0t1}), and guarantees that the
map $\hat{V}_{(t_0,t_1)}$ is well defined, i.e., the Schr\"{o}dinger
representations corresponding to different times $t_0,t_1$ are
equivalent. The unitary transformation
$\hat{V}_{(t_0,t_1)}=\hat{T}_{t_1}\circ\hat{U}(t_0,t_1)\circ\hat{T}_{t_0}^{-1}$
relating them is completely cha\-racterized by the relations
(\ref{Vt0t1}) and the action on the vacuum state
$\Psi^{(t_0)}_{0}\in\mathscr{H}_{s}(t_0)$, given by
\begin{equation}\label{VPsi0}
\Big(\hat{V}_{(t_0,t_1)}\Psi^{(t_0)}_{0}\Big)[\tilde{Q}]=\prod_{\ell=0}^{\infty}\frac{|z_{\ell}(t_1)|^{1/2}}{|z_{\ell}(t_0)|^{1/2}}\exp\left(-\frac{1}{2}\frac{\beta_{\ell}(t_0,t_1)}{\bar{z}_{\ell}(t_0)\bar{z}_{\ell}(t_1)}\tilde{Q}_{\ell}^{2}\right)\in\mathscr{H}_{s}(t_1)\,,
\end{equation}
where we have used the fact that
$\hat{a}_{\ell}(t_0)\Psi^{(t_0)}_{0}=0$,
$\forall\,\ell\in\mathbb{N}\cup\{0\}$, and the expressions
(\ref{norm}), (\ref{aadag}) and (\ref{Vt0t1}) to obtain the
differential equations verified by this state; namely, $\partial
\hat{V}_{(t_0,t_1)}\Psi^{(t_0)}_{0}/\partial\tilde{Q}_{\ell}=-\big(\beta_{\ell}(t_0,t_1)/\bar{z}_{\ell}(t_0)\bar{z}_{\ell}(t_1)\big)\tilde{Q}_{\ell}\hat{V}_{(t_0,t_1)}\Psi^{(t_0)}_{0}$,
$\ell\in\mathbb{N}\cup\{0\}$. The equation (\ref{VPsi0}) must be
interpreted as the limit in the $\mathscr{H}_{s}(t_1)$-norm of the
Cauchy sequence of normalized vectors $f_{n}\in\mathscr{H}_{s}(t_1)$
obtained by truncating the product (\ref{VPsi0}) at a finite integer
$n\in\mathbb{N}$.
\bigskip
\indent The mutual absolute continuity of any two Gaussian measures
associated with different times $t_0,t_1\in(0,\pi)$ is also
verified\footnote{It is possible to show that the equivalence of
measures is a necessary condition for the unitary equivalence
between Schr\"{o}dinger representations \cite{Mena:2007schr}.}, i.e.
they have the same zero measure sets. This property requires that
the operator $\mathcal{C}(t_1)-\mathcal{C}(t_0)$ is Hilbert-Schmidt
\cite{Simon,Yamasaki,Mourao}, which is satisfied in our case.
Indeed, it is straightforward to check that the sequence
$\big(|z_{\ell}(t_1)|^{2}-|z_{\ell}(t_0)|^2\big)_{\ell=0}^{\infty}$
is square summable. By contrast, for the original scalar field
$\phi=\xi/\sqrt{\sin t}$, for which the time evolution is not
unitary, the representations obtained for different times are
nonequivalent, and such continuity cannot hold.
In this case, the mutual singularity of measures can be expected, as
was proved for the vacuum Gowdy $\mathbb{T}^{3}$ model in
\cite{Torre:2007zj}.
\bigskip
\indent Note that the map
$\hat{T}_{t_0}:\mathscr{F}_{\mathcal{P}}\rightarrow\mathscr{H}_{s}(t_0)$
introduced in subsection \ref{CCR} does not connect the
configuration and momentum operators of the Fock representation,
$\hat{Q}_{\ell}(t)=z_{\ell}(t)\hat{a}_{\ell}+\bar{z}_{\ell}(t)\hat{a}_{\ell}^{\dag}$
and
$\hat{P}_{\ell}(t)=\dot{z}_{\ell}(t)\hat{a}_{\ell}+\dot{\bar{z}}_{\ell}(t)\hat{a}_{\ell}^{\dag}$,
respectively, with those of the Schr\"{o}dinger one (except for
$t=t_0$). However, owing to the unitary implementability of the
dynamics, there exists also a unitary transformation
$\hat{V}_{\mathscr{F}_{\mathcal{P}},t_0}(t):\mathscr{F}_{\mathcal{P}}\rightarrow\mathscr{H}_{s}(t_0)$,
such that $\hat{V}_{\mathscr{F}_{\mathcal{P}},t_0}(t)\circ
\hat{a}_{\ell}\circ\hat{V}_{\mathscr{F}_{\mathcal{P}},t_0}^{-1}(t)=\alpha_{\ell}(t,t_0)\hat{a}_{\ell}(t_0)+\beta_{\ell}(t,t_0)\hat{a}_{\ell}^{\dag}(t_0)$,
$\hat{V}_{\mathscr{F}_{\mathcal{P}},t_0}(t)\circ
\hat{a}^{\dag}_{\ell}\circ\hat{V}_{\mathscr{F}_{\mathcal{P}},t_0}^{-1}(t)=\bar{\beta}_{\ell}(t,t_0)\hat{a}_{\ell}(t_0)+\bar{\alpha}_{\ell}(t,t_0)\hat{a}_{\ell}^{\dag}(t_0)$,
relating these operators. In terms of the unitary evolution operator
on $\mathscr{F}_{\mathcal{P}}$, we have
$\hat{V}_{\mathscr{F}_{\mathcal{P}},t_0}(t)=\hat{T}_{t_0}\circ\hat{U}^{-1}(t_0,t)$.
Fina\-lly, given the quantum Hamiltonian (\ref{QuantHamiltonian}) in
the Fock representation, with dense domain
$\mathscr{D}_{\hat{H}_{0}(t)}\subset\mathscr{F}_{\mathcal{P}}$
spanned by the states with a finite number of particles, the
corresponding operator in the $\mathscr{H}_{s}(t_0)$ representation
is given by
$\hat{V}_{\mathscr{F}_{\mathcal{P}},t_0}(t)\circ\hat{H}_{0}(t)\circ\hat{V}_{\mathscr{F}_{\mathcal{P}},t_0}^{-1}(t)$, which reads
\begin{equation*}\label{SchquantHamilt}
\frac{1}{2}\sum_{\ell=0}^{\infty}\Bigg[-\frac{\partial^{2}}{\partial\tilde{Q}_{\ell}^{2}}
-2i\frac{\dot{\bar{z}}_{\ell}(t_0)}{\bar{z}_{\ell}(t_0)}\tilde{Q}_{\ell}\frac{\partial}{\partial\tilde{Q}_{\ell}}
+\left(\frac{\dot{\bar{z}}^{2}_{\ell}(t_0)}{\bar{z}_{\ell}^{2}(t_0)}+\frac{1}{4}\big(1+\csc^{2}t\big)+\ell(\ell+1)\right)\big(\tilde{Q}_{\ell}^{2}-|z_{\ell}(t_0)|^{2}\big)\Bigg]
\end{equation*}
modulo an irrelevant real term proportional to the identity. Note,
by contrast, that the complex constant term (independent of
$\tilde{Q}_{\ell}$) appearing in the previous expression is necessary
to ensure that the operator is self-adjoint.
$\hat{V}_{\mathscr{F}_{\mathcal{P}},t_0}(t)\mathscr{D}_{\hat{H}_{0}(t)}=\big\{\hat{V}_{\mathscr{F}_{\mathcal{P}},t_0}(t)f\,|\,f\in\mathscr{D}_{\hat{H}_{0}(t)}\big\}\subset\mathscr{H}_{s}(t_0)$
generated by the cyclic vector
$\hat{V}_{\mathscr{F}_{\mathcal{P}},t_0}(t)|0\rangle_{\mathcal{P}}\in\mathscr{H}_{s}(t_0)$.
\section{Comments}\label{COMMENTS}
\indent We have constructed the Schr\"{o}dinger representation for
the linearly polarized Gowdy $\mathbb{S}^{1}\times\mathbb{S}^{2}$
and $\mathbb{S}^{3}$ models coupled to massless scalar fields in a
mathematically rigorous and self-contained way, completing in this
way the quantization of these systems given in
\cite{BarberoG.:2007qt}. We have assumed the use of the redefined
fields for which the dynamics is well defined and unitary. As proved
in \cite{BarberoG.:2007qt,MenaSchr}, the complex structures
$J_{\mathcal{P}}$ verifying the conditions (\ref{rhonu}) lead to
unitarily equivalent quantum theories, and hence the Schr\"{o}dinger
representations corresponding to them are also equivalent. Note
that, as far as the support of the measure or the unitary
implementability of the dynamics is concerned, the discussions and
results obtained for these models are analogous to those found for
the vacuum $\mathbb{T}^{3}$ model in \cite{Torre:2007zj} and
\cite{Mena:2007schr}. It could be argued that this similarity is
somehow expected due to the fact that the critical features of the
systems are determined by their ultraviolet behaviors, and these
should not be sensitive to the topology of the spacetimes. This
argument can be found, for example, in \cite{Birrell&Davies}
concerning the simplest generalization of Minkowski space quantum
field theory to the $\mathbb{R}\times\mathbb{T}^{3}$ spacetime with
closed spatial sections. This compactification can modify the
long-wavelength behavior of the system, but not the ultraviolet one,
so that both spacetimes suffer from the same ultraviolet divergence
properties. Such a statement is clearly intuitive, but it is not
obvious to what extent it is true for quantum field theories in
spacetimes, like those corresponding to the Gowdy models, that are
not locally isometric. In this respect, the similarity of the
results is probably due to the similar structure of the differential
equations verified by the mode functions. In any case, it is
interesting to analyze in a rigorous way the particularities of the
quantizations for the different topologies.
\bigskip
\indent Finally, it is important to highlight the advantage of using
the re-scaled fields that make the quantum dynamics unitary, given
that in this case it is possible to obtain a unique (up to unitary
equivalence) Fock/Schr\"{o}dinger representation for these models.
As a direct consequence, the mutual absolute continuity of the
measures corresponding to different times is verified. Neither of
these properties can be attained for the original variables. In this
situation, even if the failure of the unitarity of time evolution
and the mutual singularity of measures are not serious obstacles for
a suitable probabilistic interpretation of the models
\cite{Torre:2002xt,Torre:2007zj}, we must face the lack of
uniqueness of the representation.
\begin{acknowledgments}
The author is indebted to J. Fernando Barbero G. and Eduardo J. S.
Villase\~{n}or for many enlightening discussions and helpful
suggestions. He also wishes to thank G. A. Mena Marug\'an for his
valuable comments regarding the uniqueness of the Fock
representation, that have led to include some necessary
clarifications in the main body of the paper. The author
acknowledges the support of the Spanish Research Council (CSIC)
through an I3P research assistantship. This work is also supported by
the Spanish MEC under the research grant FIS2005-05736-C03-02.
\end{acknowledgments}
\section{Introduction}\label{sec:SI_framework}
There is a growing wave towards personalized decision-making, where the aim is to select the optimal intervention for each unit from a collection of interventions.
In policy evaluation, for instance, one may want to design a governmental policy (intervention) that is particularly suited to the socio-economic realities of a geographic location (unit).
The key challenge in doing so---and indeed the fundamental problem of causal inference---is that we often only get to observe a unit undergo a single intervention or stay under control (i.e., no intervention).
This is true not only in observational studies such as policy evaluation, but also in experimental settings such as clinical trials or A/B testing in e-commerce.
Thus, we would ideally like to infer the counterfactual outcome of each unit under any intervention.
The sub-problem of estimating what would have happened to a ``treated'' unit (i.e., one that undergoes an intervention) under control has a rich literature within econometrics and beyond.
A prominent framework to do so within the panel data setting, where one gets repeated measurements of a unit across time, is synthetic controls (SC) \cite{abadie1, abadie2}.
At its core, SC builds a synthetic model of a treated unit as a weighted combination of control units to estimate this counterfactual outcome of interest.
As an example, consider a canonical case study within the SC literature that evaluates the impact of tobacco legislations (interventions) on tobacco consumption within states in the United States (units) \cite{abadie2}.
In particular, to assess the effect of California's Proposition 99, a large-scale tobacco control program that included raising taxes on cigarettes by 25 cents and other non-fiscal measures \cite{Siegel2002},
the authors ask the question: ``what would have happened to California in the absence of any tobacco control legislation (control)?''
By expressing California as a weighted combination of control states (e.g., Colorado, Nevada, etc.), the authors find that there was a marked fall in tobacco consumption in California following Proposition 99, relative to the constructed ``synthetic California''.
Given its broad applicability, the SC framework has been methodologically extended, analyzed, and applied to answer similar questions in numerous, diverse settings.
Notably, SC has even been regarded as ``one of the most important innovations in the policy evaluation literature in the last 15 years'' \cite{athey}.
However, towards the broader goal of personalized decision-making, one needs to answer counterfactual questions beyond what would have happened under control.
Continuing with the example above, these may include
``what would have happened to Colorado (a control state) had it implemented a program similar to Proposition 99?'' or
``what would have happened to California had it instead raised cigarette taxes by 50 cents or more as in New York?''.
In essence, this boils down to answering what would have happened to a unit under any intervention, rather than just under control.
Indeed, extending SC to overcome this challenge has been posed as an open question
in \cite{abadie_survey}.
The goal of this work is to provide one meaningful answer to this question.
\subsection{Problem Statement}
We consider a panel data setting with $N \ge 1$ units, $T > 1$ time periods, and $D \ge 1$ interventions (or treatments).
Throughout, we index units with $n \in [N] \coloneqq \{1, \dots, N\}$, time with $t \in [T]$, and interventions with $d \in \{0, \dots, D-1\}$.
We follow the causal framework of \cite{neyman} and \cite{rubin}, where we denote the random variable (r.v.) $Y_{tn}^{(d)} \in \mathbb{R}$ as the potential outcome of unit $n$ at time $t$ under intervention $d$.
We denote $d=0$ as {\em control}, i.e., $Y_{tn}^{(0)}$ is the potential outcome for unit $n$ at time $t$ if no intervention occurs.
See Figure \ref{fig:potential_outcomes_tensor} for a graphical depiction of the various potential outcomes of interest.
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\linewidth]
%
{images/po_tensor.pdf}
%
\caption{Potential outcomes of $N$ units over $T$ time periods across $D $ interventions encoded as a tensor.}
%
\label{fig:potential_outcomes_tensor}
\vspace{-2mm}
\end{figure}
\vspace{7.5pt}
\noindent {\em Pre- and post-intervention observations. }
We consider a data setup similar to that considered in the SC literature.
In particular, we consider $T_0$ with $1 \le T_0 < T$ as the intervention point, i.e., prior to $T_0$, all $N$ units are under control, and after $T_0$, each unit receives exactly
one of the $D$ interventions (including control); we define $T_1 = T - T_0$.
This partitions the time horizon into a pre- and post-intervention period.
See Figure \ref{fig:data_observations_SI} for a graphical depiction of our observation patterns.
Let the r.v. $D(n) \in \{0, \dots, D - 1\}$ denote the intervention assignment of unit $n$, and let the r.v. $\mathcal{D} = \{D(1), \dots, D(N) \}$ denote the collection of intervention assignments across units.
We group the units by the intervention they receive during the post-intervention period,
i.e., the r.v.
$
\mathcal{I}^{(d)} = \{n : D(n) = d ~ \text{for} ~ t > T_0\}
$
denotes the subgroup of units that receive intervention $d$,
and $N_d = | \mathcal{I}^{(d)} |$ denotes its size.
For all $d$, observe that both $\mathcal{I}^{(d)}$ and $N_d$ are deterministic
conditioned on $\mathcal{D}$.
The observation of unit $n$ at time $t$, denoted as $Y_{tn}$, obeys the following distributional assumption.
\begin{assumption} [SUTVA] \label{assumption:sutva}
For each unit $n$, we define $Y_{tn} = Y^{(0)}_{tn}$ for $t \le T_0$, and $Y_{tn} = Y^{(d)}_{tn}$ for $t > T_0$ if $D(n) = d$.
\end{assumption}
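The following minimal Python sketch, with illustrative sizes and a
hypothetical random assignment, makes the observation rule of Assumption
\ref{assumption:sutva} concrete.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T, N, D, T0 = 10, 6, 3, 5            # illustrative sizes
Y_pot = rng.normal(size=(T, N, D))   # potential outcomes Y^{(d)}_{tn}
assign = rng.integers(0, D, size=N)  # D(n): intervention after T0

# Observation rule: control before T0, intervention D(n) after T0.
Y_obs = np.empty((T, N))
for n in range(N):
    Y_obs[:T0, n] = Y_pot[:T0, n, 0]
    Y_obs[T0:, n] = Y_pot[T0:, n, assign[n]]
\end{verbatim}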
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]
{images/missing_1.pdf}
%
\caption{A graphical view of our observations for each unit. In the pre-intervention period, we observe all units under control. In the post-intervention period, we observe each unit under exactly one of the $D$ interventions. Notably, we do not observe any pre-intervention data associated with $d \neq 0$.}
%
\label{fig:data_observations_SI}
%
\vspace{-1mm}
\end{figure}
\vspace{7.5pt}
\noindent {\em Goal. }
The primary goal of this work is to estimate the causal parameter $\theta_n^{(d)}$, which represents unit $n$'s potential outcomes under intervention $d$ averaged over the post-intervention period.
We formally define $\theta_n^{(d)}$ in \eqref{eq:target_causal_parameter} in Section \ref{sec:causal.framework}.
\vspace{7.5pt}
\noindent {\em Challenge in estimating $\theta^{(d)}_n$ vs. $\theta^{(0)}_n$: difference in data availability. }
Identification of $\theta^{(d)}_n$ across all $(n,d)$ complements the existing SC literature, which focuses on estimating $\{\theta^{(0)}_n: n \notin \mathcal{I}^{(0)}\}$, i.e., the counterfactual average potential outcome under control for a treated unit, otherwise referred to as the treatment effect on the treated.
In contrast, within the SI framework, even if unit $n$ remains under control for $t > T_0$, we are interested in producing its counterfactuals under any intervention $d \neq 0$.
A reasonable question that arises is whether $\theta^{(d)}_n$ across $(n,d)$ can be estimated by simply fitting $N \times D$ separate SC estimators, one for each pair.
This is not possible since pre-intervention outcomes for each unit are only observed under control---recall the data setup summarized in Assumption \ref{assumption:sutva} and Figure \ref{fig:data_observations_SI}.
This begs the pivotal question: ``when does the structure between units under control continue to hold in other intervention regimes?''
As we will see in Section \ref{sec:causal.framework}, a factor model across time, units, and interventions (Assumption \ref{assumption:form}) provides one natural answer.
\vspace{-10pt}
\subsection{Related Works} \label{sec:related_works}
There are two key conceptual frameworks that are foundational to this work:
synthetic controls and factor models (i.e., interactive fixed effect models).
Given their sizeable literatures, we provide a brief overview of closely related works from each topic.
To begin, we note that a key quantity of interest in panel data settings is the counterfactual
potential outcome under control for a ``treated'' unit during the post-intervention period, e.g.,
$\theta^{(0)}_n$ for $n \notin \mathcal{I}^{(0)}$.
In the seminal works of \cite{abadie1, abadie2}, the authors introduced the SC estimator,
which provides a powerful and interpretable solution towards estimating this causal parameter.
Variants of SC have been considered in numerous works,
\cite{Hsiao12, imbens16, athey1, LiBell17, xu_2017, rsc, mrsc, Li18, ark, bai2020matrix, asc, Kwok20, chernozhukov2020practical, fernandezval2020lowrank}.
In Section \ref{sec:causal_param_comparison}, we provide a detailed comparison between this paper and closely related works;
in particular, we focus on the recent excellent works of \cite{ark, bai2020matrix, chernozhukov2020practical} as they highlight some of the primary points of comparison of our work with the SC literature.
A critical aspect that enables SC is the structure between units and time under control ($d=0$).
One elegant encoding of this structure is through a factor model,
\cite{chamberlain, liang_zeger, arellano, bai03, bai09, pesaran, moon_15, moon_weidner_2017}.
Recent works have connected factor models with low-rank matrices, which are prevalent within the matrix estimation
(ME) literature, \cite{CandesTao10, Recht11, usvt}.
Here, the aim is to recover an underlying low-rank matrix from a sparse subset of noisy observations.
The ME lens offers a novel perspective on missing data, and popular estimators have been shown to provably
recover the underlying matrix under different sparsity patterns without needing auxiliary covariates.
As such, several works of late, including \cite{athey1, rsc, mrsc, agarwal2020robustness, fernandezval2020lowrank},
have developed estimators that are guided by ME principles to estimate related causal parameters to that of
$\theta^{(0)}_n$ by directly learning on the observed outcomes $Y_{tn}$ rather than relying on covariates.
Collectively, the panel data literature has developed an expressive framework to identify and estimate $\theta^{(0)}_n$.
As stated earlier, an important follow-up question posed by \cite{abadie_survey} asks whether this framework can be extended to identify and estimate $\theta^{(d)}_n$ for any $d$.
Below, we provide an overview of our contributions towards answering this question.
\vspace{-10pt}
\subsection{Contributions \& Organization of Paper}\label{sec:contributions}
\noindent{\em Section~\ref{sec:causal.framework}: Identification of causal parameters}.
The key challenge in estimating $\theta^{(d)}_n$
is that we only ever observe unit $n$ under {\em one} of the $D$ interventions in the post-intervention period,
yet want to produce counterfactuals for unit $n$ under {\em all} $D$ interventions (including control).
Towards overcoming this challenge, we propose an extension of the standard factor model to not only hold across
units and time, but also across {\em interventions}.
This in turn enables our identification result in Theorem \ref{thm:identification}, which expresses
$\theta^{(d)}_n$ for any $(n, d)$ pair via a linear combination, $w^{(n,d)}$, of {\em observed} quantities $\{Y_{tj}: j \in \mathcal{I}^{(d)}\}$ in expectation.
Thus, in principle,
$\theta_n^{(d)}$
can be estimated by observed quantities, given knowledge of $w^{(n,d)}$.
\vspace{7.5pt}
\noindent{\em Section \ref{sec:algo}: Estimator. }
We present a simple two-step procedure to first estimate $w^{(n,d)}$ from pre-intervention observations, and
subsequently $\theta^{(d)}_n$ from post-intervention observations.
We provide guidance on how to choose the single hyper-parameter of our estimation procedure in a data-driven manner.
We discuss incorporation of covariates in Appendix \ref{sec:covariates}.
\vspace{7.5pt}
\noindent{\em Section \ref{sec:results}: Consistency, asymptotic normality. }
We show our estimator is consistent and asymptotically normal.
Collectively, our identification and inference results provide one answer to the open question of extending SC to multiple interventions, \cite{abadie_survey}.
\vspace{7.5pt}
\noindent{\em Section \ref{sec:hypo}: Hypothesis test. }
We furnish a hypothesis test to validate a key assumption that enables our statistical results, and provide guarantees for both its Type I and Type II errors.
We believe this hypothesis test can serve as an additional robustness check of when to use SC-like estimators under factor model assumptions.
\vspace{7.5pt}
\noindent{\em Section \ref{sec:simulations}: Simulations.}
We present illustrative simulations that support our theoretical results.
We find that (i) our estimator yields a consistent estimate of the underlying causal parameter and (ii) our inference procedure produces ``tight'' confidence intervals, i.e.,
our histogram of estimates
is approximately normally distributed, centered around $\theta^{(d)}_n$, and matches the theoretical variance of our estimator.
Exploring whether our inference procedure is efficient, i.e., whether it achieves minimal asymptotic variance amongst all (asymptotically) unbiased estimators, is an important future research direction.
\vspace{7.5pt}
\noindent{\em Section~\ref{sec:empirics}: Empirics.}
We empirically assess the SI framework on real-world A/B testing data from a large e-commerce platform that issued $D$ discounts (interventions) to increase engagement of its $N$ customer groups (units).
The goal of the e-commerce company was to find the best suited discount for each customer group, and they met their goal by performing $N \times D$ A/B tests, as is traditionally done.
That is, we observe the engagement levels of each of the $N$ customer groups under each of the $D$ discounts.
Therefore, unlike the traditional setting of policy evaluation, this case study is particularly well-suited to validate our framework and estimator as we observe all potential outcomes.
In particular, to evaluate efficacy of the SI estimator, we expose it to only $2N$ out of the $N \times D$ experiments.
We find that the SI estimator accurately estimates the outcomes of the remaining $N \times (D-2)$ experiments.
More generally, this case study empirically motivates how the SI framework can be used to design ``data-efficient'' randomized control trials (RCTs).
\vspace{7.5pt}
\noindent{\em Section \ref{sec:causal_param_comparison}: Comparison with literature. }
We provide a detailed discussion of our key assumptions and theoretical results within the context of the panel data literature, and SC more specifically;
in particular, we discuss \cite{ark, bai2020matrix, chernozhukov2020practical}, which provide state-of-the-art analyses for the SC literature.
Notably, by restricting the SI framework to just the estimation of $\theta^{(0)}_n$, our results establish novel identification and inference results for SC as well.
Further, in connection with our A/B testing case study, we compare our framework with that of multi-arm RCTs, and argue that our results provide a data-efficient approach to achieve a similar end goal.
\vspace{7.5pt}
\noindent{\em Section \ref{sec:tensor_framework}: SI and tensor estimation---a curious connection. }
We explore connections between SI and the growing field of tensor estimation.
By viewing SI as an instance of tensor estimation, we discuss how one can gain a clearer understanding of the trade-offs between sample complexity, statistical accuracy, experimental design, and the role of computation in estimating various causal parameters.
We believe it is an exciting direction for future inquiries in both causal inference and tensor estimation (and machine learning more broadly).
\vspace{7.5pt}
\noindent {\em Notations. }
See Appendix~\ref{sec:notation} for formal definitions of the standard notations we use.
\section{Proof: Asymptotic Normality}
\subsection{Proof of Theorem \ref{thm:normality}}
For ease of notation, we suppress the conditioning on $\mathcal{E}$ for the remainder of the proof.
To begin, we scale the left-hand side (LHS) of \eqref{eq:inf.1} by $\sqrt{T_1}$ and
analyze each of the three terms on the right-hand side (RHS) of \eqref{eq:inf.1} separately.
To address the first term on the RHS of \eqref{eq:inf.1}, we scale \eqref{eq:consistency.1} by $\sqrt{T_1}$ and recall our assumption on $T_1$ given by \eqref{eq:stupid_t1}.
We then obtain
\begin{align} \label{eq:normality.1}
\frac{1}{\sqrt{T_1}} \sum_{t > T_0} \langle \mathbb{E}[Y_{t, \mathcal{I}^{(d)}}], \mathcal{P}_{V_\text{pre}}(\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \rangle
= o_p(1).
\end{align}
To address the second term on the RHS of \eqref{eq:inf.1}, we scale \eqref{eq:consistency.2} by $\sqrt{T_1}$.
Since $\langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widetilde{w}^{(n,d)} \rangle$ are independent across $t$, the Lindeberg–Lévy Central Limit Theorem (see \cite{Billingsley}) yields
\begin{align} \label{eq:normality.2}
\frac{1}{\sqrt{T_1}} \sum_{t > T_0} \langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widetilde{w}^{(n,d)} \rangle
%
\xrightarrow{d}
%
\mathcal{N}\big( 0, \sigma^2 \text{plim} \|\widetilde{w}^{(n,d)}\|_2^2 \big).
\end{align}
To address the third term on the RHS of \eqref{eq:inf.1}, we scale \eqref{eq:consistency.3} by $\sqrt{T_1}$ and recall the assumption $\log(T_0 N_d) = o\left(\min\{T_0, N_d \} / C(\|\widetilde{w}^{(n,d)}\|_2)^2\right)$. This yields
\begin{align} \label{eq:normality.3}
\frac{1}{\sqrt{T_1}} \sum_{t > T_0} \langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} \rangle
&= o_p(1).
\end{align}
Finally, scaling \eqref{eq:inf.1} by $\sqrt{T_1}$ and collecting \eqref{eq:normality.1}, \eqref{eq:normality.2}, \eqref{eq:normality.3}, we conclude
$
\sqrt{T_1} (\widehat{\theta}^{(d)}_n - \theta^{(d)}_n)
\xrightarrow{d} \mathcal{N}\big( 0, \sigma^2 \text{plim} \|\widetilde{w}^{(n,d)}\|_2^2 \big).
$
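As a numerical sanity check on the limit in \eqref{eq:normality.2}, the
following minimal Monte Carlo sketch (with hypothetical sizes and a fixed
weight vector standing in for $\widetilde{w}^{(n,d)}$) compares the
empirical variance of the scaled sum against
$\sigma^{2} \|\widetilde{w}^{(n,d)}\|_{2}^{2}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
T1, Nd, sigma = 500, 20, 0.5        # illustrative sizes
w = rng.normal(size=Nd)             # stand-in for w_tilde
w /= 2 * np.linalg.norm(w)

reps = 5000
stats = np.empty(reps)
for r in range(reps):
    eps = sigma * rng.normal(size=(T1, Nd))   # iid noise
    stats[r] = (eps @ w).sum() / np.sqrt(T1)  # scaled sum over t
# Empirical vs. theoretical variance of the Gaussian limit:
print(stats.var(), sigma**2 * np.linalg.norm(w)**2)
\end{verbatim}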
\section{Causal Inference \& Tensor Estimation}\label{sec:tensor_framework}
In this section, we re-interpret the classical potential outcomes framework through the lens of tensors.
Specifically, we consider an order-3 tensor with axes that correspond to time, units, and interventions.
Each entry of this tensor is associated with the potential outcome for a specific time, unit, and intervention.
Recall Figure \ref{fig:potential_outcomes_tensor} for a graphical depiction of this tensor.
Therefore, estimating unobserved potential outcomes, the fundamental task in causal inference, is equivalent to estimating various missing entries of this order-3 potential outcomes tensor.
Indeed, imputing entries of a tensor that are noisy and/or missing is the goal of tensor estimation (TE).
In Section \ref{sec:encoding_CI_tensors}, we discuss the connection between these two fields in greater detail.
More specifically, we point out how important concepts in causal inference have a related notion in tensor estimation.
In Section \ref{sec:low_rank_tensor_factor_models}, we show how low-rank tensor factor models, prevalent in the TE literature, provide an elegant way to encode the structure between time, units, {\em and interventions}, while making minimal parametric assumptions.
We hope low-rank tensor factor models will lead to the identification of new causal parameters and guide the design of novel algorithms to estimate said causal parameters.
In Section \ref{sec:new_tensor_toolkit}, we pose what algorithmic advances are required in the TE literature to allow it to more directly connect with causal inference.
\subsection{Encoding Causal Inference as Tensor Estimation}\label{sec:encoding_CI_tensors}
\noindent
{\em Observational/experimental studies and tensor sparsity patterns. }
We discuss how different observational and experimental studies that are prevalent in causal inference can be equivalently posed as different sparsity patterns within a tensor.
Continuing with our notation, consider the setting with $T$ measurements (which may refer to different metrics or time points), $N$ units, and $D$ interventions.
A common thread of these studies is that each of the $N$ units can only experience one, or a small subset, of the possible $D$ interventions, e.g., if a unit is an individual, then it is only feasible to observe her under one intervention.
This constraint naturally induces a {\em block sparsity} pattern, as exhibited in Figure \ref{fig:observation_patterns}.
As we can see, Figure \ref{fig:setup1} depicts the setting we have been analyzing throughout this work, i.e., there is a notion of a pre- and post-intervention period and each unit can only undergo one intervention (recall the definition of $T_0$ and $T_1$);
as a result, the block sparsity pattern induced is that for each unit $n$, we are missing a $(D-1)\times T_1$ block of measurements that are of interest.\footnote{Strictly speaking, the pre- and post-intervention periods are not imperative. Instead, for each unit we observe some subset of size $T_0$ of the $T$ measurements, and are missing the remaining $T_1$ elements.}
In comparison, Figure \ref{fig:setup2} showcases another standard setup that removes the notion of a pre- and post-intervention period, but now each unit receives one or possibly a small subset of the possible interventions.
One natural application of such a setup is in the context of healthcare and the life sciences, where it is feasible to perform multiple experiments on the same unit (e.g., cell type or patient subgroup), but the number of experiments is severely constrained by financial and ethical considerations, cf. \cite{squires2020causal}.
In such a setup, for each $d$, we are missing a $(N-N_d) \times T$ block of measurements, where we recall $N_d = | \mathcal{I}^{(d)}|$ is the number of units that experienced $d$.
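A minimal sketch of the two resulting observation masks, with illustrative
sizes and hypothetical assignments, is given below.
\begin{verbatim}
import numpy as np

T, N, D, T0 = 8, 5, 3, 4            # illustrative sizes
assign = np.array([0, 1, 1, 2, 0])  # D(n) for each unit (hypothetical)

# Setup (a): pre/post periods; each unit sees control, then one intervention.
mask_a = np.zeros((T, N, D), dtype=bool)
mask_a[:T0, :, 0] = True                 # all units under control pre-T0
for n, d in enumerate(assign):
    mask_a[T0:, n, d] = True             # a (D-1) x T1 block stays missing

# Setup (b): no pre/post split; each unit sees a subset of interventions.
subsets = [{0, 1}, {1}, {0, 2}, {2}, {0}]    # hypothetical subsets
mask_b = np.zeros((T, N, D), dtype=bool)
for n, ds in enumerate(subsets):
    for d in ds:
        mask_b[:, n, d] = True
\end{verbatim}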
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.8\textwidth}
\centering
\includegraphics[width=\linewidth]
{images/setup1.pdf}
\caption{Studies with pre- and post-intervention periods, but units can only receive a single intervention.}
\label{fig:setup1}
\end{subfigure}
\\
\begin{subfigure}[b]{0.8\textwidth}
\centering
\includegraphics[width=\linewidth]
{images/setup3.pdf}
\caption{Studies without pre- and post-intervention periods, but units can receive one or a subset of interventions.}
\label{fig:setup2}
\end{subfigure}
\caption{Block sparsity patterns induced by standard observational/experimental studies. The rows correspond to measurements (e.g., time), columns to units, and slices to interventions. Observed outcomes under different interventions are shaded by a unique color, while unobservable counterfactuals are shown in white.
}
\label{fig:observation_patterns}
\vspace{-4mm}
\end{figure}
\vspace{5pt}
\noindent
{\em Causal parameters and TE error metrics. }
Here, we discuss the relationship between causal inference with different target causal parameters and TE under different error metrics.
The first step in causal inference is to define a target causal parameter, while the first step in tensor estimation is to define an error metric between the underlying and estimated tensors.
Below, we discuss a few important connections between these two concepts.
To begin, consider as the causal parameter the average potential outcome under intervention $d$ across all $T$ measurements and $N$ units.\footnote{
If there is a pre- and post-intervention period, then the target causal parameter is typically restricted to the $T_1$ post-intervention measurements.
}
Then, estimating this parameter can equivalently be posed as requiring a tensor estimation method with a Frobenius-norm error guarantee for the $d$-th slice of the potential outcomes tensor with dimension $T \times N$, normalized by $1/\sqrt{TN}$.
As such, a uniform bound for this causal parameter over all $D$ interventions would in turn require a guarantee over the max (normalized) Frobenius-norm error for each of the $D$ slices.
Another causal parameter is unit $n$'s potential outcome under intervention $d$ averaged over all $T$ measurements (recall, this is $\theta^{(d)}_n$).
Analogously, this translates to the $\ell_2$-norm error guarantee of the $n$-th column of the $d$-th tensor slice, normalized by $1/\sqrt{T}$.
A uniform bound over all $N$ units for the $d$-th intervention would then correspond to an $\ell_{2, \infty}$-norm error (defined in \eqref{eq:2.inf.ineq}) for the $d$-th tensor slice.
As a final example, let the target causal parameter be the unit potential outcome under intervention $d$ and measurement $t$.
This would require a TE method with a max-norm (entry-wise) error guarantee for the $d$-th matrix slice.
As above, a uniform bound over all measurements, units, and interventions corresponds to a max-norm error over the entire tensor.
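To fix ideas, the sketch below spells out these error metrics for an
estimated tensor and a ground-truth tensor, both of shape $T \times N
\times D$; the function names are ours and chosen purely for illustration.
\begin{verbatim}
import numpy as np

def slice_frobenius(est, truth, d):
    # Normalized Frobenius error of the d-th slice: matches the average
    # potential outcome under d across all measurements and units.
    T, N = truth.shape[0], truth.shape[1]
    return np.linalg.norm(est[:, :, d] - truth[:, :, d]) / np.sqrt(T * N)

def column_l2(est, truth, n, d):
    # Normalized l2 error of unit n's column in slice d: theta_n^{(d)}.
    T = truth.shape[0]
    return np.linalg.norm(est[:, n, d] - truth[:, n, d]) / np.sqrt(T)

def max_norm(est, truth):
    # Entry-wise error over the whole tensor.
    return np.abs(est - truth).max()
\end{verbatim}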
\subsection{Learning Across Interventions via Tensor Factor Models}\label{sec:low_rank_tensor_factor_models}
\noindent
{\em Tensor factor model. }
We start by introducing a low-rank tensor factor model, which is a natural generalization of the traditional factor model considered in the panel data literature.
More formally, let $\boldsymbol{Y} = [Y^{(d)}_{tn}] \in \mathbb{R}^{T \times N \times D}$ denote an order-3 tensor of potential outcomes.
A tensor factor model over $\boldsymbol{Y}$ admits the following decomposition:
\begin{align}\label{eq:cp_rank_potenial_outcome}
Y^{(d)}_{tn} = \sum^r_{\ell = 1} u_{t\ell} v_{n\ell} \lambda_{d\ell} + \varepsilon_{tn},
\end{align}
where $r$ is the canonical polyadic (CP) rank, $u_t \in \mathbb{R}^r$ is a latent time (or more generally, measurement) factor, $v_n \in \mathbb{R}^r$ is a latent unit factor, and $\lambda_d \in \mathbb{R}^r$ is a latent intervention factor.
Note that if we restrict our attention to a matrix (e.g., restricting the potential outcomes to a particular time, unit, or intervention slice), then the CP rank and standard matrix rank match.
Importantly, the factorization in Assumption \ref{assumption:form} is {\em implied} by the factorization assumed by a low-rank tensor as given in \eqref{eq:cp_rank_potenial_outcome}.
In particular, Assumption \ref{assumption:form} does not require the additional factorization of the (time, intervention) factor $u^{(d)}_t$ as $\langle u_t, \lambda_d \rangle$, where $u_t$ is a time specific factor and $\lambda_d$ is an intervention specific factor.
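A minimal sketch generating data from \eqref{eq:cp_rank_potenial_outcome},
with illustrative dimensions and Gaussian factors and noise (an assumption
made purely for this example), is the following.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
T, N, D, r = 50, 30, 4, 5       # illustrative sizes and CP rank

U = rng.normal(size=(T, r))     # latent time factors u_t
V = rng.normal(size=(N, r))     # latent unit factors v_n
L = rng.normal(size=(D, r))     # latent intervention factors lambda_d

# CP-rank-r mean tensor: sum_l u_{tl} v_{nl} lambda_{dl}.
mean = np.einsum('tl,nl,dl->tnd', U, V, L)
Y = mean + 0.1 * rng.normal(size=(T, N, D))  # observation noise
\end{verbatim}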
\vspace{7.5pt}
\noindent
{\em Algorithm design guided by tensor factor models. }
As discussed in Section \ref{sec:causal_param_comparison}, the implicit factorization of $\mathbb{E}[Y^{(d)}_{tn}]$ given in \eqref{eq:form.0} allows the SI causal framework to identify and estimate $\theta^{(d)}_n$ for any $d$, i.e., beyond $d=0$.
In particular, this factorization enables a model to be learned under control yet transferred to another intervention regime.
An added benefit, as previously stated, is that this also precludes the need for covariate information with respect to time, units, or interventions.
That is, directly learning on the observed outcomes (and appropriately de-noising) exploits this latent structure between the dimensions of the tensor to impute missing potential outcomes.
In contrast, traditional methods that learn across interventions require access to meaningful covariate information.
An important question we pose is whether it is feasible to design estimators that exploit an implicit factorization of $u^{(d)}_t = \langle u_t, \lambda_d \rangle$ to operate under less stringent causal assumptions or data requirements, or identify and estimate more involved causal parameters, in comparison to the estimator in this work, e.g., by possibly building upon the recent work of~\cite{christina_tensor}, which directly exploits this tensor factor structure to provide strong max-norm error bounds for TE under a uniformly missing at random sparsity pattern.
\subsection{Need for a New Tensor Estimation Toolkit for Causal Inference}\label{sec:new_tensor_toolkit}
The TE literature has grown tremendously because it provides an expressive formal framework for a large number of emerging applications and also allows one to reason about the statistical and computational trade-offs within these applications, which remain less well understood.
In particular, this literature quantifies the number of samples required and the computational complexity of different estimators to achieve theoretical guarantees for a given error metric (e.g., Frobenius-norm error over the entire tensor)---indeed, this trade-off is of central importance in the emerging sub-discipline at the interface of computer science, statistics, and machine learning.
Given our preceding discussions connecting causal inference and TE, a natural question is whether we can apply the current toolkit of tensor estimation to understand statistical and computational trade-offs in causal inference.
We believe a direct transfer of the techniques and analyses used in TE is not immediately possible for the following reasons.
First, most results in the TE literature assume uniformly randomly missing entries over all $T \times N \times D$ elements of the tensor.
In comparison, as discussed in Section \ref{sec:encoding_CI_tensors}, causal inference settings naturally induce a block sparsity pattern.
Further, this literature typically studies the Frobenius-norm error across the entire tensor.
On the other hand, as also discussed in Section \ref{sec:encoding_CI_tensors}, most meaningful causal parameters require more refined error metrics over the tensor.
As we have established in Theorems \ref{thm:consistency} and \ref{thm:normality}, the SI estimator is one method that provides guarantees over all $D$ slices under the block sparsity pattern induced by the SC setup; in particular, we uniformly bound $\theta^{(d)}_n$ for all $(n, d)$.
Further, the only computationally expensive step of the estimator is in \eqref{eq:pcr}, where we compute the SVD of the pre-intervention outcomes; a naive implementation of the SVD scales as $O(\max\{T_0, N_d\}^3)$.
Therefore, the SI estimator is computationally efficient.
Hence, we pose two important and related open questions:
(i) what block sparsity patterns and structural assumptions on the potential outcomes tensor allow for faithful recovery with respect to a meaningful error metric for causal inference,
and (ii) if recovery is possible, what are the fundamental statistical and computational trade-offs that are achievable?
An answer to these questions will formally bridge causal inference with TE, as well as computer science and machine learning more broadly.
\section{Proof: Subspace Inclusion Hypothesis Test}
\subsection{Proof of Theorem \ref{thm:hypo}}
For ease of notation, we suppress the conditioning on $\mathcal{E}$ for the remainder of the proof.
We make use of the following notation: for any matrix $\boldsymbol{A}$ with orthonormal columns, let $\mathcal{P}_A = \boldsymbol{A} \boldsymbol{A}^\prime$ denote the projection matrix onto the subspace spanned by the columns of $\boldsymbol{A}$.
Additionally, we follow the notation established in Section \ref{sec:hypo}.
In particular, we recall $\phi_\text{pre}(a) = \sqrt{T_0} + \sqrt{N_d} + \sqrt{\log(1/a)}$;
$\phi_\text{post}(a) = \sqrt{T_1} + \sqrt{N_d} + \sqrt{\log(1/a)}$;
and $s_\ell, \varsigma_\ell$ are the $\ell$-th singular values of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$ and $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}]$, respectively.
Finally, we let $C \ge 0$ denote an absolute constant, whose value can change from line to line or even within a line.
\vspace{10pt} \noindent
{\em Type I error. }
We first bound the Type I error, which anchors on Lemma \ref{lemma:type1.1}, stated below. The proof of Lemma \ref{lemma:type1.1} can be found in Appendix \ref{sec:type1_lem}.
\begin{lemma} \label{lemma:type1.1}
Suppose $H_0$ is true. Then,
\begin{align} \label{eq:type1.1}
\widehat{\tau} &= \|(\mathcal{P}_{V_\emph{pre}} - \mathcal{P}_{\widehat{V}_\emph{pre}}) \widehat{\bV}_\emph{post}\|_F^2 +
\|(\boldsymbol{I} - \mathcal{P}_{V_\emph{pre}}) (\mathcal{P}_{\widehat{V}_\emph{post}} - \mathcal{P}_{V_\emph{post}})\|_F^2
\\ &\quad + 2 \langle (\mathcal{P}_{V_\emph{pre}} - \mathcal{P}_{\widehat{V}_\emph{pre}}) \widehat{\bV}_\emph{post}, (\boldsymbol{I} - \mathcal{P}_{V_\emph{pre}}) \widehat{\bV}_\emph{post} \rangle_F.
\end{align}
\end{lemma}
We proceed to bound each term on the right-hand side of \eqref{eq:type1.1} independently.
\vspace{10pt} \noindent
{\em Bounding $\|(\mathcal{P}_{V_\emph{pre}} - \mathcal{P}_{\widehat{V}_\emph{pre}}) \widehat{\bV}_\emph{post}\|_F^2$. }
By Lemma \ref{lemma:subspace}, we have w.p. at least $1- \alpha_1$,
{\small
\begin{align}
\| (\mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}}) \widehat{\bV}_\text{post} \|_F^2
&\le \| \mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}}\|_\text{op}^2 ~ \|\widehat{\bV}_\text{post}\|_F^2
\le \frac{C \sigma^2 r_\text{post} \phi^2_\text{pre}(\alpha_1)}{s^2_{r_\text{pre}}}. \label{eq:t.1}
\end{align}
}
Note that we have used the fact that $\| \widehat{\bV}_\text{post} \|_F^2 = r_\text{post}$.
\vspace{10pt} \noindent
{\em Bounding $\|(\boldsymbol{I} - \mathcal{P}_{V_\emph{pre}}) (\mathcal{P}_{\widehat{V}_\emph{post}} - \mathcal{P}_{V_\emph{post}})\|_F^2$. }
Observe that $(\boldsymbol{I} - \mathcal{P}_{V_\text{pre}})$ is a projection matrix, and hence $\|\boldsymbol{I} - \mathcal{P}_{V_\text{pre}}\|_\text{op} \le 1$.
By adapting Lemma \ref{lemma:subspace} for $\widehat{\bV}_\text{post}, \boldsymbol{V}_\text{post}$ in place of $\widehat{\bV}_\text{pre}, \boldsymbol{V}_\text{pre}$,
we have w.p. at least $1-\alpha_2$
\begin{align} \label{eq:adapt.1}
\| \mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{post}}\|_F^2
&\le r_\text{post} \| \mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{post}}\|_\text{op}^2
\le \frac{C \sigma^2 r_\text{post} \phi^2_\text{post}(\alpha_2)}{\varsigma^2_{r_\text{post}}}.
\end{align}
Note that we have used the following:
(i) $\| \mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{post}}\|_F =
\| \sin \Theta \|_F$, where $\sin \Theta \in \mathbb{R}^{r_\text{post} \times r_\text{post}}$ is a matrix of principal angles between the two projectors (see \cite{ABSIL2006288}), which implies $\rank(\mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{post}}) \le r_\text{post}$;
(ii) the standard norm inequality $\| \boldsymbol{A} \|_F \le \sqrt{\rank(\boldsymbol{A})} \| \boldsymbol{A} \|_\text{op}$ for any matrix $\boldsymbol{A}$.
Using the result above, we have
{\small
\begin{align}
\| (\boldsymbol{I} - \mathcal{P}_{V_\text{pre}}) (\mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{post}})\|_F^2 &\le
\|\boldsymbol{I} - \mathcal{P}_{V_\text{pre}}\|_\text{op}^2 ~ \| \mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{post}}\|_F^2
\le \frac{C \sigma^2 r_\text{post} \phi^2_\text{post}(\alpha_2)}{\varsigma^2_{r_\text{post}}}. \label{eq:t.2}
\end{align}
}
\vspace{5pt} \noindent
{\em Bounding $\langle (\mathcal{P}_{V_\emph{pre}} - \mathcal{P}_{\widehat{V}_\emph{pre}}) \widehat{\bV}_\emph{post}, ~(\boldsymbol{I} - \mathcal{P}_{V_\emph{pre}}) \widehat{\bV}_\emph{post} \rangle_F$. }
Using the cyclic property of the trace operator, we have that
{\small
\begin{align}
&\langle (\mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}}) \widehat{\bV}_\text{post}, ~ \mathcal{P}_{V_\text{pre}} \widehat{\bV}_\text{post} \rangle_F
= \tr \big( \widehat{\bV}_\text{post}^\prime (\mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}}) \mathcal{P}_{V_\text{pre}} \widehat{\bV}_\text{post} \big)
\\&= \tr \big( (\mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}}) \mathcal{P}_{V_\text{pre}} \mathcal{P}_{\widehat{V}_\text{post}} \big). \label{eq:helper_type_2_trace}
\end{align}
}
Note that $\mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}}$ is symmetric, and $\mathcal{P}_{V_\text{pre}}$ and $\mathcal{P}_{\widehat{V}_\text{post}}$ are both symmetric positive semidefinite (PSD). As a result,
Lemmas \ref{lemma:subspace} and \ref{lemma:trace.2} yield w.p. at least $1-\alpha_1$
\begin{align}
&\tr \big( (\mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}}) \mathcal{P}_{V_\text{pre}} \mathcal{P}_{\widehat{V}_\text{post}} \big)
%
\le \| \mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}} \|_\text{op} \tr \big( \mathcal{P}_{V_\text{pre}} \mathcal{P}_{\widehat{V}_\text{post}} \big)
%
\\ &\le \| \mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}} \|_\text{op} \| \mathcal{P}_{V_\text{pre}} \|_\text{op} \tr \big( \mathcal{P}_{\widehat{V}_\text{post}} \big)
%
\le \frac{C \sigma r_\text{post} \phi_\text{pre}(\alpha_1)}{s_{r_\text{pre}}}.
\label{eq:t.3}
\end{align}
Again, to arrive at the above inequality, we use $\| \mathcal{P}_{V_\text{pre}}\|_\text{op} \le 1$ and $\text{tr}(\mathcal{P}_{\widehat{V}_\text{post}}) = r_\text{post}$.
\vspace{10pt} \noindent {\em Collecting terms. }
Collecting \eqref{eq:t.1}, \eqref{eq:t.2}, and \eqref{eq:t.3} with $\alpha_1 = \alpha_2 = \alpha/2$, w.p. at least $1-\alpha$,
\begin{align}
\widehat{\tau} &\le
\frac{C \sigma^2 r_\text{post} \phi^2_\text{pre}(\alpha/2)}{s^2_{r_\text{pre}}}
+
\frac{C \sigma^2 r_\text{post} \phi^2_\text{post}(\alpha/2)}{\varsigma^2_{r_\text{post}}}
+
\frac{C \sigma r_\text{post} \phi_\text{pre}(\alpha/2)}{s_{r_\text{pre}}}.
\end{align}
Defining the upper bound as $\tau(\alpha)$ completes the bound on the Type I error.
\vspace{10pt} \noindent
{\em Type II error. }
Next, we bound the Type II error.
We will leverage Lemma \ref{lemma:type2}, the proof of which can be found in Appendix \ref{sec:type2_lem}.
\begin{lemma} \label{lemma:type2}
The following equality holds: $\widehat{\tau} = r_\emph{post} - c - c_1$, where
{\small
\begin{align}
c &= \norm{\mathcal{P}_{V_\emph{pre}} \boldsymbol{V}_\emph{post}}_F^2
%
\\ c_1 &= \|(\mathcal{P}_{\widehat{V}_\emph{pre}} - \mathcal{P}_{V_\emph{pre}}) \widehat{\bV}_\emph{post}\|_F^2
+ \|\mathcal{P}_{V_\emph{pre}} (\mathcal{P}_{\widehat{V}_\emph{post}} - \mathcal{P}_{V_\emph{post}})\|_F^2
%
\\ &\quad + 2 \langle (\mathcal{P}_{\widehat{V}_\emph{pre}} - \mathcal{P}_{V_\emph{pre}}) \widehat{\bV}_\emph{post}, \mathcal{P}_{V_\emph{pre}} \widehat{\bV}_\emph{post} \rangle_F
%
+ 2 \langle \mathcal{P}_{V_\emph{pre}} (\mathcal{P}_{\widehat{V}_\emph{post}} - \mathcal{P}_{V_\emph{post}}), \mathcal{P}_{V_\emph{pre}} \mathcal{P}_{V_\emph{post}} \rangle_F. \label{eq:type2.1}
\end{align}
}
\end{lemma}
We proceed to bound each term on the right-hand side of \eqref{eq:type2.1} separately.
\vspace{10pt} \noindent
{\em Bounding $\|(\mathcal{P}_{\widehat{V}_\emph{pre}} - \mathcal{P}_{V_\emph{pre}}) \widehat{\bV}_\emph{post}\|_F^2$. }
From \eqref{eq:t.1}, we have that w.p. at least $1-\alpha_1$,
\begin{align} \label{eq:t.5}
\|(\mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}}) \widehat{\bV}_\text{post} \|_F^2
&\le \frac{C \sigma^2 r_\text{post} \phi^2_\text{pre}(\alpha_1)}{s^2_{r_\text{pre}}}.
\end{align}
\vspace{5pt} \noindent
{\em Bounding $\|\mathcal{P}_{V_\emph{pre}} (\mathcal{P}_{\widehat{V}_\emph{post}} - \mathcal{P}_{V_\emph{post}})\|_F^2$. }
Using the inequality $\|\boldsymbol{A} \boldsymbol{B} \|_F \le \|\boldsymbol{A} \|_\text{op} \|\boldsymbol{B} \|_F$ for any two matrices $\boldsymbol{A}$ and $\boldsymbol{B}$, as well as the bound in \eqref{eq:adapt.1}, we have w.p. at least $1-\alpha_2$,
\begin{align} \label{eq:t.6}
\| \mathcal{P}_{V_\text{pre}} (\mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{post}}) \|_F^2
&\le \frac{C \sigma^2 r_\text{post} \phi^2_\text{post}(\alpha_2)}{\varsigma^2_{r_\text{post}}}.
\end{align}
\vspace{10pt} \noindent
{\em Bounding $\langle (\mathcal{P}_{\widehat{V}_\emph{pre}} - \mathcal{P}_{V_\emph{pre}}) \widehat{\bV}_\emph{post}, \mathcal{P}_{V_\emph{pre}} \widehat{\bV}_\emph{post} \rangle_F$. }
We invoke the bounds in \eqref{eq:helper_type_2_trace} and \eqref{eq:t.3} to obtain w.p. at least $1-\alpha_1$
\begin{align} \label{eq:t.7}
\langle (\mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}}) \widehat{\bV}_\text{post}, \mathcal{P}_{V_\text{pre}} \widehat{\bV}_\text{post} \rangle_F
& \le \frac{C \sigma r_\text{post} \phi_\text{pre}(\alpha_1)}{s_{r_\text{pre}}}.
\end{align}
\vspace{5pt} \noindent
{\em Bounding $\langle \mathcal{P}_{V_\emph{pre}} (\mathcal{P}_{\widehat{V}_\emph{post}} - \mathcal{P}_{V_\emph{post}}), \mathcal{P}_{V_\emph{pre}} \mathcal{P}_{V_\emph{post}} \rangle_F$. }
Like in the argument to produce the bound in \eqref{eq:t.3}, we use Lemmas \ref{lemma:subspace} and \ref{lemma:trace.2} to get that w.p. at least $1-\alpha_2$,
{\small
\begin{align}
&\langle \mathcal{P}_{V_\text{pre}} (\mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{post}}), \mathcal{P}_{V_\text{pre}} \mathcal{P}_{V_\text{post}} \rangle_F
= \tr \big( (\mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{post}}) \mathcal{P}_{V_\text{pre}} \mathcal{P}_{V_\text{pre}} \mathcal{P}_{V_\text{post}} \big)
\\&= \tr \big( (\mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{post}}) \mathcal{P}_{V_\text{pre}} \mathcal{P}_{V_\text{post}} \big)
\le \| \mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{post}} \|_\text{op} ~ \|\mathcal{P}_{V_\text{pre}}\|_\text{op} \tr \big( \mathcal{P}_{V_\text{post}} \big)
\le \frac{C \sigma r_\text{post} \phi_\text{post}(\alpha_2)}{\varsigma_{r_\text{post}}}. \label{eq:t.8}
\end{align}
}
\noindent {\em Collecting terms. }
Combining \eqref{eq:t.5}, \eqref{eq:t.6}, \eqref{eq:t.7}, \eqref{eq:t.8} with $\alpha_1 = \alpha_2 = \alpha/2$, and using the definition of $\tau(\alpha)$, we have that w.p. at least $1-\alpha$
$
c_1 \le \tau(\alpha) + \frac{C \sigma r_\text{post} \phi_\text{post}(\alpha/2)}{\varsigma_{r_\text{post}}}.
$
Hence, along with Lemma \ref{lemma:type2}, it follows that w.p. at least $1-\alpha$,
\begin{align}\label{eq:unonditional_type_2_bound}
\widehat{\tau} &\ge r_\text{post} - c - \tau(\alpha) - \frac{C \sigma r_\text{post} \phi_\text{post}(\alpha/2)}{\varsigma_{r_\text{post}}}.
\end{align}
Now, suppose $r_\text{post}$ satisfies \eqref{eq:type2_cond}, which implies that $H_1$ must hold.
Then, \eqref{eq:unonditional_type_2_bound} and \eqref{eq:type2_cond} together imply $\mathbb{P}(\widehat{\tau} > \tau(\alpha) | H_1) \ge 1 - \alpha$.
This completes the proof.
\subsection{Proof of Corollary \ref{cor:hypo}}
We utilize Lemmas \ref{lemma:subg_matrix_gaussian} and \ref{lem:wedin_gaussian}, which are sharp versions of Lemmas \ref{lemma:subg_matrix} and \ref{lemma:subspace}, respectively.
\begin{lemma} [Gaussian Matrices: Theorem 7.3.1 of \cite{vershynin2018high}] \label{lemma:subg_matrix_gaussian}
Let the setup of Lemma \ref{lemma:subg_matrix} hold.
Assume $A_{ij}$ are Gaussian r.v.s with variance $\sigma^2$. Then for any $t > 0$,
$
\| \boldsymbol{A} \|_{\emph{op}} \le \sigma (\sqrt{m} + \sqrt{n} + t)
$
w.p. at least $1-2\exp(-t^2)$.
\end{lemma}
\begin{lemma}\label{lem:wedin_gaussian}
Let the setup of Lemma \ref{lemma:subspace} hold.
Further, assume $\varepsilon_{tn}$ are Gaussian r.v.s with variance $\sigma^2$.
Then for any $\alpha \in (0,1)$, we have w.p. at least $1-\alpha$,
\begin{align}
\| \mathcal{P}_{\widehat{V}_\emph{pre}} - \mathcal{P}_{V_\emph{pre}} \|_\emph{op}
\le \frac{2 \sigma \phi_\emph{pre}(\alpha)}{s_{r_\emph{pre}}},
~~
\| \mathcal{P}_{\widehat{V}_\emph{post}} - \mathcal{P}_{V_\emph{post}} \|_\emph{op}
\le \frac{2 \sigma \phi_\emph{post}(\alpha)}{\varsigma_{r_\emph{post}}}.
\end{align}
\end{lemma}
\begin{proof}
The proof is identical to that of Lemma \ref{lemma:subspace} except $\| \boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] \|_\text{op}$ is now bounded above using Lemma \ref{lemma:subg_matrix_gaussian}.
\end{proof}
The remainder of the proof of Corollary \ref{cor:hypo} is identical to that of Theorem \ref{thm:hypo}.
\section{Synthetic Interventions Estimator} \label{sec:algo}
Theorem \ref{thm:identification} allows us to express $\theta^{(d)}_n$ in terms of a linear combination of observed quantities $\{Y_{tj}: t > T_0, j \in \mathcal{I}^{(d)}\}$ in expectation.
The coefficients associated with this linear combination are denoted by the vector $w^{(n,d)}$.
This is the key quantity that we need to estimate, which
in turn, allows us to estimate $\theta^{(d)}_n$.
We provide one such estimator below; without loss of generality, we consider a particular $(n,d)$ pair.
\vspace{10pt}
\noindent{\em Additional notation. }
Throughout, let
$ Y_{\text{pre}, n} = [Y_{tn} : t \le T_0] \in \mathbb{R}^{T_0} $
represent the vector of pre-intervention outcomes for unit $n$.
Let
$
\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}} = [ Y_{tj} : t \le T_0, \ j \in \mathcal{I}^{(d)} ] \in \mathbb{R}^{T_0 \times N_d}$
and
$\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}} = [ Y_{tj} : t > T_0, \ j \in \mathcal{I}^{(d)} ] \in \mathbb{R}^{T_1 \times N_d} $
represent the pre- and post-intervention outcomes, respectively, associated with the units within $\mathcal{I}^{(d)}$.
We define the singular value decomposition (SVD) of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$ as
\begin{align} \label{eq:ypre_svd}
\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}} = \sum_{\ell = 1}^{M} \widehat{s}_{\ell} \widehat{u}_{\ell} \widehat{v}^\prime_{\ell},
\end{align}
where $M = \min\{T_0, N_d \}$, $\widehat{s}_\ell \in \mathbb{R}$ are the singular values (arranged in decreasing order), and $\widehat{u}_\ell \in \mathbb{R}^{T_0}, \widehat{v}_\ell \in \mathbb{R}^{N_d}$ are the left and right singular vectors, respectively.
\subsection{Estimating the Causal Parameter}\label{sec:si_estimator}
The SI estimator is a simple two-step procedure with only one hyper-parameter $k \in [M]$ that quantifies the number of singular components of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$ to retain.
\vspace{3pt}
\begin{enumerate}
\item Estimating $w^{(n, d)}$:
\begin{align} \label{eq:pcr}
\widehat{w}^{(n,d)} &= \Big( \sum_{\ell=1}^k (1/\widehat{s}_\ell) \widehat{v}_\ell \widehat{u}^\prime_\ell \Big) Y_{\text{pre}, n}.
\end{align}
\item Estimating $\theta_n^{(d)}$:
\begin{align} \label{eq:si.2}
\widehat{\theta}_n^{(d)} &= \frac{1}{T_1}\sum_{t > T_0} \sum_{j \in \mathcal{I}^{(d)}} \widehat{w}_j^{(n,d)} \ Y_{tj}.
\end{align}
\end{enumerate}
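A minimal Python sketch of this two-step procedure for a single $(n,d)$
pair is given below; the function name and the in-memory layout of the
data are ours, chosen for illustration.
\begin{verbatim}
import numpy as np

def synthetic_interventions(Y_pre_n, Y_pre_Id, Y_post_Id, k):
    # Y_pre_n   : (T0,)     pre-intervention outcomes of unit n
    # Y_pre_Id  : (T0, Nd)  pre-intervention outcomes of units in I^(d)
    # Y_post_Id : (T1, Nd)  post-intervention outcomes of the same units
    # k         : number of singular components to retain
    # Step 1: principal component regression, eq. (pcr).
    U, s, Vt = np.linalg.svd(Y_pre_Id, full_matrices=False)
    w_hat = (Vt[:k].T / s[:k]) @ (U[:, :k].T @ Y_pre_n)
    # Step 2: average the weighted donor combination over the post
    # period, eq. (si.2).
    theta_hat = (Y_post_Id @ w_hat).mean()
    return theta_hat, w_hat
\end{verbatim}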
\vspace{5pt}
\noindent{\em Choosing $k$. }
A number of principled heuristics exist for selecting the hyper-parameter $k$; we name a few here.
Perhaps the most popular data-driven approach is simply to use cross-validation, where the pre-intervention data is our training set and the post-intervention data is our validation set.
Another standard approach is to use a ``universal'' thresholding scheme that preserves the singular values above a precomputed threshold (see \cite{Gavish_2014, usvt}).
Finally, a ``human-in-the-loop'' approach is to inspect the spectral characteristics of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$, and choose $k$ to be the natural ``elbow'' point that partitions the singular values into those of large and small magnitudes.
To understand the third approach, recall that $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}} = \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] + \boldsymbol{E}_{\text{pre}, \mathcal{I}^{(d)}}$, where $\boldsymbol{E}_{\text{pre}, \mathcal{I}^{(d)}} = [\varepsilon_{tj}: t \le T_0, j \in \mathcal{I}^{(d)}]$.
Under the factor model in Assumption \ref{assumption:linear}, $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$ is low-rank.
If the noise $\varepsilon_{tn}$ is reasonably well-behaved (i.e., has sufficiently light tails), then random matrix theory informs us that the singular values corresponding to the noise matrix $\boldsymbol{E}_{\text{pre}, \mathcal{I}^{(d)}}$ are much smaller in magnitude compared to those of the signal matrix $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$.
Hence, it is likely that a ``sharp'' threshold or gap exists between the top singular values associated with $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$ and the remaining singular values induced by $\boldsymbol{E}_{\text{pre}, \mathcal{I}^{(d)}}$.
For example, if the rows of $\boldsymbol{E}_{\text{pre}, \mathcal{I}^{(d)}}$ are sub-Gaussian, then $\|\boldsymbol{E}_{\text{pre}, \mathcal{I}^{(d)}}\|_\text{op} = O_p(\sqrt{T_0} + \sqrt{N_d})$, cf. \cite{vershynin2018high}.
In comparison, if the entries of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$ are $\Theta(1)$ and its nonzero singular values $s_i$ are of the same magnitude, then $s_i = \Theta(\sqrt{T_0 N_d})$.
For a more detailed exposition of our assumptions on the spectra of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$, refer to Assumption \ref{assumption:spectra}.
For a graphical depiction of the spectral gap, see Figure \ref{fig:elbow}. %
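As one concrete (and purely heuristic) instantiation of the third
approach, the sketch below selects $k$ at the largest relative drop in the
singular values; this is an illustrative choice, not a prescription.
\begin{verbatim}
import numpy as np

def choose_k(Y_pre_Id):
    # "Elbow" heuristic: keep all components before the largest relative
    # drop in the singular values. Alternatives include cross-validation
    # and universal singular value thresholding.
    s = np.linalg.svd(Y_pre_Id, compute_uv=False)
    ratios = s[1:] / s[:-1]
    return int(np.argmin(ratios)) + 1
\end{verbatim}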
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\linewidth]
{images/elbow.pdf}
\caption{
%
Simulation displays the spectrum of $\boldsymbol{Y} = \mathbb{E}[\boldsymbol{Y}] + \boldsymbol{E} \in \mathbb{R}^{100 \times 100}$.
%
Here, $\mathbb{E}[\boldsymbol{Y}] = \boldsymbol{U} \boldsymbol{V}^\prime$, where the entries of $\boldsymbol{U}, \boldsymbol{V} \in \mathbb{R}^{100 \times 10}$ are sampled independently from $\mathcal{N}(0,1)$; further,
the entries of $\boldsymbol{E}$ are sampled independently from $\mathcal{N}(0, \sigma^2)$ with $\sigma^2 \in \{0, 0.2, \dots, 0.8\}$.
%
Across varying levels of noise $\sigma^2$, there is a steep drop-off in magnitude from the top to remaining singular values---this marks the ``elbow'' point.
%
As seen from the figure, the top singular values of $\boldsymbol{Y}$ correspond closely with that of $\mathbb{E}[\boldsymbol{Y}]$ ($\sigma^2=0$), and the remaining singular values are induced by $\boldsymbol{E}$.
%
Thus, $\rank(\boldsymbol{Y}) \approx \rank(\mathbb{E}[\boldsymbol{Y}]) = 10$.
%
}
\label{fig:elbow}
\end{figure}
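The simulation underlying Figure \ref{fig:elbow} is straightforward to reproduce.
The following illustrative sketch prints the singular values on either side of the rank-$10$ cutoff across noise levels; the sharp drop from $s_{10}$ to $s_{11}$ is the ``elbow''.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
signal = rng.normal(size=(100, 10)) @ rng.normal(size=(10, 100))
for var in [0.0, 0.2, 0.4, 0.8]:
    Y = signal + np.sqrt(var) * rng.normal(size=(100, 100))
    s = np.linalg.svd(Y, compute_uv=False)
    print(f"sigma^2={var}: s_10={s[9]:7.1f}  s_11={s[10]:6.1f}")
\end{verbatim}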
\vspace{10pt}
\noindent{\em Interpretation of SI Estimator. }
Recall from Theorem \ref{thm:identification} that we are interested in estimating $w^{(n,d)}$, for which $\mathbb{E}[Y_{\text{pre}, n} ] = \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}} ] w^{(n,d)}$ holds.
However, we only have access to the {\em noisy} instantiations $Y_{\text{pre}, n}$ and $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$, where the noise is due to $\varepsilon_{tn}$.
Nevertheless, we argue that the approach given by \eqref{eq:pcr}, known as principal component regression (PCR), overcomes the challenge of estimating $w^{(n,d)}$ under measurement error.\footnote{The problem of learning from noisy covariate observations is known as ``error-in-variables'' regression.}
PCR is a two-stage procedure that (i) first applies principal component analysis to extract the low-dimensional signal captured by the top $k$ singular components of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$, and
(ii) then performs linear regression in the subspace spanned by these $k$ components.
If $k = \rank(\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}])$, then
the subspace spanned by the top $k$ right singular vectors of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$ will be ``close'' to the rowspace of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$ via standard perturbation theory arguments.
In particular, by \cite{davis1970rotation, Wedin1972PerturbationBI}, we know the distance
between the two subspaces\footnote{Distance between subspaces $A$ and $B$ is defined as $\| \mathcal{P}_A - \mathcal{P}_B \|_\text{op}$, where $\mathcal{P}_A$ is the projection matrix onto $A$.} scales as $O_p(\| \boldsymbol{E}_{\text{pre}, \mathcal{I}^{(d)}} \|_\text{op} / s_{\min})$, where $s_{\min}$ is the smallest nonzero singular value of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$.
In other words, PCR {\em implicitly de-noises} $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$ by exploiting the low-rank factor structure of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$.
Further, even if we had access to $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$, there remains an issue of identifiability of $w^{(n, d)}$ in the underdetermined (high-dimensional) case where $T_0 < N_d$.
It is well-known, however, that the projection of $w^{(n,d)}$ onto the rowspace of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$, denoted as $\widetilde{w}^{(n,d)}$, is not only unique, but is also the only relevant component for prediction.
To see this, note that $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]\widetilde{w}^{(n,d)} = \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] w^{(n,d)}$.
Further, we highlight that $\widetilde{w}^{(n,d)}$ is the unique minimum $\ell_2$-norm vector for which this equality holds.
This further reinforces the motivation for PCR since \eqref{eq:pcr} enforces $\widehat{w}^{(n,d)}$ to lie within the subspace spanned by the top $k$ right singular vectors of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$.
Thus, $\widehat{w}^{(n,d)}$ will be close to the unique $\widetilde{w}^{(n,d)}$, provided $k$ is aptly chosen.
We formalize this argument in Lemma \ref{lemma:param_est}.
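As a numerical illustration of this argument (under an assumed exact rank-$r$ model with Gaussian noise), the sketch below shows that PCR applied to noisy covariates recovers the minimum $\ell_2$-norm solution $\widetilde{w}^{(n,d)}$, not the particular $w^{(n,d)}$ used to generate the data; all names are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
T0, Nd, r = 40, 80, 3                       # high-dimensional: Nd > T0
EY = rng.normal(size=(T0, r)) @ rng.normal(size=(r, Nd))
w = rng.normal(size=Nd)                     # one of many valid models
y = EY @ w                                  # noiseless target outcomes
_, _, Vt = np.linalg.svd(EY, full_matrices=False)
w_tilde = Vt[:r].T @ (Vt[:r] @ w)           # projection onto rowspace
Y = EY + 0.05 * rng.normal(size=EY.shape)   # noisy covariates
U, s, Vt_n = np.linalg.svd(Y, full_matrices=False)
w_hat = Vt_n[:r].T @ ((U[:, :r].T @ y) / s[:r])
print(np.linalg.norm(w_hat - w_tilde),      # small
      np.linalg.norm(w_hat - w))            # large
\end{verbatim}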
\vspace{7.5pt}
\noindent
{\em Incorporating covariates. }
In Appendix \ref{sec:covariates}, we provide a discussion on incorporating covariates into our estimator, as well as a theoretical model that justifies this approach.
\section{Proof: Singular Values}
\section{Proof of Lemma \ref{lemma:singular_values}} \label{sec:proof_singular_values}
For ease of notation, we suppress the conditioning on $\mathcal{E}$ for the remainder of the proof.
To bound the gap between $s_i$ and $\widehat{s}_i$, we recall the following well-known results.
\begin{lemma} [Weyl's inequality] \label{lemma:weyl}
Given $\boldsymbol{A}, \boldsymbol{B} \in \mathbb{R}^{m \times n}$, let $\sigma_i$ and $\widehat{\sigma}_i$ be the $i$-th singular values of $\boldsymbol{A}$ and $\boldsymbol{B}$, respectively, in decreasing order and repeated by multiplicities.
Then for all $i \le \min\{m, n\}$,
$
\abs{ \sigma_i - \widehat{\sigma}_i} \le \|\boldsymbol{A} - \boldsymbol{B} \|_\emph{op}.
$
\end{lemma}
\begin{lemma} [Sub-Gaussian Matrices: Theorem 4.4.5 of \cite{vershynin2018high}] \label{lemma:subg_matrix}
Let $\boldsymbol{A} = [A_{ij}]$ be an $m \times n$ random matrix where the entries $A_{ij}$ are independent, mean zero, sub-Gaussian random variables.
Then for any $t > 0$, we have
$
\| \boldsymbol{A} \|_{\emph{op}} \le CK (\sqrt{m} + \sqrt{n} + t)
$
w.p. at least $1-2\exp(-t^2)$.
Here, $K = \max_{i,j} \|A_{ij}\|_{\psi_2}$, and $C>0$ is an absolute constant.
\end{lemma}
By Lemma \ref{lemma:weyl}, we have for any $i \le \min\{T_0, N_d\}$,
$
\abs{s_i - \widehat{s}_i} \leq \| \boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]\|_\text{op}.
$
Recalling Assumption \ref{assumption:noise} and applying Lemma \ref{lemma:subg_matrix}, we conclude for any $t>0$ and some absolute constant $C>0$,
$\abs{s_i - \widehat{s}_i} \le C\sigma(\sqrt{T_0} + \sqrt{N_d} + t)$
w.p. at least $1-2\exp(-t^2)$.
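Both lemmas are easy to sanity-check numerically.
For instance, the following illustrative snippet verifies Weyl's inequality on a random perturbation; here \texttt{np.linalg.norm(E, 2)} computes the operator norm.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 30))
E = 0.1 * rng.normal(size=(50, 30))          # perturbation
s_A = np.linalg.svd(A, compute_uv=False)
s_B = np.linalg.svd(A + E, compute_uv=False)
print(np.abs(s_A - s_B).max() <= np.linalg.norm(E, 2))   # True
\end{verbatim}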
\section{Proof of Lemma \ref{lemma:theta}}
For ease of notation, we suppress the conditioning on $\mathcal{E}$ for the remainder of the proof.
Let $\boldsymbol{V}_\text{post}$ denote the right singular vectors of $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}]$.
Under Assumption \ref{assumption:subspace}, we have
\begin{align}\label{eq:subspace_inclusion_implication}
\boldsymbol{V}_\text{post} = \boldsymbol{V}_\text{pre} \boldsymbol{V}^\prime_\text{pre} \boldsymbol{V}_\text{post}.
\end{align}
Because the columns of $\boldsymbol{V}_\text{post}$ are orthonormal, we have
\begin{align} \label{eq:theta.0}
\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}] \widetilde{w}^{(n,d)}
&= \mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}] \boldsymbol{V}_\text{pre} \boldsymbol{V}^\prime_\text{pre} w^{(n,d)}
= \mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}] w^{(n,d)},
\end{align}
where we use \eqref{eq:subspace_inclusion_implication} in the second equality.
%
Hence, we conclude
\begin{align}
\theta^{(d)}_n
%
&= \frac{1}{T_1} \sum_{t > T_0} \sum_{j \in \mathcal{I}^{(d)}} w^{(n,d)}_j \mathbb{E}[Y_{tj} ]
%
= \frac{1}{T_1} \sum_{t > T_0} \sum_{j \in \mathcal{I}^{(d)}} \widetilde{w}^{(n,d)}_j \mathbb{E}[Y_{tj} ].
\end{align}
The first equality follows from \eqref{eq:identification} in Theorem \ref{thm:identification}.
The second equality follows from \eqref{eq:theta.0}.
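The identity \eqref{eq:subspace_inclusion_implication} is also easy to verify numerically when the pre- and post-intervention expected matrices share a rowspace, as in the illustrative sketch below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
Nd, r = 20, 3
V_shared = np.linalg.qr(rng.normal(size=(Nd, r)))[0]   # common rowspace
M_pre  = rng.normal(size=(30, r)) @ V_shared.T         # E[Y_pre]
M_post = rng.normal(size=(10, r)) @ V_shared.T         # E[Y_post]
V_pre  = np.linalg.svd(M_pre,  full_matrices=False)[2][:r].T
V_post = np.linalg.svd(M_post, full_matrices=False)[2][:r].T
print(np.allclose(V_post, V_pre @ (V_pre.T @ V_post))) # True
\end{verbatim}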
\section{Proof of Lemma \ref{lemma:param_est}}
We first re-state Corollary 4.1 of \cite{pcr_aos}.
Recall that $O_p$ absorbs dependencies on $r$ and $\sigma$ into the constant, and $C(\|\widetilde{w}^{(n,d)}\|_2) = \max\{1, \|\widetilde{w}^{(n,d)}\|_2\}$.
\begin{lemma*}[Corollary 4.1 of \cite{pcr_aos}]
Let the setup of Lemma \ref{lemma:theta} hold.
Then w.p. at least $1 - O(1 / (T_0 N_d)^{10})$,
\begin{align}\label{eq:corollary_new_PCR}
\|\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}\|^2_2 \le C(\sigma, r) \frac{C^2(\|\widetilde{w}^{(n,d)}\|_2)\log(T_0 N_d)}{\min\{T_0, N_d\}},
\end{align}
where $C(\sigma, r)$ is a constant that only depends on $\sigma$ and $r$.
\end{lemma*}
This is seen by adapting the notation in \cite{pcr_aos} to that used in this paper.
In particular, $y = Y_{\text{pre}, n}, \boldsymbol{X} = \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}], \widetilde{\boldsymbol{Z}} = \boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}, \widehat{\beta} = \widehat{w}^{(n,d)}, \beta^* = \widetilde{w}^{(n,d)}$, where $y, \boldsymbol{X}, \widetilde{\boldsymbol{Z}}, \widehat{\beta}, \beta^*$ are the notations used in \cite{pcr_aos}.\footnote{Since we do not consider missing values in this work, $\widetilde{\boldsymbol{Z}} = \boldsymbol{Z}$, where $\boldsymbol{Z}$ is the notation used in \cite{pcr_aos}.}
Further, we also use the fact that $\boldsymbol{X} \beta^*$ in the notation of \cite{pcr_aos} (i.e., $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] \widetilde{w}^{(n,d)})$ in our notation) equals $\mathbb{E}[Y_{\text{pre}, n}]$.
This follows from
\begin{align}\label{eq:pre_intervention_identification}
\mathbb{E}[Y_{\text{pre},n}] = \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] \widetilde{w}^{(n,d)},
\end{align}
which follows from \eqref{eq:identification_strong} in Theorem \ref{thm:identification} and \eqref{eq:theta.0}.
We conclude by noting \eqref{eq:corollary_new_PCR} implies
\begin{align}
\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} = O_p\left( \frac{C(\|\widetilde{w}^{(n,d)}\|_2) \sqrt{\log(T_0 N_d)}}{\min\{\sqrt{T_0}, \sqrt{N_d}\}} \right).
\end{align}
\section{Conclusion}
We extend the SC framework to the multiple intervention setting, where we estimate unit specific potential outcomes under any intervention averaged over the post-intervention period.
In doing so, we also provide novel identification and inference results for SC.
SI also presents a new perspective on learning with distribution shifts, an active area of research in econometrics and machine learning.
We hope this work adds to the growing literature on data-efficient, personalized decision-making within causal inference and beyond.
\section{Comparison of SI Causal Framework with Literature}\label{sec:causal_param_comparison}
Broadly speaking, our work belongs to the panel data literature, where one collects multiple measurements for each unit often over time.
Given the breadth of this literature, a variety of causal frameworks and associated estimation procedures
have been proposed.
In this section, we compare the SI causal framework with related works considered in the SC and multi-arm randomized control trial (RCT) literatures.
As we will see, the target causal parameter in SI nicely complements the parameters considered in these works.
In Sections \ref{sec:comparison_single_unit_averaged} and \ref{sec:SC_discussion}, we focus our comparison with the SC literature, where the target causal parameter is often the treatment effect for the treated under control.
That is, for treated units, one would like to estimate the counterfactual outcome under control in the post-intervention period.
Given the large and growing SC literature, there are many variants of this causal parameter that are analyzed.
Due to the vastness of this literature, we focus our comparison on some of the most related and recent causal frameworks in SC.
We stress that this set of comparisons is by no means exhaustive.
We focus our discussion on the
(a) target causal parameters;
(b) key assumptions;
(c) statistical guarantees for inference.
In Section \ref{sec:comparison_RCT}, motivated by the empirical results from the A/B testing case study in Section \ref{sec:empirics}, we discuss connections between the SI causal framework and multi-arm RCTs.
\vspace{7.5pt}
\noindent{\em Additional notation. }
For any $(n,d)$ pair, let
$
\barY_n^{(d)} = \frac{1}{T_1} \sum_{t > T_0} Y^{(d)}_{tn}.
$
In addition, let $ \tau_n^{(d_1, d_2)} = \barY^{(d_1)}_n - \barY^{(d_2)}_n$ denote the (relative) treatment effect between interventions $d_1$ and $d_2$ for unit $n$, averaged over the post-intervention period.
\subsection{Target Causal Parameter: $\barY^{(0)}_n$}\label{sec:comparison_single_unit_averaged}
The most closely related causal parameter in the SC literature to our work is $\tau_n^{(d,0)}$ for $n \in \mathcal{I}^{(d)}$ and $d \neq 0$.
This is referred to as the unit specific treatment effect on the treated, averaged over the post-intervention period.
Recall $\barY^{(d)}_n$ is observed for $n \in \mathcal{I}^{(d)}$.
As such, the goal in these works is to estimate the counterfactual potential outcomes $\barY^{(0)}_n$ for $n \notin \mathcal{I}^{(0)}$,
i.e., what would have happened to a ``treated'' unit $n$ had it remained under control.
The quantity $\barY^{(0)}_n$ is closely related to one of our target causal parameters, $\theta^{(0)}_n$.
The slight difference is that $\theta^{(0)}_n = \mathbb{E}[ \barY^{(0)}_n | \{u^{(0)}_t, v_n: t > T_0\}]$, where the expectation is taken with respect to the idiosyncratic shocks and conditioned on the latent factors.
Many of the works listed below {\em implicitly} condition on either the latent factors or directly on the observations $Y_{tn}$, if a factor model is not assumed.
We also note the SI framework allows for identification and inference of $\theta^{(d)}_n$ for $d \neq 0$, which these previous works do not consider.
Arguably, $\tau_n^{(d,0)}$ is the most common causal parameter considered in the SC literature, e.g., see
\cite{Hsiao12, imbens16, athey1, LiBell17, xu_2017, Li18, bai2020matrix, Kwok20, chernozhukov2020practical, fernandezval2020lowrank}.
We restrict our attention to two excellent recent works, \cite{bai2020matrix} and \cite{chernozhukov2020practical}, as they highlight some of the primary points of comparison with the SC literature and also provide rigorous asymptotic normality results for their respective estimators.
In particular, we discuss four key assumptions in detail: Assumptions A, B in Section \ref{sec:chero_comparison} when we compare with \cite{chernozhukov2020practical}, and Assumptions C, D in Section \ref{sec:bai_comparison} when we compare with \cite{bai2020matrix}.
The works listed above use some combination of Assumptions A to D (or variations thereof) to prove formal statistical guarantees.
Again, given the vastness of the literature, we underscore that these comparisons are by no means exhaustive.\footnote{A related causal parameter considered in many of these works is the time specific treatment effect on the treated, averaged over all treated units.
By SUTVA, this reduces to estimating
$\barY^{(0)}_t = (N-N_0)^{-1}\sum_{n \notin \mathcal{I}^{(0)}} Y^{(0)}_{tn}$ for a particular $t > T_0$.
Estimating this quantity is quite similar to estimating $\barY^{(0)}_n$.
In particular, we can estimate $\barY^{(0)}_t$ by simply transposing the observations and then running the same estimator used to estimate $\barY^{(0)}_n$.}
\subsubsection{Key Assumptions in \cite{chernozhukov2020practical}}\label{sec:chero_comparison}
\noindent{\em Overview of assumptions. }
There are two key assumptions made in \cite{chernozhukov2020practical}.
First, the authors assume the existence of $w^{(n, 0)} \in \mathbb{R}^{N_0}$ such that for all time $t$ and conditioned\footnote{The conditioning on $\{Y^{(0)}_{tj}: j \in \mathcal{I}^{(0)} \}$ is implicit in \cite{chernozhukov2020practical}.} on any sampling of $\{Y^{(0)}_{tj}: j \in \mathcal{I}^{(0)} \}$,
\begin{align}\label{eq:cherno_form}
Y^{(0)}_{tn} = \sum_{j \in \mathcal{I}^{(0)}} w^{(n, 0)}_j Y^{(0)}_{tj} + \varepsilon_{tn},
\end{align}
where $\mathbb{E}[\varepsilon_{tn}] = 0$ and $\mathbb{E}[\varepsilon_{tn} Y^{(0)}_{tj}] = 0$.
Second, they assume the existence of an oracle estimator for $w^{(n, 0)}$, denoted as $\widehat{w}^{(n, 0)}$, such that $\|w^{(n, 0)} - \widehat{w}^{(n, 0)}\|_2 = o(1)$.
The two most common restrictions placed on $w^{(n, 0)}$, for which there exist estimators with formal performance guarantees, include
(i) $w^{(n, 0)}$ is convex, i.e., $\sum_{j \in \mathcal{I}^{(0)}} w_j^{(n, 0)} = 1$ and $w_j^{(n, 0)} \ge 0$, cf. \cite{abadie1, abadie2, imbens16};
(ii) $w^{(n, 0)}$ is approximately sparse, i.e., $\|w^{(n, 0)}\|_1 = O(1)$, which is a relatively weaker assumption, cf. \cite{raskutti11, chernozhukov2020exact}.
Under these assumptions, the authors provide a flexible and practical t-test based inference procedure to estimate $\barY^{(0)}_n$, which utilizes the oracle estimator $\widehat{w}^{(n, 0)}$.
We compare these assumptions with the SI causal framework.
\vspace{7.5pt}
\noindent{\em Assumption A: functional form in \eqref{eq:cherno_form}. }
Note that under Assumption \ref{assumption:form}, we can equivalently write Assumption \ref{assumption:linear} as follows:
\begin{align}\label{eq:linear_span_expectation_new}
\mathbb{E}[Y^{(d)}_{tn}] = \sum_{j \in \mathcal{I}^{(d)}} w^{(n,d)}_j \mathbb{E}[Y^{(d)}_{tj}] \quad \text{for all} \ d.
\end{align}
Hence, compared to \eqref{eq:cherno_form} and for $d=0$, we operate under a weaker functional form assumption as given in \eqref{eq:linear_span_expectation_new}.
To see this, simply take expectations on both sides of \eqref{eq:cherno_form} with respect to $\varepsilon_{tn}$, which leads to \eqref{eq:linear_span_expectation_new}, i.e.,
\eqref{eq:linear_span_expectation_new} is weaker than \eqref{eq:cherno_form}.
In other words, we do not require the linear relationship $w^{(n, 0)}_j$ to hold between the {\em noisy} potential outcomes $Y^{(0)}_{tn}$ and $\{Y^{(0)}_{tj}: j \in \mathcal{I}^{(0)} \}$.
Rather, we require the weaker assumption that the relationship only holds between the {\em expected} potential outcomes.
\vspace{7.5pt}
\noindent{\em Assumption B: restrictions on $w^{(n, 0)}$. }
Another key assumption made is regarding the restrictions placed on $w^{(n, 0)}$.
As discussed, theoretical guarantees in previous works require $w^{(n, 0)}$ to be approximately sparse, or even more stringently, convex.
Further, the estimators used to learn $w^{(n, 0)}$ require {\em explicit knowledge} of the restriction placed on $w^{(n, 0)}$.
For example, if $w^{(n, 0)}$ is assumed to be convex, then convex regression is used; if $w^{(n, 0)}$ is assumed to be approximately sparse, then a $\ell_1$-norm regularizer is used.
In comparison, the estimator we propose to learn $w^{(n,0)}$ in \eqref{eq:pcr} does not impose any such restriction on $w^{(n,0)}$, even in the high-dimensional setting where $N_0 > \max\{T_0, T_1\}$.
Indeed, Lemma \ref{lemma:param_est} establishes that our estimator consistently learns the {\em unique} minimum $\ell_2$-norm $w^{(n,d)}$, denoted as $\widetilde{w}^{(n,d)}$.
Further, our consistency and asymptotic normality results {\em implicitly} scale with the $\ell_1$- and $\ell_2$-norm of $\widetilde{w}^{(n,d)}$. In particular, the error in our consistency result of Theorem \ref{thm:consistency} implicitly scales with $\|\widetilde{w}^{(n,d)}\|_1$ and the variance in our asymptotic normality result of Theorem \ref{thm:normality} implicitly scales with $\|\widetilde{w}^{(n,d)}\|_2$.
We do, however, assume that there exists a latent factor model between the potential outcomes (Assumption \ref{assumption:form}); such an assumption is not made in \cite{chernozhukov2020practical}.
In general, factor models are standard within the panel data literature, and we discuss this point in detail below in Assumption C.
To overcome high-dimensionality, where $w^{(n, d)}$ is not uniquely specified, the estimator in \eqref{eq:pcr} directly exploits the low-rank structure induced by this factor model in Assumption \ref{assumption:form}.
\subsubsection{Key Assumptions in \cite{bai2020matrix}}\label{sec:bai_comparison}
\noindent{\em Overview of assumptions. }
In \cite{bai2020matrix}, they consider the following factor model:
\begin{align}\label{bai_form}
Y^{(0)}_{tn} &= \langle x_{tn}, \beta \rangle + \langle F_t, \Lambda_n \rangle + \varepsilon_{tn}
\\ Y^{(d)}_{tn} &= \alpha^{(d)}_{tn} + Y^{(0)}_{tn}, \ \text{for} \ d \neq 0.
\end{align}
Here, $\Lambda_n \in \mathbb{R}^{r_1}$ is a unit $n$ specific latent factor and $F_t \in \mathbb{R}^{r_1}$ is a time $t$ specific latent factor associated with intervention $d = 0$ (control);
$x_{tn} \in \mathbb{R}^k$ is a vector of {\em observed covariates}; and $\beta \in \mathbb{R}^k$ is a latent factor acting on $x_{tn}$, which crucially is invariant across $(t,n)$.
The authors make appropriate assumptions on the factor loadings $\boldsymbol{\Lambda} = [\Lambda_n] \in \mathbb{R}^{r_1 \times N}$ and $\boldsymbol{F} = [F_t] \in \mathbb{R}^{r_1 \times T}$, and on the idiosyncratic shock $\varepsilon_{tn}$.
See Assumptions A to D in \cite{bai2020matrix} for details.
In essence, these assumptions require
(i) $\boldsymbol{\Lambda}, \boldsymbol{F}$ to satisfy an incoherence-like condition, which implies a bounded operator norm (e.g., $\mathbb{E}[\|\boldsymbol{F}\|^4_{\text{op}}], \|\boldsymbol{\Lambda}\|_{\text{op}} \le M$, where $M$ does not scale with $N_0, T$);
(ii) each of the $r_1$ singular vectors of $\boldsymbol{\Lambda}, \boldsymbol{F}$ is identifiable (e.g., the non-zero singular values of $\boldsymbol{\Lambda}, \boldsymbol{F}$ are distinct);
(iii) $\varepsilon_{tn}$ across $(t,n)$ have sufficiently light tails (e.g., $\mathbb{E}[|\varepsilon_{tn}|^8] < M$) and are weakly correlated.
Further, the authors propose an estimator composed of the following main steps:
(i) estimate $\widehat{\beta}$ using the observations $\{Y_{tn}, x_{tn}\}$ across $(t,n)$;
(ii) estimate $\widehat{\boldsymbol{\Lambda}}, \widehat{\boldsymbol{F}}$ using the residuals of $\{Y_{tn} - x_{tn}\widehat{\beta}\}$;
(iii) estimate $\barY^{(0)}_n$ using $\{x_{tn}, \widehat{\beta}, \widehat{\boldsymbol{\Lambda}}, \widehat{\boldsymbol{F}}\}$.
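For intuition only, a schematic rendering of these three steps is given below.
We stress that this is not the estimator of \cite{bai2020matrix}, which estimates $\beta$ and the factors more carefully; the pooled least squares step and all names here are our own simplifications.
\begin{verbatim}
import numpy as np

def interactive_fe_sketch(Y, X, r1):
    # Y is (T, N); X is (T, N, k) observed covariates.
    T, N, k = X.shape
    # (i) beta from pooled least squares over all (t, n) pairs
    beta = np.linalg.lstsq(X.reshape(T * N, k),
                           Y.reshape(T * N), rcond=None)[0]
    # (ii) factors from a rank-r1 SVD of the residuals
    U, s, Vt = np.linalg.svd(Y - X @ beta, full_matrices=False)
    F, Lam = U[:, :r1] * s[:r1], Vt[:r1]
    # (iii) fitted outcomes under control
    return X @ beta + F @ Lam, beta

rng = np.random.default_rng(9)
T, N, k, r1 = 40, 30, 2, 3
X = rng.normal(size=(T, N, k))
L = rng.normal(size=(T, r1)) @ rng.normal(size=(r1, N))
Y = X @ np.array([1.0, -0.5]) + L + 0.1 * rng.normal(size=(T, N))
print(interactive_fe_sketch(Y, X, r1)[1])    # roughly (1.0, -0.5)
\end{verbatim}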
Below, we compare their primary assumptions with that of the SI causal framework.
\vspace{7.5pt}
\noindent{\em Assumption C: factor model in \eqref{bai_form}. }
First, we compare the factor model assumed in \eqref{bai_form} with that in \eqref{eq:form.0} of Assumption \ref{assumption:form}.
Under \eqref{bai_form}, if we further assume that $\langle x_{tn}, \beta \rangle$ admits a latent factorization given by $\langle x_{tn}, \beta \rangle = \langle \tilde{x}_{t}, \tilde{x}_{n} \rangle$ with $\tilde{x}_{t}, \tilde{x}_{n} \in \mathbb{R}^{k_1}$,
then we can write \eqref{bai_form} as a special case of \eqref{eq:form.0}.
To see this, let $d=0$ and define $u^{(0)}_t, v_n \in \mathbb{R}^{r}$ in \eqref{eq:form.0} as
\begin{align}\label{eq:bai_LFM_SI}
u^{(0)}_t = (F_t, \tilde{x}_{t}), \ v_n = (\Lambda_n, \tilde{x}_{n}),
\end{align}
where $r=r_1 + k_1$.
We stress that we do not require access to $\tilde{x}_{t}$ or $\tilde{x}_{n}$ to express
the model in \cite{bai2020matrix} as an instance of \eqref{eq:form.0};
instead, we require that such a factorization of $\langle x_{tn}, \beta \rangle$ {\em implicitly} exists.
%
However, if one does not assume such a latent factorization, then the estimation procedure of \cite{bai2020matrix}, where one first learns $\beta$ using the covariates and then estimates $\boldsymbol{\Lambda}, \boldsymbol{F}$ on the residuals, is a principled approach.
In Appendix \ref{sec:covariates}, we show how to extend the SI framework to incorporate unit-specific covariates.
In addition, if one wants to estimate $\{\theta^{(d)}_n: d \neq 0\}$ in the SI causal framework (which we underscore is not considered in the SC literature), we require the added assumption that $\alpha^{(d)}_n$ factorizes into a (time, intervention) specific and unit specific latent factor, i.e.,
\begin{align}\label{eq:additional_assumption_theta_tnd}
\alpha^{(d)}_{tn} = \langle \tilde{\alpha}^{(d)}_{t}, \tilde{\alpha}_{n} \rangle,
\end{align}
where $\tilde{\alpha}^{(d)}_{t}, \tilde{\alpha}_{n} \in \mathbb{R}^{r_2}$.
Then, for any $d \neq 0$, we can define $u^{(d)}_t, v_n \in \mathbb{R}^{r}$ of \eqref{eq:form.0} as
\begin{align}\label{eq:SI_LFM_additional}
u^{(d)}_t = (F_t, \tilde{x}_{t}, \tilde{\alpha}^{(d)}_{t}), \ v_n = (\Lambda_n, \tilde{x}_{n}, \tilde{\alpha}_{n}),
\end{align}
where $r = r_1 + r_2 + k_1$.
As before, we do not require observing $\tilde{\alpha}^{(d)}_{t}$ or $\tilde{\alpha}_{n}$, but rather that such a factorization of $\alpha^{(d)}_{tn}$ implicitly exists.\footnote{
In the SC literature, a closely related factor model is
$Y^{(0)}_{tn} = \langle \beta_t, x_n \rangle + \langle F_t, \Lambda_n \rangle + \varepsilon_{tn}$ and
$Y^{(d)}_{tn} = \alpha^{(d)}_{tn} + Y^{(0)}_{tn}$ for $d \neq 0$,
cf. \cite{abadie1, abadie2}.
Here, $\beta_t \in \mathbb{R}^k$ is a latent factor, $x_n \in \mathbb{R}^k$ is an observed unit specific covariate, and $F_t, \Lambda_n \in \mathbb{R}^{r_1}$ and $\alpha^{(d)}_{tn} \in \mathbb{R}$ are defined as in \eqref{eq:bai_LFM_SI}.
As such, similar to \eqref{eq:bai_LFM_SI}, we can write this factor model as a special case of \eqref{eq:form.0},
where
$u^{(0)}_t = (\beta_t, F_t)$ and $\ v_n = (x_n, \Lambda_n)$.
Again, if one wants to estimate $\theta^{(d)}_{tn}$, then we need to assume an additional factorization of $\alpha^{(d)}_{tn}$, just as in \eqref{eq:additional_assumption_theta_tnd} and \eqref{eq:SI_LFM_additional}.
}
\vspace{7.5pt}
\noindent{\em Assumption D: $\boldsymbol{\Lambda}, \boldsymbol{F}$. }
Since our ultimate goal is to estimate $\theta^{(d)}_{n}$, it is not necessary to accurately estimate either of the latent factor loadings $(\boldsymbol{\Lambda}, \boldsymbol{F})$.
In fact, our proposed estimator in Section \ref{sec:algo} explicitly circumvents estimating these two quantities.
In comparison, \cite{bai2020matrix} establish their theoretical guarantees by requiring explicit bounds on the estimation qualities of $(\boldsymbol{\Lambda}, \boldsymbol{F})$, which naturally require making more stringent assumptions on the latent factor loadings;
for example, they require each singular vector of $(\boldsymbol{\Lambda}, \boldsymbol{F})$ to be identifiable, which itself requires all of the non-zero singular values to be distinct.
\subsubsection{Statistical Guarantees}\label{sec:stat_gaurantees_comparison}
As stated earlier, the works listed at the beginning of Section \ref{sec:comparison_single_unit_averaged} use some combination (and/or variation) of Assumptions A to D to prove formal statistical guarantees.
In particular, both \cite{bai2020matrix} and \cite{chernozhukov2020practical} prove asymptotic normality for their proposed estimators under these assumptions.
Analogously in Section \ref{sec:results}, we prove asymptotic normality of $\widehat{\theta}^{(d)}_n$ for each $(n,d)$ pair (Theorem \ref{thm:normality}), i.e., beyond $d=0$.
With regard to scalings, \cite{chernozhukov2020practical} requires
(i) $T_0, T_1 \to \infty$;
(ii) $T_0 / T_1 \to c_0$, where $c_0 \in [0, \infty]$.
\cite{bai2020matrix} requires
(i) $T_0, T_1, N \to \infty$;
(ii) $\sqrt{N} / \min\{N_0, T_0\} \to 0$;
and (iii) $\sqrt{T_0 + T_1} / \min\{N_0, T_0\}$ $\to 0$.
To estimate $\theta^{(d)}_n$, we require
(i) $T_0, T_1, N_d \to \infty$;
(ii) $T_1 / \min\{N_d, \sqrt{T_0}\} \to 0$.
Finding the optimal relative scalings between $T_0$, $T_1$, $N_0$ (or $N_d$) is interesting future work.
\subsection{Additional Discussion For SC Literature}\label{sec:SC_discussion}
\subsubsection{Other Causal Parameters Considered in SC Literature}\label{sec:other_causal_params}
\noindent{\em Parameter A: $\frac{1}{N - N_0}\sum_{n \notin \mathcal{I}^{(0)}} \theta^{(0)}_{n}$. }
This parameter represents the average potential outcome under control across all ``treated'' units over the post-intervention period.
In the recent excellent work of \cite{ark}, the authors provide rigorous inference guarantees for this parameter under a binary intervention model: $\boldsymbol{Y}^{(0)} = \boldsymbol{L} + \boldsymbol{E}$ and $\boldsymbol{Y}^{(1)} = \boldsymbol{Y}^{(0)} + \boldsymbol{A}$, where
$\boldsymbol{Y}^{(0)} = [Y^{(0)}_{tn}] \in \mathbb{R}^{T \times N}$ is the matrix of potential outcomes under control;
$\boldsymbol{L} = [L_{tn}] \in \mathbb{R}^{T \times N}$ encodes the latent factor model under control (in our notation, $L_{tn} = \langle u^{(0)}_{t}, v_n \rangle$);
$\boldsymbol{E} = [\varepsilon_{tn}] \in \mathbb{R}^{T \times N}$ encodes the idiosyncratic shocks;
$\boldsymbol{Y}^{(1)} = [Y^{(1)}_{tn}] \in \mathbb{R}^{T \times N}$ is the matrix of potential outcomes under $d=1$;
and $\boldsymbol{A} = [\alpha_{tn}] \in \mathbb{R}^{T \times N}$ encodes the treatment effect.
The authors propose the ``synthetic difference-in-differences'' estimator for
$\frac{1}{T_1(N - N_0)} \sum_{t > T_0} \sum_{n \notin \mathcal{I}^{(0)}}\alpha_{tn} | \boldsymbol{L}$,
and establish rigorous asymptotic normality results.
The analysis in \cite{ark} allows for additional flexibility in that it only requires the product $(N-N_0)T_1 \rightarrow \infty$.
As an implication, $N-N_0$ can be a constant as long as $T_1$ grows; hence, if $N-N_0 = 1$, then this parameter is equivalent to $\theta_n^{(0)}$.
On the other extreme, they allow for $T_1$ to be fixed as long as $N-N_0$ grows;
one can verify that our results hold under this setting by using the estimate
$\frac{1}{N-N_0}\sum_{n \notin \mathcal{I}^{(0)}} \sum_{j \in \mathcal{I}^{(0)}} \widehat{w}_j^{(n,0)} Y_{tj}$.
To establish their results, \cite{ark} assume certain relative scalings between $T_0, T_1, N_0, (N-N_0)$ (see Assumption 2 of \cite{ark}), which require that $T_1 (N - N_0)$ do not grow ``too quickly'' compared to $T_0$ and $N_0$.
As discussed in Section \ref{sec:stat_gaurantees_comparison}, we also require similar relative scalings between these parameters.
If we continue to restrict our attention to the estimation of counterfactuals under control, then the two primary differences between this work and \cite{ark} are the assumptions made on the
(i) spectral profile of the factor models and (ii) relationship between the target and donors.
In particular, we consider a low-rank $\boldsymbol{L}$ (Assumption \ref{assumption:form}) with a well-balanced spectra (Assumption \ref{assumption:spectra}); the latter effectively assumes a lower bound on the smallest non-zero singular value of $\boldsymbol{L}$.
In comparison, \cite{ark} makes a weaker assumption that $\boldsymbol{L}$ is {\em approximately} low-rank, i.e., its $\sqrt{\min\{T_0,N_0\}}$-th singular value is sufficiently small (see Assumption 3 of their work).
However, as in \cite{abadie2}, they require a convex relationship to hold between the target and donors, while we make the weaker assumption that a linear relationship holds (see the discussion under Assumption~\ref{assumption:linear} for why a linear relationship is directly implied by a low-rank factor model).
Another recent work within the SC literature that also considers an approximately low-rank $\boldsymbol{L}$ is \cite{agarwal2020robustness}.
There, the authors analyze the recently proposed ``robust SC'' (RSC) estimator of \cite{rsc}, which is closely related to our proposed estimator in Section \ref{sec:algo} in the binary intervention setting.
Specifically, \cite{agarwal2020robustness} considers two natural generating processes that induce an approximately low-rank $\boldsymbol{L} = [L_{tn}]$:
(i) the spectrum of $\boldsymbol{L}$ is geometrically decaying (i.e., the $k$-th singular value of $\boldsymbol{L}$, denoted as $s_k$, obeys $s_k = s_1 c^{k - 1}$, where $c \in (0, 1)$);
(ii) $\boldsymbol{L}$ follows a generalized factor model, i.e., $L_{tn} = g(\rho_t, \omega_n)$, where $g(\cdot, \cdot)$ is a latent, H\"{o}lder continuous function, and $\rho_t, \omega_n \in [0, 1]^\ell$ are latent variables associated with time $t$ and unit $n$ (an exact low-rank model is a special case of this where $g$ is bi-linear).
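Both generating processes are simple to simulate.
The illustrative sketch below constructs one instance of each, taking $g(a, b) = \exp(-|a - b|)$ as an assumed H\"{o}lder continuous choice, and prints the leading normalized singular values; their rapid decay reflects the approximately low-rank structure.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
T = N = 100
# (i) geometrically decaying spectrum: s_k = s_1 * c**(k-1)
U = np.linalg.qr(rng.normal(size=(T, T)))[0]
V = np.linalg.qr(rng.normal(size=(N, N)))[0]
s = 10.0 * 0.5 ** np.arange(min(T, N))
L_geo = (U * s) @ V.T
# (ii) generalized factor model L_tn = g(rho_t, omega_n)
rho, omega = rng.uniform(size=T), rng.uniform(size=N)
L_gfm = np.exp(-np.abs(rho[:, None] - omega[None, :]))
for L in (L_geo, L_gfm):
    sv = np.linalg.svd(L, compute_uv=False)
    print(sv[:5] / sv[0])                    # rapid decay
\end{verbatim}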
\cite{agarwal2020robustness} establish finite-sample consistency for the RSC estimator with respect to a similar causal parameter as that of $\theta^{(0)}_n$ (see Section 4 of \cite{agarwal2020robustness} for details).
However, they do not provide asymptotic normality results for their target causal parameter.
An interesting future research direction is to build upon these works to study the trade-offs between the assumptions placed on the latent factor model (e.g., spectral profile), relationship between the target and donors, and subsequent inference guarantees one can establish for various target causal parameters
(e.g., extending the model and estimator of \cite{ark} to estimate $\theta^{(d)}_n$ for any $n \notin \mathcal{I}^{(d)}$ and $d \neq 0$).
\vspace{7.5pt}
\noindent{\em Parameter B: $\frac{1}{TN}\sum_{n \ge 1} \sum_{t \ge 1} \mathbb{E}[Y^{(0)}_{tn} | \{u_t^{(0)}, v_n: t \in [T], n \in [N]\}]$. }
This is the average potential outcome under control across all units (treated and control) and time ({\em both} pre- and post-intervention periods).
In terms of theoretical results, this is the focus of \cite{athey1}, where they prove finite-sample consistency.
Though this causal parameter is an averaged version of $\theta^{(0)}_n$ (across all time and units), their intervention setup exhibits greater flexibility.
More specifically, the duration of the pre-intervention period can vary amongst the units, i.e., $T_0$ is specific to each unit.
Extending the SI framework to allow for different intervention times for each unit is an important future research direction.
\vspace{7.5pt}
\noindent{\em Parameter C: $\mathbb{E}[Y^{(0)}_{tn} | u^{(0)}_t, v_n]$. }
This is the expected potential outcome under control for unit $n$ at any time point $t$.
Unlike the other causal parameters listed before, estimating this parameter for any $n \notin \mathcal{I}^{(0)}$ is more challenging than estimating $\theta^{(0)}_{n}$; the latter is the average of the former over the post-intervention period.
As such, formal consistency and inference results for this parameter are notably absent from the literature.
Recently, \cite{bai2020matrix} provide some inference results for this parameter.
However, the authors require $\varepsilon_{tn}$ to be strictly Gaussian and operate under all of the assumptions previously discussed in Section \ref{sec:bai_comparison}.
Extending the SI framework to provide estimation and inference results for $\mathbb{E}[Y^{(d)}_{tn} | u^{(d)}_t, v_n]$ for each $(t > T_0, n, d)$ is an important future research direction.
\subsection{Comparison with Multi-Arm RCTs}\label{sec:comparison_RCT}
\noindent{\em Causal parameter. }
In a multi-arm RCT, a standard objective is to estimate the single intervention $d$ that performs the best on average over the entire population of units.
Among the many causal parameters that are considered, the most closely related to ours is the potential outcome for each intervention $d$ averaged across units and the post-intervention period:
\begin{align}\label{eq:RCT_causal_param}
\frac{1}{NT_1} \sum_{n \ge 1} \sum_{t > T_0} Y_{tn}^{(d)} \quad \text{for all} \ d.
\end{align}
\vspace{7.5pt}
\noindent{\em Data requirements. }
Strictly speaking, multi-arm RCTs do not require pre- and post-intervention periods, or more generally, a notion of time or multiple measurements, i.e., it is feasible for $T=1$.
In this sense, SI has more stringent data requirements.
Nevertheless, it is not uncommon for RCT studies to exhibit a similar data pattern as that described in the SI framework;
namely, one collection of observations across all units and over multiple measurements under a common intervention (typically, control), i.e., a ``pre-intervention period'', and another collection of observations across units and multiple measurements, where each unit undergoes a particular intervention, i.e., a ``post-intervention period''.
In these studies, \eqref{eq:RCT_causal_param} is a common target causal parameter.
If there is indeed a pre- and post-intervention period, then each unit is randomly assigned into one of $D$ mutually exclusive groups, $\{\mathcal{I}^{(0)}, \dots, \ \mathcal{I}^{(D-1)}\}$, i.e., in the post-intervention period, we observe each unit under one of the $D$ interventions.
This is precisely the data setup in SI (recall Figure \ref{fig:data_observations_SI}).
\vspace{7.5pt}
\noindent{\em Causal assumptions. }
Multi-arm RCTs do not require any particular functional form on the potential outcomes to estimate \eqref{eq:RCT_causal_param}.
One good estimator for \eqref{eq:RCT_causal_param} is simply an empirical average over $\mathcal{I}^{(d)}$:
$(T_1 N_d)^{-1} \sum_{t > T_0} \sum_{j \in \mathcal{I}^{(d)}} Y_{tj}$.
However, given the correspondence in data setup between multi-arm RCTs and SI, we hope that the SI framework allows us to use the {\em available} data from a multi-arm RCT to produce ``personalized'' policy-decisions for each unit.
In particular, if Assumptions \ref{assumption:form} and \ref{assumption:linear} hold across $(t,n,d)$,
then our proposed estimator in Section \ref{sec:algo} can use multi-arm RCT data to estimate not only the expectation of \eqref{eq:RCT_causal_param} (conditioned on the latent factors), but also $\theta^{(d)}_n$ for every $(n,d)$ pair, i.e., achieve personalization.
Finally, due to the explicit randomization of treatment assignments $\mathcal{D}$ in multi-arm RCT studies, Assumption \ref{assumption:conditional_mean_zero} immediately holds.
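For completeness, the empirical-average benchmark above is a one-line computation, as the following illustrative sketch on a toy post-intervention panel shows; all names and the data-generating process are our own.
Under Assumptions \ref{assumption:form} and \ref{assumption:linear}, the SI estimator would additionally produce the unit-level estimates $\widehat{\theta}^{(d)}_n$ from the same data.
\begin{verbatim}
import numpy as np

def rct_group_mean(Y_post, groups, d):
    # empirical average over units in I^(d) and t > T0
    return Y_post[:, groups == d].mean()

rng = np.random.default_rng(6)
groups = rng.integers(0, 3, size=30)         # D = 3 interventions
Y_post = rng.normal(loc=groups[None, :], size=(12, 30))
print([rct_group_mean(Y_post, groups, d) for d in range(3)])
\end{verbatim}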
\section{Proof: Consistency}
\section{Proof of Theorem \ref{thm:consistency}} \label{sec:proof_consistency}
For ease of notation, we suppress the conditioning on $\mathcal{E}$ for the remainder of the proof.
Recall that we are absorbing the dependencies on $r$ and $\sigma$ into the constant within $O_p$,
and $C(\|v\|_p) = \max\{1, \|v\|_p\}$ for any $v \in \mathbb{R}^a$.
For any $t > T_0$, let
$Y_{t, \mathcal{I}^{(d)}} = [Y_{tj} = Y^{(d)}_{tj}: j \in \mathcal{I}^{(d)}] \in \mathbb{R}^{N_d}$
and
$ \varepsilon_{t, \mathcal{I}^{(d)}} = [\varepsilon_{tj}: j \in \mathcal{I}^{(d)}] \in \mathbb{R}^{N_d}$.
Note that the rows of $\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}$ are formed by $\{Y_{t, \mathcal{I}^{(d)}}: t > T_0\}$.
Further, for any matrix $\boldsymbol{A}$ with orthonormal columns, let $\mathcal{P}_A = \boldsymbol{A} \boldsymbol{A}^\prime$ denote the projection matrix onto the subspace spanned by the columns of $\boldsymbol{A}$.
By \eqref{eq:si.2}, Lemma \ref{lemma:theta}, and Theorem \ref{thm:identification}, we have
\begin{align}
\widehat{\theta}^{(d)}_n - \theta^{(d)}_n
&= \frac{1}{T_1} \sum_{t > T_0}
\big( \langle Y_{t, \mathcal{I}^{(d)}}, \widehat{w}^{(n,d)} \rangle - \langle \mathbb{E}[ Y_{t, \mathcal{I}^{(d)}}], \widetilde{w}^{(n,d)} \rangle \big)
\\ &=
\frac{1}{T_1} \sum_{t > T_0} \big( \langle \mathbb{E}[Y_{t, \mathcal{I}^{(d)}}], \widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} \rangle + \langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widetilde{w}^{(n,d)} \rangle
\\ &\qquad \qquad + \langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} \rangle \big). \label{eq:inf.0}
\end{align}
From Assumption \ref{assumption:subspace}, it follows that $\boldsymbol{V}_\text{post} = \mathcal{P}_{V_\text{pre}} \boldsymbol{V}_\text{post}$, where $\boldsymbol{V}_\text{pre}, \boldsymbol{V}_\text{post}$ are the right singular vectors of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$ and $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}]$, respectively.
Hence, $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}] = \mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}] \mathcal{P}_{V_\text{pre}}$.
As such, for any $t > T_0$,
\begin{align}
\langle \mathbb{E}[Y_{t, \mathcal{I}^{(d)}}], \widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} \rangle
&= \langle \mathbb{E}[Y_{t, \mathcal{I}^{(d)}}], \mathcal{P}_{V_\text{pre}}(\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \rangle. \label{eq:inf.01}
\end{align}
Plugging \eqref{eq:inf.01} into \eqref{eq:inf.0} yields
\begin{align}
\widehat{\theta}^{(d)}_n - \theta^{(d)}_n
&= \frac{1}{T_1}\sum_{t > T_0} \big(\langle \mathbb{E}[Y_{t, \mathcal{I}^{(d)}}], \mathcal{P}_{V_\text{pre}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \rangle
\\ &\qquad \qquad
+ \langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widetilde{w}^{(n,d)} \rangle
+ \langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} \rangle \big). \label{eq:inf.1}
\end{align}
Below, we bound the three terms on the right-hand side (RHS) of \eqref{eq:inf.1} separately.
\vspace{10pt}
\noindent
{\em Bounding term 1. }
By the Cauchy-Schwarz inequality, observe that
$$
\langle \mathbb{E}[Y_{t, \mathcal{I}^{(d)}}], \mathcal{P}_{V_\text{pre}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \rangle
\le
\| \mathbb{E}[Y_{t, \mathcal{I}^{(d)}}] \|_2 ~ \| \mathcal{P}_{V_\text{pre}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \|_2.
$$
Under Assumption \ref{assumption:boundedness}, we have
$\| \mathbb{E}[Y_{t, \mathcal{I}^{(d)}}] \|_2 \le \sqrt{N_d}$.
As such,
$$
\frac{1}{T_1} \sum_{t > T_0} \langle \mathbb{E}[Y_{t, \mathcal{I}^{(d)}}], \mathcal{P}_{V_\text{pre}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \rangle
\le \sqrt{N_d} \| \mathcal{P}_{V_\text{pre}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \|_2.
$$
Hence, it remains to bound $\| \mathcal{P}_{V_\text{pre}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \|_2$.
Towards this, we state the following lemma.
Its proof can be found in Appendix \ref{sec:consistency.1}.
\begin{lemma} \label{lemma:pcr_aos2}
Consider the setup of Theorem \ref{thm:consistency}.
Then,
$$
\mathcal{P}_{V_\emph{pre}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)})
= O_p \Bigg( \frac{1}{\sqrt{N_d} T_0^{\frac 1 4}} + \frac{ C(\| \widetilde{w}^{(n,d)}\|_1) \sqrt{\log(T_0 N_d)}}{\sqrt{N_d} \cdot \min\{\sqrt{T_0}, \sqrt{N_d} \}} \Bigg).
$$
\end{lemma}
Using Lemma \ref{lemma:pcr_aos2}, we obtain
{\small
\begin{align}
&\frac{1}{T_1} \sum_{t > T_0} \langle \mathbb{E}[Y_{t, \mathcal{I}^{(d)}}], \mathcal{P}_{V_\text{pre}}(\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \rangle
%
= O_p \Bigg( \frac{1}{T_0^{\frac 1 4}} + \frac{ C(\| \widetilde{w}^{(n,d)}\|_1) \sqrt{\log(T_0 N_d)}}{\min\{\sqrt{T_0}, \sqrt{N_d} \}} \Bigg).
\label{eq:consistency.1}
\end{align}
}
This concludes the analysis for the first term.
\vspace{10pt}
\noindent
{\em Bounding term 2. }
We begin with a simple lemma.
\begin{lemma}\label{lemmma:standard_convergences}
Let $\gamma_t$ be a sequence of independent mean zero sub-Gaussian r.v.s with variance $\sigma^2$.
Then,
$
\frac{1}{T} \sum^T_{t = 1} \gamma_t = O_p\left(\frac{\sigma}{\sqrt{T}}\right).
$
\end{lemma}
\begin{proof}
Immediately holds by Hoeffding's lemma (Lemma \ref{lemma:hoeffding_random}).
\end{proof}
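A quick simulation with i.i.d.\ standard Gaussians (a special case of sub-Gaussian $\gamma_t$) illustrates the rate: the standard deviation of the sample mean, rescaled by $\sqrt{T}$, stays roughly constant as $T$ grows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
for T in [100, 1_600, 25_600]:
    means = rng.normal(size=(200, T)).mean(axis=1)   # 200 replications
    print(T, means.std() * np.sqrt(T))               # roughly constant
\end{verbatim}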
By Assumptions \ref{assumption:form} and \ref{assumption:noise}, we have for any $t > T_0$,
\begin{align}
\mathbb{E}[\langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widetilde{w}^{(n,d)} \rangle ] = 0, \quad
%
\mathrm{Var}(\langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widetilde{w}^{(n,d)} \rangle ) = \sigma^2 \| \widetilde{w}^{(n,d)} \|_2^2.
\end{align}
Since $\langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widetilde{w}^{(n,d)} \rangle$ are independent across $t$, Lemma \ref{lemmma:standard_convergences} yields
\begin{align} \label{eq:consistency.2}
\frac{1}{T_1} \sum_{t > T_0} \langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widetilde{w}^{(n,d)} \rangle
&= O_p \left( \frac{\| \widetilde{w}^{(n,d)} \|_2}{\sqrt{T_1}} \right).
\end{align}
\vspace{10pt}
\noindent
{\em Bounding term 3. }
First, we define the event $\mathcal{E}_1$ as
$$
\mathcal{E}_1 = \left\{ \|\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} \|_2 = O \left( \frac{C(\| \widetilde{w}^{(n,d)}\|_2) \sqrt{\log(T_0 N_d)}}{\min \{\sqrt{T_0}, \sqrt{N_d}\}} \right) \right\}.
$$
By Lemma \ref{lemma:param_est}, $\mathcal{E}_1$ occurs w.h.p. (defined in Appendix \ref{sec:notation}).
Next, we define the event $\mathcal{E}_2$ as
$$
\mathcal{E}_2 = \Bigg\{
\frac{1}{T_1} \sum_{t > T_0} \langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} \rangle
= O\Bigg( \frac{C(\| \widetilde{w}^{(n,d)}\|_2) \sqrt{\log(T_0 N_d)}}{\sqrt{T_1}\min\{\sqrt{T_0}, \sqrt{N_d}\}} \Bigg)
\Bigg\}.
$$
Now, condition on $\mathcal{E}_1$.
By Assumptions \ref{assumption:form} and \ref{assumption:noise}, we have for any $t > T_0$,
\begin{align}
\mathbb{E}[ \langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} \rangle] &= 0
\\
\text{Var}(\langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} \rangle) &= \sigma^2 \| \widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} \|_2^2.
\end{align}
Given that $\langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} \rangle$ are independent across $t$, Lemmas \ref{lemma:param_est} and \ref{lemmma:standard_convergences} imply
$ \mathcal{E}_2 | \mathcal{E}_1$ occurs w.h.p.
Further, we note
\begin{align}\label{eq:tower_law}
\mathbb{P}(\mathcal{E}_2) = \mathbb{P}(\mathcal{E}_2 | \mathcal{E}_1) \mathbb{P}(\mathcal{E}_1) + \mathbb{P}(\mathcal{E}_2 | \mathcal{E}_1^c) \mathbb{P}(\mathcal{E}_1^c) \ge \mathbb{P}(\mathcal{E}_2 | \mathcal{E}_1) \mathbb{P}(\mathcal{E}_1).
\end{align}
Since $\mathcal{E}_1$ and $\mathcal{E}_2 | \mathcal{E}_1$ occur w.h.p, it follows from \eqref{eq:tower_law} that $\mathcal{E}_2$ occurs w.h.p.
As a result,
\begin{align}
\frac{1}{T_1} \sum_{t > T_0} \langle \varepsilon_{t, \mathcal{I}^{(d)}}, \widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} \rangle
&= O_p \left( \frac{C(\| \widetilde{w}^{(n,d)}\|_2) \sqrt{\log(T_0 N_d)}}{\sqrt{T_1}\min\{\sqrt{T_0}, \sqrt{N_d}\}} \right). \label{eq:consistency.3}
\end{align}
\vspace{10pt} \noindent
{\em Collecting terms. }
Incorporating \eqref{eq:consistency.1}, \eqref{eq:consistency.2}, \eqref{eq:consistency.3} into \eqref{eq:inf.1}, and simplifying
yields
\begin{align} \label{eq:consistency.4}
\widehat{\theta}^{(d)}_n - \theta^{(d)}_n
= O_p \left(
%
\frac{1}{T_0^{\frac 1 4}}
%
+ \frac{\| \widetilde{w}^{(n,d)} \|_2}{ \sqrt{T_1}}
%
+ \frac{C(\| \widetilde{w}^{(n,d)}\|_1) \sqrt{\log(T_0 N_d)}}{\min\{\sqrt{T_0}, \sqrt{N_d} \}} \right).
\end{align}
\subsection{Proof of Lemma \ref{lemma:pcr_aos2}} \label{sec:consistency.1}
We begin by introducing some helpful notations:
let
$ \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} = \sum_{\ell=1}^{r_\text{pre}} \widehat{s}_\ell \widehat{u}_\ell \widehat{v}^\prime_\ell$
be the rank $r_\text{pre}$-approximation of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$, given by \eqref{eq:ypre_svd}.
More compactly, we write its singular value decomposition (SVD) as
$\boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} = \widehat{\bU}_\text{pre} \widehat{\bS}_\text{pre} \widehat{\bV}^\prime_\text{pre}$.
Further, let $\varepsilon_{\text{pre}, n} = [\varepsilon_{tn}: t \le T_0]$.
To establish Lemma \ref{lemma:pcr_aos2}, consider the following decomposition:
$$
\mathcal{P}_{V_\text{pre}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)})
= (\mathcal{P}_{V_\text{pre}} - \mathcal{P}_{\widehat{V}_\text{pre}}) (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)})
+ \mathcal{P}_{\widehat{V}_\text{pre}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}).
$$
We proceed to bound each term separately.
\vspace{10pt}
\noindent
{\em Bounding term 1. }
Recall $\| \boldsymbol{A} v \|_2 \le \| \boldsymbol{A} \|_\text{op} \| v \|_2$ for any $\boldsymbol{A} \in \mathbb{R}^{a \times b}$ and $v \in \mathbb{R}^b$.
Thus,
\begin{align}\label{eq:intermediate_Cauchy_Schwarz}
\| (\mathcal{P}_{V_\text{pre}} - \mathcal{P}_{\widehat{V}_\text{pre}}) (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \|_2
\le \| \mathcal{P}_{V_\text{pre}} - \mathcal{P}_{\widehat{V}_\text{pre}}\|_\text{op} \| \widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} \|_2.
\end{align}
To control the above term, we state a helper lemma that bounds the distance between the subspaces spanned by the columns of $\boldsymbol{V}_\text{pre}$ and $\widehat{\bV}_\text{pre}$.
Its proof is given in Appendix \ref{sec:consistency.2}.
\begin{lemma} \label{lemma:subspace}
%
Consider the setup of Theorem \ref{thm:consistency}. Then for any $t > 0$, the following holds w.p. at least $1 - 2\exp(-t^2)$:
%
$
\| \mathcal{P}_{\widehat{V}_\emph{pre}} - \mathcal{P}_{V_\emph{pre}} \|_\emph{op}
\le \frac{C\sigma (\sqrt{T_0} + \sqrt{N_d} + t)}{s_{r_\emph{pre}}},
$
%
where $s_{r_\emph{pre}}$ is the $r_\emph{pre}$-th singular value of $\mathbb{E}[\boldsymbol{Y}_{\emph{pre}, \mathcal{I}^{(d)}}]$, and $C>0$ is an absolute constant.
%
\end{lemma}
Applying Lemma \ref{lemma:subspace} with Assumption \ref{assumption:spectra}, we have
\begin{align}\label{eq:wedin_balanced_spectra}
\| \mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}} \|_\text{op}
= O_p \left( \frac{1}{\min\{\sqrt{T_0}, \sqrt{N_d}\}}\right).
\end{align}
Substituting \eqref{eq:wedin_balanced_spectra} and the bound in Lemma \ref{lemma:param_est} into \eqref{eq:intermediate_Cauchy_Schwarz}, we obtain
\begin{align}
(\mathcal{P}_{V_\text{pre}} - \mathcal{P}_{\widehat{V}_\text{pre}})
(\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)})
&= O_p \left(
\frac{C(\|\widetilde{w}^{(n,d)} \|_2) \sqrt{\log(T_0 N_d)}}{\min\{T_0, N_d \} }
\right). \label{eq:dd.0}
\end{align}
\vspace{10pt} \noindent
{\em Bounding term 2. }
To begin, since $\widehat{\bV}_{\text{pre}}$ is an isometry, it follows that
\begin{align}\label{eq:thm1.2.1}
\| \mathcal{P}_{\widehat{V}_{\text{pre}}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \|_2^2 &
= \|\widehat{\bV}_{\text{pre}}^\prime (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \|_2^2.
\end{align}
We upper bound $\|\widehat{\bV}_{\text{pre}}^\prime (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \|_2^2$ as follows: consider
{\small
\begin{align}
&\| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}}(\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)})\|_2^2
= (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)})^\prime \widehat{\bV}_{\text{pre}} \widehat{\bS}_{\text{pre}}^2 \widehat{\bV}_{\text{pre}}^\prime (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)})
\\ & = (\widehat{\bV}_{\text{pre}}^\prime (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}))^\prime \widehat{\bS}_{\text{pre}}^2 (\widehat{\bV}_{\text{pre}}^\prime (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}))
\geq \widehat{s}_{r_\text{pre}}^2 \| \widehat{\bV}_{\text{pre}}^\prime (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \|_2^2 \label{eq:key_singular_value_trick}.
\end{align}
}
Using \eqref{eq:key_singular_value_trick} and \eqref{eq:thm1.2.1} together implies
\begin{align}\label{eq:intermediate_bound_2_Lem10.1}
\| \mathcal{P}_{\widehat{V}_{\text{pre}}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \|_2^2 \le \frac{\| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}}(\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)})\|_2^2}{\widehat{s}_{r_\text{pre}}^2}.
\end{align}
To bound the numerator in \eqref{eq:intermediate_bound_2_Lem10.1}, note
\begin{align}
& \| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}}(\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)})\|_2^2
\\ & \leq 2\| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} -\mathbb{E}[Y_{\text{pre},n}] \|_2^2
+ 2 \| \mathbb{E}[Y_{\text{pre},n}] - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widetilde{w}^{(n,d)}\|_2^2
\\ & = 2\| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} -\mathbb{E}[Y_{\text{pre},n}] \|_2^2
+ 2 \| (\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}}) \widetilde{w}^{(n,d)}\|_2^2, \label{eq:intermediate_bound_1_Lem10.1}
\end{align}
where we have used \eqref{eq:pre_intervention_identification}.
To further upper bound the second term on the RHS above, we use the following inequality:
for any $\boldsymbol{A} \in \mathbb{R}^{a \times b}$, $v \in \mathbb{R}^{b}$,
\begin{align}\label{eq:2.inf.ineq}
\| \boldsymbol{A} v \|_2 & = \| \sum_{j=1}^b \boldsymbol{A}_{\cdot j} v_j \|_2 \leq \big(\max_{j \le b} \| \boldsymbol{A}_{\cdot j}\|_2\big) \big(\sum_{j=1}^b |v_j|\big)
= \| \boldsymbol{A}\|_{2, \infty} \|v\|_1,
\end{align}
where $\|\boldsymbol{A}\|_{2,\infty} = \max_{j} \| \boldsymbol{A}_{\cdot j}\|_2$ and $\boldsymbol{A}_{\cdot j}$ represents the $j$-th column of $\boldsymbol{A}$.
Thus,
\begin{align}\label{eq:2_infty_matrix}
\| (\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}}) \widetilde{w}^{(n,d)}\|_2^2
\le
\| \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \|_{2,\infty}^2 \|\widetilde{w}^{(n,d)}\|_1^2.
\end{align}
Substituting \eqref{eq:intermediate_bound_1_Lem10.1} into \eqref{eq:intermediate_bound_2_Lem10.1} and subsequently using \eqref{eq:2_infty_matrix} implies
\begin{align}\label{eq:thm1.2.2}
&\| \mathcal{P}_{\widehat{V}_{\text{pre}}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \|_2^2
\\&\leq \frac{2}{\widehat{s}_{r_\text{pre}}^2} \Big(\| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} -\mathbb{E}[Y_{\text{pre},n}] \|_2^2
+ \| \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \|_{2,\infty}^2 \|\widetilde{w}^{(n,d)}\|_1^2\Big).
\end{align}
Next, we bound $\| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} -\mathbb{E}[Y_{\text{pre},n}] \|_2$.
To this end, observe that
{\small
\begin{align}
&\| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} - Y_{\text{pre}, n} \|_2^2 = \| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} - \mathbb{E}[Y_{\text{pre},n}] - \varepsilon_{\text{pre}, n}\|_2^2 \\
& = \| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} - \mathbb{E}[Y_{\text{pre},n}]\|_2^2 + \|\varepsilon_{\text{pre}, n}\|_2^2
- 2 \langle \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} - \mathbb{E}[Y_{\text{pre},n}], \varepsilon_{\text{pre}, n} \rangle %
\label{eq:thm1.2.3}.
\end{align}
}
We call upon Property 3.1 of \cite{pcr_aos}, which states that $\widehat{w}^{(n,d)}$, as given by \eqref{eq:pcr}, is the unique solution to the following program:
\begin{align}
\mbox{minimize} & \quad \norm{w}_2 \quad \mbox{over}\quad w \in \mathbb{R}^{N_d} \nonumber \\
\mbox{such~that}~& ~w \in \argmin_{\omega \in \mathbb{R}^{N_d}}
\| Y_{\text{pre}, n} - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \omega \|_2^2.
\end{align}
This implies that
{\small
\begin{align}
&\| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} - Y_{\text{pre},n} \|_2^2
\leq \| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widetilde{w}^{(n,d)} - Y_{\text{pre}, n} \|_2^2
\\ &= \|( \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}])\widetilde{w}^{(n,d)} \|_2^2 + \| \varepsilon_{\text{pre}, n}\|_2^2
- 2 \langle \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widetilde{w}^{(n,d)} - \mathbb{E}[Y_{\text{pre}, n}] , \varepsilon_{\text{pre}, n}\rangle.
\label{eq:thm1.2.4}
\end{align}
}
From \eqref{eq:thm1.2.3} and \eqref{eq:thm1.2.4}, we have
{\small
\begin{align}
&\| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} - \mathbb{E}[Y_{\text{pre},n}]\|_2^2
\\&\leq \| (\boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]) \widetilde{w}^{(n,d)}\|_2^2
+ 2 \langle \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}), \varepsilon_{\text{pre}, n}\rangle
\\ &\leq \| \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \|_{2,\infty}^2 \|\widetilde{w}^{(n,d)}\|_1^2
+ 2 \langle \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}), \varepsilon_{\text{pre}, n}\rangle,
\label{eq:thm1.2.5}
\end{align}
}
where we used \eqref{eq:2.inf.ineq} in the second inequality above.
Using \eqref{eq:thm1.2.2} and \eqref{eq:thm1.2.5}, we obtain
\begin{align} \label{eq:thm1.2}
&\| \mathcal{P}_{\widehat{V}_{\text{pre}}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}) \|_2^2
\\& \leq \frac{4}{\widehat{s}_{r_\text{pre}}^2} \Big( \| \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \|_{2,\infty}^2 \|\widetilde{w}^{(n,d)}\|_1^2
+\langle \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}), \varepsilon_{\text{pre}, n}\rangle \Big).
\end{align}
We now state two helper lemmas that will help us conclude the proof.
The proofs of Lemmas \ref{lemma:hsvt} and \ref{lemma:annoying} are given in Appendices \ref{sec:proof_hsvt_lemma} and \ref{ssec:annoying}, respectively.
\begin{lemma} [Lemma 6.2 of \cite{pcr_aos}] \label{lemma:hsvt}
Let Assumptions \ref{assumption:sutva}, \ref{assumption:form}, \ref{assumption:conditional_mean_zero},
\ref{assumption:noise},
\ref{assumption:boundedness},
\ref{assumption:spectra} hold.
Suppose $k = r_\emph{pre}$, where $k$ is defined as in \eqref{eq:pcr}.
Then,
$
\| \boldsymbol{Y}^{r_\emph{pre}}_{\emph{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\emph{pre}, \mathcal{I}^{(d)}}] \|_{2, \infty}
= O_p \left( \frac{\sqrt{T_0 \log(T_0 N_d)}}{\min\{\sqrt{T_0}, \sqrt{N_d}\}} \right).
$
\end{lemma}
\begin{lemma} \label{lemma:annoying}
Let Assumptions \ref{assumption:sutva} to \ref{assumption:spectra} hold.
Then, given $\boldsymbol{Y}^{r_\emph{pre}}_{\emph{pre}, \mathcal{I}^{(d)}}$, the following holds with respect to the randomness in $\varepsilon_{\emph{pre}, n}$:
$
\langle \boldsymbol{Y}^{r_\emph{pre}}_{\emph{pre}, \mathcal{I}^{(d)}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}), \varepsilon_{\emph{pre}, n}\rangle
= O_p \left( \sqrt{T_0} + \| \boldsymbol{Y}^{r_\emph{pre}}_{\emph{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\emph{pre}, \mathcal{I}^{(d)}}]\|_{2,\infty} \| \widetilde{w}^{(n,d)}\|_1 \right).
$
\end{lemma}
Incorporating Lemmas \ref{lemma:singular_values}, \ref{lemma:hsvt}, \ref{lemma:annoying}, and Assumption \ref{assumption:spectra}
into \eqref{eq:thm1.2}, we conclude
\begin{align}
\mathcal{P}_{\widehat{V}_{\text{pre}}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)})
&= O_p \left( \frac{1}{\sqrt{N_d} T_0^{\frac 1 4}} + \frac{\| \widetilde{w}^{(n,d)}\|_1 \sqrt{\log(T_0 N_d)}}{\sqrt{N_d} \cdot \min\{\sqrt{T_0}, \sqrt{N_d} \}} \right). \label{eq:dd.1}
\end{align}
\vspace{5pt}
\noindent
{\em Collecting terms. }
Combining \eqref{eq:dd.0} and \eqref{eq:dd.1}, and noting $\| v \|_2 \le \|v \|_1$ for any $v$, we conclude
$
\mathcal{P}_{V_{\text{pre}}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)})
= O_p \left( \frac{1}{\sqrt{N_d} T_0^{\frac 1 4}} + \frac{C(\| \widetilde{w}^{(n,d)}\|_1) \sqrt{\log(T_0 N_d)}}{\sqrt{N_d} \cdot \min\{\sqrt{T_0}, \sqrt{N_d} \}} \right).
$
\subsection{Proof of Lemma \ref{lemma:subspace}} \label{sec:consistency.2}
We recall the well-known singular subspace perturbation result by Wedin.
\begin{thm} [Wedin's Theorem \cite{Wedin1972PerturbationBI}] \label{thm:wedin}
Given $\boldsymbol{A}, \boldsymbol{B} \in \mathbb{R}^{m \times n}$, let $\boldsymbol{V}, \widehat{\bV} \in \mathbb{R}^{n \times n}$ denote their respective right singular vectors.
Further, let $\boldsymbol{V}_{k} \in \mathbb{R}^{n \times k}$ (respectively, $\widehat{\bV}_{k} \in \mathbb{R}^{n \times k}$) denote the truncation of $\boldsymbol{V}$ (respectively, $\widehat{\bV}$) that retains the columns corresponding to the top $k$ singular values of $\boldsymbol{A}$ (respectively, $\boldsymbol{B}$).
Let $s_{i}$ represent the $i$-th singular value of $\boldsymbol{A}$.
Then,
$
\| \mathcal{P}_{V_{k}} - \mathcal{P}_{\widehat{V}_{k}} \|_\emph{op} \leq \frac{2\|\boldsymbol{A} - \boldsymbol{B}\|_\emph{op}} { s_{k} - s_{k+1}}.
$
\end{thm}
Recall that $\widehat{\bV}_\text{pre}$ is formed by the top $r_\text{pre}$ right singular vectors of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$.
As a result, applying Theorem \ref{thm:wedin} gives
$
\| \mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}} \|_\text{op} \le
\frac{2 \| \boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] \|_\text{op}} {s_{r_\text{pre}}},
$
where we used $\rank(\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]) = r_\text{pre}$, and hence $s_{r_{\text{pre}}+1}=0$.
By Assumption \ref{assumption:noise} and Lemma \ref{lemma:subg_matrix}, we can further bound the right-hand side of the inequality above.
In particular, for any $t >0$, we have
w.p. at least $1-2\exp(-t^2)$,
$
\| \mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}} \|_\text{op} \le \frac{C \sigma (\sqrt{T_0} + \sqrt{N_d} + t)}{s_{r_\text{pre}}},
$
where $C>0$ is an absolute constant.
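Although not needed for the proof, Wedin's bound is easy to probe numerically. The sketch below (ours, with illustrative dimensions and noise level) compares the two sides of Theorem \ref{thm:wedin} on a noisy low-rank matrix:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
T0, Nd, k = 100, 60, 4
# Rank-k population matrix A and a noisy observation B of it.
A = rng.standard_normal((T0, k)) @ rng.standard_normal((k, Nd))
B = A + 0.1 * rng.standard_normal((T0, Nd))

def top_right_proj(M, k):
    """Projector onto the span of the top-k right singular vectors."""
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[:k, :].T @ Vt[:k, :]

s = np.linalg.svd(A, compute_uv=False)
lhs = np.linalg.norm(top_right_proj(A, k) - top_right_proj(B, k), ord=2)
rhs = 2 * np.linalg.norm(A - B, ord=2) / (s[k - 1] - s[k])  # s_{k+1} ~ 0
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}")
\end{verbatim}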
\subsection{Proof of Lemma \ref{lemma:hsvt}}\label{sec:proof_hsvt_lemma}
We first re-state Lemma 6.2 of \cite{pcr_aos}.
\begin{lemma*}
Let the setup of Lemma \ref{lemma:hsvt} hold.
Then w.p. at least $1 - O(1 / (T_0 N_d)^{10})$,
\begin{align}\label{eq:hsvt_heler_new_pcr}
&\|\boldsymbol{Y}^{r_\emph{pre}}_{\emph{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\emph{pre}, \mathcal{I}^{(d)}}] \|^2_{2, \infty}
%
\\& \quad \le C(\sigma, r) \left(\frac{(T_0 + N_d)(T_0 + \sqrt{T_0}\log(T_0 N_d))}{s_{r_\text{pre}}^2} + \log(T_0 N_d) \right),
\end{align}
where $C(\sigma, r)$ is a constant that only depends on $\sigma, r$.
\end{lemma*}
This is seen by adapting the notation in \cite{pcr_aos} to that used in this paper.
In particular, $\boldsymbol{X} = \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}], \widetilde{\boldsymbol{Z}}^{r} = \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}}$, where $\boldsymbol{X}, \widetilde{\boldsymbol{Z}}^{r}$ are the notations used in \cite{pcr_aos}.
Next, we simplify \eqref{eq:hsvt_heler_new_pcr} using Assumption \ref{assumption:spectra}.
As such, w.p. at least $1 - O(1 / (T_0 N_d)^{10})$,
\begin{align}\label{eq:hsvt_heler_new_pcr_2}
&\|\boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] \|^2_{2, \infty} \le C(\sigma, r) \left(\frac{T_0\log(T_0 N_d)}{\min\{T_0, N_d\}}\right).
\end{align}
We conclude the proof by noting that \eqref{eq:hsvt_heler_new_pcr_2} implies
$
\|\boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] \|_{2, \infty} = O_p \left( \frac{\sqrt{T_0 \log(T_0 N_d)}}{\min\{\sqrt{T_0}, \sqrt{N_d}\}} \right).
$
\subsection{Proof of Lemma \ref{lemma:annoying}.}\label{ssec:annoying}
Throughout this proof, $C, c >0$ will denote absolute constants, which can change from line to line or even within a line.
Recall $\widehat{w}^{(n,d)} = \widehat{\bV}_\text{pre} \widehat{\bS}^{-1}_\text{pre} \widehat{\bU}^\prime_\text{pre} Y_{\text{pre}, n}$ and $Y_{\text{pre},n} = \mathbb{E}[Y_{\text{pre},n}] + \varepsilon_{\text{pre},n}$.
Thus,
{\small
\begin{align} \label{eq:rewrite}
\boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} = \widehat{\bU}_\text{pre}\widehat{\bS}_\text{pre} \widehat{\bV}_\text{pre}^\prime \widehat{\bV}_\text{pre} \widehat{\bS}_\text{pre}^{-1} \widehat{\bU}_\text{pre}^\prime Y_{\text{pre}, n}
= \mathcal{P}_{\widehat{U}_\text{pre}} \mathbb{E}[Y_{\text{pre},n}] + \mathcal{P}_{\widehat{U}_\text{pre}} \varepsilon_{\text{pre}, n}.
\end{align}
}
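The identity \eqref{eq:rewrite} uses only the orthonormality $\widehat{\bV}_\text{pre}^\prime \widehat{\bV}_\text{pre} = \boldsymbol{I}$; the following minimal check (ours, on synthetic data) verifies it numerically:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
T0, Nd, k = 40, 25, 6
Y_pre = rng.standard_normal((T0, Nd))
y_n = rng.standard_normal(T0)

U, s, Vt = np.linalg.svd(Y_pre, full_matrices=False)
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]
Y_k = Uk @ np.diag(sk) @ Vtk              # rank-k truncation
w_hat = Vtk.T @ ((Uk.T @ y_n) / sk)       # PCR weights
proj = Uk @ (Uk.T @ y_n)                  # P_{U_k} y = U_k U_k' y
assert np.allclose(Y_k @ w_hat, proj)     # matches (eq:rewrite)
\end{verbatim}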
Therefore,
{\small
\begin{align} \label{eq:lem4.0}
&\langle \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}), \varepsilon_{\text{pre}, n} \rangle
%
\\&= \langle \mathcal{P}_{\widehat{U}_\text{pre}} \mathbb{E}[Y_{\text{pre},n}], \varepsilon_{\text{pre}, n}\rangle
+ \langle \mathcal{P}_{\widehat{U}_\text{pre}} \varepsilon_{\text{pre}, n}, \varepsilon_{\text{pre}, n}\rangle
- \langle \widehat{\bU}_\text{pre}\widehat{\bS}_\text{pre} \widehat{\bV}_\text{pre}^\prime \widetilde{w}^{(n,d)}, \varepsilon_{\text{pre}, n} \rangle.
\end{align}
}
Note that $\varepsilon_{\text{pre}, n}$ is independent of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$, and thus also independent of $\widehat{\bU}_\text{pre}, \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}}$.
This follows since both quantities are determined by $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$.
Therefore,
\begin{align}
\mathbb{E}\Big[\langle \mathcal{P}_{\widehat{U}_\text{pre}} \mathbb{E}[Y_{\text{pre},n}], \varepsilon_{\text{pre}, n}\rangle \Big] &= 0, \label{eq:lemma10.5_exp_2}
%
\\ \mathbb{E}\Big[\langle \widehat{\bU}_\text{pre}\widehat{\bS}_\text{pre} \widehat{\bV}_\text{pre}^\prime \widetilde{w}^{(n,d)}, \varepsilon_{\text{pre}, n} \rangle \Big] &= 0. \label{eq:lemma10.5_exp_3}
\end{align}
Moreover, using the cyclic property of the trace operator, we obtain
{\small
\begin{align}
&\mathbb{E}\big[\langle \mathcal{P}_{\widehat{U}_\text{pre}} \varepsilon_{\text{pre}, n}, \varepsilon_{\text{pre}, n} \rangle\big]
= \mathbb{E}\big[ \varepsilon_{\text{pre}, n}^\prime \mathcal{P}_{\widehat{U}_\text{pre}} \varepsilon_{\text{pre}, n} \big]
= \mathbb{E} \big[\tr(\varepsilon_{\text{pre}, n}^\prime \mathcal{P}_{\widehat{U}_\text{pre}} \varepsilon_{\text{pre}, n})\big]
\\ &= \mathbb{E}\big[\tr(\varepsilon_{\text{pre}, n} \varepsilon_{\text{pre}, n}^\prime \mathcal{P}_{\widehat{U}_\text{pre}})\big]
= \tr(\mathbb{E}\big[\varepsilon_{\text{pre}, n} \varepsilon_{\text{pre}, n}^\prime\big] \mathcal{P}_{\widehat{U}_\text{pre}})
= \tr(\sigma^2 \mathcal{P}_{\widehat{U}_\text{pre}})
= \sigma^2 \|\widehat{\bU}_\text{pre}\|_F^2 = \sigma^2 r_\text{pre}.
\label{eq:thm1.lem3.0}
\end{align}
}
Note that the above also uses (i) the mean zero and coordinate-wise independence of $\varepsilon_{\text{pre}, n}$; (ii) the orthonormality of the columns of $\widehat{\bU}_\text{pre} \in \mathbb{R}^{T_0 \times r_\text{pre}}$.
Therefore, it follows that
\begin{align}\label{eq:thm1.lem3.1}
\mathbb{E} [\langle \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}), \varepsilon_{\text{pre}, n}\rangle ] & = \sigma^2 r_\text{pre}.
\end{align}
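As a sanity check on \eqref{eq:thm1.lem3.0}, the Monte Carlo sketch below (ours; as in the proof, the noise is drawn independently of a fixed orthonormal $\widehat{\bU}_\text{pre}$) confirms that the quadratic form concentrates around $\sigma^2 r_\text{pre}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
T0, r_pre, sigma = 60, 5, 0.7
# Fixed orthonormal U_hat (T0 x r_pre), independent of the noise below.
U_hat, _ = np.linalg.qr(rng.standard_normal((T0, r_pre)))
P = U_hat @ U_hat.T

# Monte Carlo estimate of E[eps' P eps] over fresh noise draws.
eps = sigma * rng.standard_normal((100_000, T0))
quad = ((eps @ P) * eps).sum(axis=1)
print(quad.mean(), "vs", sigma**2 * r_pre)  # ~ 0.49 * 5 = 2.45
\end{verbatim}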
Next, to obtain high probability bounds for the inner product term, we use the following lemmas, the proofs of which can be found in Appendix A of \cite{pcr_aos}.
\begin{lemma} [Modified Hoeffding's Lemma] \label{lemma:hoeffding_random}
Let $X \in \mathbb{R}^n$ be a random vector with independent mean-zero sub-Gaussian coordinates with $\| X_i \|_{\psi_2} \le K$.
Let $a \in \mathbb{R}^n$ be another random vector that satisfies $\|a\|_2 \le b$ for some constant $b \ge 0$.
Then for any $t \ge 0$,
$
\mathbb{P} \Big( \Big| \sum_{i=1}^n a_i X_i\Big| \ge t \Big) \le 2 \exp\Big(-\frac{ct^2}{K^2 b^2} \Big),
$
where $c > 0$ is a universal constant.
\end{lemma}
\begin{lemma} [Modified Hanson-Wright Inequality] \label{lemma:hansonwright_random}
Let $X \in \mathbb{R}^n$ be a random vector with independent mean-zero sub-Gaussian coordinates with $\|X_i\|_{\psi_2} \le K$.
Let $\boldsymbol{A} \in \mathbb{R}^{n \times n}$ be a random matrix satisfying $\|\boldsymbol{A}\|_{\emph{op}} \le a$ and $\|\boldsymbol{A}\|_F^2 \, \le b$ for some $a, b \ge 0$.
Then for any $t \ge 0$,
\begin{align*}
\mathbb{P} \left( \abs{ X^T \boldsymbol{A} X - \mathbb{E}[X^T \boldsymbol{A} X] } \ge t \right) &\le 2 \exp \Big( -c \min\Big(\frac{t^2}{K^4 b}, \frac{t}{K^2 a} \Big) \Big).
\end{align*}
\end{lemma}
Using Lemma \ref{lemma:hoeffding_random} and \eqref{eq:lemma10.5_exp_2}, and Assumptions \ref{assumption:form} and \ref{assumption:noise}, it follows that for any $t > 0$
\begin{align}\label{eq:lem4.1}
\mathbb{P}\left( \langle \mathcal{P}_{\widehat{U}_\text{pre}} \mathbb{E}[Y_{\text{pre},n}], \varepsilon_{\text{pre}, n} \rangle \geq t \right) & \leq \exp\Big( - \frac{c t^2}{T_0 \sigma^2 } \Big).
\end{align}
Note that the above also uses
$
\| \mathcal{P}_{\widehat{U}_\text{pre}} \mathbb{E}[Y_{\text{pre}, n}] \|_2
\le \| \mathbb{E}[Y_{\text{pre}, n}] \|_2
\le \sqrt{T_0},
$
which follows from the fact that $\|\mathcal{P}_{\widehat{U}_\text{pre}}\|_\text{op} \le 1$ and Assumption \ref{assumption:boundedness}.
Further, \eqref{eq:lem4.1} implies
\begin{align}\label{eq:lem4.1_Op}
\langle \mathcal{P}_{\widehat{U}_\text{pre}} \mathbb{E}[Y_{\text{pre},n}], \varepsilon_{\text{pre}, n} \rangle = O_p(\sqrt{T_0}).
\end{align}
Similarly, using \eqref{eq:lemma10.5_exp_3}, we have for any $t > 0$
{\small
\begin{align}\label{eq:lem4.2}
\mathbb{P}\left( \langle \widehat{\bU}_\text{pre}\widehat{\bS}_\text{pre} \widehat{\bV}_\text{pre}^\prime\widetilde{w}^{(n,d)}, \varepsilon_{\text{pre}, n} \rangle \geq t\right)
\leq \exp\left( - \frac{c t^2}{\sigma^2 (T_0 +\| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]\|^2_{2,\infty} \|\widetilde{w}^{(n,d)}\|_1^2) } \right),
\end{align}
}
where we use the fact that
{\small
\begin{align}
&\| \widehat{\bU}_\text{pre}\widehat{\bS}_\text{pre} \widehat{\bV}_\text{pre}^\prime\widetilde{w}^{(n,d)}\|_2
= \| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widetilde{w}^{(n,d)} \pm \mathbb{E}[Y_{\text{pre},n}] \|_2
\\ & = \| (\boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}])\widetilde{w}^{(n,d)} + \mathbb{E}[Y_{\text{pre},n}]\|_2
\\ &\leq \| (\boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]) \widetilde{w}^{(n,d)}\|_2 + \|\mathbb{E}[Y_{\text{pre}, n}] \|_2 \nonumber \\
& \leq \| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]\|_{2,\infty} \| \widetilde{w}^{(n,d)}\|_1 + \sqrt{T_0}.
\end{align}
}
In the inequalities above,
we use $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] \widetilde{w}^{(n,d)} = \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]w^{(n,d)} = \mathbb{E}[Y_{\text{pre},n}]$, which follows from \eqref{eq:theta.0} and \eqref{eq:2.inf.ineq}.
Then, \eqref{eq:lem4.2} implies
{\small
\begin{align}\label{eq:lem4.2_Op}
\langle \widehat{\bU}_\text{pre}\widehat{\bS}_\text{pre} \widehat{\bV}_\text{pre}^\prime\widetilde{w}^{(n,d)}, \varepsilon_{\text{pre}, n} \rangle
= O_p\left(
\sqrt{T_0} + \| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]\|_{2,\infty} \| \widetilde{w}^{(n,d)}\|_1
\right).
\end{align}
}
Finally, using Lemma \ref{lemma:hansonwright_random} and \eqref{eq:thm1.lem3.1}, along with Assumptions \ref{assumption:form} and \ref{assumption:noise}, we have for any $t > 0$
\begin{align}\label{eq:lem4.3}
\mathbb{P}\left( \langle \mathcal{P}_{\widehat{U}_\text{pre}} \varepsilon_{\text{pre},n}, \varepsilon_{\text{pre},n}\rangle \geq \sigma^2 r_\text{pre} + t\right) & \leq \exp\Big( - c \min\Big(\frac{t^2}{\sigma^4 r_\text{pre}}, \frac{t}{\sigma^2}\Big)\Big),
\end{align}
where we have used $\|\mathcal{P}_{\widehat{U}_\text{pre}}\|_\text{op} \le 1$ and $\|\mathcal{P}_{\widehat{U}_\text{pre}}\|^2_F = r_\text{pre}$.
Then, \eqref{eq:lem4.3} implies
\begin{align}\label{eq:lem4.3_Op}
\langle \mathcal{P}_{\widehat{U}_\text{pre}} \varepsilon_{\text{pre},n}, \varepsilon_{\text{pre},n}\rangle = O_p\left( 1 \right).
\end{align}
From \eqref{eq:lem4.0}, \eqref{eq:lem4.1_Op}, \eqref{eq:lem4.2_Op}, and \eqref{eq:lem4.3_Op}, we conclude that
$$
\langle \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} (\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)}), \varepsilon_{\text{pre},n} \rangle
= O_p\left(
\sqrt{T_0} + \| \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} - \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]\|_{2,\infty} \| \widetilde{w}^{(n,d)}\|_1
\right).
$$
\section{Causal Framework \& Identification}\label{sec:causal.framework}
In this section, we present the SI causal framework, formally introduce the causal parameter of interest, and establish an identification result for this parameter.
\subsection{Causal Assumptions} \label{sec:causal_assumptions}
Below, we state our main causal assumptions.
The key structural assumption is that the potential outcomes follow a latent factor model.
\begin{assumption}[Latent factor model]\label{assumption:form}
For all $(t,n,d)$,
\begin{align} \label{eq:form.0}
Y_{tn}^{(d)} = \langle u^{(d)}_t, v_n \rangle + \varepsilon_{tn}.
\end{align}
Here,
the r.v. $u^{(d)}_t \in \mathbb{R}^r$ is a latent factor specific to a particular time $t$ and intervention $d$;
the r.v. $v_n \in \mathbb{R}^r$ is a latent factor specific to unit $n$ (i.e., factor loading);
$r \ge 1$ is the dimension of the latent space;
and $\varepsilon_{tn} \in \mathbb{R}$ is the idiosyncratic shock for unit $n$ at time $t$.
\end{assumption}
\noindent
We stress that the latent factors $\{u^{(d)}_t, v_n\}$ are {\em unobserved}.
Further, $v_n$ is {\em specific} to unit $n$, i.e., each unit can have a different latent factor.
An analogous statement holds for $u^{(d)}_t$ with respect to time $t$ and intervention $d$.
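For concreteness, a minimal simulation sketch (ours, with illustrative dimensions) of data drawn from the factor model \eqref{eq:form.0}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
T, N, D, r, sigma = 20, 10, 3, 4, 0.5

# Latent time-intervention factors u_t^{(d)} and unit factors v_n.
u = rng.standard_normal((D, T, r))   # u[d, t] is u_t^{(d)}
v = rng.standard_normal((N, r))      # v[n] is v_n

# Potential outcomes Y_{tn}^{(d)} = <u_t^{(d)}, v_n> + eps_{tn};
# note the shock eps_{tn} does not depend on d.
eps = sigma * rng.standard_normal((T, N))
Y = np.einsum("dtr,nr->dtn", u, v) + eps     # shape (D, T, N)
\end{verbatim}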
\begin{assumption} [Linear span inclusion] \label{assumption:linear}
Given unit-intervention pair $(n,d)$ and conditioned on $\mathcal{D}$,
$
v_n \in \emph{span}(\{v_j: j \in \mathcal{I}^{(d)}\}),
$
i.e., there exists $w^{(n,d)} \in \mathbb{R}^{N_d}$ such that $ v_{n} = \sum_{j \in \mathcal{I}^{(d)}} w^{(n,d)}_j v_{j}$.
\end{assumption}
\noindent{\em Assumption~\ref{assumption:form} ``implies'' Assumption~\ref{assumption:linear}. }
Given Assumption \ref{assumption:form}, we note Assumption \ref{assumption:linear} is rather weak.
Consider the matrix of unit latent factors $[v_i: i \in \mathcal{I}^{(d)} \cup \{n\} ] \in \mathbb{R}^{(N_d + 1) \times r}$, where we recall $v_i$ are $r$-dimensional.
If $r = o(N_d)$, then by definition of matrix rank, Assumption \ref{assumption:linear} almost always holds.
In the worst case, Assumption \ref{assumption:linear} fails for at most $r - 1$ units.
Practically, this requires that the intervention assignment $\mathcal{D}$ is such that sufficiently many units undergo intervention $d$, and that these units are collectively ``diverse'' enough.
\begin{assumption} [Conditional mean independence]\label{assumption:conditional_mean_zero}
For all $(t,n)$, conditioned on a particular sampling of the latent time-intervention and unit factors, $\varepsilon_{tn}$ is mean independent of $\mathcal{D}$.
Equivalently,
$\mathbb{E}\left[\varepsilon_{tn} | \cLF \right] = \mathbb{E}\left[\varepsilon_{tn} | \mathcal{E} \right] = 0, $
where
\begin{align}\label{eq:conditioning}
\cLF &:= \Big\{u^{(d)}_t, v_n : t \in [T], n \in [N], d \in \{0, \dots, D-1\} \Big\} \quad \text{and} \quad
\mathcal{E} := \left\{ \cLF, \mathcal{D} \right\}.
\end{align}
\end{assumption}
\noindent{\em An example. } Consider the following conditions:
(i) $\mathbb{E}[\varepsilon_{tn}| \cLF] = 0$;
(ii) $\varepsilon_{tn} \independent \mathcal{D} ~|~ \cLF$.
Together, they imply Assumption \ref{assumption:conditional_mean_zero}, i.e., Assumption \ref{assumption:conditional_mean_zero} is weaker.
We discuss these two conditions on $\varepsilon_{tn}$ as they are arguably more interpretable than Assumption \ref{assumption:conditional_mean_zero}.
Since (i) is self-explanatory, we focus on (ii) and compare it with a standard analogous assumption in the literature known as ``selection on observables''.
Conditioned on observable covariates corresponding to time, units, and/or interventions, ``selection on observables'' assumes that the potential outcomes are independent of the treatment assignments.
Many identification arguments crucially rely on this conditional independence.
In our setting, given Assumption \ref{assumption:form} and conditioned on the latent factors $\cLF$, the only remaining randomness in the potential outcome $Y^{(d)}_{tn}$ is due to $\varepsilon_{tn}$.
Hence, (ii) implies that conditioned on the latent factors, the potential outcomes are independent of $\mathcal{D}$.
As such, these latent factors can be thought of as ``unobserved covariates'' and (ii) can be interpreted as ``selection on latent factors''.
\vspace{10pt}
\noindent {\em Summary. }
Assumptions \ref{assumption:sutva} to \ref{assumption:conditional_mean_zero} suggest that the observations are generated as follows:
\vspace{5pt}
\begin{enumerate}
\item Sample $\mathcal{E}$.
%
We stress that the latent factors and treatment assignments can be mutually dependent.
%
That is, we do not make any assumptions about the structure of their dependencies.
%
However, we operate under the key assumption that for each $(n,d)$ pair of interest, the intervention assignment $\mathcal{D}$ is such that the set $\{v_j :j \in \mathcal{I}^{(d)}\}$ is ``expressive'' enough, i.e., $v_n$ lies within the linear span of this set (Assumption \ref{assumption:linear}).
%
\item Sample $\varepsilon_{tn}$ across $(t,n)$, where
$\varepsilon_{tn}$ is mean independent of $\mathcal{D}$ (Assumption \ref{assumption:conditional_mean_zero}).
\end{enumerate}
\subsection{Target Causal Parameter \& Identification}\label{sec:identification_proof}
Below, we formally define our target causal parameter $\theta^{(d)}_n$, and then state the key identification result in Theorem \ref{thm:identification}.
\vspace{7.5pt}
\noindent{\em Target causal parameter.}
For each $(n,d)$, we are interested in identifying and estimating
\begin{align}\label{eq:target_causal_parameter}
\theta^{(d)}_n := \frac{1}{T_1} \sum_{t > T_0} \mathbb{E}\left[ Y^{(d)}_{tn} ~\Big|~ \{u^{(d)}_t, v_n: t > T_0\} \right],
\end{align}
i.e., unit $n$'s expected potential outcomes under intervention $d$, averaged over the post-intervention period.
The expectation in \eqref{eq:target_causal_parameter} is taken over $\varepsilon_{tn}$ for $t > T_0$, and is conditioned on
unit $n$'s specific latent factor $v_n$
and the time-intervention latent factors $\{u^{(d)}_t: t > T_0\}$.
\vspace{7.5pt}
\noindent{\em Identification. } We establish that for each $(n,d)$ pair, given knowledge of $w^{(n,d)}$, it is feasible to identify the causal parameter $\theta^{(d)}_n$ under Assumptions \ref{assumption:sutva} to \ref{assumption:conditional_mean_zero}.
In fact, we prove a stronger result---the identification of $\mathbb{E}[ Y^{(d)}_{tn} ~|~ u^{(d)}_t, v_n ]$ for any $t$.
\smallskip
\begin{thm}\label{thm:identification}
For a given unit-intervention pair $(n,d)$, let Assumptions \ref{assumption:sutva} to \ref{assumption:conditional_mean_zero} hold.
Then, given knowledge of $w^{(n,d)}$, we have
\begin{align}
&\mathbb{E} [ Y^{(d)}_{tn} \big| u^{(d)}_t, v_n ] = \sum_{j \in \mathcal{I}^{(d)}} w_j^{(n,d)} \mathbb{E}\left[ Y_{tj} | \mathcal{E} \right] ~~\text{for all } t, \label{eq:identification_strong}
\\ &\theta^{(d)}_n = \frac{1}{T_1} \sum_{t > T_0}\sum_{j \in \mathcal{I}^{(d)}} w_j^{(n,d)} \mathbb{E}\left[ Y_{tj} | \mathcal{E} \right] \label{eq:identification}.
\end{align}
\end{thm}
As a consequence of Theorem \ref{thm:identification}, we have
$\mathbb{E}[Y^{(d)}_{tn}] = \mathbb{E} [ \sum_{j \in \mathcal{I}^{(d)}} w^{(n,d)}_j Y_{tj} ]$,
where the expectation is now taken over $\varepsilon_{tn}$ and $\mathcal{E}$, i.e., we no longer condition on the latent factors and the treatment assignment.
This relation follows from \eqref{eq:identification_strong} and the tower law for expectations, i.e., for any two r.v.'s $A$ and $B$, we have $\mathbb{E}[A] = \mathbb{E}[ \mathbb{E}[A | B]]$.
\section{Estimation and Inference of Causal Parameter} \label{sec:results}
Below, we establish the statistical accuracy\footnote{We remark that all log factors within our results can be removed with careful analysis.}
of our estimate $\widehat{\theta}_n^{(d)}$.
Section \ref{sec:results_assumptions} lists additional assumptions needed to establish our results.
In Section \ref{sec:model_learning}, we bound the estimation error of the model parameter $w^{(n,d)}$.
In Sections \ref{sec:consistency} and \ref{sec:normality_results}, we establish consistency and asymptotic normality of our estimate $\widehat{\theta}^{(d)}_n$, respectively.
Notably, in the specialized case where $d=0$, our results contribute new inference guarantees to the SC literature as well.
\vspace{5pt}
\noindent
{\em Additional notation. }
For any vector $v \in \mathbb{R}^a$, let $C(\|v\|_p) = \max\{1, \|v\|_p\}$, where $\|v\|_p$ denotes its $\ell_p$-norm.
To simplify notation, we will henceforth absorb dependencies on $r$ and $\sigma$ into the constant within $O_p$, which is defined in Appendix \ref{sec:notation}.
\vspace{-2pt}
\subsection{Additional Assumptions for the SI Estimator}\label{sec:results_assumptions}
We state additional assumptions required to establish guarantees for our estimation procedure.
Strictly speaking, these assumptions are not needed to establish identification of $\theta^{(d)}_n$ (Theorem \ref{thm:identification}), but are needed to establish guarantees for the estimator of Section \ref{sec:si_estimator} that produces $\widehat{\theta}_n^{(d)}$.
These additional assumptions are in some sense ``context-specific'', i.e., they depend on the assumptions made on $\varepsilon_{tn}$ and the procedure chosen to estimate $w^{(n,d)}$.
Since Assumptions \ref{assumption:noise} and \ref{assumption:boundedness} are standard and self-explanatory, we focus our attention on interpreting and justifying Assumptions \ref{assumption:spectra} and \ref{assumption:subspace} below.
Towards this, recall the definition of $\mathcal{E}$, given in \eqref{eq:conditioning}.
\vspace{-2pt}
\begin{assumption} [Sub-Gaussian shocks] \label{assumption:noise}
Conditioned on $\mathcal{E}$,
$\varepsilon_{tn}$ are independent sub-Gaussian r.v.s with $\mathbb{E}[\varepsilon_{tn}^2] = \sigma^2$ and $\|\varepsilon_{tn}\|_{\psi_2} \le C\sigma$ for some constant $C > 0$.
\end{assumption}
\begin{assumption} [Bounded support] \label{assumption:boundedness}
Conditioned on $\mathcal{E}$,
$\mathbb{E}[Y_{tn}^{(d)}] \in [-1,1]$.\footnote{The precise bound $[-1,1]$ is without loss of generality, i.e., it can be extended to $[a, b]$ for $a, b \in \mathbb{R}$ with $a \le b$.}
\end{assumption}
\begin{assumption} [Well-balanced spectra] \label{assumption:spectra}
Conditioned on $\mathcal{E}$ and given $\mathcal{I}^{(d)}$, the $r_{\emph{pre}}$ nonzero singular values $s_i$ of $\mathbb{E}[\boldsymbol{Y}_{\emph{pre}, \mathcal{I}^{(d)}}]$ are well-balanced, i.e., $s_i^2 = \Theta(T_0 N_d / r_{\emph{pre}})$.
\end{assumption}
\begin{assumption} [Subspace inclusion] \label{assumption:subspace}
Conditioned on $\mathcal{E}$ and given $\mathcal{I}^{(d)}$, the rowspace of $\mathbb{E}[\boldsymbol{Y}_{\emph{post}, \mathcal{I}^{(d)}}]$ lies within that of $\mathbb{E}[\boldsymbol{Y}_{\emph{pre}, \mathcal{I}^{(d)}}]$, i.e.,
$ \Rc(\mathbb{E}[\boldsymbol{Y}^\prime_{\emph{post}, \mathcal{I}^{(d)}}])
\subseteq \Rc(\mathbb{E}[\boldsymbol{Y}^\prime_{\emph{pre}, \mathcal{I}^{(d)}}])$.
\end{assumption}
Note $\Theta(\cdot)$ and $\Rc(\cdot)$ used in the Assumptions above are defined in Appendix \ref{sec:notation}.
\vspace{5pt}
\noindent{\em Interpretation of Assumption \ref{assumption:spectra}. }
Assumption \ref{assumption:spectra} requires that the nonzero singular values of the latent matrix, $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$, are well-balanced.
A natural setting in which Assumption \ref{assumption:spectra} holds is when the entries satisfy $\mathbb{E}[Y^{(d)}_{tn}]=\Theta(1)$ and the nonzero singular values satisfy $s^2_i = \Theta(\zeta)$ for some $\zeta$.
Then,
$$
\Theta( T_0 N_d) = \|\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}] \|_F^2 = \sum^{r_{\text{pre}}}_{i = 1} s^2_i = \Theta(\zeta r_{\text{pre}}) \implies \zeta = \Theta(T_0 N_d / r_{\text{pre}}).
$$
In effect, Assumption \ref{assumption:spectra}, or analogous versions to it, are pervasive across many fields.
Within the econometrics literature on factor models and the matrix completion literature, Assumption \ref{assumption:spectra} is analogous to incoherence-style conditions, e.g., Assumption A of \cite{bai2020matrix},
and is also closely related to the notion of pervasiveness (see Proposition 3.2 of \cite{fan2017ellinfty}).
Additionally, Assumption \ref{assumption:spectra} has been shown to hold w.h.p. for the canonical probabilistic generating process used to analyze probabilistic principal component analysis in \cite{bayesianpca} and \cite{probpca};
here, the observations are assumed to be a high-dimensional embedding of a low-rank matrix with independent sub-Gaussian entries (see Proposition 4.2 of \cite{agarwal2020robustness}).
Below, Lemma \ref{lemma:singular_values} further establishes that under Assumption \ref{assumption:noise}, the singular values of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$ and $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$ must be close.
\begin{lemma} \label{lemma:singular_values}
Let Assumptions \ref{assumption:sutva}, \ref{assumption:form}, \ref{assumption:conditional_mean_zero}, \ref{assumption:noise} hold.
Then conditioned on $\mathcal{E}$, for any $t >0$ and $i \le \min\{T_0, N_d\}$,
$
\abs{s_i - \widehat{s}_i} \le C\sigma (\sqrt{T_0} + \sqrt{N_d} + t)
$
with probability at least $1-2\exp(-t^2)$,
where $C>0$ is an absolute constant.
\end{lemma}
Hence, Assumption \ref{assumption:spectra} can be empirically assessed by simply inspecting the spectra of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$; refer to Figure \ref{fig:elbow} for a graphical depiction of the typical spectra of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$ and the existence of a natural elbow point.
Further, we provide a data-driven estimator for $\sigma$ in \eqref{eq:hat_sigma}, and establish its consistency in Lemma \ref{lemma:variance_estimate}.
Lastly, we highlight that the assumption of a gap between the top $r_\text{pre}$ singular values of the population matrix $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$ and the remaining singular values has been widely adopted in the econometrics literature on large dimensional factor analysis, dating back to \cite{chamberlainfactor}.
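To illustrate the kind of spectral inspection described above, the sketch below (ours; the rank and noise level are illustrative) generates a noisy low-rank matrix and prints its leading singular values, which exhibit the elbow predicted by Lemma \ref{lemma:singular_values}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
T0, Nd, r_pre, sigma = 200, 100, 5, 1.0
signal = rng.standard_normal((T0, r_pre)) @ rng.standard_normal((r_pre, Nd))
Y_pre = signal + sigma * rng.standard_normal((T0, Nd))

s = np.linalg.svd(Y_pre, compute_uv=False)
print(np.round(s[: 2 * r_pre], 1))
# The top r_pre values are of order sqrt(T0 * Nd / r_pre); the remaining
# ones are of order sigma * (sqrt(T0) + sqrt(Nd)), hence the elbow.
\end{verbatim}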
\vspace{10pt}
\noindent{\em Interpretation of Assumption \ref{assumption:subspace}. }
In the SI framework, potential outcomes from different interventions are likely to arise from different distributions.
Under Assumption \ref{assumption:form}, this translates to the latent time-intervention factors $u^{(d_1)}_t$ and $u^{(d_2)}_t$ for $d_1 \neq d_2$ being sampled from different distributions.
This distribution shift presents a challenge in establishing guarantees for estimating $\theta_n^{(d)}$ as it invalidates the traditional i.i.d. assumptions between the pre- and post-intervention outcomes (i.e., train and test sets) required for
standard generalization error analyses.
Indeed, learning with distribution shifts is an active area of research in causal inference and beyond (e.g., transfer learning).
Notably, this issue does not arise in the standard SC framework since the target causal parameter is $\theta^{(0)}_n$, and hence the model $\widehat{w}^{(n,0)}$ is both learnt and applied on outcomes under control (i.e., same distribution).
To accurately estimate $\theta^{(d)}_n$ despite the possible distribution shift, we show that Assumption \ref{assumption:subspace} suffices.
In words, we require that the rowspace of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$ is sufficiently ``rich'' relative to that of $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}]$.
We highlight that Assumption \ref{assumption:subspace} is a {\em purely linear algebraic} condition, i.e., no distributional assumptions are made on $v_n$ and $u^{(d)}_t$.
If we do assume that $u^{(d)}_t$ are sampled independently from an intervention-specific distribution, then one can verify that Assumption \ref{assumption:subspace} holds w.h.p.,
indicating that it is not a particularly stringent condition.
As we will see, Assumption \ref{assumption:subspace} is the key condition that allows $\widehat{w}^{(n,d)}$, learnt during the pre-intervention (training) period, to ``generalize'' to the post-intervention (testing) period.
In Section \ref{sec:empirics}, we perform simulations to show that $\widehat{\theta}^{(d)}_n$ is an accurate estimate of $\theta^{(d)}_n$ even if the pre- and post-intervention outcomes come from different distributions, provided Assumption \ref{assumption:subspace} holds; if it does not hold, then $\widehat{\theta}^{(d)}_n$ is non-trivially biased.
Given its importance, in Section \ref{sec:hypo}, we provide a data-driven hypothesis test with provable guarantees to validate when Assumption \ref{assumption:subspace} holds in practice.
We believe this hypothesis test serves as an additional robustness check for the SC framework as well.
\vspace{-2pt}
\subsection{Model Learning}\label{sec:model_learning}
We now show how Assumption \ref{assumption:subspace} enables the SI estimator to accurately learn the model parameter $w^{(n,d)}$.
First, Lemma \ref{lemma:theta} establishes that $\theta^{(d)}_n$ can be expressed via $\widetilde{w}^{(n,d)} := \boldsymbol{V}_\text{pre} \boldsymbol{V}^\prime_\text{pre} w^{(n,d)}$, where $\boldsymbol{V}_\text{pre}$ are the right singular vectors of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$.
\begin{lemma} \label{lemma:theta}
For a given unit-intervention pair $(n,d)$, let the setup of Theorem \ref{thm:identification} and Assumption \ref{assumption:subspace} hold. %
Then, $\theta^{(d)}_n = \frac{1}{T_1} \sum_{t > T_0} \sum_{j \in \mathcal{I}^{(d)}}
\widetilde{w}_j^{(n,d)} \mathbb{E}[Y_{tj} | \mathcal{E}].$
\end{lemma}
Next, we establish that $\widehat{w}^{(n,d)}$, given by \eqref{eq:pcr}, is a consistent estimate of $\widetilde{w}^{(n,d)}$.
\begin{lemma} [Corollary 4.1 of \cite{pcr_aos}] \label{lemma:param_est}
For a given unit-intervention pair $(n,d)$, let Assumptions \ref{assumption:sutva} to \ref{assumption:subspace} hold.
Further, suppose $k = \rank(\mathbb{E}[\boldsymbol{Y}_{\emph{pre}, \mathcal{I}^{(d)}}])$, where $k$ is defined as in \eqref{eq:pcr}.
Then, conditioned on $\mathcal{E}$,
\begin{align} \label{eq:param_est_lem}
\widehat{w}^{(n,d)} - \widetilde{w}^{(n,d)} &= O_p\left( \frac{C(\| \widetilde{w}^{(n,d)}\|_2)\sqrt{\log(T_0 N_d)}}{\min\{\sqrt{T_0}, \sqrt{N_d}\}} \right).
\end{align}
\end{lemma}
Recall the definitions of $O_p$ and $C(\cdot)$ at the start of this section.
\subsection{Finite-Sample Consistency}\label{sec:consistency}
The following finite-sample guarantee establishes that the estimator described in Section \ref{sec:algo} yields a consistent estimate of the causal parameter for a given unit-intervention pair.
\begin{thm} \label{thm:consistency}
For a given unit-intervention pair $(n,d)$,
let Assumptions \ref{assumption:sutva} to \ref{assumption:subspace} hold.
Suppose $k = \rank(\mathbb{E}[\boldsymbol{Y}_{\emph{pre}, \mathcal{I}^{(d)}}])$, where $k$ is defined as in \eqref{eq:pcr}.
Then, conditioned on $\mathcal{E}$,
$$
\widehat{\theta}^{(d)}_n - \theta^{(d)}_n
= O_p \left(
%
\frac{1}{T_0^{1/4}}
%
+ \frac{\| \widetilde{w}^{(n,d)} \|_2}{ \sqrt{T_1}}
%
+ \frac{C(\| \widetilde{w}^{(n,d)}\|_1) \sqrt{\log(T_0 N_d)}}{\min\{\sqrt{T_0}, \sqrt{N_d} \}} \right).
$$
\end{thm}
\subsection{Asymptotic Normality} \label{sec:normality_results}
Below, we establish that our estimate is asymptotically normal around the true causal parameter.
We then show how to use this result to construct confidence intervals.
\begin{thm} \label{thm:normality}
For a given unit-intervention pair $(n,d)$, let the setup of Theorem \ref{thm:consistency} hold.
Suppose (i) $T_0, T_1, N_d \to \infty$; (ii)
$(C(\| \widetilde{w}^{(n,d)}\|_2))^2 \log(T_0 N_d) = o(\min \{T_0, N_d \})$;
(iii)
\begin{align} \label{eq:stupid_t1}
&T_1
= o\Bigg(
%
\min\Bigg\{
%
\sqrt{T_0},
%
\frac{\min\left\{T_0, N_d \right\}}{ \big(C(\| \widetilde{w}^{(n,d)}\|_1)\big)^2 \log(T_0 N_d)} \Bigg\}
\Bigg).
\end{align}
Then, conditioned on $\mathcal{E}$, $\sqrt{T_1} (\widehat{\theta}_n^{(d)} - \theta_n^{(d)}) ~\xrightarrow{d}~ \mathcal{N} \left(0, \sigma^2 \emph{plim} \|\widetilde{w}^{(n,d)}\|_2^2 \right). $
\end{thm}
\noindent{\em Interpretation. }
Ignoring dependencies on log factors and $\widetilde{w}^{(n,d)}$, Theorem \ref{thm:normality} establishes that if $T_1 = o(\min\{\sqrt{T_0}, N_d \})$, then $\widehat{\theta}_n^{(d)}$ is asymptotically normally distributed around $\theta_n^{(d)}$.
Effectively, this condition prevents the number of post-intervention (testing) measurements $T_1$ from growing too quickly with respect to the number of pre-intervention (training) measurements $T_0$ and the size of the donor subgroup $N_d$.
We note that the asymptotic variance scales with $\sigma^2$ and $\|\widetilde{w}^{(n,d)}\|_2^2$.
Recall that the former represents the variance of $\varepsilon_{tn}$ across all $(t,n)$, while the latter measures the size of the projection of $w^{(n,d)}$ onto the rowspace of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$.
Thus, the asymptotic variance scales with the underlying ``model complexity'' of the latent factors, given by $\rank(\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}])$, which can be much smaller than the ambient dimension $N_d$.
\subsubsection{Constructing Confidence Intervals}
To construct confidence intervals, we require an estimate for the asymptotic variance.
Consistent with standard practice, we estimate $\sigma^2$ using the pre-intervention error:
\begin{align}\label{eq:hat_sigma}
\widehat{\sigma}^2 = \frac{1}{T_0} \| Y_{\text{pre}, n} - \boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} \|_2^2.
\end{align}
We justify this estimator through the following lemma.
\begin{lemma} \label{lemma:variance_estimate}
Let Assumptions \ref{assumption:sutva} to \ref{assumption:spectra} hold.
Suppose $k = \rank(\mathbb{E}[\boldsymbol{Y}_{\emph{pre}, \mathcal{I}^{(d)}}])$, where $k$ is defined as in \eqref{eq:pcr}.
Then conditioned on $\mathcal{E}$,
\begin{align}
\widehat{\sigma}^2 - \sigma^2 &= O_p\left( \frac{1}{\sqrt{T_0}}
+ \frac{\| \widetilde{w}^{(n,d)} \|^2_1 \log(T_0 N_d)}{\min\{T_0, N_d\}} \right).
\end{align}
\end{lemma}
Recall from Lemma \ref{lemma:param_est} that we can estimate $\|\widetilde{w}^{(n,d)}\|_2$ from $\|\widehat{w}^{(n,d)}\|_2$.
Hence, $\widehat{\sigma} \|\widehat{w}^{(n,d)}\|_2$ is an accurate estimate of $\sigma \|\widetilde{w}^{(n,d)}\|_2$.
Coupling this estimator with Theorem \ref{thm:normality}, we can then create confidence intervals for the causal parameter in a straightforward manner.
For example, a $95\%$ confidence interval is given by
\begin{align}\label{eq:finite_sample_CI}
\theta_n^{(d)} \in \left[ \widehat{\theta}_n^{(d)} \pm \frac{1.96 \cdot \widehat{\sigma} \|\widehat{w}^{(n,d)}\|_2}{\sqrt{T_1}} \right].
\end{align}
We empirically assess the ``tightness'' of this confidence interval in Section \ref{sec:empirics_normality}.
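For completeness, a minimal sketch (ours) of how \eqref{eq:hat_sigma} and \eqref{eq:finite_sample_CI} combine in practice, given quantities the estimator already computes:
\begin{verbatim}
import numpy as np

def confidence_interval(theta_hat, w_hat, resid_pre, T1, z=1.96):
    """95% CI from (eq:finite_sample_CI);
    resid_pre is the pre-intervention residual Y_pre_n - Y_pre @ w_hat."""
    sigma_hat = np.sqrt(np.mean(resid_pre ** 2))      # (eq:hat_sigma)
    half = z * sigma_hat * np.linalg.norm(w_hat) / np.sqrt(T1)
    return theta_hat - half, theta_hat + half
\end{verbatim}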
\section{Standard Notation}\label{sec:notation}
For a matrix $\boldsymbol{A} \in \mathbb{R}^{a \times b}$, we denote its transpose as $\boldsymbol{A}' \in \mathbb{R}^{b \times a}$.
We denote the operator (spectral) and Frobenius norms of $\boldsymbol{A}$ as $\|\boldsymbol{A}\|_{\text{op}}$ and $\|\boldsymbol{A}\|_F$, respectively.
The columnspace (or range) of $\boldsymbol{A}$ is the span of its columns, which we denote as
$\Rc(\boldsymbol{A}) = \{v \in \mathbb{R}^a: v = \boldsymbol{A} x, x \in \mathbb{R}^b\}$.
The rowspace of $\boldsymbol{A}$, given by $\Rc(\boldsymbol{A}')$, is the span of its rows.
Recall that the nullspace of $\boldsymbol{A}$
is the set of vectors that are mapped to zero under $\boldsymbol{A}$.
For any vector $v \in \mathbb{R}^a$, let $\|v\|_p$ denote its
$\ell_p$-norm.
Further, we define the inner product between vectors $v, x \in \mathbb{R}^a$ as
$\langle v, x \rangle = \sum_{\ell=1}^a v_\ell x_\ell$.
If $v$ is a random variable, we define its sub-Gaussian (Orlicz) norm as $\|v\|_{\psi_2}$.
Let $[a] = \{1, \dots, a\}$ for any integer $a$.
Let $f$ and $g$ be two functions defined on the same space. We say that $f(n)$ = $O(g(n))$ if and only if
there exists a positive real number $M$ and a real number $n_0$ such that for all $n \ge n_0, |f (n)| \le M|g(n)|$.
Analogously we say:
$f (n) = \Theta(g(n))$ if and only if there exists positive real numbers $m, M$ such that for all $n \ge n_0, \ m|g(n)| \le |f(n)| \le M|g(n)|$;
$f (n) = o(g(n))$ if for any $m > 0$, there exists $n_0$ such that for all
$n \ge n_0, |f(n)| \le m|g(n)|$.
We adopt the standard notations and definitions for stochastic convergences.
As such, we denote $\xrightarrow{d}$ and $\xrightarrow{p}$ as convergences in distribution and probability, respectively.
We will also make use of $O_p$ and $o_p$, which are probabilistic versions of the commonly used deterministic $O$ and $o$ notations.
More formally, for any sequence of random vectors $X_n$, we say $X_n = O_p(a_n)$ if for every $\varepsilon>0$, there exists constants $C_\varepsilon$ and $n_\varepsilon$ such that $\mathbb{P}( \| X_n \|_2 > C_\varepsilon a_n) < \varepsilon$ for every $n \ge n_\varepsilon$;
equivalently, we say $(1/a_n) X_n$ is ``uniformly tight'' or ``bounded in probability''.
Similarly, $X_n = o_p(a_n)$ if for all $\varepsilon, \varepsilon'> 0$, there exists $n_\varepsilon$ such that $\mathbb{P}(\| X_n \|_2 > \varepsilon' a_n) < \varepsilon$ for every $n \ge n_\varepsilon$.
Therefore, $X_n = o_p(1) \iff X_n \xrightarrow{p} 0$.
Additionally, we use the ``plim'' probability limit operator: $\text{plim} \ X_n = a \iff X_n \xrightarrow{p} a$.
We say a sequence of events $\mathcal{E}_n$, indexed by $n$,
holds ``with high probability'' (w.h.p.) if $\mathbb{P}(\mathcal{E}_n) \rightarrow 1$ as $n \rightarrow \infty$, i.e., for any $\varepsilon > 0$, there exists a $n_\varepsilon$ such that for all $n > n_\varepsilon$, $\mathbb{P}(\mathcal{E}_n) > 1 - \varepsilon$. More generally, a multi-indexed
sequence of events $\mathcal{E}_{n_1,\dots, n_d}$, with indices $n_1,\dots, n_d$ with $d \geq 1$, is said to
hold w.h.p. if $\mathbb{P}(\mathcal{E}_{n_1,\dots, n_d}) \rightarrow 1$ as $\min\{n_1,\dots, n_d\} \rightarrow \infty$.
We also use $\mathcal{N}(\mu, \sigma^2)$ to denote a normal or Gaussian distribution with mean $\mu$ and variance $\sigma^2$---we call it {\em standard} normal or Gaussian if $\mu = 0$ and $\sigma^2 = 1$.
\subsection{Helper Lemmas}
\begin{lemma} \label{lemma:trace.1}
Let $\boldsymbol{A}, \boldsymbol{B} \in \mathbb{R}^{n \times n} $ be symmetric PSD matrices. Then, $\tr(\boldsymbol{A} \boldsymbol{B}) \ge 0$.
\end{lemma}
\begin{proof}
Let $\boldsymbol{B}^{1/2}$ denote the square root of $\boldsymbol{B}$. Since $\boldsymbol{A} \succeq 0$, we have
$
\tr(\boldsymbol{A} \boldsymbol{B}) = \tr(\boldsymbol{A} \boldsymbol{B}^{1/2} \boldsymbol{B}^{1/2})
= \tr(\boldsymbol{B}^{1/2} \boldsymbol{A} \boldsymbol{B}^{1/2}) %
= \sum_{i=1}^n (\boldsymbol{B}^{1/2} e_i)^\prime \boldsymbol{A} (\boldsymbol{B}^{1/2} e_i) \ge 0,
$
where $e_i$ denotes the $i$-th standard basis vector.
\end{proof}
\begin{lemma} \label{lemma:trace.2}
If $\boldsymbol{A} \in \mathbb{R}^{n \times n}$ is a symmetric matrix and $\boldsymbol{B} \in \mathbb{R}^{n \times n}$ is a symmetric PSD matrix, then
$
\tr(\boldsymbol{A} \boldsymbol{B}) \le \lambda_{\max}(\boldsymbol{A}) \cdot \tr(\boldsymbol{B}),
$
where $\lambda_{\max}(\boldsymbol{A})$ is the top eigenvalue of $\boldsymbol{A}$.
\end{lemma}
\begin{proof}
Since $\boldsymbol{A}$ is symmetric, it follows that
$\lambda_{\max}(\boldsymbol{A}) \boldsymbol{I} - \boldsymbol{A} \succeq 0$.
As a result, applying Lemma \ref{lemma:trace.1} yields
$
\tr((\lambda_{\max}(\boldsymbol{A}) \boldsymbol{I} - \boldsymbol{A}) \boldsymbol{B}) = \lambda_{\max}(\boldsymbol{A}) \cdot \tr(\boldsymbol{B}) - \tr(\boldsymbol{A} \boldsymbol{B}) \ge 0.
$
\end{proof}
\section{Empirics}\label{sec:empirics}
We consider a case study exploring the effect of different discount strategies on user engagement, in an A/B testing framework for a large e-commerce company, which we anonymize due to privacy considerations.
Our results suggest the SI causal framework offers a useful perspective towards designing {\em data-efficient} A/B testing or randomized control trial (RCT).
In particular, the SI framework allows one to selectively run a small number of experiments, yet estimate a personalized policy for each unit enrolled in the RCT.
\vspace{7.5pt}
\noindent {\em Background. }
The company segmented its users into 25 groups (units), with approximately $10000$ individual users per group, based on the historical time and money spent by a user on the platform.
The aim of the company was to learn how different discount strategies (interventions) changed the engagement of each of the 25 user groups.
The strategies were $10\%$, $30\%$, and $50\%$ discounts over the regular subscription cost (control).
The A/B test was performed by {\em randomly} partitioning users in each of the 25 user groups into 4 subgroups; these subgroups corresponded to either one of the 3 discount strategies or a control group that received a 0\% discount.
User engagement in these 100 subgroups (25 user groups multiplied by 4 discount strategies) was measured daily over 8 days.
\vspace{7.5pt}
\noindent
{\em Suitability of case study. }
We highlight that this web A/B testing case study is particularly suited to validate the SI framework as we observe the engagement levels of each customer group under each of the three discounts and control, i.e., for each customer group, we observe all four ``counterfactual'' trajectories.
As such, we have access to all potential outcomes, which we encode into a tensor $\boldsymbol{Y} = [Y^{(d)}_{tn}] \in \mathbb{R}^{8 \times 25 \times 4}$, where each slice $\boldsymbol{Y}^{(d)} \in \mathbb{R}^{8 \times 25}$ is a matrix of potential outcomes under discount $d$.
We now argue that the operating assumptions in this work likely hold for this case study.
First, Assumption \ref{assumption:sutva} likely holds since the same discount $d$ was applied to every subgroup of users within $\mathcal{I}^{(d)}$ and the discount one customer receives is unlikely to have an effect on another customer.
Next, we re-emphasize the discounts were randomly assigned; hence, Assumption \ref{assumption:conditional_mean_zero} holds (see the discussion in Section \ref{sec:causal.framework} for details).
Moreover, the engagement levels were bounded, which implies Assumptions \ref{assumption:noise} and \ref{assumption:boundedness} hold.
To ensure Assumptions \ref{assumption:form}, \ref{assumption:linear}, and \ref{assumption:spectra} hold, we study the spectral profile of the tensor $\boldsymbol{Y}$.
Specifically, we inspect the spectra of the mode-1 and mode-2 unfoldings of $\boldsymbol{Y}$, shown in Figure \ref{fig:slice_rank}, to measure $r$ (given in Assumption \ref{assumption:form}).
A mode-1 unfolding of $\boldsymbol{Y}$ results in an $8 \times (25 \times 4)$ matrix created by a column-wise concatenation of the columns of $\{\boldsymbol{Y}^{(d)}\}$, while the mode-2 unfolding is the $25 \times (8 \times 4)$ matrix resulting from the column-wise concatenation of the transposed rows of $\{\boldsymbol{Y}^{(d)}\}$.
By Proposition 1 of \cite{side_information}, the maximum rank of the two mode unfoldings provides an upper bound to $r$.
Since over 99\% of the spectral energy is captured by the top two singular values in both cases, $r \lesssim 2$ and hence Assumption \ref{assumption:form} is likely valid;
more specifically, this implies that $\boldsymbol{Y}$ has a low canonical polyadic tensor rank, which then implies Assumption \ref{assumption:form} (see Section \ref{sec:low_rank_tensor_factor_models} for details).
As discussed in Section \ref{sec:causal_assumptions}, this also suggests that Assumption \ref{assumption:linear} likely holds since $r$ is much smaller than $N=25$.
Finally, the clear ``elbow'' points in both Figures \ref{fig:s1} and \ref{fig:s2} suggest
that Assumption \ref{assumption:spectra} likely holds.
We verify Assumption \ref{assumption:subspace} in Section \ref{sec:empirics_results} below.
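Since the underlying data are proprietary, the following sketch (ours) reproduces the unfolding diagnostic on a synthetic stand-in tensor of the same shape; with CP rank $2$ and small noise, the top two singular values of both unfoldings carry essentially all of the spectral energy:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
T, N, D, r = 8, 25, 4, 2
# Synthetic rank-2 tensor standing in for the proprietary data.
Y = np.einsum("tr,nr,dr->tnd",
              *(rng.standard_normal((m, r)) for m in (T, N, D)))
Y += 0.01 * rng.standard_normal((T, N, D))

def spectral_energy(M, k):
    s = np.linalg.svd(M, compute_uv=False)
    return (s[:k] ** 2).sum() / (s ** 2).sum()

mode1 = Y.reshape(T, N * D)                     # 8 x (25 * 4)
mode2 = Y.transpose(1, 0, 2).reshape(N, T * D)  # 25 x (8 * 4)
print(spectral_energy(mode1, 2), spectral_energy(mode2, 2))  # both ~ 1
\end{verbatim}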
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\linewidth]
{images/mode1.pdf}
\caption{Spectra of mode-1 unfolding.}
\label{fig:s1}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width=\linewidth]
{images/mode2.pdf}
\caption{Spectra of mode-2 unfolding.}
\label{fig:s2}
\end{subfigure}
\caption{
In both \ref{fig:s1} and \ref{fig:s2}, the top two singular values capture more than 99\% of the spectral energy.
}
\label{fig:slice_rank}
\vspace{-2mm}
\end{figure}
\subsection{Experimental Setup}
\noindent
{\em Pre- and post-intervention periods. }
For each of the 25 user groups, we denote the 8-day user engagement trajectories of the subgroups associated with control as the pre-intervention period.
Correspondingly, for each of the 25 user groups,
we denote the 8-day user engagement trajectories associated with the 10\%, 30\%, and 50\% discount coupons as the post-intervention period.
\vspace{7.5pt}
\noindent
{\em Choosing donor groups for each intervention. }
To simulate a {\em data-efficient} RCT,
we randomly partition the 25 user groups into three clusters, denoted as user groups 1-8, 9-16, and 17-25.
For the 10\% discount coupon strategy, we choose user groups 1-8 as our donor pool, and user groups 9-25 as our collection of targets, i.e., we observe the post-intervention data under said discount for groups 1-8, but hold out the corresponding data for groups 9-25.
In other words, the SI estimator does not get to observe the trajectories of groups 9-25 under a 10\% discount.
Using the observed trajectories of user groups 1-8, we separately apply the SI estimator to create unique synthetic user engagement trajectories for each of the 9-25 user groups under a 10\% discount.
We use the metric given by \eqref{eq:se_rct} to compare our estimates against the actual trajectories for user groups 9-25 under a 10\% discount.
Analogously, we utilize user groups 9-16 and 17-25 as our donor groups for the 30\% and 50\% discounts, respectively.
See Figure \ref{fig:ab_si} for a visual depiction of the set of experiments/observations the SI estimator uses to make predictions.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]
{images/ab_real_3.pdf}
\caption{Actual experimental setup.}
\label{fig:ab_rct}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]
{images/ab_si_3.pdf}
\caption{SI experimental setup.}
\label{fig:ab_si}
\end{subfigure}
\caption{Experimental setups for the e-commerce A/B testing case study.
Observations are represented by colored blocks while unobservable counterfactuals (held-out test sets) are represented by white blocks.
}
\label{fig:ab}
\end{figure}
\vspace{7.5pt}
\noindent
{\em Quantifying counterfactual prediction accuracy. }
To quantify the accuracy of SI's counterfactual predictions, we need a meaningful baseline to compare against.
To that end, we use the following squared error metric for any $(n,d)$ pair:
\begin{align} \label{eq:se_rct}
\text{SE}_n^{(d)} = 1 - \frac{(\theta_n^{(d)} - \widehat{\theta}_n^{(d)})^2 } {\big(\theta_n^{(d)} - \frac{1}{T_1 N_d} \sum_{t > T_0} \sum_{j \in \mathcal{I}^{(d)}} Y_{tj} \big)^2}.
\end{align}
In words, the numerator on the right-hand side of \eqref{eq:se_rct} represents the squared error associated with the SI estimate $\widehat{\theta}_n^{(d)}$.
We coin the baseline in the denominator, $\frac{1}{T_1 N_d} \sum_{t > T_0} \sum_{j \in \mathcal{I}^{(d)}} Y_{tj}$, the ``RCT estimator''; this is the average outcome across all units within subgroup $\mathcal{I}^{(d)}$ over the post-intervention period.
If the units in an RCT are indeed homogeneous (i.e., they react similarly to intervention $d$), then the RCT estimator will be a good predictor of $\theta^{(d)}_n$. Hence, $\text{SE}_n^{(d)} > 0$ indicates
the success of the SI estimator over the RCT baseline.
As such, \eqref{eq:se_rct} can be interpreted as a modified $R^2$ statistic with respect to an RCT baseline.
In effect, the $\text{SE}_n^{(d)}$ captures the gain by ``personalizing'' the prediction to the target unit using SI over the natural baseline of taking the average outcome over $\mathcal{I}^{(d)}$.
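A direct implementation of this metric takes only a few lines (a sketch, ours; here \texttt{Y\_post\_donors} denotes the $T_1 \times N_d$ block of donor outcomes over the post-intervention period):
\begin{verbatim}
import numpy as np

def se_metric(theta, theta_hat, Y_post_donors):
    """Modified R^2 in (eq:se_rct): SI squared error relative to the
    RCT baseline, i.e., the average donor outcome over t > T0."""
    rct_baseline = Y_post_donors.mean()   # average over donors and time
    return 1 - (theta - theta_hat) ** 2 / (theta - rct_baseline) ** 2
\end{verbatim}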
\subsection{Results and Takeaways} \label{sec:empirics_results}
Under the setup above, we apply SI to produce the synthetic ``counterfactual'' trajectories under the three discounts.
We evaluate the accuracy under the 10\% discount using only the estimated trajectories of user groups 9-25 (since user groups 1-8 are our donors).
Similarly, we use the estimated trajectories of user groups 1-8 and 17-25 for the 30\% discount, and user groups 1-16 for the 50\% discount.
To mitigate the effects of randomness, we repeat this procedure $100$ times.
Within each iteration, we randomly sample a permutation of the 25 user groups to create different sets of donors (i.e., groups 1-8, 9-16, 17-25).
\vspace{7.5pt}
\noindent
{\em Empirical results. }
In Figure \ref{fig:r_scores_web_ab_testing}, we show the hypothesis test results for the three discount strategies, and the median $\text{SE}^{(d)}_n$ across the 25 user groups averaged over all $100$ iterations, denoted as $\text{SE}^{(d)}$.
The hypothesis test passes for each discount at a significance level of $\alpha = 0.05$ in every iteration, which indicates Assumption \ref{assumption:subspace} likely holds.
Across the three discounts, SI achieves an average $\text{SE}^{(d)}$ of at least 0.98 and a standard deviation of at most $0.04$.
In words, the SI estimator far outperforms the RCT estimator.
This indicates significant heterogeneity amongst the user groups in how they respond to discounts, and thus warrants the e-commerce company running separate A/B tests for each of the 25 groups.
\vspace{-9pt}
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{images/table.pdf}
\caption{Hypothesis test and prediction accuracy results for SI in e-commerce A/B testing case study.}
\label{fig:r_scores_web_ab_testing}
\vspace{-20pt}
\end{figure}
\vspace{7.5pt}
\noindent
{\em Takeaway: SI enables data-efficient RCTs. }
In this A/B testing framework, the e-commerce company implemented a total of $100$ distinct experiments---one experiment for each of the 25 user groups under each of the 4 interventions (0\%, 10\%, 30\%, and 50\% discounts).
In contrast, the SI framework only required observations from $50$ experiments to produce the estimated post-intervention trajectories.
In particular, SI utilized two experiments for each of the 25 user groups: one in the pre-intervention period (under 0\% discount) and one in the post-intervention period (under exactly one of the $10\%, 30\%, 50\%$ discounts).
See Figure \ref{fig:ab_rct} for a visual depiction of the experiments conducted by the e-commerce company in comparison to what is required by SI, shown in Figure \ref{fig:ab_si}.
More generally, with $N$ units and $D$ interventions, an RCT requires $N \times D$ experiments to estimate the optimal ``personalized'' intervention for every unit.
Meanwhile, under Assumptions \ref{assumption:sutva} to \ref{assumption:subspace},
SI requires $2N$ experiments as follows: in the first $N$ experiments, all units are under the same intervention regime, say control ($d=0$); next, divide all $N$ units into $D$ partitions each of size $N/D$,
and assign intervention $d$ to units in the $d$-th partition, which leads to another $N$ experiments.
Crucially, the number of required experiments does not scale with $D$, which becomes significant as the number of interventions, i.e., the level of personalization, grows.
Also, if pre-intervention data is already being collected, which is common in many settings (e.g., this e-commerce case study), then SI only requires running $N$ experiments.
This can be significant when experimentation is costly (e.g., clinical trials).
\section{Simulations}\label{sec:simulations}
In this section, we present illustrative simulations to reinforce our theoretical results.
\vspace{-5pt}
\subsection{Consistency}\label{sec:empirics_consistency}
This section demonstrates our estimator is consistent and the rate of convergence matches our theoretical results (Theorem \ref{thm:consistency}).
In the process, we show the importance of Assumption \ref{assumption:subspace} in enabling successful estimation of $\theta^{(d)}_n$ in the SI causal framework.
\vspace{-4pt}
\subsubsection{Generative Model for Synthetic Data} \label{ssec:empirics_consistency_model}
Without loss of generality,
we consider the binary $D=2$ setting, i.e., $d \in \{0, 1\}$, where our aim is to estimate $\theta_n^{(1)}$.
We let $N_1 = | \mathcal{I}^{(1)}| = 200$ and $r = 15$, where $r$ is defined in Assumption \ref{assumption:form}.
We define the latent unit factors associated with $\mathcal{I}^{(1)}$ as $\boldsymbol{V}_{\mathcal{I}^{(1)}} \in \mathbb{R}^{N_1 \times r}$ (refer to \eqref{eq:form.0}), where its entries are independently sampled from a standard normal distribution.
\vspace{7.5pt}
\noindent
{\em Pre-intervention data. }
We choose $T_0 = 200$ and $r_\text{pre} = 10$.
We define the latent pre-intervention time factors under control ($d=0$) as $\boldsymbol{U}^{(0)}_\text{pre} \in \mathbb{R}^{T_0 \times r}$, which is sampled as follows:
(i) let $\boldsymbol{A} \in \mathbb{R}^{T_0 \times r_\text{pre}}$, where its entries are independently sampled from a standard normal;
(ii) let $\boldsymbol{Q} \in \mathbb{R}^{r_\text{pre} \times (r-r_\text{pre})}$, where its entries are first independently sampled from a uniform distribution over $[0,1]$, and then its columns are normalized to sum to one;
(iii) define $\boldsymbol{U}^{(0)}_\text{pre} = [\boldsymbol{A}, \boldsymbol{A} \boldsymbol{Q}]$ as the concatenation of $\boldsymbol{A}$ and $\boldsymbol{A} \boldsymbol{Q}$.
By construction, $\boldsymbol{U}^{(0)}_\text{pre}$ has rank $r_\text{pre}$ w.h.p., which we empirically verify.
Next, we define $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}] = \boldsymbol{U}^{(0)}_\text{pre} \boldsymbol{V}^\prime_{\mathcal{I}^{(1)}} \in \mathbb{R}^{T_0 \times N_1}$.
Again by construction, we note that $\rank(\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]) = r_\text{pre}$ w.h.p.,
which we empirically verify.
We then generate the model $w^{(n,1)} \in \mathbb{R}^{N_1}$ from a multivariate standard normal distribution, and define $\mathbb{E}[Y_{\text{pre}, n}] = \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}] w^{(n,1)} \in \mathbb{R}^{T_0}$.
\vspace{7.5pt}
\noindent
{\em Post-intervention data. }
We sample two sets of post-intervention time factors under $d=1$: one will obey Assumption \ref{assumption:subspace}, while the other will violate it.
To begin, let $T_1 = 200$.
We sample the former set as follows:
(i) let $\boldsymbol{P} \in \mathbb{R}^{T_1 \times T_0}$, where its entries are first independently sampled from a uniform distribution over $[0,1]$, and then its rows are normalized to sum to one;
(ii) define $\boldsymbol{U}^{(1)}_\text{post} = \boldsymbol{P} \boldsymbol{U}^{(0)}_\text{pre} \in \mathbb{R}^{T_1 \times r}$.
We then define $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}] = \boldsymbol{U}^{(1)}_\text{post} \boldsymbol{V}^\prime_{\mathcal{I}^{(1)}} \in \mathbb{R}^{T_1 \times N_1}$.
Next, we generate the latter set as $\boldsymbol{F}^{(1)}_\text{post} \in \mathbb{R}^{T_1 \times r}$, where its entries are independently sampled from a standard normal.
Similar to before, we define $\mathbb{E}[\boldsymbol{Z}_{\text{post}, \mathcal{I}^{(1)}}] = \boldsymbol{F}^{(1)}_\text{post} \boldsymbol{V}^\prime_{\mathcal{I}^{(1)}} \in \mathbb{R}^{T_1 \times N_1}$.
To study the effect of the post-intervention period length, we will treat it as a variable.
As such, we define $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}(\rho)] \in \mathbb{R}^{\rho T_1 \times N_1}$ as the first $\rho T_1$ rows of $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}]$, where $\rho \in \{0.1, 0.2, \dots, 1.0\}$.
Analogously, we define $\mathbb{E}[\boldsymbol{Z}_{\text{post}, \mathcal{I}^{(1)}}(\rho)] \in \mathbb{R}^{\rho T_1 \times N_1}$ with respect to $\mathbb{E}[\boldsymbol{Z}_{\text{post}, \mathcal{I}^{(1)}}]$.
For every $\rho$, we define $\theta^{(1)}_n(\rho)$ using $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}(\rho)]$ and $w^{(n,1)}$, and define $\vartheta^{(1)}_n(\rho)$ using $\mathbb{E}[\boldsymbol{Z}_{\text{post}, \mathcal{I}^{(1)}}(\rho)]$ and $w^{(n,1)}$.
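For concreteness, a minimal NumPy sketch of this data-generating process is given below (ours, for illustration only; the random seed is arbitrary and the dimensions follow the text):
\begin{verbatim}
# Minimal NumPy sketch of the synthetic data-generating process
# described above (ours; seed and names are arbitrary).
import numpy as np

rng = np.random.default_rng(0)
N1, r, r_pre, T0, T1 = 200, 15, 10, 200, 200

V = rng.standard_normal((N1, r))            # latent unit factors

# Pre-intervention time factors: U_pre = [A, A Q] has rank r_pre w.h.p.
A = rng.standard_normal((T0, r_pre))
Q = rng.uniform(size=(r_pre, r - r_pre))
Q /= Q.sum(axis=0, keepdims=True)           # columns sum to one
U_pre = np.hstack([A, A @ Q])
EY_pre = U_pre @ V.T                        # E[Y_pre, I^(1)]

# Post-intervention factors obeying the subspace inclusion condition:
P = rng.uniform(size=(T1, T0))
P /= P.sum(axis=1, keepdims=True)           # rows sum to one
U1_post = P @ U_pre
EY_post = U1_post @ V.T

# Post-intervention factors violating it (fresh rank-r factors):
F_post = rng.standard_normal((T1, r))
EZ_post = F_post @ V.T

w = rng.standard_normal(N1)                 # model w^(n,1)
EY_pre_n = EY_pre @ w                       # E[Y_pre, n]
\end{verbatim}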
\vspace{7.5pt}
\noindent
{\em Interpretation of data generating process. }
We now motivate the construction of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]$, $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}]$, and $\mathbb{E}[\boldsymbol{Z}_{\text{post}, \mathcal{I}^{(1)}}]$.
Recall that the SI framework allows potential outcomes from different interventions to be sampled from different distributions.
As such, we construct $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}]$ and $\mathbb{E}[\boldsymbol{Z}_{\text{post}, \mathcal{I}^{(1)}}]$ such that they follow a different distribution to that of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]$.
This allows us to study when the model learnt using pre-intervention data ``generalizes'' to a post-intervention regime generated from a different distribution.
To highlight the impact of Assumption \ref{assumption:subspace}, we note that by construction, Assumption \ref{assumption:subspace} holds w.h.p. between $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}(\rho)]$ and $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]$ for every $\rho$.
In contrast, by construction, $\rank(\mathbb{E}[\boldsymbol{Z}_{\text{post}, \mathcal{I}^{(1)}}]) = r$ w.h.p.
Notably, since $r_\text{pre} < r$, Assumption \ref{assumption:subspace} fails w.h.p. between
$\mathbb{E}[\boldsymbol{Z}_{\text{post}, \mathcal{I}^{(1)}}(\rho)]$ and $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]$ for every $\rho$.
We empirically verify all three conditions.
\vspace{7.5pt}
\noindent
{\em Observations. }
We generate $Y_{\text{pre}, n}$ and $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}$ by adding independent noise entries from a normal distribution with mean zero and variance $\sigma^2 = 0.3$ to $\mathbb{E}[Y_{\text{pre}, n}]$ and $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]$, respectively.
For every $\rho$, we generate $\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}(\rho)$ and $\boldsymbol{Z}_{\text{post}, \mathcal{I}^{(1)}} (\rho)$ by applying the same additive noise model to $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}} (\rho)]$ and $\mathbb{E}[\boldsymbol{Z}_{\text{post}, \mathcal{I}^{(1)}} (\rho)]$, respectively.
\vspace{7.5pt}
\noindent
{\em Verifying assumptions. }
We note that our data generating process ensures that Assumptions \ref{assumption:sutva}, \ref{assumption:conditional_mean_zero}, and \ref{assumption:noise} hold.
In addition, we empirically verify Assumptions \ref{assumption:boundedness} and \ref{assumption:spectra}.
Further, for Assumption \ref{assumption:form}, we note that our pre- and post-intervention (expected) potential outcomes associated with $\mathcal{I}^{(1)}$ were all generated using $\boldsymbol{V}_{\mathcal{I}^{(1)}}$; thus, their variations only arise due to the sampling procedure for their respective latent time-intervention factors.
Given that $\mathbb{E}[Y_{\text{pre}, n}]$, $\theta^{(1)}_n(\rho)$, and $\vartheta^{(1)}_n(\rho)$ were all defined using $w^{(n,1)}$, Assumption \ref{assumption:linear} holds.
\vspace{-5pt}
\subsubsection{Results}
We perform $100$ iterations for each $\rho$.
The potential outcomes, $\mathbb{E}[Y_{\text{pre},n}]$, $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]$, $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}(\rho)]$, $\mathbb{E}[\boldsymbol{Z}_{\text{post}, \mathcal{I}^{(1)}}(\rho)]$, are fixed, but the idiosyncratic shocks are re-sampled every iteration to yield new (random) outcomes.
For each iteration, we use $(Y_{\text{pre}, n}, \boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}})$ to learn $\widehat{w}^{(n,1)}$, as given by \eqref{eq:pcr}.
Then, we use $\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}(\rho)$ and $\widehat{w}^{(n,1)}$ to yield $\widehat{\theta}^{(1)}_n(\rho)$, as given by \eqref{eq:si.2}.
Similarly, we use $\boldsymbol{Z}_{\text{post}, \mathcal{I}^{(1)}}(\rho)$ and $\widehat{w}^{(n,1)}$ to yield $\widehat{\vartheta}^{(1)}_n(\rho)$.
The mean absolute errors (MAEs), $|\widehat{\theta}^{(1)}_n(\rho) - \theta^{(1)}_n(\rho)|$ and
$|\widehat{\vartheta}^{(1)}_n(\rho) - \vartheta^{(1)}_n(\rho)|$, are plotted
in Figure \ref{fig:consistency}.
As the figure shows, the MAE of $\widehat{\theta}^{(1)}_n(\rho)$ decays as the post-intervention period $\rho T_1$ increases.
Moreover, given the choice of $T_0 = N_d$, the error effectively scales as $O_p(1/\sqrt{\rho T_1})$, which matches the implication of Theorem \ref{thm:consistency}.
Meanwhile, the MAE of $\widehat{\vartheta}^{(1)}_n(\rho)$ is not only roughly eight times larger, but also does not decrease with $\rho T_1$.
This simulation demonstrates that the estimator described in Section \ref{sec:algo} produces a consistent estimate of the underlying causal parameter if Assumptions \ref{assumption:sutva} to \ref{assumption:subspace} hold, and also highlights the importance of Assumption \ref{assumption:subspace} in enabling accurate estimation.
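The following self-contained Python sketch (ours) mirrors one iteration of this procedure on a compact rank-$r_\text{pre}$ stand-in for the generative model above; the truncated-SVD regression plays the role of \eqref{eq:pcr}, and the final averaging that of \eqref{eq:si.2}:
\begin{verbatim}
# One simulation iteration (ours): PCR on noisy pre-intervention data,
# then out-of-sample estimation of the causal parameter.
import numpy as np

rng = np.random.default_rng(1)
T0, T1, N1, r_pre = 200, 200, 200, 10

U_pre = rng.standard_normal((T0, r_pre))      # compact stand-in factors
P = rng.uniform(size=(T1, T0))
P /= P.sum(axis=1, keepdims=True)
U_post = P @ U_pre                            # subspace condition holds
V = rng.standard_normal((N1, r_pre))
w = rng.standard_normal(N1)
EY_pre, EY_post = U_pre @ V.T, U_post @ V.T

sigma = np.sqrt(0.3)
Y_pre = EY_pre + sigma * rng.standard_normal((T0, N1))
y_pre_n = EY_pre @ w + sigma * rng.standard_normal(T0)

def pcr_weights(Y, y, k):
    # Estimated weights: rank-k truncated-SVD pseudo-inverse of Y on y.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

w_hat = pcr_weights(Y_pre, y_pre_n, k=r_pre)

Y_post = EY_post + sigma * rng.standard_normal((T1, N1))
theta_hat = (Y_post @ w_hat).mean()           # estimated parameter
theta = (EY_post @ w).mean()                  # true parameter
print(abs(theta_hat - theta))                 # absolute error
\end{verbatim}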
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\linewidth]
{images/cc_0.pdf}
\caption{Mean absolute errors when Assumption \ref{assumption:subspace} holds (blue) and fails (orange).}
\label{fig:c1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\linewidth]
{images/cc1.pdf}
\caption{Zoomed-in plot of mean absolute error when Assumption \ref{assumption:subspace} holds.}
\label{fig:c2}
\end{subfigure}
\caption{Plot of mean absolute errors when Assumption \ref{assumption:subspace} holds (blue) and fails (orange) across 100 iterations for every $\rho$, which corresponds to different post-intervention lengths.
%
The error decays as $O_p(1/\sqrt{\rho T_1})$ when Assumption \ref{assumption:subspace} holds, as implied by Theorem \ref{thm:consistency}, but does not decay when Assumption \ref{assumption:subspace} fails.}
\label{fig:consistency}
\end{figure}
\subsection{Asymptotic Normality Part I}\label{sec:empirics_normality}
In this section, we demonstrate that our estimate $\widehat{\theta}^{(d)}_n$ is very well-approximated by a Gaussian (normal) distribution centered around $\theta^{(d)}_n$, even if the pre- and post-intervention potential outcomes follow different distributions.
\subsubsection{Generative Model for Synthetic Data}
Again, we consider the binary $D=2$ intervention model.
Our interest is in estimating $\theta^{(d)}_n$ for $d \in \{0,1\}$.
Our data generating process will be such that when $d=0$, the pre- and post-intervention potential outcomes will obey the same distribution, and when $d=1$,
they will obey different distributions.
Nevertheless, we will use a single learned model to ``generalize'' to both post-intervention settings.
As such, we will consider a single donor subgroup, but allow them to ``undergo'' two different interventions during the post-intervention phase.
That is, $\mathcal{I}^{(0)} = \mathcal{I}^{(1)}$ with $N_0 = N_1 = 400$.
Choosing $r = 15$, we define $\boldsymbol{V} = \boldsymbol{V}_{\mathcal{I}^{(0)}} = \boldsymbol{V}_{\mathcal{I}^{(1)}} \in \mathbb{R}^{N_0 \times r}$, where its entries are independently sampled from a standard normal.
We stress that the same latent unit factors are used for both $\mathcal{I}^{(0)}$ and $\mathcal{I}^{(1)}$.
\vspace{7.5pt}
\noindent
{\em Pre-intervention data. }
We choose $T_0 = 400$, and define the latent pre-intervention time factors under control as $\boldsymbol{U}^{(0)}_\text{pre} \in \mathbb{R}^{T_0 \times r}$, where its entries are sampled independently from a standard normal.
Next, we define $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(0)}}] = \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}] = \boldsymbol{U}^{(0)}_\text{pre} \boldsymbol{V}^\prime \in \mathbb{R}^{T_0 \times N_0}$.
By construction, $\rank(\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(0)}}]) = r$ w.h.p., which we empirically verify.
We then generate the model $w^{(n,0)} = w^{(n,1)} \in \mathbb{R}^{N_0}$ from a standard normal, and define $\mathbb{E}[Y_{\text{pre}, n}] = \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(0)}}] w^{(n,0)} \in \mathbb{R}^{T_0}$.
We define $\widetilde{w}^{(n,0)} = \widetilde{w}^{(n,1)} = \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(0)}}]^{\dagger} \mathbb{E}[Y_{\text{pre}, n}]$, where $\dagger$ denotes the pseudo-inverse.
Thus, the pre-intervention data associated with $\mathcal{I}^{(0)}$ and $\mathcal{I}^{(1)}$ are identical.
\vspace{7.5pt}
\noindent
{\em Post-intervention data. }
We choose $T_1= 20$, and generate post-intervention time factors under $d=0$ and $d=1$.
For $d = 0$, we define $\boldsymbol{U}^{(0)}_{\text{post}} \in \mathbb{R}^{T_1 \times r}$, where its entries are independently sampled from a standard normal.
For $d = 1$, we define $\boldsymbol{U}^{(1)}_{\text{post}} \in \mathbb{R}^{T_1 \times r}$, where its entries are independently sampled uniformly over $[-\sqrt{3}, \sqrt{3}]$.
We then define $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(0)}}] = \boldsymbol{U}^{(0)}_\text{post} \boldsymbol{V}^\prime \in \mathbb{R}^{T_1 \times N_0}$ and $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}] = \boldsymbol{U}^{(1)}_\text{post} \boldsymbol{V}^\prime \in \mathbb{R}^{T_1 \times N_1}$.
We highlight again that both sets of potential outcomes use the same latent unit factors, and only differ in their post-intervention latent time factors to emulate different interventions.
Finally, we define $\theta^{(d)}_n$ using $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}]$ and $w^{(n,d)}$ for $d \in \{0,1\}$, and re-emphasize $w^{(n,0)} = w^{(n,1)}$.
\vspace{7.5pt}
\noindent
{\em Interpretation of data generating process. }
We note $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(0)}}]$ and $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(0)}}]$ obey the same distribution to reflect that both sets of potential outcomes are associated with control.
In contrast, $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]$ and $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}]$ follow different distributions to reflect that the pre- and post-intervention potential outcomes are associated with different interventions; the former with $d=0$ and the latter with $d=1$.
However, by construction, both $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(0)}}]$ and $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}]$ satisfy Assumption \ref{assumption:subspace} with respect to $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]$ w.h.p.,
which we empirically verify.
Further, note the mean and variance of $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(0)}}], \mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}]$ are identical.
\vspace{7.5pt}
\noindent
{\em Observations. }
We generate $Y_{\text{pre}, n}$ and $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(0)}} = \boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}$
by adding independent noise from a normal distribution with mean zero and
variance $\sigma^2 = 0.5$ to $\mathbb{E}[Y_{\text{pre}, n}]$ and
$\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(0)}}] = \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]$, respectively.
We generate $\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}$ by applying the same additive
noise model to $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}]$ for $d \in \{0,1\}$.
\vspace{7.5pt}
\noindent
{\em Verifying assumptions. }
As before, Assumptions \ref{assumption:sutva} to \ref{assumption:spectra} hold by construction.
\vspace{-3pt}
\subsubsection{Results}
For each $d \in \{0, 1\}$, we perform 5000 iterations, where $\mathbb{E}[Y_{\text{pre}, n}], \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(0)}}]$, $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(0)}}]$, and $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}]$ are fixed throughout, but the idiosyncratic shocks are re-sampled to generate new (random) outcomes.
Within each iteration, we first use $(Y_{\text{pre}, n}, \boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(0)}})$ to fit $\widehat{w}^{(n,0)}$, as in \eqref{eq:pcr}.
Since the pre-intervention observations are identical across $d$, we highlight that $\widehat{w}^{(n,1)} = \widehat{w}^{(n,0)}$.
Next, we use $\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}$ and $\widehat{w}^{(n,0)}$ to yield $\widehat{\theta}^{(d)}_n$ for each $d \in \{0,1\}$, as in \eqref{eq:si.2}.
In other words, we learn a single model $\widehat{w}^{(n,0)}$ and apply it to two different post-intervention outcomes, $\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(0)}}$ and $\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}$.
We display the histogram of estimates in Figure \ref{fig:normal}.
As implied by Theorem \ref{thm:normality}, the histograms in Figures \ref{fig:n1} and \ref{fig:u1} are very well-approximated by a normal distribution with mean $\theta^{(d)}_n$ and variance $(1/ \sqrt{T_1}) \sigma^2 \|\widetilde{w}^{(n,d)}\|_2^2 $ for $d \in \{0,1\}$.
We stress that the asymptotic variance is identical in both cases since $\widetilde{w}^{(n,0)} = \widetilde{w}^{(n,1)}$, but the means $\theta^{(d)}_n$ are different.
These figures demonstrate that even if the pre- and post-intervention potential outcomes follow different distributions, our estimates remain normally distributed around the true causal parameter.
In summary, these simulations show that under the SI framework, it is feasible to learn $\widehat{w}^{(n,d)}$ under one intervention setting (e.g., control), and then transfer the learned model to a different intervention regime, which may obey a different distribution.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\linewidth]
{images/n1_final.pdf}
\caption{Histogram of $\widehat{\theta}^{(0)}_n$ and its theoretical asymptotic normal distribution.}
\label{fig:n1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\linewidth]
{images/u1_final.pdf}
\caption{Histogram of $\widehat{\theta}^{(1)}_n$ and its theoretical asymptotic normal distribution.}
\label{fig:u1}
\end{subfigure}
\caption{
In \ref{fig:n1}, the pre- and post-intervention potential outcomes follow the same distribution.
%
In \ref{fig:u1}, the pre- and post-intervention potential outcomes follow different distributions.
%
Regardless, the estimates in \ref{fig:n1} and \ref{fig:u1} are normally distributed around the true causal parameter, as predicted by Theorem \ref{thm:normality}.}
\label{fig:normal}
\vspace{-3pt}
\end{figure}
\vspace{-7pt}
\subsection{Asymptotic Normality Part II}\label{sec:empirics_normality_bad}
Next, we show $\widehat{\theta}^{(d)}_n$ is non-trivially biased when Assumption \ref{assumption:subspace} fails.
\vspace{-7pt}
\subsubsection{Generative Model for Synthetic Data}
We continue analyzing the binary $D=2$ intervention model.
We let $N_1 = 400, r = 15$, and generate
$\boldsymbol{V}_{\mathcal{I}^{(1)}} \in \mathbb{R}^{N_1 \times r}$ by independently sampling its entries from a standard normal.
\vspace{7.5pt}
\noindent
{\em Pre-intervention data. }
We choose $T_0 = 400$ and $r_\text{pre} = 12$.
We construct $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]$ using $\boldsymbol{V}_{\mathcal{I}^{(1)}}$
identically to that in Section \ref{ssec:empirics_consistency_model}, such that $\rank(\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]) = r_\text{pre}$ w.h.p., which we empirically verify.
As before, we generate $w^{(n,1)} \in \mathbb{R}^{N_1}$ from a standard normal
and define $\mathbb{E}[Y_{\text{pre},n}] = \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}] w^{(n,1)} \in \mathbb{R}^{T_0}$, as well as
$\widetilde{w}^{(n,1)} = \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]^{\dagger} \mathbb{E}[Y_{\text{pre}, n}]$.
\vspace{7.5pt}
\noindent
{\em Post-intervention data. }
We choose $T_1=20$, and define the post-intervention time factors under $d=1$ as $\boldsymbol{U}^{(1)}_\text{post} \in \mathbb{R}^{T_1 \times r}$, where its entries are sampled independently from a standard normal.
Next, we define $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}] = \boldsymbol{U}^{(1)}_\text{post} \boldsymbol{V}^\prime_{\mathcal{I}^{(1)}} \in \mathbb{R}^{T_1 \times N_1}$.
By construction, $\rank(\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}]) = r$ w.h.p., which we empirically verify.
Since $r_\text{pre} < r$, Assumption \ref{assumption:subspace} fails between $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]$ and $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}]$.
We define $\theta^{(1)}_n$ using $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}]$ and $w^{(n,1)}$.
\vspace{7.5pt}
\noindent
{\em Observations. }
We generate $Y_{\text{pre}, n}$ and $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}$ by adding independent noise from a normal distribution with mean zero and variance $\sigma^2 = 0.5$ to $\mathbb{E}[Y_{\text{pre}, n}]$
and $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]$, respectively.
We generate $\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}$ by applying the same noise model to $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}]$.
\vspace{7.5pt}
\noindent
{\em Verifying assumptions. }
As before, Assumptions \ref{assumption:sutva} to \ref{assumption:spectra} hold by construction.
\vspace{-3pt}
\subsubsection{Results}
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\linewidth]
{images/bad.pdf}
\caption{Histogram of $\widehat{\theta}^{(1)}_n$ and its theoretical asymptotic normal distribution. Since Assumption \ref{assumption:subspace} fails in this context, it follows that the theoretical distribution does not fit the histogram.}
\label{fig:nbad}
\label{fig:normal_bad}
\vspace{-3mm}
\end{figure}
We perform 5000 iterations, where $\mathbb{E}[Y_{\text{pre},n}], \mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}}]$, $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}]$ are fixed, but the shocks are re-sampled.
In each iteration, we use $(Y_{\text{pre}, n}, \boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(1)}})$ to fit $\widehat{w}^{(n,1)}$, and then use $\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(1)}}$ and $\widehat{w}^{(n,1)}$ to yield $\widehat{\theta}^{(1)}_n$.
The resulting histogram is displayed in Figure \ref{fig:normal_bad}.
Unlike Figure \ref{fig:normal}, the histogram is not well-approximated by the normal distribution with mean $\theta^{(1)}_n$ and variance $(1/ \sqrt{T_1}) \sigma^2 \|\widetilde{w}^{(n,1)}\|_2^2$, but has non-trivial
bias.
This reinforces the phenomena observed in Figure \ref{fig:consistency} and the importance of Assumption \ref{assumption:subspace} for successful inference.
\section{Proof: Identification}
\section{Proof of Theorem \ref{thm:identification}} \label{sec:identification}
In what follows, the descriptors above the equalities will denote the assumption used, e.g., $A1$ represents Assumption 1:
\begin{align}
&\mathbb{E} [ Y^{(d)}_{tn} \big| u^{(d)}_t, v_n ]
\\& \stackrel{A2}= \mathbb{E} [\langle u^{(d)}_t, v_n \rangle + \varepsilon_{tn} ~\big|~ u^{(d)}_t, v_n ]
\\& \stackrel{A4}= \langle u^{(d)}_t, v_n \rangle ~\big|~ \{u^{(d)}_t, v_n\}
\\&= \langle u^{(d)}_t, v_n \rangle ~\big|~ \mathcal{E}
\\& \stackrel{A3}= \langle u^{(d)}_t, \sum_{j \in \mathcal{I}^{(d)}} w_j^{(n,d)} v_j \rangle ~\big|~ \mathcal{E}
\\& \stackrel{A4}= \sum_{j \in \mathcal{I}^{(d)}} w_j^{(n,d)}
\mathbb{E} \left[ \big(\langle u^{(d)}_t, v_j \rangle + \varepsilon_{tj} \big) ~\big|~ \mathcal{E} \right]
\\ &\stackrel{A2}= \sum_{j \in \mathcal{I}^{(d)}} w_j^{(n,d)} \mathbb{E} [ Y^{(d)}_{tj} \big| \mathcal{E} ]
\\& \stackrel{A1}= \sum_{j \in \mathcal{I}^{(d)}} w_j^{(n,d)} \mathbb{E} \left[ Y_{tj} | \mathcal{E} \right]. %
\end{align}
The third equality follows since $\langle u^{(d)}_t, v_n \rangle$ is deterministic conditioned on $\{u^{(d)}_t, v_n\}$.
Recalling \eqref{eq:target_causal_parameter}, one can verify \eqref{eq:identification} using the same arguments used to prove \eqref{eq:identification_strong}, i.e., we begin by conditioning on the set of latent factors $\{u^{(d)}_t, v_n: t > T_0\}$ rather than $\{u^{(d)}_t, v_n\}$.
\section{Covariates} \label{sec:covariates}
In this section, we discuss how access to meaningful covariate information about each unit can help in more efficiently learning the model $w^{(n,d)}$.
To this end, let $\boldsymbol{X} = [X_{kn}] \in \mathbb{R}^{K \times N}$ denote the matrix of covariates across units, i.e.,
$X_{kn}$ denotes the $k$-th descriptor (or feature) of unit $n$.
One approach towards incorporating covariates into the SI estimation procedure described in Section \ref{sec:algo}, is to impose the following structure on $\boldsymbol{X}$.
\begin{assumption} [Covariate structure] \label{assump:covariates}
For every $k$, let $X_{kn} = \langle \varphi_k, v_n \rangle + \zeta_{kn}$.
Here, the r.v.
$\varphi_k \in \mathbb{R}^r$ is a latent factor specific to descriptor $k$;
the r.v. $v_n \in \mathbb{R}^r$ is defined as in \eqref{eq:form.0};
and $\zeta_{kn} \in \mathbb{R}$ is a mean zero measurement noise specific to descriptor $k$ and unit $n$.
\end{assumption}
\noindent
{\em Interpretation. }
The key structure we impose in Assumption~\ref{assump:covariates} is that the covariates $X_{kn}$ have the same latent unit factors $v_n$ as the potential outcomes $Y^{(d)}_{tn}$.
Thus, given a target unit $n$ and subgroup $\mathcal{I}^{(d)}$, this allows us to express unit $n$'s covariates as a linear combination of the covariates associated with units within $\mathcal{I}^{(d)}$ via the {\em same} linear model that describes the relationship between their respective potential outcomes (formalized in Proposition~\ref{prop:covariates} below).
One notable flexibility of our covariate model is that the observations of covariates can be {\em noisy} due to the presence of $\zeta_{kn}$.
\begin{prop} \label{prop:covariates}
For a given unit-intervention pair $(n,d)$, let Assumptions \ref{assumption:linear} and \ref{assump:covariates} hold.
Then, conditioned on $\mathcal{E}_\varphi \coloneqq \mathcal{E} \cup \{\varphi_k: k \in [K]\}$, we have for all $k$,
\begin{align}
\mathbb{E}[X_{kn}] = \sum_{j \in \mathcal{I}^{(d)}} w_j^{(n,d)} \mathbb{E}[X_{kj}].
\end{align}
\end{prop}
\begin{proof}
The proof is immediate by plugging Assumption \ref{assumption:linear} into Assumption \ref{assump:covariates}.
\end{proof}
\vspace{7.5pt}
\noindent
{\em Adapting the SI estimator. }
Proposition \ref{prop:covariates} suggests a natural modification to the model learning stage of the SI estimator presented in Section \ref{sec:algo}.
In particular, similar to the estimation procedure of \cite{abadie2}, we propose appending the covariates to the pre-intervention outcomes.
Formally, we define $X_n = [X_{kn}] \in \mathbb{R}^{K}$ as the vector of covariates associated with the target unit $n$; analogously, we define
$\boldsymbol{X}_{\mathcal{I}^{(d)}} = [X_{kj}: j \in \mathcal{I}^{(d)}] \in \mathbb{R}^{K \times N_d}$
as the matrix of covariates associated with those donor units within $\mathcal{I}^{(d)}$.
We further define $Z_n = [Y_{\text{pre},n}, X_n] \in \mathbb{R}^{T_0 + K}$
and
$\boldsymbol{Z}_{\mathcal{I}^{(d)}} = [\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}, \boldsymbol{X}_{\mathcal{I}^{(d)}}] \in \mathbb{R}^{(T_0 + K) \times N_d}$
as the concatenation of pre-intervention outcomes and features associated with the target unit $n$ and subgroup $\mathcal{I}^{(d)}$, respectively.
We denote the SVD of $\boldsymbol{Z}_{\mathcal{I}^{(d)}}$ as
\begin{align}
\boldsymbol{Z}_{\mathcal{I}^{(d)}} &= \sum_{\ell=1}^M \widehat{\lambda}_\ell \widehat{\mu}_\ell \widehat{\nu}_\ell^\prime,
\end{align}
where $M = \min\{(T_0 + K), N_d\}$, $\widehat{\lambda}_\ell \in \mathbb{R}$ are the singular values (arranged in decreasing order), and $\widehat{\mu}_\ell \in \mathbb{R}^{T_0+K}, \widehat{\nu}_\ell \in \mathbb{R}^{N_d}$ are the left and right singular vectors, respectively.
Then, we define the modified model parameter estimator as
\begin{align}
\widehat{w}^{(n,d)} = \Big( \sum_{\ell=1}^k (1/\widehat{\lambda}_\ell) \widehat{\nu}_\ell \widehat{\mu}_\ell^\prime \Big) Z_n.
\end{align}
The remainder of the algorithm proceeds as before with the new estimate $\widehat{w}^{(n,d)}$.
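A minimal sketch of this covariate-augmented model-learning step is given below (ours; it assumes NumPy conventions and the truncated-SVD regression form used throughout):
\begin{verbatim}
# Sketch (ours) of model learning on the concatenated matrix
# Z = [Y_pre; X], as described above.
import numpy as np

def si_weights_with_covariates(Y_pre, X, y_pre_n, x_n, k):
    # Y_pre: (T0, N_d) donor outcomes; X: (K, N_d) donor covariates;
    # y_pre_n: (T0,) target outcomes; x_n: (K,) target covariates;
    # k: number of retained singular components.
    Z = np.vstack([Y_pre, X])                 # (T0 + K, N_d)
    z_n = np.concatenate([y_pre_n, x_n])      # (T0 + K,)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ z_n) / s[:k])
\end{verbatim}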
\vspace{7.5pt}
\noindent
{\em Theoretical implications.}
Suppose analogous versions of Assumptions \ref{assumption:sutva} to \ref{assump:covariates} hold with respect to $\boldsymbol{Z}_{\mathcal{I}^{(d)}}$ rather than just $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$.
Then, it can be verified that the statistical guarantees in Section \ref{sec:results} (and hypothesis test in Section \ref{sec:hypo_test}) continue to hold with $T_0$ being replaced by $T_0 + K$.
In essence, adding covariates into the model-learning stage augments the ``pre-intervention'' data size, which can improve the estimation rates.
\section{Proof: Variance Estimator}
\section{Proof of Lemma \ref{lemma:variance_estimate}}
For ease of notation, we suppress the conditioning on $\mathcal{E}$ for the remainder of the proof.
Recall that we are absorbing the dependencies on $r$ and $\sigma$ into the constant within $O_p$.
Further, recall the definitions of $\boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}}$ and $\varepsilon_{\text{pre}, n}$, as stated in the proof of Theorem \ref{thm:consistency} (Appendix \ref{sec:proof_consistency}).
Using the definition of $\widehat{w}^{(n,d)}$ in \eqref{eq:pcr},
$
\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} =
\boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)}.
$
Substituting this equality into our definition of the variance estimator in \eqref{eq:hat_sigma}, we get
$
\widehat{\sigma}^2 = \frac{1}{T_0} \| Y_{\text{pre}, n} - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} \|_2^2.
$
Next, consider
{\small
\begin{align} \label{eq:variance.0}
%
&\| Y_{\text{pre}, n} - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} \|_2^2
%
\\& = \| \mathbb{E}[Y_{\text{pre},n}] - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} \|_2^2
+ \| \varepsilon_{\text{pre}, n} \|_2^2 \nonumber
%
+ 2 \langle \varepsilon_{\text{pre}, n}, (\mathbb{E}[Y_{\text{pre},n}] - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)}) \rangle.
\end{align}
}
Below, we analyze each term on the RHS of \eqref{eq:variance.0} separately.
\vspace{10pt} \noindent
{\em Bounding $\| \mathbb{E}[Y_{\emph{pre},n}] - \boldsymbol{Y}^{r_\emph{pre}}_{\emph{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} \|_2$. }
In order to control this term,
we incorporate Lemmas \ref{lemma:hsvt} and \ref{lemma:annoying} into the bound in \eqref{eq:thm1.2.5} to obtain
\begin{align}
\frac{1}{T_0} \| \mathbb{E}[Y_{\text{pre},n}] - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)} \|_2^2
&= O_p \left( \frac{1}{\sqrt{T_0}} + \frac{\log(T_0 N_d)\| \widetilde{w}^{(n,d)} \|_1^2}{\min\{T_0, N_d\}} \right).
\label{eq:variance.1}
\end{align}
\vspace{5pt} \noindent
{\em Bounding $\| \varepsilon_{\emph{pre}, n} \|_2$. }
Since the entries of $\varepsilon_{\text{pre}, n}$ are independent mean zero sub-Gaussian random variables, it follows from Lemma \ref{lemmma:standard_convergences}
that
\begin{align} \label{eq:variance.2}
\frac{1}{T_0} \| \varepsilon_{\text{pre}, n} \|_2^2 - \sigma^2 &= O_p \left( \frac{1}{\sqrt{T_0}} \right).
\end{align}
\vspace{10pt} \noindent
{\em Bounding $\langle \varepsilon_{\emph{pre}, n}, ~(\mathbb{E}[Y_{\emph{pre},n}] - \boldsymbol{Y}^{r_\emph{pre}}_{\emph{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)}) \rangle$. }
From \eqref{eq:rewrite}, we have
$
\boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}}\widehat{w}^{(n,d)} =
\mathcal{P}_{\widehat{U}_\text{pre}} (\mathbb{E}[Y_{\text{pre},n}] + \varepsilon_{\text{pre}, n}).
$
Hence,
\begin{align}
%
&\langle \varepsilon_{\text{pre}, n}, (\mathbb{E}[Y_{\text{pre},n}] - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)}) \rangle
%
\\ & \quad =
\langle \varepsilon_{\text{pre}, n}, \mathbb{E}[Y_{\text{pre},n}] \rangle
- \langle \varepsilon_{\text{pre}, n}, \mathcal{P}_{\widehat{U}_\text{pre}} \mathbb{E}[Y_{\text{pre},n}] \rangle
- \langle \varepsilon_{\text{pre}, n}, \mathcal{P}_{\widehat{U}_\text{pre}} \varepsilon_{\text{pre}, n} \rangle. \label{eq:variance_cross_term}
\end{align}
By Lemma \ref{lemma:hoeffding_random} and Assumption \ref{assumption:noise}, it follows that for any $t > 0$
\begin{align}
\mathbb{P}\left( \langle \varepsilon_{\text{pre}, n}, \mathbb{E}[Y_{\text{pre},n}] \rangle \geq t \right) & \leq \exp\Big( - \frac{c t^2}{T_0 \sigma^2 } \Big) \label{eq:variance_hoeff_1},
\\
\mathbb{P}\left( \langle \varepsilon_{\text{pre}, n}, \mathcal{P}_{\widehat{U}_\text{pre}} \mathbb{E}[Y_{\text{pre},n}] \rangle \geq t \right) & \leq \exp\Big( - \frac{c t^2}{T_0 \sigma^2 } \Big) \label{eq:variance_hoeff_2}.
\end{align}
Note that we have used $\|\mathcal{P}_{\widehat{U}_\text{pre}}\|_{\text{op}} \le 1$ and Assumption \ref{assumption:boundedness}
to obtain
$
\| \mathcal{P}_{\widehat{U}_\text{pre}} \mathbb{E}[Y_{\text{pre},n}]\|_2
\le
\| \mathbb{E}[Y_{\text{pre},n}]\|_2
\le \sqrt{T_0}.
$
Together, \eqref{eq:variance_hoeff_1} and \eqref{eq:variance_hoeff_2} then imply that
\begin{align}
\langle \varepsilon_{\text{pre}, n}, \mathbb{E}[Y_{\text{pre},n}] \rangle = O_p(\sqrt{T_0}), \quad
%
\langle \varepsilon_{\text{pre}, n}, \mathcal{P}_{\widehat{U}_\text{pre}} \mathbb{E}[Y_{\text{pre},n}] \rangle = O_p(\sqrt{T_0}). \label{eq:variance_hoeff_2_Op}
\end{align}
From \eqref{eq:lem4.3}, we have for any $t > 0$,
\begin{align}
\mathbb{P}\left( \langle \varepsilon_{\text{pre}, n}, \mathcal{P}_{\widehat{U}_\text{pre}} \varepsilon_{\text{pre}, n} \rangle \geq \sigma^2 r_\text{pre} + t\right) & \leq \exp\Big( - c \min\Big(\frac{t^2}{\sigma^4 r_\text{pre} }, \frac{t}{\sigma^2}\Big)\Big) \label{eq:variance_hanson_1},
\end{align}
which implies that
\begin{align}
\langle \varepsilon_{\text{pre}, n}, \mathcal{P}_{\widehat{U}_\text{pre}} \varepsilon_{\text{pre}, n} \rangle = O_p(1). \label{eq:variance_hanson_1_Op}
\end{align}
Plugging \eqref{eq:variance_hoeff_2_Op} and \eqref{eq:variance_hanson_1_Op} into \eqref{eq:variance_cross_term}, we obtain
\begin{align} \label{eq:variance.3}
\frac{1}{T_0} \langle \varepsilon_{\text{pre}, n}, ~(\mathbb{E}[Y_{\text{pre},n}] - \boldsymbol{Y}^{r_\text{pre}}_{\text{pre}, \mathcal{I}^{(d)}} \widehat{w}^{(n,d)}) \rangle
&= O_p \left( \frac{1}{\sqrt{T_0}}\right).
\end{align}
\vspace{5pt} \noindent
{\em Collecting terms. }
Normalizing \eqref{eq:variance.0} by $T_0$ and subsequently incorporating the bounds in \eqref{eq:variance.1}, \eqref{eq:variance.2}, and \eqref{eq:variance.3}, we conclude
$
\widehat{\sigma}^2 - \sigma^2 = O_p\left( \frac{1}{\sqrt{T_0}}
+ \frac{\log(T_0 N_d)\| \widetilde{w}^{(n,d)} \|_1^2}{\min\{T_0, N_d\}} \right).
$
\subsection{Proof of Lemma \ref{lemma:type1.1}} \label{sec:type1_lem}
{\small
\begin{align}
\widehat{\tau} &= \|(\boldsymbol{I} - \mathcal{P}_{\widehat{V}_\text{pre}}) \widehat{\bV}_\text{post}\|_F^2 \nonumber
\\&= \|(\boldsymbol{I} - \mathcal{P}_{\widehat{V}_\text{pre}}) \widehat{\bV}_\text{post} - (\boldsymbol{I} - \mathcal{P}_{V_\text{pre}}) \widehat{\bV}_\text{post} + (\boldsymbol{I} - \mathcal{P}_{V_\text{pre}}) \widehat{\bV}_\text{post}\|_F^2 \nonumber
\\&= \|(\mathcal{P}_{V_\text{pre}} - \mathcal{P}_{\widehat{V}_\text{pre}}) \widehat{\bV}_\text{post} + (\boldsymbol{I} - \mathcal{P}_{V_\text{pre}}) \widehat{\bV}_\text{post}\|_F^2 \nonumber
\\&= \|(\mathcal{P}_{V_\text{pre}} - \mathcal{P}_{\widehat{V}_\text{pre}}) \widehat{\bV}_\text{post}\|_F^2 + \|(\boldsymbol{I} - \mathcal{P}_{V_\text{pre}}) \widehat{\bV}_\text{post}\|_F^2 \nonumber
\\&\quad + 2 \langle (\mathcal{P}_{V_\text{pre}} - \mathcal{P}_{\widehat{V}_\text{pre}}) \widehat{\bV}_\text{post}, (\boldsymbol{I} - \mathcal{P}_{V_\text{pre}}) \widehat{\bV}_\text{post} \rangle_F. \label{eq:type1.0.new}
\end{align}
}
Under $H_0$, it follows that $(\boldsymbol{I} - \mathcal{P}_{V_\text{pre}}) \boldsymbol{V}_\text{post} = 0$.
As a result,
{\small
\begin{align*}
\|(\boldsymbol{I} - \mathcal{P}_{V_\text{pre}}) \widehat{\bV}_\text{post}\|_F^2 &= \|(\boldsymbol{I} - \mathcal{P}_{V_\text{pre}}) \mathcal{P}_{\widehat{V}_\text{post}}\|_F^2
\\&= \|(\boldsymbol{I} - \mathcal{P}_{V_\text{pre}}) \mathcal{P}_{\widehat{V}_\text{post}} - (\boldsymbol{I} - \mathcal{P}_{V_\text{pre}}) \mathcal{P}_{V_\text{post}} \|_F^2
\\&= \|(\boldsymbol{I} - \mathcal{P}_{V_\text{pre}}) (\mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{post}})\|_F^2.
\end{align*}
}
Combining these two sets of equalities completes the proof.
\subsection{Proof of Lemma \ref{lemma:type2}} \label{sec:type2_lem}
Because the columns of $\widehat{\bV}_\text{post}$ are orthonormal,
$
r_\text{post} = \|\widehat{\bV}_\text{post}\|_F^2 = \| \mathcal{P}_{\widehat{V}_\text{pre}} \widehat{\bV}_\text{post}\|_F^2 + \|(\boldsymbol{I} - \mathcal{P}_{\widehat{V}_\text{pre}}) \widehat{\bV}_\text{post}\|_F^2.
$
Therefore, it follows that
\begin{align}
\widehat{\tau} &= \|(\boldsymbol{I} - \mathcal{P}_{\widehat{V}_\text{pre}}) \widehat{\bV}_\text{post}\|_F^2
= r_\text{post} - \| \mathcal{P}_{\widehat{V}_\text{pre}} \widehat{\bV}_\text{post}\|_F^2. \label{eq:type2.0}
\end{align}
Now, consider the second term of the equality above. That is,
{\small
\begin{align} \label{eq.1}
&\|\mathcal{P}_{\widehat{V}_\text{pre}} \widehat{\bV}_\text{post}\|_F^2 = %
\|\mathcal{P}_{\widehat{V}_\text{pre}} \widehat{\bV}_\text{post} - \mathcal{P}_{V_\text{pre}} \widehat{\bV}_\text{post} + \mathcal{P}_{V_\text{pre}} \widehat{\bV}_\text{post}\|_F^2
%
\\ &= \| (\mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}}) \widehat{\bV}_\text{post}\|_F^2 + \|\mathcal{P}_{V_\text{pre}} \widehat{\bV}_\text{post}\|_F^2
%
+ 2 \langle (\mathcal{P}_{\widehat{V}_\text{pre}} - \mathcal{P}_{V_\text{pre}}) \widehat{\bV}_\text{post}, \mathcal{P}_{V_\text{pre}} \widehat{\bV}_\text{post} \rangle_F.
%
\label{eq:type2.1}
\end{align}
}
Further, analyzing the second term of \eqref{eq:type2.1}, we note that
{\small
\begin{align}
&\|\mathcal{P}_{V_\text{pre}} \widehat{\bV}_\text{post}\|_F^2 = \|\mathcal{P}_{V_\text{pre}} \mathcal{P}_{\widehat{V}_\text{post}}\|_F^2
%
= \| \mathcal{P}_{V_\text{pre}} \mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{pre}} \mathcal{P}_{V_\text{post}} + \mathcal{P}_{V_\text{pre}} \mathcal{P}_{V_\text{post}}\|_F^2
%
\\ &= \| \mathcal{P}_{V_\text{pre}} (\mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{post}})\|_F^2 + \|\mathcal{P}_{V_\text{pre}} \mathcal{P}_{V_\text{post}}\|_F^2
%
+ 2 \langle \mathcal{P}_{V_\text{pre}} (\mathcal{P}_{\widehat{V}_\text{post}} - \mathcal{P}_{V_\text{post}}), \mathcal{P}_{V_\text{pre}} \mathcal{P}_{V_\text{post}}\rangle_F.
%
\label{eq:type2.2}
\end{align}
}
Incorporating \eqref{eq:type2.1} and \eqref{eq:type2.2} into \eqref{eq:type2.0}, and recalling $c = \|\mathcal{P}_{V_\text{pre}} \mathcal{P}_{V_\text{post}}\|_F^2$ completes the proof.
\section{SI Robustness Check: Subspace Inclusion Hypothesis Test} \label{sec:hypo}
A key condition that enables our theoretical results in Section \ref{sec:results} is Assumption \ref{assumption:subspace}.
Below, we provide a data-driven hypothesis test to check when this condition holds.
\vspace{7.5pt}
\noindent{\em Additional notation. }
Recall $r_\text{pre} = \rank(\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}])$, and
let $r_\text{post} = \rank(\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}])$.
Recall $\boldsymbol{V}_\text{pre} \in \mathbb{R}^{N_d \times r_\text{pre}}$ denotes the right singular vectors of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$;
analogously, define $\boldsymbol{V}_\text{post} \in \mathbb{R}^{N_d \times r_\text{post}}$ with respect to $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}]$.
Let $\widehat{\bV}_\text{pre} \in \mathbb{R}^{N_d \times r_\text{pre}}$ and $\widehat{\bV}_\text{post} \in \mathbb{R}^{N_d \times r_\text{post}}$ denote their respective estimates, which are constructed from the top $r_\text{pre}$ and $r_\text{post}$ right singular vectors of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$ and $\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}$, respectively.
\subsection{Hypothesis Test}\label{sec:hypo_test}
Consider the following two hypotheses:
\begin{align}
H_0: ~ \text{span}(\boldsymbol{V}_\text{post}) \subseteq \text{span}(\boldsymbol{V}_\text{pre})
\quad \text{and} \quad
H_1: ~ \text{span}(\boldsymbol{V}_\text{post}) \nsubseteq \text{span}(\boldsymbol{V}_\text{pre}).
\end{align}
We define the test statistic as
$\widehat{\tau} = \|(\boldsymbol{I} - \widehat{\bV}_\text{pre} \widehat{\bV}_\text{pre}^\prime) \widehat{\bV}_\text{post}\|_F^2.$
This yields the following test:
for any significance level $\alpha \in (0,1)$,
\begin{align}
\text{Retain } H_0 \text{ if } ~\widehat{\tau} \le \tau(\alpha)
\quad \text{and} \quad \text{Reject } H_0 \text{ if } ~\widehat{\tau} > \tau(\alpha).
\end{align}
Here, $\tau(\alpha)$ is the critical value, which we define for some absolute constant $C \ge 0$:
\begin{align} \label{eq:crit_value}
\tau(\alpha) &=
\frac{C\sigma^2 r_\text{post} \phi^2_\text{pre}(\alpha/2)}{s^2_{r_\text{pre}}}
+ \frac{C\sigma^2 r_\text{post} \phi^2_\text{post}(\alpha/2)}{\varsigma^2_{r_\text{post}}}
+ \frac{C\sigma r_\text{post} \phi_\text{pre}(\alpha/2)}{s_{r_\text{pre}}},
\end{align}
where $\phi_\text{pre}(a) = \sqrt{T_0} + \sqrt{N_d} + \sqrt{\log(1/a)}$;
$\phi_\text{post}(a) = \sqrt{T_1} + \sqrt{N_d} + \sqrt{\log(1/a)}$;
and $s_\ell, \varsigma_\ell$ are the $\ell$-th singular values of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$ and $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}]$, respectively.
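For reference, the sketch below (ours) transcribes \eqref{eq:crit_value} directly; $C$, $\sigma$, the ranks, and the singular values are inputs that must themselves be estimated, as discussed later in this section:
\begin{verbatim}
# Direct transcription (ours) of the critical value tau(alpha).
import numpy as np

def tau_alpha(alpha, C, sigma, r_post, T0, T1, Nd, s_rpre, vs_rpost):
    # phi_pre(a) and phi_post(a), evaluated at a = alpha / 2.
    phi_pre = np.sqrt(T0) + np.sqrt(Nd) + np.sqrt(np.log(2 / alpha))
    phi_post = np.sqrt(T1) + np.sqrt(Nd) + np.sqrt(np.log(2 / alpha))
    return (C * sigma**2 * r_post * phi_pre**2 / s_rpre**2
            + C * sigma**2 * r_post * phi_post**2 / vs_rpost**2
            + C * sigma * r_post * phi_pre / s_rpre)
\end{verbatim}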
\subsection{Type I and Type II Error Guarantees}
Given our choice of $\widehat{\tau}$ and $\tau(\alpha)$, we control both Type I and Type II errors of our test.
\begin{thm} \label{thm:hypo}
Let Assumptions \ref{assumption:sutva}, \ref{assumption:form}, \ref{assumption:conditional_mean_zero}, and \ref{assumption:noise} hold.
Fix any $\alpha \in (0,1)$. Then conditioned on $\mathcal{E}$,
the Type I error is bounded as
$
\mathbb{P}(\widehat{\tau} > \tau(\alpha) | H_0) \le \alpha.
$
To bound the Type II error, suppose that the choice
of $C$, given in \eqref{eq:crit_value}, satisfies
\begin{align} \label{eq:type2_cond}
r_\emph{post} > \| \boldsymbol{V}_\emph{pre} \boldsymbol{V}_\emph{pre}^\prime \boldsymbol{V}_\emph{post} \|_F^2
+ 2 \tau(\alpha) +
\frac{C\sigma r_\emph{post} \phi_\emph{post}(\alpha/2)}{\varsigma_{r_\emph{post}}},
\end{align}
which implies $H_1$ must hold.
Then, the Type II error is bounded as
$
\mathbb{P}(\widehat{\tau} \le \tau(\alpha) | H_1) \le \alpha.
$
\end{thm}
\noindent
{\em A comment on $C$. }
The choice of $C$ depends on the underlying distribution of $\varepsilon_{tn}$, and can be made explicit for certain classes of distributions.
As an example, Corollary \ref{cor:hypo} specializes Theorem \ref{thm:hypo} to when $\varepsilon_{tn}$ are normally distributed.
\vspace{-3pt}
\begin{cor}\label{cor:hypo}
Consider the setup of Theorem \ref{thm:hypo} with $C=4$.
Let $\varepsilon_{tn}$ be normally distributed.
Then,
$\mathbb{P}(\widehat{\tau} > \tau(\alpha) | H_0) \le \alpha$ and $\mathbb{P}(\widehat{\tau} \leq \tau(\alpha) | H_1) \le \alpha.$
\end{cor}
\vspace{-3pt}
Corollary \ref{cor:critical_value} specializes Theorem \ref{thm:hypo} under Assumption \ref{assumption:spectra}.
\vspace{-3pt}
\begin{cor}\label{cor:critical_value}
Let the setup of Theorem \ref{thm:hypo} hold.
Suppose Assumption \ref{assumption:spectra} holds.
Further, suppose that conditioned on $\mathcal{E}$, the $r_{\emph{post}}$ nonzero singular values $\varsigma_i$ of $\mathbb{E}[\boldsymbol{Y}_{\emph{post}, \mathcal{I}^{(d)}}]$ are well-balanced, i.e., $\varsigma_i^2 = \Theta(T_1 N_d / r_{\emph{post}})$.
Then,
$\tau(\alpha) = O\Big( \frac{\sqrt{\log(1 / \alpha)}}{\min\{\sqrt{T_0}, \sqrt{T_1}, \sqrt{N_d} \}} \Big).$
\end{cor}
\vspace{-3pt}
\noindent{\em Interpretation. }
We interpret $\widehat{\tau}$, $\tau(\alpha)$, and the condition in \eqref{eq:type2_cond}.
{\em $\widehat{\tau}$}: Consider the noiseless case (i.e., $\varepsilon_{tn} = 0$), where $\boldsymbol{V}_\text{pre}$ and $\boldsymbol{V}_\text{post}$ are perfectly observed.
Conditioned on $H_0$, note that $\|(\boldsymbol{I} - \boldsymbol{V}_\text{pre} \boldsymbol{V}_\text{pre}^\prime)\boldsymbol{V}_\text{post}\|_F = 0$, while conditioned on $H_1$, $\|(\boldsymbol{I} - \boldsymbol{V}_\text{pre} \boldsymbol{V}_\text{pre}^\prime)\boldsymbol{V}_\text{post}\|_F > 0$.
Hence, $\|(\boldsymbol{I} - \boldsymbol{V}_\text{pre} \boldsymbol{V}_\text{pre}^\prime)\boldsymbol{V}_\text{post}\|_F$ serves as a natural test statistic.
Since we do not observe these quantities, we use $\widehat{\bV}_\text{pre}$ and $\widehat{\bV}_\text{post}$ as proxies.
{\em $\tau(\alpha)$}: Again considering the noiseless case, we note that $\tau(\alpha) = 0$.
More generally, if the spectra of $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$ and $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}]$ are well-balanced, then Corollary \ref{cor:critical_value} establishes that $\tau(\alpha) = o(1)$, even in the presence of noise.
We remark that Corollary \ref{cor:hypo} allows for exact constants in the definition of $\tau(\alpha)$ under the Gaussian noise model.
{\em Condition in \eqref{eq:type2_cond}}:
We now argue \eqref{eq:type2_cond} is not a restrictive condition.
Conditioned on $H_1$, observe that $ r_\text{post} > \|\boldsymbol{V}_\text{pre} \boldsymbol{V}_\text{pre}^\prime \boldsymbol{V}_\text{post} \|_F^2$ always holds.
If Assumption \ref{assumption:spectra} holds and the nonzero singular values of $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}]$ are well-balanced, then the latter two terms on the right-hand side of \eqref{eq:type2_cond} decay to zero as $T_0, T_1, N_d$ grow.
\vspace{-4pt}
\subsubsection{Computing $\tau(\alpha)$}
\noindent{\em An exact computation. }
Computing $\tau(\alpha)$ requires estimating (i) $\sigma^2$; (ii) $r_\text{pre}, r_\text{post}$; (iii) $s_{r_\text{pre}}, \varsigma_{r_\text{post}}$.
We provide an estimator for $\sigma$ in \eqref{eq:hat_sigma} and establish its consistency in Lemma \ref{lemma:variance_estimate}.
Further, recall that Lemma \ref{lemma:singular_values} establishes that the singular values of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$ and $\mathbb{E}[\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}]$ must be close.
Thus, we can use the spectra of $\boldsymbol{Y}_{\text{pre}, \mathcal{I}^{(d)}}$ as a good proxy to estimate $r_\text{pre}$ and $s_{r_\text{pre}}$.
Analogous arguments hold for $\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}$ with respect to $\mathbb{E}[\boldsymbol{Y}_{\text{post}, \mathcal{I}^{(d)}}]$.
\vspace{7.5pt}
\noindent{\em A practical heuristic. }
Here, we provide a complementary approach to computing $\tau(\alpha)$ as used in \cite{squires2020causal}.
To build intuition, observe that $\widehat{\tau}$ represents the remaining spectral energy of $\widehat{\bV}_\text{post}$ not contained within $\text{span}(\widehat{\bV}_\text{pre})$.
Further, we note $\widehat{\tau}$ is trivially bounded by $r_\text{post}$ since the columns of $\widehat{\bV}_\text{post}$ are orthonormal.
Thus, one can fix some fraction $\rho \in (0,1)$ and reject $H_0$ if $\widehat{\tau} \ge \rho \cdot r_\text{post}$.
In words, if more than $\rho$ fraction of the spectral energy of $\widehat{\bV}_\text{post}$ lies outside the span of $\widehat{\bV}_\text{pre}$, then the alternative test rejects $H_0$.
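A sketch of this heuristic is given below (ours; the energy fraction passed as \texttt{rho} is a free parameter, and the default shown is an arbitrary placeholder rather than a recommended value):
\begin{verbatim}
# Heuristic subspace-inclusion test (ours): estimate the pre/post
# right-singular subspaces and reject H0 when more than a fraction
# rho of the post-intervention spectral energy lies outside the
# span of the pre-intervention subspace.
import numpy as np

def subspace_inclusion_test(Y_pre, Y_post, r_pre, r_post, rho=0.05):
    V_pre = np.linalg.svd(Y_pre, full_matrices=False)[2][:r_pre].T
    V_post = np.linalg.svd(Y_post, full_matrices=False)[2][:r_post].T
    resid = V_post - V_pre @ (V_pre.T @ V_post)  # (I - P_Vpre) V_post
    tau_hat = np.linalg.norm(resid, "fro") ** 2
    return tau_hat, tau_hat >= rho * r_post      # (statistic, reject?)
\end{verbatim}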
\end{document}
A key item in the Mk\,IV upgrade of the EVN was the construction
of the EVN Mk\,IV data processor at the Joint Institute for VLBI in Europe (JIVE).
JIVE is hosted by ASTRON in Dwingeloo, the Netherlands, and is funded by
science councils of a number of European countries. Special projects have
been funded directly by the European Commission.
The EVN Mk\,IV data processor
was constructed in the context of the International Advanced Correlator
Consortium through which the other Mk\,IV geodetic correlators
were also built,
with significant contributions from European partners
(Casse \cite{Casse99};
Schilizzi et al.\ \cite{rts01}).
The first fringe on the EVN Mk\,IV data processor was seen on 21 July 1997
and its official inauguration took place on 22 October 1998.
The EVN Mk\,IV data processor now correlates the vast majority of astronomical
EVN experiments, and about half of the
global experiments. Altogether, we have processed 175 user and
116 network/test experiments as of 9 September 2004.
Here, we will concentrate on
areas of interest to the user having data correlated at JIVE:
tools for planning observations with regard
to the data processor's increased capabilities, current operational and
communication flow between JIVE and the PI, and services you can call on
at JIVE to help you get the most out of your observations.
More information about the EVN and JIVE can be found at
the websites
{\tt www.evlbi.org}
and {\tt www.jive.nl}.
\section{Current Capabilities}
The EVN Mk\,IV data processor can correlate
simultaneously up to 16 stations with 16 channels per station,
each having a maximum sampling rate of 32\,Ms/s (thus a total of
1\,Gb/s per station for 2-bit recordings).
The correlator houses 32 Mk\,IV boards.
The principal science drivers behind the development of the data processor
and associated software
include the ability to handle continuum dual-polarization observations,
spectral line experiments,
and phase-reference mapping.
\subsection{Features Snapshot}
\label{sec-feat}
\noindent The EVN Mk\,IV data processor can currently correlate/provide:
\begin{enumerate}
\item[$\bullet$] 1- and 2-bit sampling (all but a handful of experiments use
2-bit sampling).
\item[$\bullet$] Mk\,III, Mk\,IV, VLBA, and Mk5(A) recordings.
\item[$\bullet$] sustained 512\,Mb/s tape recordings
or 1\,Gb/s for Mk5 disk recordings.
\item[$\bullet$] parallel- and cross-hand polarization products as desired in
dual-polarization observations.
\item[$\bullet$] up to 2048 frequency points per interferometer
({\it i.e.,} each baseline/subband/polarization --- see
the discussion following equation~(\ref{corr}) below).
\item[$\bullet$] full-correlator integration times down to 0.25\,s
(half-correlator $t_{\rm int}$ down to 0.125\,s).
\item[$\bullet$] oversampling at 2 or 4 times the Nyquist frequency,
in order to provide subband bandwidths down to
500\,kHz (the maximum Nyquist-sampled $BW_{\rm sb}$ is 16\,MHz).
\item[$\bullet$] multi-pass correlation ({\it e.g.,} for observations
having $>$16 stations at any given time).
\item[$\bullet$] an improved 2-bit van Vleck correction to account for
the statistics of high/low bits for each IF's data stream from
each station.
\end{enumerate}
\noindent Capabilities whose development is still underway or not yet fully
tested
include pulsar gating, speed-up (playback at a bit-rate faster than
used in recording), and phase-cal extraction.
Capabilities that are yet to come include sub-netting (although
we can manually handle the set-ups
for reasonably straightforward instances)
and recirculation
(achieving greater equivalent correlator
capacity, through a time-sharing scheme, for observations that don't use
the maximum bandwidth per subband).
\subsection{Correlator Capacity}
\label{sec-corr}
The total correlator capacity can be expressed as:
\begin{equation}
N^2_{\rm sta} \cdot N_{\rm sb} \cdot N_{\rm pol} \cdot N_{\rm frq} \le 131072
\label{corr}
\end{equation}
Here, $N_{\rm frq}$ is the number of frequency points per
interferometer (baseline/subband/polarization).
$N_{\rm pol}$ is the number of polarizations
in the correlation (1, 2, or 4).
$N_{\rm sb}$ represents the number of different
subbands, counting lower- and upper-sidebands from
the same BBC as distinct subbands.
The value to use for $N_{\rm sta}$
is ``granular" in multiples of 4: for example, if you have 5--8 stations,
use ``8". Independent of this equation, the maximum number of
input channels ($N_{\rm sb}\cdot
N_{{\rm pol}_\parallel})$ is 16, and the
maximum $N_{\rm frq}$ is 2048 (a single interferometer must
fit onto a single correlator board). The minimum $N_{\rm frq}$ is 16.
On a more technical note,
all capabilities discussed in this report assume the use of local
validity, which avoids problems arising from Mk\,IV-format data-replacement
headers correlating against each other in
certain baseline-source geometries, but does so at the expense of a
factor of two in $N_{\rm frq}$.
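As an illustration, the following Python snippet (ours) checks a proposed configuration against equation~(\ref{corr}) and the channel and frequency-point limits just described; the mapping from $N_{\rm pol}$ to the number of parallel-hand input channels is our simplifying assumption:
\begin{verbatim}
# Illustrative configuration check (ours) against equation (1) and
# the accompanying constraints (local validity).

def config_ok(n_sta, n_sb, n_pol, n_frq):
    granular_sta = 4 * ((n_sta + 3) // 4)    # round stations up to 4
    n_pol_parallel = min(n_pol, 2)           # assumed parallel hands
    return (granular_sta**2 * n_sb * n_pol * n_frq <= 131072
            and n_sb * n_pol_parallel <= 16  # max input channels
            and 16 <= n_frq <= 2048)         # per-interferometer limits

print(config_ok(16, 8, 4, 16))    # global cross-polarization: True
print(config_ok(9, 1, 1, 2048))   # 9th station forces N_frq down: False
\end{verbatim}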
\begin{table}[h]
\begin{tabular}{|r|r|r|r|l|}
\hline
$N_{\rm sta}$ & $N_{\rm sb}$ & $N_{\rm pol}$ & $N_{\rm frq}$ & comment \\ \hline\hline
8 & 1 & 1 & 2048 & EVN spectral-line\\ \hline
9 & 1 & 1 & 512 & \quad 9$^{\rm th}$ sta: $N_{\rm frq} \rightarrow N_{\rm frq}/4$ \\ \hline\hline
16 & 8 & 4 & 16 & global cross-polarization \\ \hline
16 & 2 & 2 & 128 & \quad re-arrange $\{N_{\rm sb}, N_{\rm pol}, N_{\rm frq}\}$ \\ \hline\hline
8 & 16 & 1 & 128 & How an $N_{\rm sta}$ increase can be\\ \hline
12 & 7 & 1 & 128 & \quad absorbed by $N_{\rm sb}$\\ \hline
16 & 4 & 1 & 128 & \quad (not constrained to be $2^n$)\\ \hline
\end{tabular}
\caption{Examples of ``maximal'' correlator configurations
(local validity).}
\label{tab-corr}
\end{table}
\subsection{Output Capacity}
\label{sec-out}
The minimum $t_{\rm int}$
for a configuration
using the whole correlator is now $1/4$\,s; configurations that use no more
than one-half of the correlator can achieve a minimum $t_{\rm int}$
of $1/8$\,s.
In the near future, the development of the Post-Correlator Integrator (PCI) aims
to provide a minimum $t_{\rm int}$
for the whole correlator of $1/64$\,s.
These low integration times, together with
the fine spectral resolution afforded by large $N_{\rm frq}$,
will provide the possibility to map considerably
wider fields of view through reduced bandwidth- and time-smearing
effects in the {\it u-v} plane
(see, {\it e.g.}, Bridle \& Schwab \cite{BrSch89}; Wrobel \cite{wrob95}, \S\,21.7.5).
For example, the fields of view having $\le10\%$ decrease in the response
to a point source arising from each of these two effects are:
\begin{equation}
FoV_{\rm BW} \stackrel{{\textstyle <}}{\sim} 49.\!''5\,\frac{1}{B}\,
\frac{N_{\rm frq}}{BW_{\rm sb}}; \quad
FoV_{\rm time} \stackrel{{\textstyle <}}{\sim}
18.\!''56 \, \frac{\lambda}{B}\,\frac{1}{t_{\rm int}} \label{fovarcsec}
\end{equation}
Here, $B$ is the longest baseline length in units of 1000\,km, $\lambda$ is in
cm, and $BW_{\rm sb}$ is in MHz. A primary goal of such wide-field
correlations would be to map the entire primary beam of each antenna composing
the array with only a single correlation pass.
With our existing $N_{\rm frq}$ and
$t_{\rm int}$ capabilities, we can already achieve this for a variety
of observing configurations, with time-smearing usually the limiting
factor. More details can be found in
{\tt www.evlbi.org/user\_guide/limit.html}.
Of course, one potential drawback to such wide-field correlations, and the short
$t_{\rm int}$ they require, is the rapid growth
of the size of the FITS file seen by the user --- reaching
about 7\,GB per hour of observation at our current maximum output rate.
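To give a feel for the numbers, the following snippet (ours) evaluates equation~(\ref{fovarcsec}); the example values are ours, not a recommended set-up:
\begin{verbatim}
# Field-of-view limits (ours) from equation (2); B in units of
# 1000 km, lambda in cm, BW_sb in MHz, t_int in s; results in arcsec.

def fov_bandwidth(B, n_frq, bw_sb_mhz):
    return 49.5 / B * n_frq / bw_sb_mhz

def fov_time(B, lam_cm, t_int_s):
    return 18.56 * lam_cm / B / t_int_s

# 18-cm example on a 2000-km baseline with 16-MHz subbands:
print(fov_bandwidth(B=2.0, n_frq=2048, bw_sb_mhz=16.0))  # ~3168 arcsec
print(fov_time(B=2.0, lam_cm=18.0, t_int_s=0.25))        # ~668 arcsec
\end{verbatim}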
\section{Operational Overview}
Since the previous EVN Symposium, a re-organization at JIVE has brought
the Correlator Science Operations and the EVN Support Group together
to form the Science Operations \& Support Group.
From the PI's viewpoint, this new structure should present better-integrated
assistance for all segments of your experiment --- from
proposing/scheduling through correlation to analysis of the resulting FITS data.
We are also continually upgrading our visitor computing resources,
and the domain of European PIs eligible for financial support for visiting
JIVE (or other EVN institutes) has been broadened as of February 2004.
Both the JIVE and EVN web pages have been revised to improve
the ease of navigation and the mutual cross-linking. The EVN Users' Guide
on the EVN web page ({\tt www.evlbi.org/user\_guide.html}) remains
the best ``first stop'' for on-line help. It has direct links to:
\begin{enumerate}
\item[$\bullet$] tasks encountered in conducting EVN experiments
(proposing, scheduling, correlating, analyzing).
\item[$\bullet$] the EVN Data Archive (see \S\,\ref{postcorr}).
\item[$\bullet$] travel support information.
\item[$\bullet$] handy tools \& documents (the EVN calculator,
the EVN data analysis guide).
\item[$\bullet$] EVN facts \& figures (frequency/{\it u-v} coverage, resolution,
baseline/image sensitivity, imaging limitations).
\end{enumerate}
\subsection{Pre-observation, Pre-correlation}
Figure~\ref{opflow} summarizes operational and communication flow among
the PI, JIVE, and various EVN assets during an EVN astronomical experiment.
The first step is creation of the experiment schedule. We
actively encourage the PI
to consult with the Science Operations \& Support Group
at JIVE during scheduling, in order to help side-step the
myriad little pitfalls that may lead to unpleasant surprises when (and after)
the observations are carried out.
Following the observation but prior to the correlation itself beginning,
we confer with the PI to
make sure the correlation parameters
({\it e.g.,} $N_{\rm pol}$, $N_{\rm frq}$, $t_{\rm int}$)
are appropriate and to discover any desired changes ({\it e.g.,} improved
source coordinates). Each experiment is assigned a support scientist,
who shepherds it through the correlation and post-correlation analysis
stages discussed below.
\begin{figure}[ht]
\centering
\vspace{4in}
\special{psfile=RCampbell_fig1.eps hscale=48 vscale=48 hoffset=0 voffset=0}
\caption{Operational flow for the observations and correlation of an
experiment (check = item available via JIVE or EVN web pages).}
\label{opflow}
\end{figure}
\subsection{Correlation and Logistics}
We operate the correlator 80 hours per week; this time must also
accommodate system testing/development as well as network tests,
leaving typically 45--60 hours per
week for production correlation.
Incorporation of Mk5 disk-based recordings has been increasing since their
first use in the November 2003 session. We were hoping to see our first disk-only
user experiments in the
Oct/Nov 2004 session --- when we can begin to reap the full efficiency
gains inherent in disk playback (mixed observations of course proceed at
the pace of the slower tapes).
We have enough Mk5A playback units to handle any currently feasible observation,
and maintain sufficient tape playback units to process tapes from NRAO stations
in globals.
In the longer term, when stations upgrade to Mk5B, the possibility exists to
move away from using local validity, effectively doubling the correlator
capacity
as described in equation~(\ref{corr}) and the maximum $N_{\rm frq}$
(the impact on the minimum $t_{\rm int}$
could be more complicated, depending on the stage of PCI development).
\subsection{Post-correlation Data Review}
\label{postcorr}
Our main priority is always the quality of the data we provide to the EVN
users. Our internal data review process, as illustrated in
Figure~\ref{rvwproc},
begins by transforming the lag-based correlator
output into an AIPS++ Measurement Set (MS).
From the MS, the support scientist can investigate slices of the
correlation functions in both time and frequency, allowing us to
detect and diagnose various problems with the recorded data or the correlation
itself, and to find any scans for which re-correlation would be profitable.
We can also make various plots more suited to providing feedback to the
stations rather than to the PI ({\it e.g.,} parity-error rates, sampler
statistics).
We apply various corrections to the correlated data at this stage
({\it e.g.,} the 2-bit van Vleck correction),
and flag subsets of the data for low
weights and other known problems resulting in spurious
correlation amplitudes
and/or phases. Finally, we convert the final MS into FITS format,
usually written to DAT tape.
These FITS files can be read into (classic) AIPS directly using {\tt FITLD}.
At this stage, the support scientist sends e-mail to the PI describing
the correlation and
any points of interest noticed during our data review.
\begin{figure}[ht]
\centering
\vspace{4.5in}
\special{psfile=RCampbell_fig2.eps hscale=46 vscale=46 hoffset=0 voffset=0}
\caption{Post-correlation review process for an experiment.}
\label{rvwproc}
\end{figure}
During the course of the post-correlation review,
we also begin populating the EVN Archive
({\tt www.jive.nl/archive/scripts/listarch.php})
for the experiment.
Feedback from the stations and the diagnostic plots from the MS-based
data review go into the archive immediately to allow the PI to get an idea
about the success of the correlation even before receiving the data.
The standard plots typically comprise automatically generated plots of
weight$(t)$ for all stations
throughout the experiment, station amplitude$(\nu)$ and baseline
amplitude/phase$(\nu)$ for a couple of calibrator scans, and baseline
amplitude/phase$(t)$
for $\sim$90\,min in the vicinity of one of the calibrator scans.
The FITS file(s) also go to the archive, but are kept private for a
one-year proprietary period (see the {\it EVN Data Access Policy} in the
EVN Users' Guide for more details). The PI can arrange through the
support scientist for a password
to download the FITS data directly from the EVN Archive if desired.
We are working towards providing 1\,Gb/s access to the EVN Archive
for the outside world.
Once we receive calibration information from the stations and process
it into {\tt ANTAB} files, the pipelining of the experiment can begin.
The EVN Pipeline is an automated AIPS script that performs the following:
\begin{enumerate}
\item[$\bullet$] flags data known to be invalid ({\it e.g.,} off-source).
\item[$\bullet$] applies an {\it a priori} amplitude calibration using
the $T_{\rm sys}$ and gain curves from the stations.
\item[$\bullet$] fringe-fits sources authorized by the PI.
\item[$\bullet$] makes preliminary CLEAN images of these sources using a fixed
scheme for phase and amplitude self-calibration.
\item[$\bullet$] creates a set of AIPS tables from various stages of the
calibration/fringe-fitting process, which
the PI can later apply directly to the raw data, if desired.
\item[$\bullet$] outputs a variety of intermediary plots ({\it e.g.,} {\tt POSSM}s,
{\tt VPLOT}s, dirty maps).
\end{enumerate}
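Schematically, these steps amount to the outline below (a sketch only:
the helper names are hypothetical placeholders standing in for the
corresponding AIPS tasks, not a real interface):
\begin{verbatim}
def evn_pipeline(uvdata, antab, pi_sources):
    # Hypothetical outline of the EVN pipeline steps.
    flag_a_priori(uvdata)                 # e.g. off-source data
    apply_apriori_amp_cal(uvdata, antab)  # Tsys + gain curves
    tables = []
    for src in pi_sources:                # sources authorized by the PI
        tables.append(fringe_fit(uvdata, src))
        clean_image(uvdata, src, selfcal="fixed phase/amp scheme")
    make_plots(uvdata)                    # POSSM, VPLOT, dirty maps
    return tables                         # for the PI to apply later
\end{verbatim}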
We ask the PI in the pre-correlation consultation how to treat
each of the sources in the experiment. Pipeline results for ``public'' sources
go directly to the EVN Archive. Pipeline results for
all sources
are made public after the proprietary period, along with the FITS files.
The plots made in the course of pipelining provide more
information with which to assess antenna performance. The resulting AIPS
tables help to simplify the initial stages of the analysis. The quality
of the preliminary images may be affected by the lack of interactive
data editing inherent in the pipeline concept.
More details about the EVN pipeline can be found in
{\tt www.evlbi.org/pipeline/user\_expts.html}, including a link to
the original pipeline paper (Reynolds {\it et al}.\ \cite{ppln02}).
The EVN Archive is thus a central location for obtaining the information
you need when reviewing/analyzing your project.
Users can also query the Archive based on source names or coordinate
ranges, among other characteristics, via an interface developed
in Bologna.
In an effort to aid
citation management, we are also considering associating publications
with experiments in the Archive.
Unless contacted by the PI to the contrary,
we aim to release an experiment's tapes/disks two weeks
after we notify the PI of the experiment's completion.
The timely release of media for re-observation is especially important
as the EVN completes
its move to all-Mk5 operation. Disk procurement levels at the stations
require us to ship back media in time to observe again in the
next-following session.
The ability of single disk-packs to cross experiment boundaries adds a
potential complication: a small number of unreleased experiments may
indeed tie up a disproportionately large number of disk packs.
The immediate posting of station feedback and diagnostic plots to the EVN
Archive endeavors to help the PI gain confidence that the correlation went well
and to allay concerns about releasing the observing media.
To supplement the review products mentioned above, we encourage the PI
to discuss the experiment/correlation with the
responsible JIVE support scientist and/or
to arrange a visit to
JIVE for help in data reduction if desired. In order to facilitate such
visits, the eligibility of European PIs for financial support has
been broadened (the bar against EVN-institute affiliation has been dropped) ---
see the ``Access
to the EVN'' portion of the EVN web page for more details:
{\tt www.evlbi.org/access/access.html}.
\section{Summary}
We at JIVE are always busy working to improve the quality of the science you
can achieve in your EVN or global VLBI experiments. Since the previous
EVN Symposium in Bonn, these efforts have seen:
\begin{enumerate}
\item[$\bullet$] New astronomical capability: shorter $t_{\rm int}$
allowing wider-field
mapping; oversampling to $\times$4 allowing narrower subband
bandwidths and hence
finer spectral resolution for a given $N_{\rm frq}$.
\item[$\bullet$] Improved correlated-data quality: a 2-bit van Vleck
correction that takes into account the observed statistics of high/low
bits, allowing more reliable (auto-correlation) bandpass calibration
and more accurate closure amplitudes; FTP fringe tests that permit
faster feedback to the stations, allowing equipment problems to be
detected and repaired while the session is still underway.
\item[$\bullet$] Strengthened PI support: internal re-organization;
routine pipelining; the EVN Archive; web-page redesign.
\end{enumerate}
\begin{acknowledgements}
The European VLBI Network is a joint facility of European, Chinese,
South African and other radio astronomy institutes funded by their
national research councils.
This research was supported by the European Commission's I3 Programme
``RADIONET", under contract No.\ 505818.
\end{acknowledgements}
For solving constrained optimal-control problems, we need to face the following issue: in general, the cost-to-go function of the unconstrained problem is quite different from the one that shapes the constraints. An efficient solution can be achieved by combining the two functions via merging \cite{andrieu2010uniting,grammatico2014control}.
Specifically, the merging function is a \gls{CLF} generated by two parent \glspl{CLF}, and represents an important trade-off between the two parents since, for instance, it may approximate the constraint-shaped function (father function) far from the origin, i.e., where the state constraints may be active, while being similar to the optimal one (mother function) close to the origin. However, there is a major issue in the merging procedure: although any pair of \glspl{CLF} can be successfully merged in dimension two \cite[Th. 1]{grammatico2014control}, this does not hold in higher dimensions. Remarkably, a crucial condition for merging two \glspl{CLF} is the control-sharing property, which is not necessarily satisfied in non-planar systems.
In this paper, we investigate a weaker property, hereby called \emph{partial} control-sharing, by considering a \gls{QCLF}, e.g.\ associated with the optimal \gls{LQR} for the unconstrained system, and a family of linear state constraints. We say that the quadratic function and the constraint functions have the partial control-sharing property if the \gls{QCLF} shares a control law with the constraint functions, provided that the latter are ``active''.
\subsection{Why merging?}
There are several approaches to deal with constrained optimal-control problems.
The most popular one is \gls{MPC} \cite{SznDam87,AllZhe00,GooSerDed06}, possibly in its explicit version \cite{BemMorDuaPis02}. While \gls{MPC} is powerful for discrete-time systems, it can become troublesome for continuous-time systems, as it requires fast sampling, hence long prediction horizons -- issues related to fast sampling can be partially accommodated via sub-optimal control approaches \cite{BlaMiaPel03}.
Perhaps the most popular approach is based on invariant sets and associated Lyapunov functions \cite{Ber72,GutHag85,HuLin01,BoyElgFerBal04,BlaMia15,HuTeeZac06}, where one faces the well-known trade-off between optimality and complexity by choosing among quadratic or non-quadratic functions (see \cite{HuLin01,BlaMia15,HuTeeZac06} for a more complete list of references). In this framework, constrained optimality can be tackled by means of gain-switching \cite{WreBel94}. Specifically, an ``external guard'' control is in charge of keeping the state inside an invariant set (possibly the largest) compatible with the constraints. Next, this control is switched to the locally-optimal gain, as soon as the state reaches the largest constraint-compatible set \cite{GilTan91} of such a local regulator. The problem with this procedure is twofold: the high complexity of the representation of the sets involved and the discontinuity of the control law.
\subsection{Contribution}
In this paper, we aim at solving the constrained control problem with local optimality in continuous time.
After formalizing the problem (\S \ref{sec:preliminaries}), the main contributions are:
\begin{itemize}
\item We provide necessary and sufficient conditions for the partial control-sharing property in the case of a \gls{QCLF}, $x^\top P x$, and a single linear constraint, $|f^\top x| \leq 1$. We provide sufficient conditions
in the case of multiple constraints, $|f_i^\top x| \leq 1$, for $i \in \{1, \ldots, s\}$.
\item We verify the partial control-sharing in the region where $x^\top P x\leq \mu$ and $|f_i^\top x| \leq 1$, for $i \in \{1, \ldots,s\}$, via convex programming (\S \ref{sec:partial_c-s}). By following a bisection procedure, one can find the largest $\mu$ for which the partial control-sharing property holds;
\item We derive the newborn \gls{CLF} by first smoothing the piecewise-quadratic function $\textrm{max}_i \{|f_i^\top x|^2, x^\top P x\}$, and then by merging it with the optimal function, $x^\top P x$, with full control-sharing guarantee (\S \ref{sec:merging}). The resulting \gls{CLF} has a bounded complexity, being generated by the constraints and the optimal function.
\end{itemize}
\section{Problem formulation and preliminaries}\label{sec:preliminaries}
\subsubsection*{Notation}
$\mathbb{R}$, $\mathbb{R}_{> 0}$ and $\mathbb{R}_{\geq 0}$ denote the set of real, positive real, non-negative real numbers, respectively. $\mathbb{N}$ denotes the set of natural numbers.
For any positive (semi)definite function $V:\mathbb{R}^n \rightarrow \mathbb{R}_{\geq 0}$ and $\mu > 0$, the $\mu$-sublevel set is denoted by $\mc{L}_{(V/\mu)} \coloneqq \left\{x \in \mathbb{R}^{n} \mid V(x) \leq \mu\right\}$.
\subsection{An illustrative example}
\begin{figure}[!t]
\centering
\includegraphics[width=.8\columnwidth]{DoubleInt.pdf}
\caption{State behaviour of the pre-stabilized system in \eqref{eq:Ex1}. The blue arrows represent the derivative directions at every point inside the level curves of the associated \gls{QCLF} $V(x)$. The red lines denote the constraint on $x_2$.}
\label{fig:DoubleInt}
\end{figure}
We start the paper with a simple, yet significant, example, to clarify the general problem addressed in the paper.
\smallskip
\begin{example}\label{exa:ex1}Optimal constrained state feedback design.
\begin{equation}\label{eq:Ex1}
\begin{cases}
\dot{x} = \begin{bmatrix}
0 & 1\\
0 & 0
\end{bmatrix} x + \begin{bmatrix}
0\\
1
\end{bmatrix} u, \\ \\
y = \begin{bmatrix}
0 & 2
\end{bmatrix} x.
\end{cases}
\end{equation}
Let us consider the double integrator system in \eqref{eq:Ex1}, with performance output $y$, subject to the linear constraint $\lvert y \rvert \leq 1$. The control input $u$ is preliminarily chosen as an \gls{LQR} optimal feedback gain:
$u_0(x) = - R^{-1} B^\top P x = -(x_1 + \sqrt{2} \, x_2)$, where $P$ solves the classic \gls{ARE} with $Q = \left[\begin{smallmatrix}
1 & 0\\
0 & 0
\end{smallmatrix}\right]$ and $R = 1$.
We refer to this optimal control input as a pre-stabilizing compensator, which may fail when the constraint comes into play. As shown in Fig.~\ref{fig:DoubleInt}, although the trajectories converge to the origin, there is a (symmetric) region close to the red boundaries where the optimal control drives the state outside the constraint.
\hfill$\square$
\end{example}
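The gain quoted in Example~\ref{exa:ex1} is easy to reproduce numerically;
the following sketch (assuming SciPy is available) solves the same
\gls{ARE} and recovers $P$ and $\hat{K} = [\,1 \;\; \sqrt{2}\,]$:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0., 1.], [0., 0.]])   # double integrator
B = np.array([[0.], [1.]])
Q = np.diag([1., 0.])
R = np.array([[1.]])

P = solve_continuous_are(A, B, Q, R)  # [[sqrt(2), 1], [1, sqrt(2)]]
K = np.linalg.solve(R, B.T @ P)       # [[1., 1.41421356]]
print(P, K, sep="\n")
\end{verbatim}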
\smallskip
In view of the previous example, throughout this paper we consider a generic \gls{LTI} system:
\begin{equation}\label{eq:OrigLTI}
\dot{x} = Ax + Bu,
\end{equation}
with state variable $x \in \mathbb{R}^{n}$, control input $u \in \mathbb{R}^m$, $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times m}$. As in Example~\ref{exa:ex1}, we suppose that the system in \eqref{eq:OrigLTI} is subject to linear constraints acting on the output variable. To tackle this problem, we also assume that the control $u$ may be chosen as the sum of two terms:
\begin{enumerate}
\item a pre-stabilizing compensator $u_0(x) = -\hat{K} x$, $\hat{K} \in \mathbb{R}^{m \times n}$, that meets some optimality (local) conditions in absence of constraints;
\item an additional control input $v = v(x) \in \mathbb{R}^m$, suitable to steer the system within the constraints.
\end{enumerate}
We aim at designing the additional control $v$ in order to ``enlarge'' the set of initial states that generates safe trajectories, while preserving local optimality.
\subsection{Merging control Lyapunov functions: Background}
By referring to the linear system in \eqref{eq:OrigLTI}, in the following, we give some useful definitions.
\smallskip
\begin{definition}[Control Lyapunov Function]\label{def:CLF}
A positive definite, radially unbounded, smooth away from zero, function $V:\mathbb{R}^n \rightarrow \mathbb{R}_{\geq 0}$ is a control Lyapunov function (CLF) for \eqref{eq:OrigLTI} if there exists a locally bounded control law $u:\mathbb{R}^n \rightarrow \mathbb{R}^m$ such that, for all $x \in \mathbb{R}^n$, we have:
\begin{equation}\label{eq:CLF}
\nabla V(x) (Ax + Bu(x)) < 0.
\end{equation}
$V$ is a control Lyapunov function with domain $\mc{L}_{(V/\mu)}$, for $\mu > 0$, if \eqref{eq:CLF} holds for all $x \in \mc{L}_{(V/\mu)}$.
Given some $\beta > 0$, the set $\mc{L}_{(V/\mu)}$ is $\beta$-contractive for \eqref{eq:OrigLTI} with control input $u(\cdot)$ if and only if:
\begin{equation*}
\nabla V(x) (Ax + Bu(x)) \leq - \beta V(x),
\end{equation*}
holds for all $x \in \mc{L}_{(V/\mu)}$.
\hfill$\square$
\end{definition}
\smallskip
\smallskip
\begin{definition}[Control-sharing property \textup{\cite[Def. 2]{grammatico2014control}}]
\label{def:C-S}
Two \glspl{CLF} $V_1$ and $V_2$ for \eqref{eq:OrigLTI} have the control-sharing property if there exists a locally bounded control law $u:\mathbb{R}^n \rightarrow \mathbb{R}^m$ such that, for all $x \in \mathbb{R}^n$, the following inequalities are simultaneously satisfied:
\begin{equation*}
\left\{
\begin{aligned}
&\nabla V_1(x) (A x + B u(x)) < 0\\
&\nabla V_2(x) (A x + B u(x)) < 0.
\end{aligned}
\right.
\end{equation*}
\hfill$\square$
\end{definition}
\smallskip
\smallskip
\begin{definition}[Gradient-type merging \textup{\cite[Def. 3]{grammatico2014control}}]\label{def:M_CLF}
Let $V:\mathbb{R}^n \rightarrow \mathbb{R}_{\geq 0}$ be positive definite and smooth away from zero. $V$ is a gradient-type merging candidate if there exist two continuous functions $\gamma_1, \gamma_2 : \mathbb{R}^n \rightarrow \mathbb{R}_{\geq 0}$ such that $(\gamma_1(x), \gamma_2(x)) \neq (0, 0)$ and
\begin{equation*}
\nabla V(x) = \gamma_1(x) \nabla V_1(x) + \gamma_2(x) \nabla V_2(x).
\end{equation*}
$V$ is a gradient-type merging \gls{CLF} if it is also a \gls{CLF}.
\hfill$\square$
\end{definition}
\smallskip
In \cite{grammatico2014control}, a solution to the constrained control problem with local optimality is based on the following steps:
\begin{enumerate}
\item[S1)] \textit{Mother function}: Find the optimal \gls{QCLF}, $x^\top P x$, for the unconstrained system;
\item[S2)] \textit{Father function}: Find a constraint-shaped \gls{CLF}, e.g.\ by computing or approximating the largest controlled-invariant set;
\item[S3)] \textit{Merging}: Derive a \gls{CLF} that is similar to the father close to the constraints and to the mother near the origin.
\end{enumerate}
The third step is critical for two reasons. First, the possibility to merge two functions requires the control-sharing property \cite[Th. 2]{grammatico2014control}.
Unless we are dealing with a planar system, for which any two \glspl{CLF} share a control \cite[Th. 1]{grammatico2014control}, the control-sharing property may not be satisfied.
Second, the high complexity of the maximal invariant set, i.e., the representation of the father function, might be inherited by the final merging function, which complicates the on-line computation of the control inputs. We face both problems by investigating a different condition, namely the partial control-sharing property.
\subsection{Problem formulation: Partial control-sharing}
We consider a region of bounded complexity of representation, which is shaped by the optimal and the constraint functions.
Then, let us consider the following assumption, which guarantees that the Riccati-optimal control, with infinite-horizon quadratic performance cost $J \coloneqq \int_0^\infty \|x\|^2_{Q} + \|u\|^2_{R} \, dt$, where $R \succ 0$ and $Q \succcurlyeq 0$, is stabilizing.
\smallskip
\begin{assumption}\label{ass:1}
The pair $(A, B)$ in \eqref{eq:OrigLTI} is controllable and the pair $(A, Q)$ is observable.
\hfill$\square$
\end{assumption}
\smallskip
We also assume that the state variable is subject to $s$ linear constraints, given by $|f_i^\top x| \leq 1$, for all $i \in \{1, \ldots, s\}$. By rearranging $f_i$ into the matrix $F \coloneqq [f_1, \ldots, f_s]^\top \in \mathbb{R}^{s \times n}$, we characterize the admissible state space as
\begin{equation*}
\mathcal{F} \coloneqq \{x \in \mathbb{R}^n \mid \norm{F x}_{\infty} \leq 1\}.
\end{equation*}
For each constraint, we also introduce the functions $\psi_i:\mathbb{R}^n \rightarrow \mathbb{R}_{\geq 0}$, defined as $\psi_i(x) \coloneqq |f_i^\top x|^2$, so that $\mathcal{F}$ is characterized by the inequality
\begin{equation}\label{consfunct}
\Psi(x) \coloneqq \underset{i \in \{1, \ldots, s\}}{\textrm{max}} \psi_i(x) \leq 1.
\end{equation}
The optimal control gain matrix is $\hat{K} = R^{-1}B^\top P$ where $P \in \mathbb{R}^{n \times n}$ is the
solution of the \gls{ARE}, $A^\top P + PA -PBR^{-1}B^\top P + Q =0$, and $V(x) = x^\top P x$ is the optimal unconstrained cost-to-go function (positive definite in view of Assumption~\ref{ass:1}).
Then, we shape the working region based on $V$ and the constraints, i.e.,
\begin{equation*}
\mathcal{G}_\mu \coloneqq \mathcal{F} \cap \mc{L}_{(V/\mu)}.
\end{equation*}
The following definition limits the requirement of control-sharing
to the states at which the constraints are active.
\smallskip
\begin{definition}[$(\alpha, \beta)$-partial control-sharing property]
Let $\alpha, \beta > 0$ be given.
The functions $V$ and $\Psi$ have the
$(\alpha, \beta)$-partial control-sharing property if there exists a locally-bounded control law $u: \mathbb{R}^{n} \rightarrow \mathbb{R}^m$ such that, for all $x \in \mathcal{G}_\mu$ and all $i$ such that $f_i^\top x = \pm 1$, the following inequalities simultaneously hold:
\begin{equation}\label{partial}
\left\{
\begin{aligned}
&\nabla \psi_i(x) (A x + B u(x)) \leq -\alpha\\
&\nabla V(x) (A x + B u(x)) \leq -\beta \, V(x).\\
\end{aligned}
\right.
\end{equation}\hfill$\square$
\end{definition}
\smallskip
\begin{remark}
We note that, if the partial control-sharing holds, then $\mathcal{G}_\mu$ is a control-invariant set. Regions of this type have been considered as candidate control-invariant sets, see \cite{hu2010non,Ode02}. However, we ask something stronger than control invariance, which only requires that $\dot{\psi}_i(x) < 0$ when the $i$-th constraint is active.
Thus, we require that, with the \emph{same} control input that keeps the state inside the set, we also have $\dot V(x) < 0$ on the boundary. In view of the final merging, this condition will ensure the full control-sharing property between the constraint-shaped function and the optimal one.
\hfill$\square$
\end{remark}
\section{Partial control-sharing conditions}\label{sec:partial_c-s}
\begin{figure}[!t]
\centering
\includegraphics[width=.9\columnwidth]{C_set_complete}
\caption{By referring to Example~\ref{exa:ex1}, the shaded area represents the elliptical convex cone $\mathcal{C}$.}
\label{fig:C_set}
\end{figure}
Without loss of generality, we parametrize the control law as $u(x) = u_0(x) + v(x) = - \hat{K}x + v(x)$. Then, the system in \eqref{eq:OrigLTI} becomes:
\begin{equation}\label{eq:LTI}
\dot{x} = \hat{A} x + B v,
\end{equation}
with $\hat{A} \coloneqq (A - B \hat{K}) \in \mathbb{R}^{n \times n}$. We note that the optimal \gls{QCLF} $V(x)$ satisfies
\begin{equation}\label{eq:negder}
\dot{V}(x) = 2x^\top P \hat{A} x = -x^\top \hat Q x,
\end{equation}
with $\hat Q \coloneqq Q + PBR^{-1}B^\top P \succ 0$.
\subsection{MISO systems: Single state constraint}
First, we consider the case of a single constraint acting on the system in \eqref{eq:LTI}, i.e., $|f^\top x| \leq 1$. Then, let us define the following elliptical convex cone (an instance in Fig.~\ref{fig:C_set})
\begin{equation*}
\mc{C} \coloneqq \left\{ x/\lambda \in \mathbb{R}^n \mid x \in \mc{L}_{(V/\mu)} \cap \partial \mc{F} , \, \lambda > 0 \right\}.
\end{equation*}
Then, we have the following equivalence result.
\smallskip
\begin{theorem}\label{th:Th_MISO}
Let $V(x) = x^\top P x$ satisfy \eqref{eq:negder}, the function $\Psi(x) = \psi(x) = |f^\top x|^2$ be associated with the single constraint, and let $\alpha, \beta, \mu > 0$ be given.
The following statements are equivalent:
\begin{enumerate}
\item[i)] $V$ and $\Psi$ have the $(\alpha,\beta)$-partial control-sharing property on $\mc{G}_\mu$;
\item[ii)] $z^\top (\hat{Q} - \beta P) z - f^\top ( \hat{A} + \tfrac{\alpha}{2} I ) z \geq 0$ for all $z\in \mathcal{C}$, where $2 z^\top P B + f^\top B = 0$.
\hfill$\square$
\end{enumerate}
\end{theorem}
\smallskip
\begin{proof}
\begin{figure}[!t]
\centering
\includegraphics[width=.7\columnwidth]{2DRegion}
\caption{Conditions in \eqref{eq:Hyperplanes1} in the case of two inputs ($m = 2$). The arrows represent the normal vectors to the hyperplanes.}
\label{fig:2DRegion}
\end{figure}
We consider the case $f^\top x =1$ only, as the proof for the symmetric one $f^\top x =-1$ is identical. Let $x \in \mc{G}_\mu$, and $\tilde{\alpha}=\alpha/2$. Then, the following conditions must hold:
\begin{equation*}\label{eq:Hyperplanes}
\left\{
\begin{aligned}
&f^\top (\hat{A}x+B v) \leq -\tilde{\alpha} \\
&2 (x^\top P \hat{A} x + x^\top P B v) \leq - \beta x^\top P x,
\end{aligned}
\right.
\end{equation*}
namely,
\begin{equation}\label{eq:Hyperplanes1}
\left\{
\begin{aligned}
& -f^\top B v \geq f^\top (\hat{A} + \tilde{\alpha} I) x\\
& 2 x^\top P B v \leq x^\top (\hat{Q} - \beta P) x.
\end{aligned} \right.
\end{equation}
These two inequalities can always be simultaneously satisfied by some $v$ if the vectors $-f^\top B$ and $2 x^\top P B$ are not aligned (see Fig.~\ref{fig:2DRegion}, which shows the situation with one constraint and $m = 2$).
Hence, let us focus on the aligned case, i.e., when $2 x^\top P B + \lambda f^\top B = 0$ for some $\lambda > 0$. To guarantee the non-emptiness of the solution set in \eqref{eq:Hyperplanes1}, we must have that:
\begin{equation*}
\begin{aligned}
\text{if} \quad & 2 x^\top P B + \lambda f^\top B = 0 \\
\text{then} \quad & x^\top (\hat{Q}-\beta P) x \geq \lambda f^\top (\hat{A} + \tilde{\alpha} I) x.
\end{aligned}
\end{equation*}
Thus, by dividing the first equality by $\lambda$ and both sides of the second inequality by $\lambda^2$, and introducing the state transformation
$z \coloneqq \left(\frac{x}{\lambda}\right) \in \mathcal{C}$, we obtain the desired condition.
\end{proof}
\smallskip
\begin{remark}
The tolerance $\beta>0$ can be chosen small enough to make $(\hat{Q}-\beta P)$ positive definite\footnote{Precisely, $\beta$ must be smaller than the smallest eigenvalue of $\hat{Q}P^{-1}$.}. Thus, condition ii) in Theorem \ref{th:Th_MISO} can be checked via convex optimization by minimizing $z^\top (\hat{Q} - \beta P) z - f^\top ( \hat{A} + \tilde{\alpha} I ) z$ on the convex domain $\mathcal{C}$ with the linear constraint $2 z^\top P B + f^\top B = 0$.
\hfill$\square$
\end{remark}
\smallskip
For $\mu$ sufficiently small, we surely have feasibility. To enlarge the domain $\mc{G}_\mu$, we can progressively increase the parameter $\mu$
(i.e., consider larger level curves in $\mathcal{L}_{(V/\mu)}$) as long as the condition of the theorem is met, thus guaranteeing the existence of a common control law between $\Psi$ and $V$ with the largest $\mu$.
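For concreteness, the check of condition ii) and the bisection on $\mu$ can be prototyped as below (a minimal sketch, assuming \texttt{cvxpy} and SciPy are available; the data are those of Example~\ref{exa:ex1} with $f = [0,\,2]^\top$, cf.\ Tab.~\ref{tab:ex1_val}). Writing $z = x/\lambda$ with $f^\top x = 1$ shows that membership of $\mathcal{C}$ is the second-order-cone condition $\|P^{1/2} z\|_2 \leq \sqrt{\mu}\, f^\top z$; the branch $f^\top x = -1$ is symmetric, and the bisection bracket is illustrative:
\begin{verbatim}
import numpy as np
import cvxpy as cp
from scipy.linalg import solve_continuous_are, sqrtm

A = np.array([[0., 1.], [0., 0.]]); B = np.array([[0.], [1.]])
Q = np.diag([1., 0.]);              R = np.array([[1.]])
f = np.array([0., 2.]);             alpha, beta = 0.1, 0.05

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
Ahat = A - B @ K
Qhat = Q + P @ B @ np.linalg.solve(R, B.T @ P)
sqrtP = np.real(sqrtm(P))

def min_value(mu):
    # Minimize the quadratic of condition ii) over the aligned
    # directions z in C (branch f'x = +1).
    z = cp.Variable(2)
    obj = cp.quad_form(z, Qhat - beta * P) \
          - (f @ (Ahat + 0.5 * alpha * np.eye(2))) @ z
    cons = [cp.norm(sqrtP @ z) <= np.sqrt(mu) * (f @ z),  # z in C
            2 * (B.T @ P) @ z + B.T @ f == 0]             # alignment
    prob = cp.Problem(cp.Minimize(obj), cons)
    prob.solve()
    return prob.value   # condition ii) holds iff this is >= 0

lo, hi = 0.1, 50.0      # illustrative bracket for mu
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if min_value(mid) >= 0 else (lo, mid)
print("largest feasible mu ~", lo)
\end{verbatim}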
\begin{table}[!t]
\caption{Optimal value of $z^\top (\hat{Q} - \beta P) z - f^\top ( \hat{A} + \tilde{\alpha} I ) z$ for Example~\ref{exa:ex1}, with different parameter values.}
\label{tab:ex1_val}
\begin{center}
\begin{tabular}{cccccccc}
\toprule
\multicolumn{1}{c}{$\alpha$} & \multicolumn{1}{c}{$\beta$}& \multicolumn{6}{c}{\centering $\mu$} \\
\cmidrule{3-8} & & \multicolumn{1}{c}{$0.5$} & \multicolumn{1}{c}{$0.75$} & \multicolumn{1}{c}{$1$} & \multicolumn{1}{c}{$1.5$} & \multicolumn{1}{c}{$3$} & \multicolumn{1}{c}{$10$} \\ \midrule
\multirow{4}{*}{$0.001$} & $0.001$ & {43.57} & {11.22} & {6.43} & {3.62} & {1.78} & {0.71}\\
\cmidrule{2-8} & $0.05$ & {41.96} & {10.76} & {6.13} & {3.42} & {1.64} & {0.61}\\
\cmidrule{2-8} & $0.2$ & {37.15} & {9.36} & {5.24} & {2.83} & {1.24} & {0.33}\\
\midrule
\multirow{4}{*}{$0.1$} & $0.001$ & {43.17} & {11.04} & {6.30} & {3.54} & {1.73} & {0.69}\\
\cmidrule{2-8} & $0.05$ & {41.56} & {10.58} & {6.01} & {3.34} & {1.59} & {0.59}\\
\cmidrule{2-8} & $0.2$ & {36.75} & {9.18} & {5.12} & {2.75} & {1.19} & {0.30}\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\begin{continuance}{exa:ex1}
By applying the conditions in \eqref{eq:Hyperplanes1} to $V(x)$ and $\psi(x) = 4 x_2^2$ we obtain:
\begin{equation*}
\left\{
\begin{aligned}
&-2 v \geq -2 [x_1 + (\sqrt{2} - \tilde{\alpha}) x_2]\\
&2(x_1 + \sqrt{2} x_2) v \leq (1-\beta\sqrt{2})(x_1^2 + x_2^2) + (\sqrt{2} - 2\beta) x_1 x_2,
\end{aligned}
\right.
\end{equation*}
Thus, by introducing $\lambda > 0$ and following the same steps as in the proof of Theorem~\ref{th:Th_MISO}, for $z \in \mathcal{C}$, if $z_1 + \sqrt{2} z_2 = - 1$, we must have
\begin{equation*}
(1 - \beta \sqrt{2}) (z_1^2 + z_2^2) + (\sqrt{2} - 2 \beta) z_1 z_2 \geq - 2[ z_1 + (\sqrt{2} - \tilde{\alpha}) z_2].
\end{equation*}
As summarized in Tab.~\ref{tab:ex1_val}, with small $\alpha$ and $\beta$, the latter condition is satisfied also for large values of $\mu$, guaranteeing the $(\alpha,\beta)$-partial control-sharing property between $\Psi$ and $V$ on $\mc{G}_\mu$.
\hfill$\square$
\end{continuance}
\subsection{MIMO systems: Multiple constraints}
Let us now consider the general case involving several state constraints. We must have that, whenever a set of constraints is active, i.e., $\psi_i(x) = 1$, the corresponding derivatives $\dot{\psi}_i$ and $\dot{V}$ must be simultaneously negative under the same control $v$.
Specifically, given any set of indices $K$, $H$ that denote active constraints, the $(\alpha,\beta)$-partial control-sharing property shall be ensured on each set:
\begin{multline*}
\mc{A}_{K,H} \coloneqq \left\{x \in \mc{L}_{(V/\mu)} \mid f_k^\top x = 1,\; f_h^\top x = -1, \right.\\
\left. \textup{ for all } (k, h) \in K \times H \right\},
\end{multline*}
Let us restrict our investigation to the case in which all the active constraints are equal to $1$; the other cases can be addressed by replacing $f_i$ by $-f_i$.
We call $\mc{A}$ the set of states where all $s$ constraints are active.
Before stating a sufficient condition for the partial control-sharing in MIMO systems, let us introduce the following set:
\begin{multline*}
\mc{V} \coloneqq \left\{ v \in \mathbb{R}^m \mid f_i^\top B v \leq - f_i^\top (\hat{A} + \tfrac{\alpha}{2} I) x, \right. \\
\left. \textup{ for all } (x, i) \in \mc{A} \times \{1, \ldots,s\} \right\}.
\end{multline*}
\smallskip
\begin{theorem}\label{th:Th_MIMO}
Under the same assumptions of Theorem~\ref{th:Th_MISO}, the functions $V$ and $\Psi$ have the $(\alpha , \beta )$-partial control-sharing property
if, for any set $\mc{A}_{K,H}$, it holds:
\begin{equation}\label{eq:Th_MIMO}
\underset{v \in \mc{V}}{\textrm{min}} \; \underset{x \in \mc{A}_{K,H}}{\textrm{max}} \; x^\top (\beta P - \hat{Q}) x + 2 x^\top P B v \leq 0.
\end{equation}
\hfill$\square$
\end{theorem}
\smallskip
\begin{proof}
By construction, for any choice of active constraints in $K$ and $H$, when $x \in \mc{A}_{K,H}$, the conditions on the derivatives $\dot \psi_i(x) \leq -\alpha$ are satisfied for any $v \in \mc{V}$. Thus, the only concern refers to $V$. To ensure $\dot{V} < 0$, we must have $v \in \mc{V}$ such that
\begin{equation*}
2 x^\top P \hat A x + 2 x^\top P B v \leq -\beta x^\top P x,
\end{equation*}
which can be written as \eqref{eq:Th_MIMO}.
\end{proof}
\smallskip
\begin{figure}[!t]
\centering
\includegraphics[width=.7\columnwidth]{VertReg}
\caption{Feasible set of the two dimensional \gls{LP} problem $\Phi$.}
\label{fig:VertReg}
\end{figure}
Here, $\beta$ shall be small enough to make $(\beta P - \hat{Q})$ negative definite.
For computational purposes, we may bound $v$ as $\| v \|_\infty \leq M$, with large $M$, and define the new set $\bar{\mc{V}}$ as
\begin{equation*}
\bar{\mc{V}} = \{v \in \mathbb{R}^m \mid \|v\|_\infty \leq M\} \cap \mc{V}.
\end{equation*}
In that case, in view of \cite[Cor.~37.3.2]{rockafellar2009variational}, since $\bar{\mc{V}}$ and $\mc{A}_{K,H}$ are two compact and convex sets and the function in \eqref{eq:Th_MIMO} is concave in $x$ and convex in $v$, we can exchange ``\textrm{min}'' and ``\textrm{max}''. Moreover,
$$\Phi(x) \coloneqq \underset{v \in \bar{\mc{V}}}{\textrm{min}} \; x^\top (\beta P - \hat{Q}) x + 2 x^\top P B v$$
is an \gls{LP} problem on the compact set $\bar{\mc{V}}$. Then, if the feasible set is non-empty, an optimal solution does exist, and at least one of these belongs to the set of vertices of the feasible region, namely $\textrm{Vert}(\bar{\mc{V}})$, as illustrated in Fig.~\ref{fig:VertReg}. Thus, we obtain that
\begin{equation*}\label{eq:Th_MIMO_minfirst}
\underset{x \in \mc{A}_{K,H}}{\textrm{max}} \overbrace{\underset{v \in \textrm{Vert}(\bar{\mc{V}})}{\textrm{min}} \; x^\top (\beta P - \hat{Q}) x + 2 x^\top P B v}^{\eqqcolon \Phi(x)} \leq 0,
\end{equation*}
where $\Phi(x)$ is a concave function in $x$. As in the MISO case, the associated condition can be checked via convex optimization.
\hfill$\square$
\section{Application: smoothing and merging constraint and control-Lyapunov functions}\label{sec:merging}
In this section, we consider the problem of shaping a \gls{CLF}
starting from an optimal \gls{QCLF} and some constraint functions.
We first construct an intermediate function from $V$, suitably scaled by some $\mu$ that ensures partial control-sharing, and the constraint functions $\{ \psi_i \}_{i=1}^{s}$. Then, after a smoothing procedure, we obtain a new \gls{CLF} that has the full control-sharing property with the optimal $V$.
\subsection{A smoothing method}
If there exists a control law such that $V$ and $\{\psi_i\}_{i = 1}^{s}$ simultaneously decrease along the solution to the system in \eqref{eq:LTI}, we can consider the following piecewise-quadratic candidate \gls{CLF}:
\begin{equation}\label{eq:PiecQuadLyapFun}
\hat{V}(x) \coloneqq \underset{i \in \{1, \ldots, s\}}{\textrm{max}}\left\{V(x), \psi_i(x)\right\}.
\end{equation}
Since $\hat{V}$ is not a differentiable function, let us introduce the smoothed function, for some parameter $p \in \mathbb{N}$,
\begin{equation}\label{eq:SmoothedV}
V_p(x) \coloneqq \sqrt[p]{ \textstyle V^p(x) + \sum_{i = 1}^{s} \psi^{p}_i(x)}.
\end{equation}
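As a quick numerical illustration of \eqref{eq:SmoothedV} (a minimal sketch
using the data of Example~\ref{exa:ex1}, i.e., $f = [0,\,2]^\top$), the
smoothed value approaches the pointwise maximum in
\eqref{eq:PiecQuadLyapFun} from above as $p$ grows:
\begin{verbatim}
import numpy as np

P = np.array([[np.sqrt(2.), 1.], [1., np.sqrt(2.)]])
F = np.array([[0., 2.]])                 # rows f_i

def V(x):    return x @ P @ x
def Vhat(x): return max(V(x), np.max((F @ x)**2))
def Vp(x, p):
    return (V(x)**p + np.sum((F @ x)**(2 * p)))**(1.0 / p)

x = np.array([0.3, 0.4])
for p in (1, 2, 4, 30):
    print(p, Vp(x, p))   # decreases towards Vhat(x) = 0.64
print("max:", Vhat(x))
\end{verbatim}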
In the following result, we show that for $p$ large enough, the function $V_p$ is a $\beta$-contractive \gls{CLF}.
\smallskip
\begin{proposition}\label{prop:betaEx}
Assume that $\hat{V}(x)$ is a $\beta$-contractive \gls{CLF} for \eqref{eq:LTI} with control law $v$. Then, there exist $\bar{p} \in \mathbb{N}$ and $\beta_p > 0$ such that, for all $p \geq \bar{p}$, $V^p_p(x)$ is a \gls{CLF} for \eqref{eq:LTI} with the same control law $v$.
\hfill$\square$
\end{proposition}
\smallskip
\begin{proof}
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\columnwidth]{AngularHatV}
\caption{Angular outline of $\hat{V}(x)$ (coloured level curves) inside the region bounded by the constraints (red lines) and $\partial\mathcal{L}_{V}$ (black dashed line).}
\label{fig:AngularHatV}
\end{figure}
Since $\hat{V}(x)$ is a piecewise quadratic candidate \gls{CLF}, there exists some $\hat{\beta} > 0$ such that $D^+ \hat{V}(x) \leq - \hat{\beta} \, \hat{V}(x)$, where $D^+$ denotes the upper-right Dini derivative. Then, let us define the Euler Auxiliary System (EAS) $x^+ \coloneqq x + \tau (\hat{A} x + B v)$, with $\tau > 0$ small enough. In view of \cite[Lemma 4.1]{blanchini1995nonquadratic}, there exists $\hat{\rho} \in \left[0, 1\right)$ such that, for the EAS, we have
$\hat{V}(x^+) \leq \hat{\rho} \, \hat{V}(x)$.
Without any restriction, the latter allows us to consider an angular region that is bounded by the constraints and the \gls{QCLF} (the coloured level curves in Fig.~\ref{fig:AngularHatV}).
Moreover, it follows from \cite[Th. 3.2]{blanchini1999new} that, for $V_p(x)$ in \eqref{eq:SmoothedV}, there exists some $\bar{p} \in \mathbb{N}$ and $\tilde{\rho} \in \left[0, 1\right)$ such that, for $p \geq \bar{p}$, $V_p(x^+) \leq \tilde{\rho} \, V_p(x)$.
Introducing two scale factors $\xi_i \in \left[0, 1\right)$, $i = 1, 2$, the idea is to enclose two level surfaces between the original bounded region $\mc{G}_{\mu}$ and the angular region previously introduced. As $p$ grows, such level curves approach the boundaries within which they are confined. Hence, the following chain of inequalities holds:
\begin{equation*}
\hat{V}(x) \leq V_p(x) \leq \frac{1}{\xi_1} V_p(x) \leq \frac{1}{\xi_2} \hat{V}(x),
\end{equation*}
which leads to
$\partial\mathcal{L}_{\hat{V}} \supset \partial\mathcal{L}_{V_p} \supset \xi_1 \, \partial\mathcal{L}_{V_p} \supset \xi_2 \, \partial\mathcal{L}_{\hat{V}}$.
Then, the function $V_p(x)$ is $\rho_p$-contractive, with $\rho_p \coloneqq \hat{\rho}/\xi_2$, so $V_p(x^+) \leq \rho_p V_p(x)$. Directly from \cite[Lemma 4.2]{blanchini1999new}, with $v$, as $p \rightarrow \infty$, there exists a coefficient of contractivity $\beta_p \coloneqq (1 - \rho_p)/\tau$ such that $D^+ V_p(x) \leq - \beta_p V_p(x)$.
Consequently, since $V_p(x)$ is a positively homogeneous function, we have
$D^+ V^p_p(x) \leq - \beta_p \, p \, V^p_p(x)$, as desired.
\end{proof}
\smallskip
\smallskip
\begin{proposition}\label{prop:sharing}
Let $V$ and $\Psi$ have the $(\alpha,\beta)$-partial control-sharing property. Then, for any $p \geq \bar{p}$, the functions $V^p_p$ and $V$ have the full control-sharing property.
\hfill$\square$
\end{proposition}
\smallskip
\smallskip
\begin{proof}
By noticing that, if the optimal $V$ and the constraints have the $(\alpha,\beta)$-partial control-sharing property, the control law $v$ in Prop.~\ref{prop:betaEx} can be taken in such a way that $\dot V(x) \leq - \beta V(x)$, the proof directly follows from the results of the previous section.
\end{proof}
\subsection{A gradient-type merging: R-composition}
Once we have guaranteed the full control-sharing property between $V^p_p$ and $V$, we are in a position to achieve a successful merging. Next, we briefly recall the R-composition as a possible approach to merge two \glspl{CLF}, see \cite{balestrino2012new, balestrino2012multivariable,grammatico2013universal} for technical details. To obtain a merging function $V_{\wedge}$ that looks like $V$ close to the origin (locally optimal) and like the smoothed $V^p_p$ close to the constraints, the R-composition consists of the following steps:
\begin{itemize}
\item[R1)] Define $R_1, R_2: \mathbb{R}^n \rightarrow \mathbb{R}$, as $R_1(x) \coloneqq 1 - V_p^p(x)$ and $R_2(x) \coloneqq 1 - V(x)$;
\item[R2)] Fix $\phi > 0$, define the function $R_{\wedge} : \mathbb{R}^n \rightarrow \mathbb{R}$ (omitting the dependence on $\phi$) as
\begin{equation*}
R_{\wedge}(x) \!\coloneqq \!\rho(\phi) \!\left(\phi R_1(x) \!+\! R_2(x) \!-\! \sqrt{\phi^2 R_1^2(x) \!+\! R_2^2(x)} \right)
\end{equation*}
where $\rho(\phi) \coloneqq \left(\phi + 1 - \sqrt{\phi^2 + 1}\right)^{-1}$ is a normalization factor;
\item[R3)] Define the R-composition, $V_{\wedge} : \mathbb{R}^n \rightarrow \mathbb{R}_{\geq 0}$, as
\begin{equation*}
V_{\wedge}(x) \coloneqq 1 - R_{\wedge}(x).
\end{equation*}
\end{itemize}
By computing the gradient $\nabla V_{\wedge}(x)$, it turns out from \cite[Prop.~5]{grammatico2014control} that $V_{\wedge}$ is a gradient-type merging candidate and can be used as a candidate \gls{CLF}.
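Steps R1)--R3) translate directly into code; the sketch below is a minimal
transcription (the finite-difference gradient is only a prototyping
convenience, as the gradient of $V_{\wedge}$ can also be computed in
closed form):
\begin{verbatim}
import numpy as np

def r_composition(V1, V2, phi):
    # V1 = V_p^p (father), V2 = V (mother); steps R1)-R3).
    rho = 1.0 / (phi + 1.0 - np.sqrt(phi**2 + 1.0))
    def V_merge(x):
        R1 = 1.0 - V1(x)
        R2 = 1.0 - V2(x)
        Rw = rho * (phi*R1 + R2 - np.sqrt(phi**2 * R1**2 + R2**2))
        return 1.0 - Rw      # V_merge(0) = 0, since R1 = R2 = 1
    return V_merge

def num_grad(fun, x, h=1e-6):
    # Central finite differences, for prototyping v = -B' grad V.
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float); e[i] = h
        g[i] = (fun(x + e) - fun(x - e)) / (2.0 * h)
    return g
\end{verbatim}
With $V_1 = V_p^p$ and $V_2 = V$, \texttt{r\_composition} returns a
callable whose (numerical) gradient can drive the gradient-based
controller used in the continuance of Example~\ref{exa:ex1} below.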
\smallskip
\begin{table}[!t]
\caption{Performance index $J$ for different values of $p$.}
\label{tab:ex1_Jval}
\begin{center}
\begin{tabular}{ccccc}
\toprule
$p$ & $1$ & $2$ & $4$ & $30$\\
\midrule
$J$ & {82.95} & {24.95} & {23.13} & {27.17}\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\begin{continuance}{exa:ex1}
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\columnwidth]{comparison_p}
\caption{State behaviour (solid lines) of the system in \eqref{eq:Ex1} with the gradient-type controller, for different values of $p$. The dashed-dotted lines correspond to $V_{\wedge}(x) = \mu$, with $\mu = 1.4$, $\alpha = \beta = 0.1$, $\phi = 10$ and $x(0) = [-1.05, -0.1]^\top$.}
\label{fig:comparison_p}
\end{figure}
Finally, we show an example of the correction made by the gradient-based controller $v = - b^\top \nabla V_{\wedge}(x)$, with $V_{\wedge}$ obtained via the smoothing procedure and R-composition for different values of $p$. In Fig.~\ref{fig:comparison_p}, we show the controlled state trajectories, where the additional control input $v$ forces the state to remain inside the feasible region, $\mc{L}_{(V/\mu)}$; the corresponding values of the performance index $J$ are reported in Tab.~\ref{tab:ex1_Jval}.
\hfill$\square$
\end{continuance}
\section{Conclusion and outlook}
Merging constraint functions and (locally) optimal control Lyapunov functions is key to design low-complexity (sub-) optimal control for constrained linear systems.
Partial control-sharing is a promising approach for merging constraint and control-Lyapunov functions, under mild assumptions that can be checked via convex optimization.
Future research will investigate necessary and sufficient conditions for partial control-sharing in the presence of multiple state constraints. Control input constraints shall be considered as well. We shall also investigate sub-optimality bounds of certain merging procedures.
\balance
\bibliographystyle{IEEEtran}
\medskip
From the WMAP polarization results (Kogut \etal 2003), population III stars
likely existed at z$\simeq$20. These massive stars (\cge 250\Mo) are expected to
produce a large population of black holes (BH; \Mbh\cge 150\Mo; Madau \& Rees
2001). Since there is now good dynamical evidence for the existence of
supermassive (\Mbh$\simeq$10$^{6}$--10$^{9}$\Mo) black holes (SMBH's) in the
centers of galaxies at z$\simeq$0 (Kormendy \& Richstone 1995; Magorrian,
Tremaine, \& Richstone 1998; Kormendy \& Gebhardt 2001), it is important to
understand how the SMBH's seen at z$\simeq$0 have grown from lower mass BH's
at z$\simeq$20. A comprehensive review of SMBH's is given by Ferrarese \& Ford
(2004). One suggestion is that they ``grow'' through repeated mergers of
galaxies which contain less massive BH's, so the byproduct is a larger single
galaxy with a more massive BH in its center. The growth of this BH may then be
observed via its AGN activity. If this scenario is valid, there may be an
observable link between galaxy mergers and increased AGN activity (Silk \& Rees
1998). Therefore, studying this link as a function of redshift could give
insight into the growth of SMBH's and its relation to the process of galaxy
assembly.
Recent numerical simulations addressed some long-standing issues in the
dissipational collapse scenario by including previously-neglected energetic
feedback from central SMBH's during the merging events (\eg, Robertson \etal
2005). They emphasize the relationship between the central BH mass and the
stellar velocity dispersion, which confirms the link between the growth of
BH's and their host galaxies (di Matteo \etal 2005; Springel \etal 2005ab).
The present study provides observational support for these models at
cosmological redshifts.
\section{The Hubble Ultra Deep Field data} \label{data}
The Hubble Ultra Deep Field (HUDF; Beckwith \etal 2005) is the deepest optical
image of a slice of the Universe ever observed. It consists of 400 orbits with
the \HST\ Advanced Camera for Surveys (ACS) observed over a period of four months
in four optical bands ($BVi'z'$). These are supplemented in the $JH$-bands with
the Near-Infrared Camera and Multi-Object Spectrograph (NICMOS; Bouwens \etal
2004). The HUDF reaches $\sim$1.0 mag deeper in $B$ and $V$ and $\sim$1.5 mag
deeper in \iAB\ and \zAB\ than the equivalent filters in the Hubble Deep Field
(HDF, Williams \etal 1996).
A large number of galaxies in the HUDF appear dynamically unrelaxed, which
suggests they must play an important role in the overall picture of galaxy
formation and evolution. In particular, we notice many galaxies with a
knot-plus-tail morphology, which constitute a well-defined subset of the
irregular and peculiar objects in the HUDF that is uniquely measurable.
According to di Matteo \etal (2005), this morphology appears to represent an
\emph{early} stage in the merging of two nearly-equal mass galaxies. They are
mostly linear structures, resembling the ``chain'' galaxies first reported by
Cowie, Hu, \& Songaila (1995). When more than two clumps come together, these
objects may be more akin to the luminous diffuse objects and clump clusters
(Conselice \etal 2004; Elmegreen, Elmegreen, \& Sheets 2004; Elmegreen,
Elmegreen, \& Hirst 2004), or other types of irregular objects (Driver \etal
1995; van den Bergh 2002).
Since the HUDF data was observed over a period of four months, it also provides
a unique opportunity to search for variability in all types of objects to very
faint flux levels, such as faint stars, distant supernovae (SNe), and weak
active galactic nuclei (AGN). From all objects detected in the HUDF, we
therefore selected the subset of tadpole galaxies and variable objects, and
analyzed their properties in the \iAB-band, where the HUDF images are deepest
and have the best temporal spacing over four months. Yan \& Windhorst (2004b)
discuss how the \iAB-selection results in a small bias against objects at z\cge
5.5 in the high redshift tail of the redshift distribution. However, tadpole
galaxies at z$\simeq$5.5 \emph{do} exist (\eg, Rhoads \etal 2005). Since most
HUDF objects have z\cge 1.5, the \iAB-band images sample the rest-frame UV,
where AGN are known to show more variability (Paltani \& Courvoisier 1994).
To address whether supermassive black-hole growth kept pace with galaxy
assembly, we will present in this paper the redshift distribution of both
tadpole galaxies and weak variable AGN in the HUDF, and compare these with the
redshift distribution of the general field galaxy population.
\section{Tadpoles as Proxy to Galaxy Assembly} \label{sampleselection}
The steps to select galaxies with the characteristic ``tadpole'' shape are
described in Straughn \etal (2006). In short, objects of interest have a
bright ``knot'' at one end and an extended ``tail'' at the other. Two
different source catalogs were made to \iAB=28.0 mag using \texttt{SExtractor}
(Bertin \& Arnouts 1996): a highly deblended catalog containing many point-like
sources, including the knots of potential tadpole galaxies, and a
low-deblending catalog containing extended sources, including the tadpole's
tails.
First, the knots of the tadpole galaxies were selected by setting an axis-ratio
limit. ``Knots'' were defined from the highly deblended catalog with an axis
ratio rounder than some critical value ($b/a$$>$0.70). ``Tails'' are elongated
objects selected from the low-deblending catalog with $b/a$$<$0.43. Tadpoles
were defined when a knot was within a certain distance of the geometrical
center of a tail, namely $<$4$a$ (in semi-major axis units of the tail). We
also required that the knot be $>$0.1$a$ from the tail's geometrical center,
since we are searching for asymmetric objects, and want to eliminate upfront as
many of the true edge-on mid-type spiral disks as possible. The tadpole
candidates also must have the knot near one end of the tail, hence we selected
only those tails and knots with a relative position angle
$\Delta\theta$$\leq$20$^{\circ}$, as measured from the semi-major axis of the
tail. This prevented including knots and tails that appear close together on
the image, but are not physically part of the same galaxy.
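The geometric matching step is compact enough to sketch in code. The
snippet below is schematic only: the tuples are hypothetical stand-ins
for rows of the two \texttt{SExtractor} catalogs, and the catalog
construction itself is omitted:
\begin{verbatim}
import numpy as np

def is_tadpole(knot, tail):
    # knot = (x, y, b_over_a); tail = (x, y, a, b_over_a, theta_deg)
    kx, ky, k_ba = knot
    tx, ty, a, t_ba, theta = tail
    if k_ba <= 0.70 or t_ba >= 0.43:     # roundness/elongation cuts
        return False
    dx, dy = kx - tx, ky - ty
    r = np.hypot(dx, dy)
    if not (0.1 * a < r < 4.0 * a):      # off-center, but nearby
        return False
    # relative position angle w.r.t. the tail's semi-major axis,
    # folded to [0, 90] deg to remove the 180-deg axis ambiguity
    dtheta = np.degrees(np.arctan2(dy, dx)) - theta
    dtheta = abs((dtheta + 90.0) % 180.0 - 90.0)
    return dtheta <= 20.0
\end{verbatim}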
Our final sample contains 165 tadpole galaxies, a subset of which is shown in
Fig. 1. These were selected from 2712 objects in the low-deblending HUDF catalog
to \iAB=28.0 mag. Less than 10\% of the selected tadpoles appear as normal
edge-on disk galaxies. Fig. 2 shows a significant overabundance of knots near
the end of the elongated diffuse structures ($\Delta\theta$\cle 10$^{\circ}$)
as compared to randomly distributed knots. Hence, the majority of tadpoles are
not just chance alignments of unrelated knots. Instead, we believe they are
mostly linear structures which are undergoing recent mergers. Their redshift
distribution is shown in Fig. 3a--3b.
\section{Faint Variable Objects as Proxy to SMBH Growth} \label{AGNselection}
Our HUDF variable object study is described in Cohen \etal (2006). Individual
cosmic-ray (CR) clipped images and weight maps were used with {\it
multidrizzle} (Koekemoer \etal 2002) to create four sub-stacks of
approximately equal exposure times that cover 0.4--3.5 months timescales. These
used the same cosmic-ray maps and weight maps as the full-depth HUDF mosaics.
All four epochs were {\it drizzled} onto the {\it same} output pixel scale
(0\arcspt 030/pixel) and WCS frame as the original HUDF. Since we are searching
for any signs of variability, we used a liberal amount of object deblending in
the \texttt{SExtractor} catalogs, which used a 1.0$\sigma$ detection threshold
and a minimum of 15 connected pixels (\ie, approximately the PSF area) above sky. This
allows pieces of merging galaxies to be measured separately, to increase the
chance of finding variable events in point-source components. Since each of the
four epoch stacks have half the S/N-ratio of the full HUDF, the sample studied
for variability contains 4644 objects to \iAB\cle 28.0 mag (\cge 10$\sigma$).
\vspace*{6pt}
The ACS/WFC PSF varies strongly with location on the CCD detectors, and with
time due to orbital ``breathing'' of the \HST\ Optical Telescope Assembly. Hence,
we {\it cannot} use small PSF-sized apertures to search for nuclear
variability, as could be done by Sarajedini \etal (2003a) for the much larger
WFPC2 pixel-size and the {\it on-axis} location of the WFPC2 camera. Instead,
we had to use {\it total} magnitudes of the highly deblended ACS objects. Even
though our total flux apertures may encompass the whole galaxy, any variability
must come from a region less than the 0\arcspt 084 PSF in size, due to the
light-travel time across the variability region.
\vspace*{6pt}
The four epoch catalogs were compared to each other, resulting in six diagrams
similar to Fig. 4a, which show the change in measured total magnitudes in
matched apertures as a function of the full-depth HUDF flux. The flux-error
distribution was determined iteratively for each pair of observations, such
that 68.3\% of the points lie within the boundaries of the upper and lower
1.0$\sigma$ lines that represent the Gaussian error distribution (Fig. 4a). In
order to demonstrate the Gaussian nature of this error distribution at all flux
levels, the $\Delta mag$-data were divided by the 1.0$\sigma$ model line, and
histograms were computed for the resulting normalized $\Delta mag$ data at
various flux-levels in Fig. 5. These histograms are well fit by normalized
Gaussians with $\sigma$$\simeq$1.0. The HUDF noise distribution is not perfectly
Gaussian, but with 288 independent exposures in the \iAB-band, the error
distribution is as close to Gaussian as seen in any astronomical CCD
application. Once the $\pm$1.0$\sigma$ lines were determined, we find all
objects that are at least 3.0$\sigma$ outliers. Most outliers in Fig. 5 at
$\Delta mag$\cge 3.0$\sigma$ are due to object variability, after pruning large
objects without visible point sources which suffered from \texttt{SExtractor}
deblending errors. In Fig. 4a, we show the $\pm$1$\sigma$, $\pm$3$\sigma$, and
$\pm$5$\sigma$ lines, along with the actual data. The choice of 3.0$\sigma$
implies that we should expect 0.27\% random contaminants.
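Schematically, the normalization and outlier cut for one epoch-pair can
be sketched as follows (a simplified sketch: in practice the 1.0$\sigma$
model line is determined iteratively as a function of flux, whereas here
a single global rescaling of an assumed error model is shown):
\begin{verbatim}
import numpy as np

def flag_variables(mag, dmag, sigma_model):
    # Rescale the assumed error model sigma_model(mag) so that
    # 68.3% of the Delta-mag points fall within +/- 1.0 sigma,
    # then flag >= 3.0 sigma outliers for this epoch-pair
    # (~0.27% random contaminants expected).
    z = dmag / sigma_model(mag)
    z /= np.percentile(np.abs(z), 68.3)
    return np.abs(z) >= 3.0
\end{verbatim}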
In total, we find 45 out of 4644 objects that show the signatures of AGN
variability. These are variable at the \cge 3.0$\sigma$ level, have a compact
region indicative of a point source, and are devoid of visible image defects or
object splitting issues. Less than one of these 45 is expected to be a random
contaminant. In total, 577 candidates were rejected due to crowding or
splitting issues, or due to the lack of a visible point source. Fig. 4b shows
the number of $\sigma$ by which each object varied for each of the 6 possible
epoch-pairs. The colored symbols are for the 45 ``best'' candidates. Another
57 objects were found that are ``potentially'' variable candidates. The
four-epoch light-curves for these 45 variable candidates are shown in Fig. 6.
Of these, 49\% were discovered from a single epoch-pair (usually indicative of
a global rise or decline as a function of time in the light-curve), 43\% in two
epoch-pairs, and only 5\% (2 objects) in 3 epoch-pairs. Further details are
given in Cohen \etal (2006). In summary, the variability fraction on a
timescale of few months (rest-frame timescale few weeks to a month) is at
least 1\% of all HUDF field galaxies.
Since the HUDF is in the Chandra Deep Field--South (CDF-S, Rosati \etal 2002),
there exists deep X-ray data. Within the HUDF, there are 16 Chandra sources
(Koekemoer \etal 2004,2006), and we detect four of these as variable in the
optical. One of these is a mid-type spiral with \iAB$\simeq$21.24 mag, that
belongs to a small group of interacting galaxies. Two others are optical point
sources with \iAB$\simeq$21.12 mag and $\simeq$24.79 mag, showing little or no
visible host galaxy. Both have measured spectroscopic AGN emission-line
redshifts at z$\simeq$3 (Pirzkal \etal 2004). The detection of 25\% of the
Chandra sources as optically variable in the HUDF data shows that the
variability method employed here is a reliable way of finding the AGN that are
not heavily obscured.
The faint object variability in the HUDF is most likely due to weak AGN, given
the timescales and distances involved. Strolger \& Riess (2005) found only one
moderate redshift SN in the HUDF, so SNe cannot be a significant source of
contamination in our sample. Several other possible sources of incompleteness in
the variability study must be addressed. Non-variable AGN, or AGN that only
vary on timescales much longer than 4 months, or optically obscured AGN would
not have been detected with our UV--optical variability method. Sarajedini
\etal (2003ab) had two HDF epochs 5--7 years apart, and found 2\% of the HDF
to be variable. It is thus possible that our sampled time-scales shorter than 4
months missed a factor \cge 2 of all AGN --- the ones variable on longer
time-scales.
\section{The Redshift Distribution of Tadpole Galaxies and Faint Variable
Objects} \label{redshiftdistribution}
We calculate photometric redshifts of all HUDF galaxies to \iAB=28.0 mag (\cge
20$\sigma$) from the $BVi'z'$ photometry using \texttt{HyperZ} (Bolzonella \etal 2000),
plus NICMOS $JH$ (Thompson \etal 2005) and VLT ISAAC $K$-band images where
available. When compared to published spectroscopic redshifts for 70 CDF-S
objects (Le F{\' e}vre \etal 2005), our photometric redshifts have an rms
scatter of 0.15 in $\delta$=(z$_{phot}$--z$_{spec}$)/(1+z$_{spec}$) if all 70
objects are included, and 0.10 in $\delta$ when we reject a few of the most
obvious outliers.
The redshift distribution of all HUDF galaxies (solid line in Fig. 3a) is as
expected, with the primary peak at z\cle 1.0 and a generally declining tail at
z$\simeq$4--5. These trends were also seen in the HDF field galaxies (Driver
\etal 1998). A deficit of objects is apparent at z$\simeq$1--2 due to the lack
of UV spectral features crossing the $BViz$(+$JH$) filters. Unlike the HDF,
this deficit occurs because the HUDF does not yet have
deep enough F300W or $U$-band data. The resulting redshift bias, however, is
the \emph{same} for both tadpoles, variable objects and the field galaxy
population, and so divides out in the subsequent discussion. Within the
statistical uncertainties in Fig. 3a, the shape of the tadpole galaxy redshift
distribution follows that of the field galaxies quite closely. This suggests
that if tadpole galaxies are indeed dynamically young objects related to
early-stage mergers, they seem to occur in the same proportion to the field
galaxy population at all redshifts. Tadpole galaxies may therefore be good
tracers of the galaxy assembly process. The ratio of the two redshift
distributions N(z) and the resulting percentage of tadpole galaxies is plotted
in Fig. 3b. Overall, the percentage of tadpole galaxies is roughly constant at
$\sim$6\% with redshift to within the statistical errors for the redshifts
sampled (0.1\cle z\cle4.5).
In Fig. 7a, we show the photometric redshift distribution for all HUDF objects
with \iAB\cle 28.0 mag, and for our best 45 variable candidates. Their
redshift distribution follows that of the field galaxies in general, \ie,
there is no redshift where faint object variability was most prevalent. We plot
in Fig. 7b the ratio of the N(z) for variable objects to that of field
galaxies, and show that the weak variable AGN fraction is roughly constant at
approximately 1\% over all redshifts probed in this study.
\section{Discussion and Conclusions}
The fact that about 6\% of all field galaxies are seen in the tadpole stage
is an important constraint on hierarchical simulations. Springel \etal
(2005ab) predict a tadpole-like stage $\sim$0.7--1.5 Gyr after a major merger
begins, suggesting that the tadpole morphology represents an early-merger stage
of two galaxies of roughly comparable mass. If this 6\% indicates the fraction
of time that an average galaxy in the HUDF spends in an early-merger stage
during its lifetime, then every galaxy may be seen temporarily in a tadpole
stage for $\sim$0.8 Gyr of its lifetime, and may have undergone $\sim$10--30
mergers during its lifetime (Straughn \etal 2006). More complex mergers
involving multiple components may lead to irregular/peculiar and train-wreck
type objects, and the luminous diffuse objects or clump-clusters, which
dominate the galaxy counts at faint magnitudes (Driver \etal 1998). Given that
tadpoles only trace a certain type and stage of merging galaxies, the above
statistics are a lower limit on the number of all mergers.
The question arises whether tadpole galaxies and objects with point-sources that
show signs of variability are drawn from the same population. Among our 165
tadpole galaxies, none coincide with the sample of 45 variable objects or with
the CDF-S X-ray sources (Alexander \etal 2005). At most one or two of the
variable candidates resemble the tadpole galaxies of Straughn \etal (2006).
A factor of three of all AGN may have been missed, since their UV--optical flux
was obscured by a dust-torus. In the AGN unification picture, AGN cones are
two-sided and their axes are randomly distributed in the sky, so that an
average cone opening-angle of $\omega$ implies that a fraction
$1-\sin(\omega)$ of all AGN will point in our direction. If
$\omega$$\simeq$45$^\circ$ (\eg, Barthel 1989), then every optically detected
AGN (QSO) represents 3--4 other bulge-dominated galaxies, whose AGN reflection
cone didn't shine in our direction. Hence, their AGN may remain obscured by the
dust-torus. Such objects could be visible to Chandra in X-rays or to Spitzer
in the mid-IR, although the available Chandra and Spitzer data are not deep
enough to detect all HUDF objects to AB$\simeq$28 mag.
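For concreteness, the geometric fraction quoted above evaluates, for
$\omega\simeq45^\circ$, to
\begin{equation}
f_{\rm vis}=1-\sin\omega\simeq1-\sin 45^\circ\simeq0.29,\qquad
f_{\rm vis}^{-1}\simeq3.4,
\end{equation}
so that each optically detected AGN stands in for a few obscured counterparts,
of order the factor of 3--4 adopted above.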
Together with the factor of \cge 2 incompleteness in the HUDF variability
sample due to the limited time-baseline sampled thus far, the actual fraction
of weak AGN present in these dynamically young galaxies may thus be a factor of
$\gtrsim$6--8$\times$ larger than the 1\% variable AGN fraction that we found
in the HUDF. Hence, perhaps as many as $\gtrsim$6--8\% of all field galaxies
may host weak AGN, only $\sim$1\% of which we found here, and another \cge 1\%
could have been found if a longer time-baseline had been available. Another
factor of 3--4 of AGN are likely missing because they are optically obscured.
The next generation of X-ray and IR telescopes (Windhorst \etal 2006ab) and
longer optical time-baselines are needed to detect all weak AGN in the HUDF.
Recent state-of-the-art hydrodynamical models (di Matteo \etal 2005;
Springel \etal 2005ab; Hopkins \etal 2005) suggest that during (major)
mergers, the BH accretion rate peaks considerably {\it after} the merger
started, and {\it after} the star-formation rate (SFR) has peaked. Their models
suggest that, for massive galaxies, a tadpole stage is seen typically about 0.7
Gyr after the merger started, but $\sim$1 Gyr before the SMBH accretes most of
its mass, which is when the galaxy displays strong visible AGN activity. Since
the lifetimes of QSO's and radio-galaxies are known to be \cle
(few$\times$10$^7$)--10$^8$ years (Martini 2004; Grazian \etal 2004, Jakobsen
\etal 2003), these models thus imply that the AGN stage is expected to occur
considerably (\ie, \cge 1--1.5 Gyr) \emph{after} the early-merger event during
which the galaxy is seen in the tadpole stage.
The observed lack of overlap between the HUDF tadpole sample and the weak
variable AGN sample thus provides observational support for this prediction.
Hopkins \etal (2005) have quantified the timescales that quasars will be
visible during merging events, noting that for a large fraction of the
accretion time, the quasar is heavily obscured. In particular, their
simulations show that during an early merging phase --- our observed tadpole
phase --- the intrinsic quasar luminosity peaks, but is completely optically
obscured. Only after feedback from the central quasar clears out the gas will
the object become visible as an AGN. This should be observable by Spitzer in
the mid-IR as a correspondingly larger fraction of IR-selected obscured faint
QSO's. To study the relation between galaxy assembly and SMBH growth in detail,
we need deeper surveys at longer wavelengths with the James Webb Space
Telescope (JWST; Windhorst \etal 2006a). The JWST photometric {\it and} PSF
stability are crucial for this, since many of our HUDF objects show significant
variability of less than a few percent in total flux.
This research was partially funded by NASA grants GO-9793.08-A and
AR-10298.01-A, NASA JWST grant NAG5-12460, the NASA Space Grant program at
ASU, and the Harriet G. Jenkins Predoctoral Fellowship Program.
\section{Introduction}
The direct search for physics beyond the Standard Model (SM) has been unsuccessful thus far. There have been anomalies in some of the observables in the flavour sector, with deviations from the SM predictions at the level of a few
sigma~\cite{Ligeti}. In fact, the presence of New Physics (NP) at possibly high scales may very well be deduced only from precision measurements of some of the rare meson decays. The absence of flavour-changing neutral currents (FCNCs) at the tree level allows
the possibility of virtual new physics particles to be present in the loop level diagrams that contribute to these processes. Detailed study of rare charm decays was performed in Ref.~\cite{pakvasa_2002}.
Predictions for these decays in various extensions of the SM including extensions of the Higgs, gauge and fermion sectors were obtained.
Rare charm decays were also recently studied in Refs.~\cite{fajfer_rare,petrov_rare,Boer}. The focus of this study will be the radiative decays of charmed mesons. While both the inclusive and
exclusive radiative $B$ meson decays have been extensively discussed in the literature, less attention has been paid to the $D$ meson radiative decays as their branching ratios are expected to be much smaller due to the almost
complete GIM suppression.
Moreover, charm radiative decays will be dominated by long distance contributions, which can hide the presence of new physics particles that may appear in the loop of the short distance penguin contributions. Nevertheless, in Ref.~\cite{FajferRatio} it was pointed out that a measurement of the difference in the rates of the exclusive modes, $D^0\to\rho\gamma$ and $D^0\to\omega\gamma$ in which the long distance effects are expected to cancel, would indicate short distance new physics if the data reveals a difference of rates which is more than $30\% $. But in general, due to the large uncertainties in the long
distance contributions, any definite conclusion regarding NP will not be feasible from a measurement of the radiative decay rates, either for the inclusive $c\to u\gamma$ case or for any individual exclusive channel, unless the NP short distance contribution is larger than that from the long distance effects. In fact, the possibility of an enhancement above the otherwise dominant long distance effects, in the presence of a fourth-generation model with large mixing angles of the $b^\prime$ quark, $U_{ub^\prime}U_{ub^\prime}$, had been pointed out in Ref.~\cite{BabuHeLiPakvasa}. Fourth-generation models are now inconsistent with the LHC data; however, models with vector-like charge $-1/3$ quarks, for which the authors of Ref.~\cite{BabuHeLiPakvasa} claimed that their results were also applicable, are still viable. In fact, in the last couple of years many detailed studies of the phenomenology of vector-like quarks and of the constraints from the flavour sector have been performed~\cite{Cacciapaglia2011,Botella2013,Ishiwata2015,Bobeth2016,uma}.
Apart from the enhancement in the decay rate, which will be subject to the relative size of the short distance and long distance effects, NP could also be searched for through a measurement of the polarization of the photon produced in the decay. The SM has a robust prediction regarding the photon
polarization in $c\to u\gamma$ decays and hence, a measurement of the photon polarization can pin down the presence of NP. This had been earlier pointed out for the case of $B$ radiative decays in Refs.~\cite{Atwood, Gronau}. In the SM, the photons
from the short distance (SD) penguin contribution in the $c\to u\gamma$ decays will be mostly left-handed, up to corrections of
$\mathcal{O}(m_u/m_c)$. This dominance of left-handed polarization can get masked in the presence of long distance (LD) effects. However, the fraction of right-polarized photons will vary in different models and may possibly even allow one to
distinguish between different models of NP. We explore the effects of the presence of a down-type isosinglet vector-like quark model on the $c\to u\gamma$ decay rate, as well as on the photon polarization for this model with an additional left-right symmetry. The decay rate evaluation requires an estimation of both the SD as well as LD components, which are described in the next section. In Sec.~\ref{VLQ} some details of the down type isosinglet vector-like quark model are discussed, including the modifications to the Wilson coefficients in its presence. Sec.~\ref{LRSM} contains a short discussion on LRSM and the results for the bare level SD contributions to the amplitudes for the emission of the left and right handed photons in this model. Sec.~\ref{IVA} gives our results for the branching ratios (BR's) in the SM and in the different NP models. In Sec.~\ref{IVB} we present our analysis of the polarization function in the LRSM and the LRSM with vector-like quark. Finally in Sec.~\ref{V}, we conclude.
\section{The long and short distance contributions within the Standard Model}
The long distance contributions, being non-perturbative in nature, are hard to estimate. They can come either from pole diagrams or from vector meson dominance (VMD) diagrams. At the quark level, the pole contribution corresponds to the annihilation diagrams
$c\overline{q}_{1}\rightarrow q_{2}\overline{q}_{3}$ with a photon attached to any of the four quark lines. They are actually a subset of a more general class of long distance contributions, which include two-particle
intermediate states and extends up to all higher n-particle intermediate states. Phenomenologically however, the single-particle or pole terms are the most accessible. The underlying quark processes in the VMD contributions are
$c\rightarrow q_{1}\overline{q}_{2}q$, followed by $\overline{q}_{2}q\rightarrow\gamma$. All these long distance effects are rather hard to calculate from first principles but
can be estimated in models. Hence, it is important that the observables chosen for uncovering short distance NP,
have different values from the SM case, even in the presence of the large long distance contributions. We provide an updated estimate for the long distance amplitudes and branching ratios for charm decays following the methods of Ref.~\cite{Burdman} in appendix~\ref{LDBR}.
\begin{figure}
\begin{minipage}{0.49\textwidth}
\begin{tikzpicture}[line width=1 pt, scale=1.1]
\draw (-5,0) -- (-1.5,0);
\draw[vector] (-4,0) arc (180:.5:.8);
\draw[vector] (-2.8,0.8)--(-1.7,1.8);
\node at (-4.1,0.8) {\small W};
\node at (-2.3,0.8) {\small W};
\node at (-5.2,0) {c};
\node at (-1.3,0) {u};
\node at (-2.1,1.8) {$\gamma$};
\node at (-3.2,-0.2) {d,s,b};
\end{tikzpicture}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\begin{tikzpicture}[line width=1 pt, scale=1.1]
\draw (-5,0) -- (-1.5,0);
\draw[vector] (-4,0) arc (-180:.5:.8);
\draw[vector] (-3.2,0)--(-1.7,1);
\node at (-3.2,-1.2) {\small W};
\node at (-5.2,0) {c};
\node at (-1.3,0) {u};
\node at (-2.1,1.1) {$\gamma$};
\node at (-3.2,-0.2) {d,s,b};
\end{tikzpicture}
\end{minipage}
\caption{The Feynman diagrams for the process $c\to u\gamma$.}
\label{ctougamma fig}
\end{figure}
The amplitudes for the flavour-changing radiative transitions were first evaluated by Inami and Lim~\cite{InamiLim}. As pointed out in Ref.~\cite{Kim}, those formulae need to be appropriately modified for the case of the $c\to u\gamma$ decay.
The SM Lagrangian for the process $c\to u\gamma$, which arises at the loop level as shown in
Fig.~\ref{ctougamma fig}, is given by,
\begin{align}\label{SM_Lag}
\mathcal{L}_{\text{int}}=-\frac{4G_{F}}{\sqrt{2}}A^{SM}\frac{e}{16\pi^{2}}m_{c}\left(\bar{u}\sigma_{\mu\nu}P_{R}c\right)F^{\mu\nu},
\end{align}
where the mass of the final $u$ quark has been neglected and $P_{R}=\frac{1+\gamma_5}{2}$. The coefficient $A^{SM}$ is a function of the internal quark masses, and the (QCD uncorrected) contribution to the amplitude is given by,
\begin{align}\label{Asm}
A^{SM}&=\sum_{p=1,2}Q_p\left[ V_{cb}^{*}V_{ub}G_p(r_{b})+V_{cs}^{*}V_{us}G_p(r_{s})+V_{cd}^{*}V_{ud}G_p(r_{d})\right]\nonumber\\
&=\sum_{p=1,2}Q_p\sum_{q=d, s, b}V_{cq}^{*}V_{uq}G_p(r_{q}),
\end{align}
where
$r_q=\frac{m_q^2}{M_{W}^2}$ with $m_q$ ($q=d$, $s$, $b$) being the masses of the down-type quarks running in the fermion loop in the penguin diagrams. The functions $G_p, p=1,2$ defined in~\cite{Kim} are given in appendix~\ref{G}. $Q_1$ and $Q_2$ are the charges of the $W$ boson emitted from the initial quark in the loop diagram, and that of the internal quark running in the loop, respectively.
The inclusive decay rate for a $c\rightarrow u\gamma$ process within the SM is given by,
\begin{align}
\Gamma^{0}_{c\to u\gamma}=\frac{\alpha G_{F}^{2}}{128\pi^{4}}m_{c}^{5}|A^{SM}|^{2}.
\end{align}
This results in the following inclusive BR for the $c\rightarrow u\gamma$ process,
\begin{align}
BR(c\to u\gamma)=\frac{3}{4}\frac{\alpha}{\pi}\frac{|A^{SM}|^{2}}{|V_{cs}|^{2}I(\frac{m_{s}^{2}}{m_{c}^{2}})+|V_{cd}|^{2}I(\frac{m_{d}^{2}}{m_{c}^{2}})}
BR(D^{+}\to Xe^{+}\nu_{e}),
\end{align}
which is normalized with respect to the inclusive semi-leptonic BR for $D^+$ decays, to get rid of the uncertainty in the charm quark mass. The function $I$ is the phase space suppression factor and is given by,
\begin{equation}
I(x)=1-8x+8x^3-x^4+12x^2\,\text{ln}\left(\frac{1}{x}\right).
\end{equation}
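As a quick numerical illustration (a sketch only; the light-quark mass values below are representative assumptions, not extracted from our fits):
\begin{verbatim}
import math

def I(x):
    # phase-space suppression factor defined above
    return 1 - 8*x + 8*x**3 - x**4 + 12*x**2*math.log(1.0/x)

m_c, m_s, m_d = 1.275, 0.095, 0.0047   # GeV, illustrative values
print(I((m_s/m_c)**2))   # ~0.96: mild suppression for the s quark
print(I((m_d/m_c)**2))   # ~1.00: essentially no suppression for d
\end{verbatim}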
In the case of the $b\to s\gamma$ decay, up-type quarks flow in the loop; the heavy top quark contribution dominates and induces penguin operators already at the electroweak scale. In contrast, all the down-type
quarks are effectively massless compared to the electroweak scale, resulting in no penguin contribution at this scale within the SM for the case of $c\to u\gamma$. However, the presence of a heavy down-type vector-like quark will result in a non-vanishing penguin contribution at the electroweak scale. Within the SM, the enhancement of the radiative decay rates in the presence of QCD corrections was pointed out in Ref.~\cite{Burdman}. While the enhancement was by
a factor of two in the case of $b\to s\gamma$, it was expected to be more dramatic in the case of charm radiative decays~\cite{Burdman,greub}. It is hence important to write down the weak effective Hamiltonian with all the
dimension-6 operators and calculate the corresponding Wilson coefficients within the renormalization-group improved perturbation theory which are discussed below.
\subsection{The RG evolution and the coefficient $\text{C}_{7_{\text{eff}}}$}\label{Wilson}
The RG evolution of the Wilson coefficients for charm decays in the context of the SM
to the next-to-leading order (NLO) in QCD corrections is performed. The calculation for the complete set of operators relevant for charm decays had previously been done up to NLO in the NDR scheme~\cite{fajfer_wc} and to the next-to-next-to-leading order (NNLO) in the $\overline{\text{MS}}$ scheme~\cite{muller_wc}. In this article, we work in the $\overline{\text{MS}}$ scheme since the anomalous dimension matrices at the leading order (LO) ($\hat{\gamma}^0_{eff}$) and at NLO ($\hat{\gamma}^1_{eff}$) are readily available in Ref.~\cite{muller_wc}. The short distance evolution of the Wilson coefficients has to be divided into two steps.
The first task is to integrate out the weak gauge bosons at a scale $\mu\sim M_W$.
This
is done by calculating the $C_i$'s at the scale $\mu\sim M_W$ by matching the effective theory with five active flavors $q = u$, $d$, $s$, $c$, $b$ onto the full theory. As mentioned earlier, no penguin operators are generated at this point, since all
the down-type quarks ($d$, $s$ and $b$) are to be treated as massless~\cite{greub} and the GIM mechanism is in full effect. The effective hamiltonian for the scale $m_b<\mu<M_W$ is then given by,
\begin{equation}
\mathcal{H}_{eff}(m_b<\mu<M_W)=\frac{4G_F}{\sqrt{2}}\sum_{q=d,s,b}V_{cq}^*V_{uq}[C_1(\mu)Q_1^q+C_2(\mu)Q_2^q].
\end{equation}
Here,
\begin{eqnarray}
Q_1^q&=&\left(\bar{u}_{L}\gamma_{\mu}T^{a}q_{L}\right)\left(\bar{q}_{L}\gamma^{\mu}T^{a}c_{L}\right),\;\;\;\;\;\;\;\;
Q_2^q=\left(\bar{u}_{L}\gamma_{\mu}q_{L}\right)\left(\bar{q}_{L}\gamma^{\mu}c_{L}\right).
\end{eqnarray}
The effective anomalous dimension matrix $\hat{\gamma}_{eff}$ is calculated in the effective theory with five flavours. Using this matrix, the $C_i(M_W)$'s are evolved down to the scale $\mu\sim m_b$, and the $C_i(m_b)$'s are
obtained.
The next step is to integrate out the $b$ quark as an effective degree of freedom at the scale $\mu\sim m_b$. This is accomplished by matching the effective five flavour theory onto the effective theory for four flavours. This
generates the penguin operators with the Wilson coefficients depending upon $M_W$ solely through the coefficients $C_{1,2}(m_b)$. The effective hamiltonian at the scale $m_c<\mu<m_b$ is then given by
\begin{equation}
\mathcal{H}_{eff}(m_c<\mu<m_b)=\frac{4G_F}{\sqrt{2}}\sum_{q=d,s}V_{cq}^*V_{uq}[C_1(\mu)Q_1^q+C_2(\mu)Q_2^q+\sum_{i=3}^{10}C_i(\mu)Q_i]
\end{equation}
where
\begin{eqnarray}
Q_3&=&\bar{u}_{L}\gamma_{\mu}c_{L}\sum_{q=u,d,s,c}\bar{q}\gamma^{\mu}q,\;\;\;\;\;\;\;\;\;
Q_4=\bar{u}_{L}\gamma_{\mu}T^{a}c_{L}\sum_{q=u,d,s,c}\bar{q}\gamma^{\mu}T^{a}q,\\
Q_5&=&\bar{u}_{L}\gamma_{\mu}\gamma_{\nu}\gamma_{\rho}c_{L}\sum_{q=u,d,s,c}\bar{q}\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}q,\;\;\;\;\;\;\;\;\;
Q_6=\bar{u}_{L}\gamma_{\mu}\gamma_{\nu}\gamma_{\rho}T^{a}c_{L}\sum_{q=u,d,s,c}\bar{q}\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}T^{a}q,\\
Q_7&=&-\frac{g_{em}}{16\pi^2}m_c\bar{u}_{L}\sigma^{\mu\nu}c_{R}F_{\mu\nu},\;\;\;\;\;\;\;\;\;\;
Q_8=-\frac{g_{s}}{16\pi^2}m_c\bar{u}_{L}\sigma^{\mu\nu}T^{a}c_{R}G^{a}_{\mu\nu}.
\end{eqnarray}
In all of the above, $q_L=P_Lq$ and $P_{R,L}=(1\pm\gamma_5)/2$ are the chirality projection operators. The $T^a$
are the generators of $SU(3)$. The $C_i$'s are the Wilson coefficients which contain the complete short distance (perturbative QCD) corrections. For the case of radiative charm decays under consideration here,
the operators $Q_9$ and $Q_{10}$ are not relevant and therefore not shown in the above list. We will hence consider only the set of Wilson coefficients $C_{1,...,8}$ which are evolved down from the $m_b$ scale to the $m_c$ scale using the $\hat{\gamma}_{eff}$ matrix now evaluated in the effective theory with four flavours to obtain the $C_i(m_c)$'s.
Hence, at each order $(O)$, the vector of the Wilson coefficients $C_i$ at the scale $\mu=m_c$ may be schematically written as
\begin{equation}
C_i^{(O)}(m_c)=U^{(O)}_{(f=4)}(m_c,m_b)R^{(O)}_{match}U^{(O)}_{(f=5)}(m_b,M_W)C_i^{(O)}(M_W)
\end{equation}
where $f$ is the number of active flavours at the corresponding scale, $R^{(O)}_{match}$ is the matching matrix between the effective five flavour theory above the scale $\mu=m_b$ to the effective four flavour theory below the scale
$\mu=m_b$, the index $O=$\{LO, NLO\} specifies the order in QCD corrections at which the corresponding quantities are being calculated and the $U$'s are the evolution matrices related to the effective anomalous dimension matrix
$\hat\gamma_{eff}$ and are discussed in detail below. We use the formalism given in Ref.~\cite{Buchalla_Buras,ref_12_muller_wc} to obtain the evolution matrices for LO and NLO. We also closely follow Ref.~\cite{muller_wc} in the following discussion.
\subsection{The leading order(LO) evolution}
Let us start with the full $8\times8$ effective anomalous dimension matrix at the leading order ($\hat\gamma_{eff}^0$) which can be assimilated in parts from~\cite{muller_wc,ref_7_muller_wc,ref_10_muller_wc,ref_11_muller_wc}.
It is given in eqn.~(\ref{g0}) in appendix~\ref{LONLOanomalous} with the full dependence on the number of active flavours ($f$) and on the charges ($q_1$, $q_2$) of the internal quark and the decaying quark.
Now, let $V$ be the matrix that diagonalizes $\hat\gamma_{eff}^{0^T}$, so that
\begin{equation}\label{VLO}
V^{-1}\hat\gamma_{eff}^{0^T}V=\left[\hat\gamma_{{eff}_i}^{(0)^T}\right]_{diag}.
\end{equation}
The LO evolution matrix $U^{(0)}$ for evolving the $C_i$'s down from the scale $\mu_2$ to $\mu_1$ is then given by
\begin{equation}
U^{(0)}(\mu_1,\mu_2)=V\left[\left(\frac{\alpha_s(\mu_1)}{\alpha_s(\mu_2)}\right)^{-\hat\gamma_{{eff}_i}^{(0)}/2\beta_0}\right]_{diag}V^{-1}
\end{equation}
where $\alpha_s$ is the strong coupling constant.
A few comments are in order at this point. It was specified previously that the only operators relevant for the case of charm decays within the SM, above the scale $\mu=m_b$ are $Q_1^q$ and $Q_2^q$. Hence, the matrix
$U^{(0)}(m_b,M_W)$ is essentially a $2\times2$ matrix. The LO values of $C_{1,2}(M_W)$, which are basically the initial conditions are well known and are given by:
\begin{equation}
C_1(M_W)=0,\;\;\;\;\;\;\;\;C_2(M_W)=1.
\end{equation}
Hence we have, for the scale $m_b<\mu<M_W$
\begin{equation}
\begin{pmatrix}
C_1(m_b)\\C_2(m_b)
\end{pmatrix}=U^{(0)}(m_b,M_W)\begin{pmatrix}
C_1(M_W)\\C_2(M_W)
\end{pmatrix}.
\end{equation}
At this point, all the other Wilson coefficients ($C_3$ to $C_8$) are zero. They get their values from the matching at the scale $m_b$. However, the matching matrix is trivial at LO, $R_{{match}_{ij}}=\delta_{ij}$, and hence, for the LO evolution, the
coefficients $C_3$ to $C_8$ remain vanishing even after the matching procedure. The resulting $8\times1$ column vector $(C_1(m_b),C_2(m_b),0,0,0,0,0,0)$ is then multiplied with the $8\times8$ evolution matrix $U^{(0)}(m_c,m_b)$ to obtain
the values of the $C_i$'s at the charm scale. The renormalization scheme independent Wilson coefficient $C_{7_{eff}}$ relevant for radiative charm decays is then obtained at LO using the relation
\begin{equation}\label{c7eff}
C_{7_{eff}}=C_7+\sum_{i=1}^{6}y_iC_i
\end{equation}
where $y_i=\frac{2}{3}\{0,0,1,\frac{4}{3},20,\frac{80}{3}\}$~\cite{muller_wc}.
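The steps above can be summarized in the following minimal numerical sketch of the LO running (the anomalous-dimension blocks and the $\alpha_s$ inputs are placeholders to be filled in from the appendix; only the linear algebra is meant to be illustrative):
\begin{verbatim}
import numpy as np

def U0(gamma0, alpha_lo, alpha_hi, beta0):
    # LO evolution matrix: diagonalize gamma0^T = V diag(g_i) V^{-1},
    # then U^(0) = V diag((alpha_lo/alpha_hi)^(-g_i/(2 beta0))) V^{-1}
    g, V = np.linalg.eig(gamma0.T)
    D = np.diag((alpha_lo / alpha_hi) ** (-g / (2.0 * beta0)))
    return np.real(V @ D @ np.linalg.inv(V))  # eigenvalues here are real

# two-step LO running as in the text (all inputs are placeholders):
# beta0 = 11 - 2*f/3, i.e. 23/3 for f=5 and 25/3 for f=4
# C_mb  = U0(g0_f5, a_mb, a_MW, 23/3.) @ np.array([0.0, 1.0])
# C_mc  = U0(g0_f4, a_mc, a_mb, 25/3.) @ np.r_[C_mb, np.zeros(6)]
# y     = (2/3.) * np.array([0, 0, 1, 4/3., 20, 80/3.])
# C7eff = C_mc[6] + y @ C_mc[:6]
\end{verbatim}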
\subsection{The next-to-leading order(NLO) evolution}
The NLO expression for the evolution matrix is given by
\begin{equation}\label{NLO}
U^{(1)}(\mu_1,\mu_2)=(1+\alpha_s(\mu_1)J^{(1)})U^{(0)}(\mu_1,\mu_2)(1-\alpha_s(\mu_2)J^{(1)})
\end{equation}
where
\begin{equation}
J^{(1)}=VH^{(1)}V^{-1}.
\end{equation}
$V$ was defined previously in eqn.~(\ref{VLO}) and the matrix $H$ is defined by
\begin{equation}
H_{ij}^{(1)}=\delta_{ij}\hat\gamma_{{eff}_i}^{(0)}\frac{\beta_1}{2\beta_0^2}-\frac{G_{ij}^{(1)}}{2\beta_0+\hat\gamma_{{eff}_i}^{(0)}-\hat\gamma_{{eff}_j}^{(0)}}.
\end{equation}
with
\begin{equation}
G^{(1)}=V^{-1}\hat\gamma_{eff}^{(1)^T}V.
\end{equation}
The expression for the $8\times8$ $\hat\gamma_{eff}^{(1)}$ matrix with the complete effective flavour and charge dependence can again be collected in parts from~\cite{muller_wc,ref_7_muller_wc,ref_10_muller_wc,ref_11_muller_wc}. Due to its
large size, we provide the matrix in two separate $8\times6$ and $8\times2$ blocks in appendix~\ref{LONLOanomalous} (see eqns.~\ref{g11} and~\ref{g12}).
It is easy to see that one encounters a term of the order of $\alpha_s^2$ on expanding the expression for $U^{(1)}(\mu_1,\mu_2)$ (eqn.~(\ref{NLO})). However, a calculation of the NLO contribution necessarily requires that all
terms higher than the first order in $\alpha_s$ be discarded and hence, special care should be taken in using eqn.~(\ref{NLO}) for the NLO evolution.
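Concretely, expanding eqn.~(\ref{NLO}) to first order in $\alpha_s$ and dropping the cross term of order $\alpha_s(\mu_1)\alpha_s(\mu_2)$ gives the form that should actually be used at NLO,
\begin{equation}
U^{(1)}(\mu_1,\mu_2)\simeq U^{(0)}(\mu_1,\mu_2)+\alpha_s(\mu_1)J^{(1)}U^{(0)}(\mu_1,\mu_2)-\alpha_s(\mu_2)U^{(0)}(\mu_1,\mu_2)J^{(1)}.
\end{equation}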
Similar to the case of the LO evolution, the only relevant coefficients above the $m_b$ scale are $C_1(M_W)$ and $C_2(M_W)$, calculated up to the NLO order this time. The expressions can be found in~\cite{Misiak} and in the
$\overline{\text{MS}}$ scheme are given by
\begin{equation}
C_1(M_W)=\frac{15\alpha_s(M_W)}{4\pi},\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;C_2(M_W)=1.
\end{equation}
The coefficients $C_i(i=3,...,8)$ however are non-vanishing after the matching procedure at NLO, since the matching matrix $R_{match}$ is now defined by
\begin{equation}
R_{{match}_{ij}}=\delta_{ij}+\frac{\alpha_s(m_b)}{4\pi}R_{ij}^{(1)}.
\end{equation}
The non-zero elements of the matrix $R^{(1)}$ for charm decays are~\cite{muller_wc}
\begin{eqnarray}
R_{41}^{(1)}&=&-R_{42}^{(1)}/6=1/9,\nonumber\\
R_{71}^{(1)}&=&-R_{72}^{(1)}/6=8/81,\nonumber\\
R_{81}^{(1)}&=&-R_{82}^{(1)}/6=-1/54.
\end{eqnarray}
The full set of NLO coefficients $(C_1,...,C_8)$ for the case of charm decays in the SM is then given by
\begin{equation}
C(m_c)=U^{(1)}(m_c,m_b)R_{match}C(m_b).
\end{equation}
where $C(m_b)$ is an $8\times1$ column vector whose first two elements are $C_1(m_b)$ and $C_2(m_b)$ and the rest are zero. Once the values at the charm scale are obtained, the corresponding value for $C_{7_{eff}}$ can be obtained from eqn.~(\ref{c7eff}).
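The corresponding NLO step, with the first-order truncation made explicit, can be sketched as follows (again, all inputs are placeholders to be built from the matrices above):
\begin{verbatim}
import numpy as np

def U1(U0_mat, J1, alpha_lo, alpha_hi):
    # truncated NLO evolution: keep terms linear in alpha_s and drop
    # the O(alpha_lo*alpha_hi) cross term, as cautioned in the text
    return U0_mat + alpha_lo * (J1 @ U0_mat) - alpha_hi * (U0_mat @ J1)

# R_match = np.eye(8) + a_mb/(4*np.pi) * R1  (nonzero R1 entries above)
# C_mc    = U1(U0_f4, J1_f4, a_mc, a_mb) @ R_match @ C_mb_vec
\end{verbatim}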
\section{New Physics models}
\label{III}
\subsection{Down type isosinglet vector-like quark}\label{VLQ}
The SM contains three generations of quarks; however, the number of generations is not predicted by the theory. A simple extension of the SM would be to have a chiral fourth generation of quarks and leptons. The presence of a fourth
generation would have a significant effect on the Higgs sector of the SM and is now ruled out by the Higgs production and decay data at the LHC. However, the so-called vector-like quarks, which do not receive their
masses from Yukawa couplings to a Higgs doublet, are consistent with the present Higgs data. They are distinguished from the SM quarks by their vector coupling to gauge bosons, i.e., both the left handed, $\Psi_L$ and right
handed, $\Psi_R$ chiralities of these fermions transform the same way under the SM gauge groups $SU(3)_c\times SU(2)_L\times U(1)_Y$. These exotic fermions occur for example in the grand unified theory based on
$E_6$~\cite{HewettRizzo}. In general these fermions could either be singlets or doublets or triplets under $SU(2)_L$. Here we consider the case of a down type isosinglet quark.
In the SM, the quark mixing matrix is a $3\times 3$ unitary matrix which is specified in terms of three angles ($\theta_{12},\theta_{13},\theta_{23}$) and a $CP$-violating phase, $\delta_{13}$. A $4\times 4$ unitary quark mixing
matrix is parametrized in terms of 3 additional angles ($\theta_{14},\theta_{24},\theta_{34}$) and two more $CP$ violating phases, $\delta_{14},\delta_{24}$. In Ref.~\cite{uma}, a chi-squared fit to many flavour observables
was performed to obtain the preferred central values, along with the errors, of all the elements of the measurable $3\times 4$ quark mixing matrix. To evaluate the SD contribution to the radiative decay rate in the presence of the vector-like isosinglet quark, the central values of the mass and mixing angles obtained from the results of the fit in Ref.~\cite{uma} are used.
\subsection*{Modified Wilson coefficients in presence of a vector-like quark}\label{vector_like}
Having discussed the evolution of the Wilson coefficients for the SM in full detail we will now simply specify how the contribution of the vector-like quark model modifies the SM coefficients.
The down-type vector-like quark induces a $Z$-mediated FCNC in the down-type quark sector. In Ref.~\cite{Alok_Uma} it was pointed out, for the case of a singlet up-type vector-like quark, that only the Wilson coefficients
are modified. Similarly, for the
$c\rightarrow u$ transitions that are of interest to us, no new set of operators are introduced and hence the anomalous dimension matrices along with the coefficients $C_1(M_W)$ and $C_2(M_W)$, remain exactly the same as that in
the SM, up to NLO.\footnote{However, at the NNLO order, one encounters terms dependent on $\frac{m_t^2}{M_W^2}$ which arise as a result of integrating out the top quark as a heavier degree of freedom at the electroweak scale.
Since the
$b^\prime$ is also heavier than the $W$ boson, one needs to integrate it out too at this scale. Hence, at the NNLO level, the expressions for $C_1(M_W)$ and $C_2(M_W)$ change for this model as
compared to SM.}
The fundamental difference in this model is that, at the electroweak scale, the coefficients $C_{7,8}$ will not be zero. While the down-type quarks running in the penguin loop in the SM\footnote{The relevant diagrams in the Feynman gauge can be found in~\cite{Kim}.} can be treated as massless and hence do not contribute, the vector-like $b^{\prime}$ quark, which couples to all the up-type SM quarks and is heavier than $M_W$, will generate a value for the coefficients $C_7$ and $C_8$ at the electroweak scale itself. The values are
\begin{eqnarray}
C_7&=&\frac{1}{2}\left(G_1\left(\frac{m_{b^\prime}^2}{M_W^2}\right)-\frac{1}{3}G_2\left(\frac{m_{b^\prime}^2}{M_W^2}\right)\right)\\
C_8&=&\frac{1}{2}G_2\left(\frac{m_{b^\prime}^2}{M_W^2}\right)
\end{eqnarray}
where the functions $G_p(r)$ defined in~\cite{Kim} are given in appendix~\ref{G}.
We have calculated these coefficients in this model for two benchmark values of the mass of the $b^{\prime}$ quark, in accordance with~\cite{uma}. Our results are displayed in Table~\ref{tab1}. Our values for the coefficients in the SM match exactly with Ref.~\cite{muller_wc} if we use their values for the parameters $m_t$, $m_b$, $M_W$ and $\mu$. We find there is more than an order-of-magnitude
enhancement in the values of the coefficients $C_{7_{eff}}$ and $C_{8_{eff}}$ at the NLO level in the case of this vector-like quark model compared to the SM. However, we should mention here that our NLO results for the NP model are not exact, in the sense
that we have not calculated the expressions for these coefficients at the NLO level at the $W$ scale. The LO results are exact.
\begin{table*}[ht]
\centering
\caption{The values of the Wilson coefficients at the charm scale in the SM and in a heavy vector-like quark (VLQ) model, with the benchmark values of $800$ GeV and $1200$ GeV for the heavy-quark mass. We take the mass of
the charm quark $m_c=1.275$ GeV, the $\overline{\text{MS}}$ mass of the bottom quark $m_b=4.18$ GeV and the mass of the $W$ boson $M_W=80.385$ GeV. The four-loop expression for the strong coupling constant $\alpha_s$ has been used.}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\textbf{Coefficients}}& \multicolumn{3}{c|}{\textbf{LO}}& \multicolumn{3}{c|}{\textbf{NLO}} \\
\cline{2-7}
&\textbf{SM} & \textbf{VLQ} & \textbf{VLQ} &\textbf{SM} & \textbf{VLQ} & \textbf{VLQ} \\
& & $m_{b^\prime}=800$ GeV & $m_{b^\prime}=1200$ GeV & & $m_{b^\prime}=800$ GeV & $m_{b^\prime}=1200$ GeV \\ \hline
$C_1$ & -1.0769 & -1.0769 & -1.0769 & -0.7434 & -0.7434 & -0.7434 \\
\hline
$C_2$ & 1.1005 & 1.1005 & 1.1005 & 1.0503 & 1.0503 & 1.0503 \\
\hline
$C_3$ & -0.0043 & -0.0043 & -0.0043 & -0.0060 & -0.0060 & -0.0060 \\
\hline
$C_4$ & -0.0665 & -0.0665 & -0.0665 & -0.1015 & -0.1015 & -0.1015 \\
\hline
$C_5$ & 0.0004 & 0.0004 & 0.0004 & 0.0003 & 0.0003 & 0.0003 \\
\hline
$C_6$ & 0.0008 & 0.0008 & 0.0008 & 0.0009 & 0.0009 & 0.0009 \\
\hline
$C_7$ & 0.0837 & 0.3324 & 0.3276 & 0.6095 & 0.2820 & 0.2778 \\
\hline
$C_8$ & -0.0582 & -0.2259 & -0.2253 & -0.0690 & -0.2197 & -0.2192 \\
\hline
$\mid C_{7_{eff}}\mid$ & 0.0424 & 0.2911 & 0.2863 & 0.0119 & 0.2159 & 0.2117\\
\hline
\end{tabular}
\label{tab1}
\end{table*}
From the values in Table~\ref{tab1} it is evident that the dipole operators $O_{7,8}$ do not mix back into the four-quark operators $O_{1,...,6}$: the coefficients $C_1$ to $C_6$ are identical in the SM and in the VLQ model (a fact that is well known and clear from the form of the anomalous dimension
matrices).
\subsection{Left-right symmetric model}\label{LRSM}
The minimal Left-Right symmetric model is based on the gauge group $SU(3)_{c}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}$~\cite{LR_model_1,LR_model_2,LR_model_3}, with the fermions placed in doublet representations of $SU(2)_{L}$ and $SU(2)_{R}$. The electric charge $Q$ and the third components of the weak isospin $I_{3L}$ and $I_{3R}$ are related as $Q=I_{3L}+I_{3R}+\frac{B-L}{2}$. To ensure perturbative interactions between the right-handed gauge boson and the fermions, $\zeta_{g}=\frac{g_{R}}{g_{L}}$ (where $g_R$ and $g_L$ are the right- and left-handed couplings, respectively) should not be large. Since
L-R symmetry is broken in the low energy weak interactions, in general $g_{L}\neq g_{R}$. Direct search results impose the bound $\zeta_{g}M_{W_{2}}>2.5$ TeV~\cite{G.Aad,Chatrchyan}. In order
to generate active neutrino masses through the see-saw mechanism, $v_{R}$ should be in the TeV range. All these constraints result in the range for $\zeta_{g}$ being $[0,2]$. The charged gauge bosons $W_{L}$ and $W_{R}$ are mixtures of the mass eigenstates $W_{1}$ and $W_{2}$, with a mixing angle $\zeta$ restricted to lie in the range $[0,10^{-3}]$~\cite{Cho,Kou}.
The effective Lagrangian given in eqn.~(\ref{SM_Lag}) (for the SM) may now be written for the case of the LRSM as,
\begin{equation}
\mathcal{L}_{eff}=-\frac{eG_{F}}{4\sqrt{2}\pi^{2}}\big[\mathcal{A}\bar{u}\sigma^{\mu\nu}RcF_{\mu\nu}+\mathcal{B}\bar{u}\sigma^{\mu\nu}LcF_{\mu\nu}\big]
\end{equation}
where $\mathcal{A}$ and $\mathcal{B}$ are the bare SD contributions to $c_L$ and $c_R$ respectively and are given by~\cite{to_come}
\begin{align}
\mathcal{A}&=\sum_{\ell}\bigg\{Q_{1}\big(M\text{cos}^{2}\zeta\lambda_{\ell}^{LL}G_{1}^{LL}+m\zeta_{g}^{2}\text{sin}^{2}\zeta\lambda_{\ell}^{RR}G_{1}^{RR}+
m_{\ell}\zeta_{g}\text{sin}\zeta\text{cos}\zeta e^{i\phi}\lambda_{\ell}^{LR}G_{1}^{LR}\nonumber\\
&+m_{\ell}\zeta_{g}\text{sin}\zeta\text{cos}\zeta e^{-i\phi}\lambda_{\ell}^{RL}G_{1}^{RL}\big)
+Q_{2}\big(M\text{cos}^{2}\zeta\lambda_{\ell}^{LL}G_{2}^{LL}
+m\zeta_{g}^{2}\text{sin}^{2}\zeta\lambda_{\ell}^{RR}G_{2}^{RR}\nonumber\\
&+m_{\ell}\zeta_{g}\text{sin}\zeta\text{cos}\zeta e^{i\phi}\lambda_{\ell}^{LR}G_{2}^{LR}
+m_{\ell}\zeta_{g}\text{sin}\zeta\text{cos}\zeta e^{-i\phi}\lambda_{\ell}^{RL}G_{2}^{RL}\big)\bigg\}\label{AB_amps_1}\\
\mathcal{B}&=\sum_{\ell}\bigg\{Q_{1}\big(m\text{cos}^{2}\zeta\lambda_{\ell}^{LL}H_{1}^{LL}+M\zeta_{g}^{2}\text{sin}^{2}\zeta\lambda_{\ell}^{RR}H_{1}^{RR}+
m_{\ell}\zeta_{g}\text{sin}\zeta\text{cos}\zeta e^{i\phi}\lambda_{\ell}^{LR}H_{1}^{LR}\nonumber\\
&+m_{\ell}\zeta_{g}\text{sin}\zeta\text{cos}\zeta e^{-i\phi}\lambda_{\ell}^{RL}H_{1}^{RL}\big)
+Q_{2}\big(m\text{cos}^{2}\zeta\lambda_{\ell}^{LL}H_{2}^{LL}
+M\zeta_{g}^{2}\text{sin}^{2}\zeta\lambda_{\ell}^{RR}H_{2}^{RR}\nonumber\\
&+m_{\ell}\zeta_{g}\text{sin}\zeta\text{cos}\zeta e^{i\phi}\lambda_{\ell}^{LR}H_{2}^{LR}
+m_{\ell}\zeta_{g}\text{sin}\zeta\text{cos}\zeta e^{-i\phi}\lambda_{\ell}^{RL}H_{2}^{RL}\big)\bigg\}\label{AB_amps_2}.
\end{align}
For the case of $c\rightarrow u\gamma$ decays, $Q_{1}=1$, $Q_{2}=-1/3$, $M=m_c$ and $m=m_u$. The $\lambda_{\ell}$'s are the CKM factors, $\lambda_{\ell}^{LL}=V_{c\ell}^{*L}V_{\ell u}^{L}$, $\lambda_{\ell}^{RR}=V_{c\ell}^{*R}V_{\ell u}^{R}$,
$\lambda_{\ell}^{LR}=V_{c\ell}^{*L}V_{\ell u}^{R}$, $\lambda_{\ell}^{RL}=V_{c\ell}^{*R}V_{\ell u}^{L}$.
The mass of the down-type quark running in the penguin loop is represented by $m_{\ell}$. The functions $G_{p}^{ij}$ and $H_{p}^{ij}$
for $p=1,2$ and $i=j=L$ are given in Ref.~\cite{Kim}. $G_p$ is also included in appendix~\ref{G}. The $i=j=R$, $i=L$, $j=R$ and the $i=R$, $j=L$ counterparts relevant for the LRSM are explained in detail in Ref.~\cite{to_come}. We calculate the SD contributions $\mathcal{A}$ and $\mathcal{B}$ only at the bare level. For the $c\rightarrow u\gamma$ decays in the LRSM model, the operator basis with and without the heavy vector-like quark now consists of 20 operators. They are the 8 operators described in sec.~\ref{Wilson} which contribute to $\mathcal{A}$ along with the following two operators,
\begin{eqnarray}
Q_9^q&=&\left(\bar{u}_{L}\gamma_{\mu}T^{a}q_{L}\right)\left(\bar{q}_{R}\gamma^{\mu}T^{a}c_{R}\right),\;\;\;\;\;\;\;\;
Q_{10}^q=\left(\bar{u}_{L}\gamma_{\mu}q_{L}\right)\left(\bar{q}_{R}\gamma^{\mu}c_{R}\right),
\end{eqnarray}
which are the left-right analogues of $Q_1^q$ and $Q_2^q$.
Ten more operators, with the chiralities of these operators flipped, contribute to $\mathcal{B}$. Since the strong interactions preserve chirality, these two sets of operators with different chiralities do not mix with each other, and the RG mixing of the two sets is the same. However, the additional operators require an additional $4\times4$ anomalous dimension matrix which, although present in the literature for radiative $b$ decays~\cite{Cho}, is not available for the case of radiative charm decays. Hence, incorporating the QCD corrections for the LRSM case is beyond the scope of this work.
\section{Results and discussions}
\label{IV}
\subsection{Branching ratios in the SM and for the NP models}\label{IVA}
The inclusion of QCD corrections results in an enhancement of the coefficient $A^{SM}$ (defined in eqn.~\ref{Asm}) from $\mathcal{O}(10^{-7})$ at the bare (QCD uncorrected) level to $\mathcal{O}(10^{-6})$ at the LO and $\mathcal{O}(10^{-3})$ at the NLO level. At the LO, the contributions from the intermediate $d$ and $s$ quarks differ only in the CKM factors $V_{cd}^*V_{ud}$ and $V_{cs}^*V_{us}$. Their sum, using unitarity, is $-V_{cb}^*V_{ub}$, leading to a large suppression in the amplitude. At the NLO, the functional dependence of the amplitudes on the $s$ and $d$ quark masses becomes substantial and hence the net amplitude is no longer just the sum of the CKM factors. In fact, since $V_{cs}^*V_{us}\simeq-V_{cd}^*V_{ud}$, this results in $A^{SM}\propto V_{cs}^*V_{us}\left[f\left(\frac{m_s^2}{m_c^2}\right)-f\left(\frac{m_d^2}{m_c^2}\right)\right]$, where the function $f$~\cite{greub} is given by:
\begin{align}
f(x)= &-\frac{1}{243}((3672-288\pi^{2}-1296\zeta_{3}+(1944-324\pi^{2})\text{ln}\,x+108\text{ln}^{2}\,x+36\text{ln}^{3}\,x)x+576\pi^{2}x^{\frac{3}{2}}\nonumber\\
&+(324-576\pi^{2}+(1728-216\pi^{2})\text{ln}\,x+324\text{ln}^{2}\,x+36\text{ln}^{3}\,x)x^{2}+(1296-12\pi^{2}+1776\text{ln}\,x\nonumber\\
&-2052\text{ln}^{2}\,x)x^{3}) -\frac{4\pi i}{81}((144-6\pi^{2}+18\text{ln}\,x+18\text{ln}^{2}\,x)x+(-54-6\pi^{2}+108\text{ln}\,x+18\text{ln}^{2}\,x)\nonumber\\
&x^{2}+(116-96\text{ln}\,x)x^{3}).
\end{align}
Hence, the coefficient $A^{SM}$ at LO and NLO is given by,
\begin{align}
&A^{SM}_{LO}=-V_{cb}^{*}V_{ub}C_{7_{eff}}^{LO}\;\;\;\;\;\;\;\;\;\;
&A^{SM}_{NLO}=V_{cs}^{*}V_{us}C_{7_{eff}}^{NLO}.
\end{align}
Note that $\mid C_{7_{eff}}\mid$ itself is not enhanced at NLO compared to LO within the SM, as is evident from the values in Table~\ref{tab1}; rather, the different CKM coefficients appearing in $A^{SM}_{LO}$ and $A^{SM}_{NLO}$ result in the enhancement of the coefficient $A^{SM}$ at the NLO level.
Since the vector-like quark $b^\prime$ generates a non-vanishing value for the coefficients $C_7$ and $C_8$ at the electroweak scale itself, its presence results in an increased magnitude of $C_{7_{eff}}$ as can be seen in
Table~\ref{tab1}. This results in an enhancement of the BR by two orders of magnitude in the vector-like quark model at NLO compared to that in the SM. The values of $\mid A\mid$ and the corresponding BR's for the QCD-uncorrected, LO and NLO corrected contributions for the SM and the vector-like quark model (with $m_{b^\prime}=800, 1200$ GeV) are given in Table~\ref{tab2}.
\begin{table*}
\centering
\caption{The values of $\mid A\mid$ and the inclusive $c\rightarrow u\gamma$ BR in the SM and in the vector-like quark (VLQ) model. For the vector-like quark model, the values have been calculated for the benchmark masses $m_{b^\prime}=800$ GeV and $1200$ GeV.}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{QCD corrections}& \multicolumn{3}{c|}{$\mid A\mid$}& \multicolumn{3}{c|}{BR($c\rightarrow u\gamma$)} \\
\cline{2-7}
&\textbf{SM} & \textbf{VLQ} & \textbf{VLQ} &\textbf{SM} & \textbf{VLQ} & \textbf{VLQ} \\
& & $m_{b^\prime}=800$ GeV & $m_{b^\prime}=1200$ GeV & & $m_{b^\prime}=800$ GeV & $m_{b^\prime}=1200$ GeV \\ \hline
Bare &$2.73\times10^{-7}$&$2.49\times10^{-5}$&$2.35\times10^{-5}$&$2.04\times10^{-17}$&$1.70\times{10^{-13}}$&$1.51\times10^{-13}$ \\
\hline
LO &$5.89\times10^{-6}$&$4.32\times10^{-5}$&$4.25\times10^{-5}$&$9.48\times10^{-15}$&$5.11\times{10^{-13}}$&$4.94\times10^{-13}$ \\
\hline
NLO &$2.61\times10^{-3}$&$4.46\times10^{-2}$&$4.37\times10^{-2}$&$1.86\times10^{-9}$&$5.46\times{10^{-7}}$&$5.23\times10^{-7}$ \\
\hline
\end{tabular}
\label{tab2}
\end{table*}
\begin{table*}
\centering
\caption{Branching ratios for the LRSM without and with the contribution from a heavy vector-like quark (VLQ). The branching ratio is expressed as a function of $\zeta$, $\zeta_g$ and $\theta_{12}$ (for LRSM), and of $\zeta$, $\zeta_g$, $\theta_{12}$, $\theta_{14}$, $\theta_{24}$ and $\theta_{34}$ (for LRSM+VLQ). The corresponding parameters are varied to determine the maximum and minimum values.}
\begin{tabular}{|c|c|c|}\hline
~Model~&~~&~BR\\
\hline\hline
LRSM &Max &$1.96\times10^{-11}$\\
&Min &$0.67\times10^{-15}$\\\hline
LRSM+VLQ ($800$ GeV) &Max &$4.65\times10^{-8}$\\
&Min &$1.69\times10^{-13}$\\\hline
LRSM+VLQ ($1200$ GeV)&Max &$0.96\times10^{-7}$\\
&Min &$1.42\times10^{-13}$\\\hline
\hline
\end{tabular}
\label{lrsmbr}
\end{table*}
Table~\ref{lrsmbr} shows the bare-level BR's for the LRSM as well as for the LRSM with a heavy vector-like quark. Comparing with the bare-level BR's from Table~\ref{tab2}, it is evident that for the LRSM alone an enhancement over the SM of $\mathcal{O}(10^2)$ to $\mathcal{O}(10^6)$ is feasible, depending on the values of the LRSM parameters. The LRSM together with a vector-like quark can enhance the BR by up to $\mathcal{O}(10^{10})$.
For the SM (vector-like quark model), the enhancement of the BR from the bare level to that with QCD corrections at the NLO level is $\mathcal{O}(10^8)$ ($\mathcal{O}(10^6)$). For the case of the LRSM with a vector-like quark, QCD corrections are expected to lead to a similarly large enhancement. Even if the enhancement from these corrections is considerably less ($\sim\mathcal{O}(10^{4})$), the QCD-corrected SD contribution from the LRSM with a vector-like quark could result in BR's much larger than that from the LD effects. Such an enhancement could possibly point towards the presence of this kind of NP.
\subsection{Photon polarization as a probe for new physics}
\label{IVB}
Within the SM, in the penguin diagram responsible for the $c\rightarrow u\gamma$ process, only the left-handed components of the external fermions couple to the $W$ boson. A helicity flip on the $c$ quark leg, proportional to $m_c$, contributes to the amplitude for the emission of left-polarized photons, while that on the $u$ quark leg, proportional to $m_u$, results in right-polarized photons.
In the LRSM, since the physical $W_1$ boson couples to both left- and right-handed quarks, a helicity flip is also possible on the internal ($d$, $s$, $b$) quark lines; it will result in additional left-handed photons, with an amplitude involving a new coefficient function and proportional to $m_b\zeta$, and similarly in additional right-handed photons proportional to $m_b\zeta$. In the presence of a vector-like quark, each of these contributions will be proportional to $m_{b^\prime}\zeta$.
In analogy to Ref.~\cite{Gronau}, we define the photon polarization for the inclusive process $c\rightarrow u\gamma$ as
\begin{equation}
\lambda_{\gamma}=\frac{|c_R|^{2}-|c_L|^{2}}{|c_R|^{2}+|c_L|^{2}}~,
\end{equation}
where $c_R$, $c_L$ denote the amplitudes for the right and left polarized photons in the process.
For the SM,
since the SD contributions to $c_{R}$ are negligible ($\mathcal{O}(m_u)$ suppressed), if one only includes the
SD contributions to estimate $\lambda_{\gamma}$, its value would be $-1$. However, the exclusive decay modes corresponding to the $c\rightarrow u\gamma$ process are dominated by LD contributions. To account for these
we add the values of the pole type and VMD amplitudes of all the exclusive processes (given in appendix~\ref{LDBR}).
Due to the uncertainty in the sign of the VMD contributions, the long distance amplitude can lie in the range ($2.08\times 10^{-9}$--$8.78\times 10^{-7}$) GeV$^{-1}$.
Since the LD amplitude does not have any preferred polarization, it contributes equally to both $c_R$ and $c_L$. This results in an almost vanishing value
of $\lambda_{\gamma}\sim\mathcal{O}(10^{-8}\text{--}10^{-5})$ within the SM. This is in contrast to the $b\rightarrow s\gamma$ case~\cite{Gronau}, where the LD contributions are less significant,
and hence the $\lambda_\gamma$ value is $-1$ in the SM.
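The dilution of $\lambda_\gamma$ by an unpolarized LD amplitude can be illustrated with a schematic numerical sketch (all amplitude values below are placeholders in arbitrary units, not our computed results):
\begin{verbatim}
def lambda_gamma(cR_SD, cL_SD, A_LD):
    # unpolarized LD amplitude added equally to both helicity amplitudes
    cR, cL = cR_SD + A_LD, cL_SD + A_LD
    return (abs(cR)**2 - abs(cL)**2) / (abs(cR)**2 + abs(cL)**2)

print(lambda_gamma(0.0, 1e-3, 0.0))  # SD only: -1 (purely left-handed)
print(lambda_gamma(0.0, 1e-3, 1.0))  # LD-dominated: ~-1e-3, nearly zero
\end{verbatim}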
Without LR symmetry, an isosinglet vector-like quark can only couple to $W_L$ and hence its addition will only enhance the left-handed polarized amplitude. For this case we find that, in the presence of the LD contribution,
$\lambda_\gamma$ lies in the range $-6.1\times 10^{-6}$ to $-2.6\times 10^{-3}$. The bare SD contributions to the $c_L$ and $c_R$ amplitudes within the LRSM are given by eqns.~(\ref{AB_amps_1}) and~(\ref{AB_amps_2}), respectively. Here again the LD contribution is appropriately added to both amplitudes.
Since the minimal LRSM models with an exact symmetry between the left- and right-handed sectors are becoming harder to realize, we use a right-handed mixing matrix which is distinct from the left-handed CKM matrix. To decrease the number of parameters, we take the right-handed CKM matrix to be,
\begin{equation}
\begin{pmatrix}
\cos\theta_{12}&\sin\theta_{12}&0\\
-\sin\theta_{12}&\cos\theta_{12}&0\\
0&0&1
\end{pmatrix}.
\end{equation}
This parametrization of the right handed CKM matrix is inspired by Ref.~\cite{Langacker, Kou}. The CP violating phases have been taken to be zero and $\theta_{13}=\theta_{23}=0$,
where $\theta_{ij}$ is the mixing angle between the $i^{th}$ and $j^{th}$ generations. The photon polarization can be expressed as a function of $\zeta$, $\zeta_g$ and $\theta_{12}$.
We vary the parameters $\zeta_g$ and $\zeta$ within their allowed ranges ($0\leq\zeta_g\leq2$ and $0\leq\zeta\leq10^{-3}$) and look for the $\theta_{12}$ values for the maximum deviation of
the polarization from its SM value of $\approx 0$. For the case of the LRSM with a heavy vector-like quark $b^\prime$, there are three additional parameters ($\theta_{14}, \theta_{24}, \theta_{34}$),
which are also chosen to get the maximum deviation of $\lambda_\gamma$ from its SM value.
The contour plots for the variation of $\lambda_\gamma$ for LRSM with no LD contribution and LRSM with LD amplitude of $2\times 10^{-9}$ GeV$^{-1}$ are shown in Fig.~\ref{polarization_fig}.
\begin{figure}[ht]
\subfloat[][]{\includegraphics[width=0.4\textwidth]{lrsm_no_ld_b.eps}}\hfill
\subfloat[][]{\includegraphics[width=0.41\textwidth]{lrsm_b.eps}}\\
\caption{\small{Contour plots showing the variation of the polarization $\lambda_\gamma$ as a function of $\zeta$ and $\zeta_g$ for LRSM with no LD contribution on left
and LRSM with LD amplitude of $2\times10^{-9}$ GeV$^{-1}$ on right. For both the cases, the right-handed CKM elements are set for maximum deviation of the polarization function from its SM value.
The bar-legends for the different contours of $\lambda_\gamma$ are displayed along with the respective figures. Here $0<\zeta<10^{-3}$ and $0<\zeta_g<2$}.}
\label{polarization_fig}
\end{figure}
\begin{figure}[ht]
\subfloat[][]{\includegraphics[width=0.36\textwidth]{np129_b.eps}}
\subfloat[][]{\includegraphics[width=0.36\textwidth]{np229_b.eps}}\\
\subfloat[][]{\includegraphics[width=0.36\textwidth]{np118_b.eps}}
\subfloat[][]{\includegraphics[width=0.36\textwidth]{np218_b.eps}}\\
\subfloat[][]{\includegraphics[width=0.36\textwidth]{np188_b.eps}}
\subfloat[][]{\includegraphics[width=0.36\textwidth]{np288_b.eps}}\\
\caption{\small{Contour plots showing the variation of the polarization $\lambda_\gamma$ as a function of $\zeta$ and $\zeta_g$. The left panels show the plots for LRSM with a VLQ of mass $800$ GeV,
while the right panels display the plots for LRSM with a VLQ of mass $1200$ GeV. The LD amplitudes(in units of GeV$^{-1}$) are $2\times10^{-9}$, $1\times 10^{-8}$, and $8\times10^{-8}$,
for the top, middle and bottom rows respectively. For all the cases, the right-handed CKM elements are set for maximum deviation of the polarization function from its SM value.
The bar-legends for the different contours of $\lambda_\gamma$ are displayed along with the respective figures. Here $0<\zeta<10^{-3}$ and $0<\zeta_g<2$}.}
\label{LD_pol}
\end{figure}
As seen in Fig.~\ref{polarization_fig}(a), for very small values of $\zeta$ and $\zeta_{g}$, the LRSM approaches the SM and hence, in the absence of the long distance contribution, the polarization is
left-handed ($\lambda_{\gamma}=-1$); however, as the parameters $\zeta$ and $\zeta_{g}$ increase, the polarization value changes from $-1$ to $+1$.
This picture completely changes in the presence of the long distance effects, shown in Fig.~\ref{polarization_fig}(b). The left and right panels
of Fig.~\ref{LD_pol} show the $\lambda_{\gamma}$ contours for LRSM with an
isosinglet down type vector-like quark of mass $800$ GeV and $1200$ GeV respectively, with LD
amplitudes (in units of GeV$^{-1}$) of $2\times 10^{-9}$ , $1\times 10^{-8}$ and $8\times 10^{-8}$, corresponding to the top, middle and bottom
rows. At the lower end of the range estimated for the LD
amplitude, in
a model with a vector-like quark along with the LRSM, the polarization can be large, even reaching $+1$ as both $\zeta$ and $\zeta_{g}$ approach their maximum values.
If the LD contribution is larger, $\sim 1\times 10^{-8}$ GeV$^{-1}$, the maximum polarization value is $\sim 0.5$, which further reduces to $0.05$ for an LD amplitude of~$8\times 10^{-8}$ GeV$^{-1}$.
On the experimental side, branching ratios of some of the radiative decays of the $D^0$ meson have been measured by the Belle collaboration~\cite{Belle_rad},
\begin{align*}
&\text{BR}\big(D^{0}\to\rho^{0}\gamma\big)=(1.77\pm 0.3\pm 0.07)\times 10^{-5},\\
&\text{BR}\big(D^{0}\to\phi\gamma\big)=(2.76\pm 0.19\pm 0.10)\times 10^{-5},\\
&\text{BR}\big(D^{0}\to \overline{K}^{*0}\gamma\big)=(4.66\pm 0.21\pm 0.21)\times 10^{-4}.
\end{align*}
If the LD contribution is at its lower limit, then the measured BR($D^{0}\to\rho^{0}\gamma$) can allow some enhancement from the NP SD contribution;
on the other hand, the upper limit of the LD contribution saturates the observed BR. The measured BR($D^{0}\to\phi\gamma$) also allows some NP SD contribution.
The upper limit for BR($D^0\rightarrow\omega\gamma$) is $2.4\times10^{-4}$~\cite{pdg} and cannot be saturated by the SM contribution. Also, recently an observation of the photon polarization in the $b\rightarrow s\gamma$ transition was reported by LHCb~\cite{LHCb}.
The photon polarization is obtained from the angular distribution of the photon direction with respect to the plane defined by the momenta of the three final-state hadrons
in their centre of mass frame. A similar technique could be used to measure the photon polarization for the case of $D\rightarrow\omega\gamma$, since the decay
of $\omega$ into three pions will permit the measurement of an up-down asymmetry between the number of events with photons on either side of the plane.
For the model
with left-right symmetry and a vector-like quark, the enhancement in the BR($c\to u\gamma$), as well
as the photon polarization value being different from that of the SM, should be reflected in the
exclusive modes as well, although the effects may be weaker.
The form factors required to estimate the exclusive BR's are neither available from experimental data nor yet extracted from lattice calculations. Hence, we do not attempt to calculate the exact BR's for
specific exclusive modes. Very recently exclusive radiative charm decays have been studied~\cite{Hiller} in heavy quark and hybrid formalism.
\section{Conclusions}\label{V}
Charm decay modes, including radiative ones, are expected to be plagued by long distance contributions. For the SM, NLO QCD corrections enhance the short distance $c\to u\gamma$ branching ratio by about $\mathcal{O}(10^{8})$. Further
enhancement of the branching ratio is possible in various new physics models. We show that for certain values of
the parameter space, an enhancement by even up to $\mathcal{O}(10^{10})$ is possible in a left-right symmetric model with a down type vector-like singlet quark at the bare level.
This could be enhanced further by many orders of magnitude after incorporating QCD corrections, enabling the short distance branching fraction to be possibly even
larger than the long distance contribution. Such an enhancement could signal the presence of physics beyond the SM. However, the uncertainty in the size of the long distance contributions
may not make this easily feasible. Nevertheless, measurements of the branching ratios of all possible charm radiative modes should be made.
A clearer signature of new physics could be obtained by a measurement of the photon polarization, e.g., for the radiative $D\rightarrow\omega\gamma$ mode, via a technique similar
to that used recently by LHCb~\cite{LHCb} for the $b\rightarrow s\gamma$ case. We find that for a large region of the parameter space for the vector-like quark
model with left-right symmetry, the photon polarization can be right-handed.
For the modes $D\rightarrow K^*\gamma$, $\rho\gamma$ the photon polarization could possibly be determined by looking at the photon conversion to $e^+e^-$~\cite{Grossman}.
\section{Findings}
\label{sec:findiings}
Using the above-described classifiers, we conduct an exploratory analysis of the \#WhyIDidntReport movement data. The data collection method for \#WhyIDidntReport has been explained in Section~\ref{sec:data_collection}. The whole corpus had $37K$ tweets, of which about $11K$ tweets were annotated for training and evaluating the classification models, and the remaining $26K$ unlabeled tweets are analyzed by the trained classifiers. Our goal is to understand to what extent sexual violence victims were silent and what the reasons were.
\textbf{Extent: }Our reason detector identifies $13,612$ tweets that mention at least one reason for not reporting a sexual violence incident earlier. To conduct a robust analysis, we identified the reason-mentioning tweets that have a \emph{Reason} probability greater than or equal to $0.8$; the detector model has high confidence in these tweets. This threshold is set empirically, after experimenting with multiple smaller and larger values. There are $9,866$ self-reports which surpass this threshold, and we continue our analysis with these tweets only. There are $5,882$, $2,377$, and $524$ tweets that contain one, two, and three or more reasons, respectively.
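As a minimal sketch of this probability filtering (the classifier object and variable names are illustrative, not our exact pipeline):
\begin{verbatim}
# clf: trained scikit-learn SVM with probability=True
# X_unlabeled: feature matrix of the ~26K unannotated tweets
import numpy as np

proba = clf.predict_proba(X_unlabeled)
reason_col = list(clf.classes_).index('Reason')
keep = np.where(proba[:, reason_col] >= 0.8)[0]  # high-confidence tweets
\end{verbatim}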
\vspace{-2mm}
\begin{table}[ht!]
\centering
\caption{Frequency and Percentage of Reasons in Annotated and SVM\_labeled Data}
\resizebox{\linewidth}{!}{%
\begin{tabular}{@{}l|ll|lr@{}}
\toprule
Reason & \# in Annotated & \# in SVM\_labeled & \% in Annotated & \% in SVM\_labeled \\ \midrule
Shame & 1811 & 2450 & 18.740 & 19.974 \\
Denial & 680 & 647 & 7.036 & 5.275 \\
Fear & 1988 & 2650 & 20.571 & 21.604 \\
Hopeless & 2984 & 4329 & 30.877 & 35.293 \\
Memory & 400 & 317 & 4.139 & 2.584 \\
LoI & 829 & 951 & 8.578 & 7.753 \\
Protect & 246 & 272 & 2.546 & 2.218 \\
Age & 726 & 650 & 7.512 & 5.299 \\
\bottomrule
\end{tabular}%
}
\label{tab:reason_distribution}
\end{table}
We apply the reason categorization model to the $9,866$ high-confidence reason-mentioning tweets. Table~\ref{tab:reason_distribution} shows the distribution of the reason categories. Overall, the distributions in the annotated data and the SVM\_labeled data are similar. \emph{Hopelessness and Helplessness} is the most common and \emph{Protecting Assailant} is the least common reason for not reporting a sexual violence incident.
\textbf{User Engagement: } We study how the Twitter users engage with the victims when they break their silence and disclose the sexually violent experience. Specifically, we measure how many \textit{Likes} (\ensuremath\varheartsuit) a reason-mentioning tweet receives. Table~\ref{tab:user_engagement} shows the average and median number of \textit{Likes} a tweet received depending on whether it mentioned a reason or not. We observe that a reason-mentioning tweet receives about two times more \textit{Likes} than an average tweet. On average, a reason-mentioning tweet received $7.083$ \textit{Likes} and the other tweets received $3.755$ \textit{Likes} in our annotated data (in the SVM\_labeled data, the numbers are $6.603$ and $3.569$, respectively). Note that we discarded the highly popular tweets ($Likes \ge 100$) to avoid outliers in this analysis. In summary, the Twitter users engaged more with the reason-mentioning tweets than with the other tweets.
\vspace{-2mm}
\begin{table}[h!]
\centering
\caption{User Engagement with \textit{Reason} and \textit{no\_Reason} Tweets}
\begin{tabular}{@{}l|cc|cc@{}}
\toprule
& \multicolumn{2}{c}{Annotated} & \multicolumn{2}{c}{SVM\_labeled} \\ \midrule
& \textit{Reason} & \textit{no\_Reason} & \textit{Reason} & \textit{no\_Reason} \\
\midrule
Average & 7.083 & 3.755 & 6.603 & 3.569 \\
Median & 2 & 1 & 2 & 1 \\ \bottomrule
\end{tabular}
\label{tab:user_engagement}
\end{table}
\textbf{Comparison with Other Studies: } There are existing works that have tried to understand the reasons for not reporting. Unlike this work, they collected data through surveys or interviews. For instance, Truman and Langton~\cite{truman2015criminal} found that fear and hopelessness are common reasons for not reporting sexual violence. Our study also finds these to be the two most common reasons (Table~\ref{tab:reason_distribution}). Another study~\cite{nationalcollege} also finds \emph{Denial}, \emph{Hopeless}, and \emph{Fear} to be the three most common reasons. However, we didn't find \emph{Denial} to be a common reason. Spencer et al.~\cite{collegeSurvey} found that the most common reasons are \emph{LoI} and \emph{Denial}, which differ from our most common reasons. The difference could stem from the fact that they studied a specific population: college or university students.
\section{Introduction}
\label{sec:introduction}
Sexual violence is a serious offence that causes several negative impacts on the victims' physical, psychological, and social health~\cite{fitzgerald1997antecedents, loy1984extent}. Unfortunately, the prevalence of sexual violence is very high across the globe~\cite{gelfand1995structure}, and women are the principal victims of sexual violence. According to UN Women and the World Health Organization (WHO), $35\%$ (almost one in every three) of women worldwide have experienced either physical and/or sexual intimate partner violence or sexual violence by a non-partner at some point in their lives~\cite{unwomen}.
The U.S. Equal Employment Opportunity Commission~\cite{feldblum2016select} receives $12,000$ allegations of sex-based harassment each year. However, this number is believed to be just the tip of the iceberg: according to the Commission~\cite{feldblum2016select}, three out of four sexual violence victims do not report to the authorities.
Researchers~\cite{collegeSurvey, military, nationalcollege} and pertinent agencies (e.g., the National Survey of Crime Victimization [NSCV]) have been trying to understand why victims do not report. Some of the commonly identified reasons are lack of information about where and how to report, fear of consequences, and denial or minimization. However, most of these studies are small in scale and focused on specific populations. In September 2018, in light of the \textit{Brett Kavanaugh} hearing (\url{https://en.wikipedia.org/wiki/Brett\_Kavanaugh}), people started disclosing on Twitter why they didn't report their sexual violence experiences, using the hashtag \#WhyIDidntReport. It soon went viral and transformed into a movement. Victims from all over the United States participated in this movement. Their shared tweets give an unprecedented opportunity to examine victims' silence at a larger scale and across a broader population. In this paper, we analyze about $40,000$ tweets from this movement to conduct a large-scale analysis of victims' silence, i.e., the reasons behind not reporting. First, we identify the most common reasons from the existing literature. Then, we manually annotate a $10K$ sample of the data and use the annotated data to train supervised machine learning models. Finally, we use the models to detect and categorize the reasons in the full dataset. We find that Twitter users generally engaged more with the victims who broke their silence. We also observe that the distribution of reasons found by our study is aligned with some existing works. To the best of our knowledge, this is the first study that takes a social sensing approach to study the silence of sexual violence victims. The observations from this study can be useful for sexual violence researchers, crime patrol entities, and law and regulation authorities in developing victim-friendly policies.
\par
\section{Conclusion, Limitations and Future Work}
\label{sec:limitations}
In summary, this paper studies why victims remain silent after going through sexual violence. The findings can be useful for sexual violence researchers, crime patrol entities, and law and regulation authorities in developing victim-friendly policies. While this study reveals some insights from the \#WhyIDidntReport movement data, there are some areas for possible improvement. First, the data was carefully annotated by one female student, which can introduce personal bias into the data; one possible solution is to annotate the data using more people. Second, the annotated data didn't have enough samples for some of the categories. Third, a tweet has a $280$-character length limit, and Tweeters often bypass this by using multiple tweets to make a single long post. As we treated each tweet as a single post, these kinds of long posts weren't considered as single units, which can potentially lead to inaccurate results. In the future, we plan to resolve these limitations by deploying more annotators and designing more robust tweet processing techniques.
\section{Literature Review}
\label{sec:literature_review}
Related research on why victims do not report sexual violence generally takes a more focused approach. Some studies look at specific subgroups of victims~\cite{nationalcollege, military} or of reporting~\cite{collegeSurvey}, and most of them primarily use surveys for data collection. For instance, the authors in~\cite{nationalcollege, collegeSurvey} examined why college women don't report their assault by surveying college women nationally or by surveying women on a specific college campus.
Both studies reached similar conclusions as to why, in these cases, women rarely report. They reference that the women were afraid, ashamed, did not think it was bad enough, or were drugged or drinking. The authors in~\cite{nationalcollege} also suggest that a relationship to the offender and personal attributes like race, age, and gender can make a woman less likely to report. The college women study~\cite{collegeSurvey} mentions that women did not know they could report, or did not report to the university because the assault was not university-related. The authors in~\cite{military} focus specifically on service women and also use a survey, which was mailed to participants. The study drew on about $2,800$ participants, including those who chose not to respond.
In yet another approach, M{\'e}nard~\cite{ageBook} studies the question using data gathered from $48$ rape crisis centers in Pennsylvania, focusing more on the cultural aspects of not reporting. Our study takes a broader approach rather than focusing on a certain population group. Also, unlike previous studies, we perform a large-scale analysis of about $10K$ self-reported reasons instead of conducting a survey or focus group study.
\section{Methodology}
\label{sec:method}
\subsection{Data Preparation}
In this section, we describe our data collection strategy, the annotation process, and observations from the annotated data.
\subsubsection{Collection}
\label{sec:data_collection}
We collected about $40,000$ tweets containing the \emph{\#WhyIDidntReport} hashtag from all the U.S. cities that had a population of more than $100K$, using the Twitter Advanced Search feature. These tweets were collected within the September $22$ -- October $2$, $2018$ period. This is the period when the \emph{Brett Kavanaugh hearing}~\footnote{\url{https://en.wikipedia.org/wiki/Brett_Kavanaugh_Supreme_Court_nomination\#Hearing}} was taking place and the \#WhyIDidntReport movement was at its peak. The tweets contained a variety of subject matters, ranging from commentary on the Kavanaugh hearings to personal accounts of assault to reasons why victims did not report the assault.
We removed duplicate tweets from the dataset by matching the permanent URLs of the tweets (see the sketch below). This reduced the collection size to $37,526$ tweets, which were tweeted by $24,194$ unique users. Figure~\ref{fig:usmap} shows a choropleth indicating the number of tweets per $10,000$ people in different U.S. states. The figure shows that there was participation in the movement from almost all over the U.S. In total, the dataset covers $211$ cities from $43$ states. The top three states with the highest tweet density are \emph{Alaska}, \emph{Iowa}, and \emph{Texas}.
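The deduplication step can be sketched as follows; the \texttt{permalink} key is an assumed schema for the scraped records.
\begin{verbatim}
# Sketch of duplicate removal keyed on each tweet's permanent URL.
def deduplicate(tweets):
    """tweets: iterable of dicts with a 'permalink' key (assumed)."""
    seen, unique = set(), []
    for t in tweets:
        if t["permalink"] not in seen:
            seen.add(t["permalink"])
            unique.append(t)
    return unique
\end{verbatim}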
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\linewidth]{./fig/usmap}
\caption{Number of tweets per 10,000 people from the U.S. states}
\label{fig:usmap}
\end{figure}
\subsubsection{Annotation Scheme}
Our objective was to identify tweets that disclose a reason for remaining silent after sexual violence. So, it was necessary to carefully prepare a tweet annotation guideline that conforms to the existing literature. To prepare an annotation scheme, we first conducted a pilot study with $1,000$ random tweets. We observed that a large portion of the tweets did not actually share reasons. So, to increase the probability of finding tweets with reasons, we filtered the data with a set of keywords (\emph{`because', `I was', `I thought', `I felt', `I didn't'}) which frequently appeared in reason mentioning tweets according to our pilot study (see the sketch below). This step reduced the dataset size to $12,899$ tweets.
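A minimal sketch of this keyword pre-filter, assuming plain lowercase substring matching (curly apostrophes would need normalization first):
\begin{verbatim}
# Sketch of the keyword pre-filter used before annotation.
KEYWORDS = ("because", "i was", "i thought", "i felt", "i didn't")

def likely_mentions_reason(text):
    text = text.lower()
    return any(k in text for k in KEYWORDS)
\end{verbatim}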
Building on the existing literature that studied victims' reasons for not disclosing sexual violence experiences~\cite{nationalcollege, collegeSurvey, military, psychtoday}, we prepared a list of eight reasons:
\begin{enumerate}
\item
\textbf{Shame}: The victims feel that it was their own fault and they were too ashamed to tell anyone.
\item
\textbf{Denial} or Minimization: The victims were trying to convince themselves that what happened really was not a big deal, did not happen, or was not wrong.
\item
\textbf{Fear} of Consequences: This category is rather broad because it can be a fear of losing one’s job, what people will think, the actual legal aspects of reporting, physical harm from the assailant or others, or anything else that could cause the fear which prevents the victim from reporting.
\item
\textbf{Hopeless}ness or Helplessness: Victims feel as though there is no point in reporting because they have seen how people treat those who do report and have seen the lack of action when an assault is reported, especially due to others' disbelief.
\item
\textbf{Drug}/Disassociation: Substance or psychological effects prevent the victim from having a clear memory of the event.
\item
\textbf{L}ack \textbf{o}f \textbf{I}nformation: The victim didn't have information about where, to whom, or how to report, or didn't know that the assault could be reported.
\item
\textbf{Protect}ing Assailant: The victims do not want the assailant to go to jail or have their life ruined.
\item
Young \textbf{Age}: The victims didn't report because they were young.
\end{enumerate}
An undergraduate female student manually analyzed the tweets and labeled whether each tweet mentions a reason. After that, she categorized the reason into one of these eight cases. A tweet sometimes provides multiple reasons; in that case, she categorized the tweet into multiple reason categories.
\textbf{Example: } ``\textit{I was 15 and was told if I told anyone or didn't go through with it I didn't love him and he would leave me. I didn't want to get in trouble and I didnt think it counted as rape. Years down the it still effects my life.}'' This tweet mentions multiple reasons-- $Fear$ (\emph{I didn't want to get in trouble}) and $Denial$ (\emph{I didnt think it counted as rape}).
\subsubsection{Annotated Data}
In total, we identified $6,860$ tweets that contained at least one reason and $3,935$ tweets that didn't mention any reason for not reporting earlier. Table~\ref{tab:reason_distribution} shows the number and percentage of tweets in each reason category. The most common reason is feeling hopeless or helpless, at $30.877\%$; the least common reason is protecting the image of the assailant, at $2.546\%$. There are $4,286$ tweets that mentioned only one reason, $1,789$ that mentioned two, and $570$ that mentioned three or more reasons.
\subsection{Problem Formulation}
We model the reason understanding problem as a two-step process: \textbf{i)} detecting tweets that contain a reason for not disclosing a sexual harassment experience, and \textbf{ii)} categorizing the detected tweets into the appropriate reasons.
\subsubsection{Reason Detection}
\label{sec:detection}
We define the reason providing tweet detection task as a supervised binary classification problem where the set of classes, $\mathcal{RD} = \{Reason, no\_Reason\}$. Formally, given a tweet $t$, the goal is to map $t$ to $c$ where $c \in \mathcal{RD}$.
\subsubsection{Reason Categorization}
\label{sec:categorization}
In a tweet, a victim may describe multiple reasons for being silent, in other words, for not disclosing sexual violence experiences. So, we define the reason categorization task as a multi-label supervised classification problem where the set of classes, $\mathcal{RC} = \{Shame$, $Denial$, $Fear$, $Hopeless$, $Drug$, $LoI$, $Protect$, $Age\}$. Formally, given a tweet $t$ that has been mapped to $Reason$, the goal of this task is to map $t$ to $C$ where $C \subseteq \mathcal{RC}$.
\subsection{Model Development}
Before applying machine learning models, we preprocessed the tweets in a series of steps. First, we converted the tweets to lowercase. Then, a Python package named tweet-preprocessor~\footnote{https://pypi.org/project/tweet-preprocessor/} was used to remove hashtags, URLs, emojis, and smileys from the tweets. After that, the cleaned tweets were transformed into vectors using a Bag-of-Words (BOW) model with TF-IDF weighting (with sublinear tf). We used unigrams and bigrams as features (ngrams with $n = 1, 2$). There were $117,274$ such features. The ngrams that were present in only one tweet (very rare) or in more than $25\%$ of all tweets (very common) were discarded from the feature set, reducing the number of features to $23,375$. From these, we selected the most frequent $5,000$ ngram features and used them in all the experiments.
We experimented with multiple supervised learning algorithms: Support Vector Machine (SVM) with a linear kernel, Random Forest, Naive Bayes, and Gradient Boosting. In most cases, the SVM outperformed the other models, so in this paper we report only the performance of the SVM. In all experiments, we split the data into train and test sets with an 80:20 ratio and used the test set for evaluation (a sketch of this pipeline is shown below). In the \textbf{Reason Detection} task, the SVM model achieves an average (weighted) precision, recall, and F1-measure of $90\%$. Table~\ref{tab:rd_performance} shows the detailed performance for each class. The SVM based reason detector can detect reason mentioning tweets with $92\%$ precision and $93\%$ recall.
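The following is a minimal sketch of this detection pipeline with scikit-learn; the parameter values mirror the text, while the input variables (\texttt{cleaned\_tweets}, \texttt{labels}) and the specific estimator class are assumptions.
\begin{verbatim}
# Sketch of the reason-detection pipeline (assumed inputs:
# cleaned_tweets is a list of preprocessed strings, labels is
# the corresponding list of "Reason"/"no_Reason" tags).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

vectorizer = TfidfVectorizer(
    ngram_range=(1, 2),  # unigrams and bigrams
    sublinear_tf=True,   # sublinear tf scaling
    min_df=2,            # drop ngrams found in only one tweet
    max_df=0.25,         # drop ngrams found in >25% of tweets
    max_features=5000,   # keep the 5,000 most frequent ngrams
)
X = vectorizer.fit_transform(cleaned_tweets)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2)
clf = LinearSVC().fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
\end{verbatim}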
\vspace{-3mm}
\begin{table}[h!]
\centering
\caption{Performance of Reason Detector}
\begin{tabular}{l|llr}
\toprule
& Precision & Recall & F1-measure\\
\midrule
Reason & 0.92 & 0.93 & 0.92 \\
no\_Reason & 0.87 & 0.84 & 0.86 \\
\midrule
Avg (weighted)& 0.90 & 0.90 & 0.90 \\
\bottomrule
\end{tabular}
\label{tab:rd_performance}
\end{table}
To further assess the quality of the reason detector, we conducted a ranking evaluation using the class probabilities of the classified tweets. We ordered the test set tweets by their SVM-assigned \textit{Reason} probability and measured the quality of the top-$k$ tweets by two commonly used metrics: Precision-at-k (P@k) and Average Precision-at-k (AvgP@k), sketched below. Table~\ref{tab:rd_ranking} shows these measures for various values of $k$. In general, the detection model achieves excellent ranking performance; the precision of reason detection among the top $1000$ tweets is $97.3\%$.
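A sketch of these two metrics is given below; it follows one common definition of AvgP@k (averaging precision over the relevant items within the top $k$), which we state as an assumption.
\begin{verbatim}
# Sketch of the ranking metrics; y_true holds 1 for Reason tweets,
# scores holds the classifier's Reason probabilities (assumed inputs).
import numpy as np

def precision_at_k(y_true, scores, k):
    order = np.argsort(scores)[::-1]
    return y_true[order][:k].mean()

def avg_precision_at_k(y_true, scores, k):
    order = np.argsort(scores)[::-1]
    top = y_true[order][:k]
    precisions = np.cumsum(top) / np.arange(1, k + 1)
    return precisions[top == 1].mean()  # assumes >=1 hit in top k
\end{verbatim}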
\begin{table}[!ht]
\centering
\caption{Ranking evaluation of Reason Detection}
\resizebox{\linewidth}{!}{%
\begin{tabular}{@{}l|rrrrrrrr@{}}
\toprule
k & 25 & 50 & 100 & 300 & 500 & 1000 & 1500 & 2000\\
\midrule
P@k & 1.0 & 0.98 & 0.99 & 0.997 & 0.996 & 0.973 & 0.886 & 0.697 \\
\midrule
AvgP@k & 1.0 & 0.997 & 0.992 & 0.994 & 0.995 & 0.991 & 0.980 & 0.973 \\
\bottomrule
\end{tabular}%
}
\label{tab:rd_ranking}
\end{table}
We model the \textbf{Reason Categorization} task as a multi-label classification problem. For each $c \in \mathcal{RC}$, we develop a one-vs-rest SVM classifier (sketched below). Table~\ref{tab:rc_performance} shows the performance for each category in terms of precision, recall, F1-measure, and accuracy. The F1-measures for
\emph{Shame}, \emph{Fear}, and \emph{Hopeless} are above $75\%$. However, on the rest of the categories, the classifiers didn't perform as well; in particular, the F1-measure for the \emph{Age} category is below $0.5$. This could be due to the lack of samples from these classes. We plan to continue annotating more tweets, which can potentially increase the sample sizes and thereby lead to better results.
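A minimal sketch of the one-vs-rest setup is shown below; the indicator matrix \texttt{Y} and the input features \texttt{X\_reason} are assumed to be prepared as in the detection pipeline.
\begin{verbatim}
# Sketch of the one-vs-rest multi-label categorization.
# Y is an (n_tweets x 8) binary indicator matrix over the
# eight reason categories (assumed input).
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

ovr = OneVsRestClassifier(LinearSVC()).fit(X_reason, Y)
Z = ovr.predict(X_new)  # a row may switch on several categories
\end{verbatim}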
\vspace{-3mm}
\begin{table}[h!]
\centering
\caption{Performance of Reason Classifiers}
\begin{tabular}{l|lllr}
\toprule
Reason & Precision & Recall & F1-measure & Accuracy\\
\midrule
Shame & 0.85 & 0.728 & 0.789 & 0.896 \\
Denial & 0.77 & 0.331 & 0.551 & 0.921 \\
Fear & 0.819 & 0.713 & 0.766 & 0.877 \\
Hopeless & 0.794 & 0.76 & 0.777 & 0.813 \\
Drug & 0.75 & 0.296 & 0.523 & 0.958 \\
LoI & 0.711 & 0.408 & 0.56 & 0.907 \\
Protect & 0.76 & 0.422 & 0.591 & 0.977 \\
Age & 0.662 & 0.293 & 0.477 & 0.908 \\
\bottomrule
\end{tabular}
\label{tab:rc_performance}
\end{table}
If a tweet is labeled positive by multiple one-vs-rest models, we assign all the corresponding classes to that tweet. We use \textit{Hamming Loss}~\cite{dembczynski2010regret} (HL) to measure the quality of the multi-label assignments. HL is the fraction of labels that are incorrectly predicted and is computed as
$$HL = \frac{1}{N \times L}\sum_{i=1}^{N}\sum_{j=1}^{L}\left(y_{i,j} \oplus z_{i,j}\right)$$
where $N$, $L$, $y$, and $z$ are the number of tweets, the number of labels (categories), the true labels, and the predicted labels, respectively, and $\oplus$ denotes the \textbf{xor} operation. The lower the HL (the closer to zero), the better. Our one-vs-rest classification models achieved a very low HL score of $0.093$.
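The metric can be computed directly from the indicator matrices, as in the sketch below (scikit-learn's \texttt{sklearn.metrics.hamming\_loss} provides an equivalent implementation):
\begin{verbatim}
# Direct implementation of the Hamming loss defined above.
# Y_true and Y_pred are binary indicator matrices of shape (N, L).
import numpy as np

def hamming_loss(Y_true, Y_pred):
    Y_true, Y_pred = np.asarray(Y_true), np.asarray(Y_pred)
    N, L = Y_true.shape
    return np.count_nonzero(Y_true != Y_pred) / (N * L)
\end{verbatim}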
\section{\label{sec:Introduction}Introduction}
The measurement of the first moment of the proton's spin-dependent structure function
$g_{1}^{p}$ by the European Muon Collaboration
(EMC) \cite{EMCg1p:PLB1988,EMCg1p:NPB1989}
revealed a discrepancy with the Ellis-Jaffe sum rule
\cite{EllisJaffe:PRD1974,EllisJaffeCorr:NPB1980}
and also the fact that the SU(3) flavor-singlet axial
charge $g_{\rm{A}}^{(0)}$ was smaller than
expected from the static and
relativistic quark models \cite{ProtonSpin:2008}.
After these discoveries, experimental
efforts \cite{HERMESg1:PRD2007,COMPASSg1d:PLB2007:1,COMPASSg1d:PLB2007:2}
focused on a detailed understanding of the spin structure of the
proton.
The proton spin $s_z/\hbar=1/2$ can be decomposed as
$\frac{1}{2}=\frac{1}{2}\Delta \Sigma + \Delta G + L_{z}$
from conservation of angular momentum.
The measurements precisely determined the total spin
carried by quarks and anti-quarks, $\Delta \Sigma$, which is only
about $30$\% of the proton spin.
The remaining proton spin can be attributed to the other components,
the gluon spin contribution ($\Delta G$) and/or orbital angular momentum contributions ($L_{z}$).
The total gluon polarization is given by
\begin{eqnarray}
\label{eq:GluonPolarization}
\Delta G (\mu) \equiv \int_{0}^{1} dx \Delta g(x,\mu),
\end{eqnarray}
where $x$ and $\mu$ represent Bjorken $x$ and factorization scale
respectively.
The challenge for the $\Delta G(\mu)$ determination
is to precisely map the gluon polarization density $\Delta
g (x,\mu)$ over a wide range of $x$.
The Relativistic Heavy Ion Collider (RHIC),
which can accelerate polarized
proton beams up to 255~GeV,
is a unique and powerful
facility to study the gluon polarization.
One of the main goals of the RHIC physics program is to determine the gluon
polarization through measurements of longitudinal double-spin asymmetries,
\begin{eqnarray}
\label{eq:ALLDef}
A_{LL}\equiv\frac{\sigma^{++}-\sigma^{+-}}{\sigma^{++}+\sigma^{+-}},
\end{eqnarray}
\noindent where $\sigma^{++}$ and $\sigma^{+-}$
denote the cross sections of a specific process in the
polarized $p$$+$$p$ collisions with same and opposite
helicities. Using $A_{LL}$, the polarized cross sections,
$\sigma^{++}$ and $\sigma^{+-}$, can be represented as,
\begin{eqnarray}
\label{eq:PolCrossSection}
\sigma^{+\pm} = \sigma_{0}(1\pm A_{LL}),
\end{eqnarray}
\noindent where $\sigma_{0}$ is the unpolarized cross section
of the
process.
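In practice, the asymmetry is extracted from measured yields rather than cross sections. As an illustrative sketch (not the analysis code), the standard estimator combines the same- and opposite-helicity yields with the relative luminosity and the beam polarizations; all variable names here are placeholders.
\begin{verbatim}
# Sketch of the standard experimental estimator for A_LL.
# N_pp, N_pm : yields in same/opposite helicity crossings
# R          : relative luminosity L(++)/L(+-)
# P_B, P_Y   : blue and yellow beam polarizations
def a_ll(N_pp, N_pm, R, P_B, P_Y):
    return (N_pp - R * N_pm) / (N_pp + R * N_pm) / (P_B * P_Y)
\end{verbatim}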
$A_{LL}$ has been measured previously in several channels by PHENIX and
STAR, including inclusive
$\pi^{0}$ \cite{PHENIXPi0ALL:PRL2009,PHENIXPi0ALL:PRD2007,PHENIXPi0ALL:PRD2006,PHENIXPi0ALL:PRD2009},
$\eta$ \cite{PHENIXEtaALL:PRD2011}, and
jet \cite{STARJetALL:arXiv2011,STARJetALL:PRL2008,PHENIXJetALL:PRD2011}
production.
Using the measured asymmetries, as well as the world-data on polarized
inclusive and semi-inclusive deep-inelastic scattering
(DIS) \cite{HERMESg1:PRD2007,COMPASSg1d:PLB2007:1,COMPASSg1d:PLB2007:2,HERMESpPDF:PRD2005,COMPASSSIDIS:PLB2008},
a global analysis based on perturbative-quantum-chromodynamics
(pQCD) calculation was performed at
next-to-leading order (NLO) in the strong-coupling constant
$\alpha_{S}$~\cite{DSSV:PRD2009}.
The resulting $\Delta g (x,\mu)$ from the best fit is
too small to explain the proton spin in the Bjorken $x$ range of
$0.05<x<0.2$ ($-1.3<\log_{10}x<-0.7$) without considering $L_{z}$,
though a substantial gluon polarization is not ruled out yet due to the
uncertainties.
Also, due to the limited Bjorken $x$ coverage,
there is a sizable uncertainty in
Eq.~\ref{eq:GluonPolarization} from the unexplored small
$x$ region.
The polarized cross section of heavy flavor production
on the partonic level is well
studied with leading-order (LO) and NLO pQCD
calculations \cite{LOHFXSect:PLB1994,NLOHFXSect:PRD2003,NLOHFXSect:ArXiv2000}.
The heavy quarks are produced dominantly by the gluon-gluon
interaction at the partonic level \cite{HQSpin:arXiv2009}.
Therefore, this channel has good sensitivity to the polarized gluon
density.
In addition, the large mass of the heavy quark
ensures that pQCD techniques are applicable for calculations of the
cross section.
Therefore, the measurement of heavy flavor production in polarized
proton collisions is a useful tool to study gluon polarization.
In $p$$+$$p$ collisions at $\sqrt{s}=200$~GeV, the heavy flavor
production below $p_{T}\sim5$~GeV/$c$
is dominated by charm quarks.
The Bjorken $x$ region covered by this process at midrapidity
is centered around $2m_{c}/\sqrt{s}\sim1.4\times10^{-2}$ where $m_{c}$
represents the charm quark mass.
Hence, measurement of the spin dependent heavy flavor production
is sensitive to the unexplored $x$ region, and
complements other data on the total gluon polarization $\Delta G(\mu)$.
At PHENIX, hadrons containing heavy flavors are measured through their
semi-leptonic decays to electrons and
positrons (heavy flavor electrons) \cite{SingleE:PRL2006,HQPhenix:PRC2011}.
Therefore the double-spin asymmetry of the heavy flavor electrons is an
important measurement for the gluon polarization study.
In this paper, we report the first measurement of this asymmetry,
and a resulting constraint on the gluon polarization with an LO
pQCD calculation.
The organization of this paper is as follows:
We introduce the PHENIX detector system used for the measurement in
Sec.~\ref{sec:ExperimentalSetup}.
The method for the heavy flavor electron analysis is discussed in
Sec.~\ref{sec:MeasurementOfSE} and the results of the cross section and the
spin asymmetry are shown in Sec.~\ref{sec:CrossSection} and
Sec.~\ref{sec:HelicityAsymmetry}, respectively.
From the asymmetry result, we estimate a constraint on the polarized
gluon density, which is described in Sec.~\ref{sec:Discussion}.
For the sake of simplicity,
we use the word ``electron'' to include both electron and positron
throughout this paper, and distinguish by charge where necessary.
\section{\label{sec:ExperimentalSetup}Experimental Setup}
This measurement is performed with the PHENIX detector positioned at
one of collision points at RHIC.
The RHIC accelerator comprises the blue ring
circulating clockwise and the yellow ring
circulating counter-clockwise.
For this experiment,
polarized bunches are stored and accelerated up to 100~GeV in each ring
and collide with longitudinal polarizations of
$\sim57$\% along the beams at the collision point with a collision
energy of $\sqrt{s}=200$~GeV.
The bunch polarizations alternate between parallel (beam-helicity $+$)
and anti-parallel (beam-helicity $-$) to the beam direction,
so that all 4 ($=2\times 2$) combinations of the crossing
beam-helicities are realized in the collisions.
Each time the accelerator is filled, the pattern of beam helicities in
the bunches is changed, in order to confirm the absence of a pattern
dependence in the measured spin asymmetry.
See Sec.~\ref{sec:HelicityAsymmetry} for details.
A detailed description of the complete PHENIX detector system
can be found elsewhere
\cite{PHENIXOV:NIM2003,PHENIXmag:NIM2003,PHENIXcnt:NIM2003,PHENIXpid:NIM2003,PHENIXemc:NIM2003,PHENIXmuon:NIM2003,PHENIXin:NIM2003}.
The main detectors that are used in this analysis are beam-beam
counters (BBC), zero degree calorimeters (ZDC), and two central
arm spectrometers. The BBC provides the collision vertex
information and the minimum bias (MB) trigger. The luminosity
is determined by the number of MB triggers. Electrons are
measured with the two central spectrometer arms which each
cover a pseudorapidity range of $|\eta|<0.35$ and azimuthal
angle $\Delta\phi=\pi/2$.
Figure~\ref{fig:PHENIXDetector} shows
the beam view of the 2009 PHENIX central arms configuration,
which comprises the central magnet (CM), drift chamber (DC),
and pad chamber (PC) [for charged particle tracking], the
ring-imaging \v{C}erenkov detector (RICH) and hadron blind
detector (HBD)~\cite{HBD:NIM2004,HBD:NIM2011} [for electron
identification], and the electromagnetic calorimeter (EMCal)
[for energy measurement]. Below we summarize the features of
the detectors and the CM.
\begin{figure}[tbh]
\includegraphics[width=1.0\linewidth,clip]{Phenix_2009}
\caption{\label{fig:PHENIXDetector}(color online)
Beam view (at $z=0$) of the PHENIX central arm detectors in
2009. See text for details.}
\end{figure}
The BBCs are two identical counters positioned at $\pm1.44$~m
from the nominal interaction point along the beam direction and
cover pseudorapidity of $3.1<|\eta|<3.9$. They measure the
collision vertex along the beam axis by measuring the time
difference between the two counters, and also provide the MB
trigger defined by at least one hit on each side of the vertex.
The position resolution for the vertex is $\sim2.0$~cm in
$p$$+$$p$ collisions.
The ZDCs, which are located at $\pm18.0$~m away from the
nominal interaction point along the beam direction, detect
neutral particles near the beam axis ($\theta<2.5$~mrad). Along
with the BBCs, the trigger counts recorded by the ZDCs are used
to determine the relative luminosity between crossings with
different beam-helicities combinations. The ZDCs also serve for
monitoring the orientation of the beam polarization in the
PHENIX interaction region through the experiment.
The transverse momentum of a charged particle track
is determined by its
curvature in the magnetic field provided by the PHENIX CM
system \cite{PHENIXmag:NIM2003}.
The CM is energized by two pairs of concentric coils and provides an
axial magnetic field parallel to the beam direction.
During this measurement, the two coils of the CM were operated in the
canceling (``$+-$'') configuration.
This configuration
is essential for the background rejection of the heavy flavor electron
measurement with the HBD as described later.
In this configuration, the field is almost
canceled out around the beam axis in the radial region $0<R<50$~cm, and has a peak value of
$\sim0.35$~T around $R\sim100$~cm.
The total field integral is $|\int B\times dl|=0.43$~Tm.
The DC and PC in the central arms measure charged particle trajectories
in the azimuthal direction to determine the transverse momentum
($p_{T}$) of each particle.
By combining the polar angle measured by
the PC and the vertex information along the beam axis from the BBC with
$p_{T}$, the total momentum $p$ is determined.
The DC is positioned between $202$~cm and $246$~cm in radial distance from
the collision point for both the west and east arms and the PC is
$247$-$252$~cm.
The RICH is a threshold-type gas $\check{\rm{C}}$erenkov counter and the
primary detector used to identify electrons in PHENIX.
It is located in the radial region of $2.5$-$4.1$~m.
The RICH has a $\check{\rm{C}}$erenkov threshold of $\gamma = 35$, which
corresponds to $p=20$~MeV/$c$ for electrons and $p=4.9$~GeV/$c$ for charged
pions.
The EMCal comprises four rectangular sectors in each arm.
The six sectors based on lead-scintillator calorimetry and
the two (lowest sectors on the east arm) based on lead-glass
calorimetry are positioned at radial distances from the
collision point of $\sim5.1$~m and $\sim5.4$~m,
respectively.
\begin{figure}[tbh]
\subfigure[ HBD from top view.]
{\includegraphics[bb=0 0 591 553,width=1.0\linewidth]{HBD_in_PHENIX}}
\subfigure[ HBD exploded view.]
{\includegraphics[width=1.0\linewidth,clip]{arm-exploded-view}}
\caption{\label{fig:HBD}(color online)
(a) Top view of the HBD showing the location of the HBD in the
central magnet. (b) Exploded view of one half HBD arm. CF$_{4}$
gas is filled in the volume as the $\check{\rm{C}}$erenkov
light radiator.
}
\end{figure}
A challenging issue for the heavy flavor electron
measurement is to reject the dominant background of electron
pairs from $\gamma$ conversions and Dalitz decays of
$\pi^{0}$ and $\eta$ mesons, which are mediated by virtual
photons. These electrons are called ``photonic electrons'',
while all the other electrons are called ``nonphotonic
electrons''. Most nonphotonic electrons are from heavy
flavor decays, however, electrons from $K_{e3}$ decays
($K\rightarrow e\nu \pi$) and the dielectron decays of light
vector mesons are also nonphotonic~\cite{SingleE:PRL2006}.
The HBD aims to considerably reduce the photonic electron
background by utilizing a distinctive feature of the $e^{+}e^{-}$
pairs, namely their small opening angles.
The HBD is a position-sensitive $\check{\rm{C}}$erenkov detector
operated with pure CF$_{4}$ gas as a radiator.
It covers pseudorapidity $|\eta|<0.45$ and $2\times 3\pi /4$ in
azimuth.
The coverage is larger than the acceptance of the other
detectors in the central arm in order to detect
photonic electron pairs with only one track reconstructed
in the central arm and the other outside of the central arm acceptance.
Figure~\ref{fig:HBD} shows the top view and exploded view of the HBD.
The HBD has a $50$~cm long radiator directly coupled in a
windowless configuration to a readout element consisting of a triple
Gas Electron Multiplier (GEM) stack,
with a CsI photocathode evaporated on the top surface of the
GEM facing the collision point
and pad readout at the exterior of the stack.
The readout element in each HBD arm is divided into five sectors.
The expected number of photoelectrons for an electron track is about 20,
which is consistent with the measured number.
Since the HBD is placed close to the collision point, the material
thickness is small in order to minimize conversions.
The total thickness to pass through the HBD
is $0.024X_{0}$ and the thickness before the GEM pads is
$0.007X_{0}$.
The $\check{\rm{C}}$erenkov light generated by electrons
is directly collected on a photosensitive cathode plane, forming
an almost circular blob image.
The readout pad plane comprises hexagonal pads with an area of
$6.2$~cm$^2$ (hexagon side length $a=1.55$~cm) which is comparable to, but
smaller than, the blob size which has a maximum area of $9.9$~cm$^2$.
The HBD is located in a field free region that preserves the
original direction of the $e^{+}e^{-}$ pair.
The $\check{\rm{C}}$erenkov blobs created by electron pairs with a small
opening angle overlap, and therefore generate a signal in the HBD with
twice the amplitude of a single electron.
Electrons originating from $\pi^{0}$ and $\eta$ Dalitz decays and
$\gamma$ conversions can largely be eliminated by rejecting tracks which
correspond to large signals in the HBD.
\section{\label{sec:MeasurementOfSE}Heavy Flavor Electron Analysis}
With the improved signal purity provided by the HBD,
the double helicity asymmetry of
heavy flavor electrons was measured.
In this section, we explain how the heavy flavor electron analysis and the
purification of the heavy flavor electron sample with the HBD were performed.
\subsection{\label{subsec:DataSet}Data Set}
The data used here were recorded by PHENIX during 2009.
The data set was selected by a level-1 electron trigger in coincidence
with the MB trigger.
The electron trigger required a minimum energy deposit of $0.6$~GeV in a
$2\times2$ tile of towers in EMCal,
$\check{\rm{C}}$erenkov light detection in the
RICH,
and acceptance matching of these two hits.
After a vertex cut of $|z_{{\rm vtx}}|<20$~cm and
data quality cuts,
an equivalent of
$1.4\times10^{11}$ MB events, corresponding to $6.1$~pb$^{-1}$,
sampled by the electron trigger were analyzed.
\subsection{\label{subsec:ElectronSelection}Electron Selection}
Electrons are reconstructed using the detectors in the PHENIX
central arm described above. Several useful variables for the
electron selection which were used in the previous electron
analysis in 2006 \cite{HQPhenix:PRC2011} are also used in this
analysis. In addition to the conventional parameters, we
introduced a new value, $q_{\rm{clus}}$, for the HBD analysis.
\begin{description}
\item[hbdcharge: $q_{\rm{clus}}$]
Total charge of the associated HBD cluster calibrated
in units of the number of photoelectrons (p.e.).
\end{description}
The electron selection cuts (eID-cut) are:
\begin{packed_enum}
\item[] 4.0$\sigma$ matching between track and EMCal cluster
\item[] \# of hit tubes in RICH around track $\geq2$
\item[] 3.5$\sigma$ matching between track and HBD cluster
\item[] shower profile cut on EMCal
\item[] $0.57<E/p<1.37$ ($0.5$ GeV/$c$ $<p_{T}<1.0$ GeV/$c$)
\item[] $0.60<E/p<1.32$ ($1.0$ GeV/$c$ $<p_{T}<1.5$ GeV/$c$)
\item[] $0.64<E/p<1.28$ ($1.5$ GeV/$c$ $<p_{T}<5.0$ GeV/$c$)
\item[] \# of hit pads in HBD cluster $\geq2$
\item[] $q_{\rm{clus}}>8.0$ p.e.
\item[] ($q_{\rm{clus}}>4.0$ p.e. for one low-gain HBD sector)
\end{packed_enum}
These cuts require hits in the HBD, RICH, and EMCal that are associated
with projections of the track onto these detectors.
The shower profile in the EMCal is required to match the profile
expected of an electromagnetic shower.
For electrons, the energy deposited in the EMCal, $E$, and the
magnitude of the momentum reconstructed with the DC and PC,
$p$, should match because of the small electron mass.
Therefore the ratio, $E/p$, was required to be close to 1.
Since the energy resolution of the EMCal depends on the momentum of the
electron, the cut boundaries were changed for the different momentum
ranges.
Charged particles traversing the CF$_4$ volume in the HBD also produce
scintillation light, which has no directivity and creates
hits with small charge at random locations on the GEM pads.
To remove HBD background hits
from this scintillation light,
a minimum charge and a minimum cluster size
were required for the HBD hit clusters.
During this measurement, the efficiency for $\check{\rm{C}}$erenkov
light in one HBD sector was low compared with the other sectors.
Hence we applied a different charge cut to that HBD sector for the
electron selection.
The $E/p$ distribution for tracks selected with these cuts
is shown in Fig.~\ref{fig:EPRatio}.
The clear peak around $E/p=1$ corresponds to
electrons and the spread of events around the peak consists mainly of
electrons from $K_{e3}$ decays and misidentified hadrons.
As the figure shows, the fraction of these background tracks in the
reconstructed electron sample after applying eID-Cut including the
$E/p$ cut was small.
The fractions of the $K_{e3}$ decays and the misidentified hadrons are
described in Sec.~\ref{subsec:SystematicErrorInMSE} and
Sec.~\ref{subsec:SEResult}.
As mentioned in Sec.~\ref{sec:ExperimentalSetup}, we remove
the photonic electrons and purify the heavy flavor electrons on the
basis of the associated HBD cluster charge.
The nonphotonic electron cuts (npe-Cut) are:
\begin{packed_enum}
\item[] $8.0<q_{\rm{clus}}<28.0$ p.e.
\item[] ($4.0<q_{\rm{clus}}<17.0$ p.e. for 1 low-gain HBD sector)
\end{packed_enum}
\begin{figure}[tbh]
\includegraphics[width=1.0\linewidth,clip]{EPDist050_100}
\caption{\label{fig:EPRatio}$E/p$ distributions
for $0.5$~GeV/$c$~$<p_{T}<1.0$~GeV/$c$ reconstructed charged tracks
with the eID-Cut other than the $E/p$ cut.
Criteria of the $E/p$ cut for the momentum region
are shown by dashed lines in the plot.}
\end{figure}
\subsection{\label{subsec:SEYieldEstimation}Yield estimation of heavy flavor
electrons with HBD}
We categorize the HBD hit clusters into three types according to the
source of the cluster.
A cluster created by a single blob of $\rm{\check{C}}$erenkov light
from a nonphotonic electron as shown in Fig.~\ref{fig:HBDResponse:1}
is defined as a {\it{single cluster}}.
On the other hand,
a cluster created by merging blobs of $\rm{\check{C}}$erenkov light from
a track pair of photonic electrons as shown in Fig.~\ref{fig:HBDResponse:2}
is defined as a {\it{merging cluster}}.
However, a portion of the photonic electron pairs
have opening angles large enough (typically $\gtrsim 0.1$~rad)
that the two clusters do not merge,
creating two separated single clusters as shown
in Fig.~\ref{fig:HBDResponse:3}.
Therefore single clusters are created both by nonphotonic
electrons and by photonic electrons with large opening angles.
We also define another type of cluster created by scintillation light,
which we call a {\it{scintillation cluster}}.
Scintillation hits which accidentally have large hit charges and
have neighboring hits can constitute clusters.
Photonic electrons from $\gamma$ conversions after the HBD GEM
pads do not create $\rm{\check{C}}$erenkov light in the HBD gas
volume.
Hence they basically do not have associated clusters in the HBD and they
are rejected by the HBD hit requirement in the eID-Cut.
However, a portion of these are accidentally associated with
scintillation clusters and satisfy the eID-Cut and so also survive in
the reconstructed electron sample.
In Sec.~\ref{subsubsec:HBDClusterChargeFit}, we estimate the
yields of these clusters from the shape of the HBD cluster
charge distribution.
We also estimate the small component of {\it{single clusters}} generated
by photonic electrons with large opening angles,
as described in Sec.~\ref{subsubsec:HBDRingChargeFit},
and then determine the nonphotonic electron yield.
Subtracting the additional background electrons from $K_{e3}$ decays and
$e^{+}e^{-}$ decays of light vector mesons, we obtain the heavy
flavor electron yield as described
in Sec.~\ref{subsubsec:SingleE}.
\begin{figure}[tbh]
\subfigure[\label{fig:HBDResponse:1} Non-photonic electrons.]
{\includegraphics[width=0.49\linewidth,clip]{HBDSingleElectron}}\\
\subfigure[\label{fig:HBDResponse:2} Photonic electrons
(merging cluster).]
{\includegraphics[width=0.49\linewidth,clip]{HBDElectronPair}}
\subfigure[\label{fig:HBDResponse:3} Photonic electrons
(separated clusters).]
{\includegraphics[width=0.49\linewidth,clip]{HBDElectronPairSep}}
\caption{\label{fig:HBDResponse}(color online)
Responses of the HBD for (a) nonphotonic electrons and (b,c)
photonic electrons. (b) Most of the photonic electron pair
create merging clusters. (c) However, the photonic
electrons with large opening angles create separated clusters.
}
\end{figure}
\subsubsection{\label{subsubsec:HBDClusterChargeFit} Yield estimation of single clusters}
All clusters associated with the
reconstructed electrons can be classified into the above three types.
The yield of the electrons associated with the
single clusters must be evaluated to estimate the yield of the heavy flavor
electrons.
The shapes of the $q_{\rm{clus}}$ distributions for the three cluster
types are quite different, since merging
clusters have essentially double the charge of single clusters, and
the charge of scintillation clusters is considerably smaller than
that of single clusters.
Using this difference in shapes, we estimate the yields of these
clusters as follows.
The probability distributions of $q_{\rm{clus}}$ for single and
merging clusters were estimated using low-mass unlike-sign electron
pairs reconstructed with only the eID-Cut,
which are dominated by photonic electron pairs.
We defined unlike-sign electron pairs whose two electrons were associated
with two different HBD clusters as separated electron pairs, and pairs
whose two electrons were associated with the same HBD cluster as merging
electron pairs.
The probability distribution of $q_{\rm{clus}}$
for single clusters was estimated from the
$q_{\rm{clus}}$ distribution of the separated electron pairs, and
the probability distribution of $q_{\rm{clus}}$
for merging clusters was estimated from the
$q_{\rm{clus}}$ distribution of the merging electron pairs.
The reconstruction of the electron pairs introduces a small bias in the
shapes of the $q_{\rm{clus}}$ distributions; corrections for this bias
were estimated by simulation and applied to the distributions.
The probability distributions are denoted as
$f_{\rm{c}}^{\rm{s}}(q_{\rm{clus}})$ for the
single clusters and $f_{\rm{c}}^{\rm{m}}(q_{\rm{clus}})$ for the merging
clusters.
The probability distribution of $q_{\rm{clus}}$
for scintillation clusters is estimated from the
distribution for hadron tracks reconstructed with the
DC/PC tracking and a RICH veto, and is
denoted as $f_{\rm{c}}^{\rm{sci}}(q_{\rm{clus}})$.
The variables used in the {\bf{hbdcharge}} analysis are:
\begin{packed_descr}
\item[$f_{\rm{c}}^{\rm{s}}$]
Probability distribution of $q_{\rm{clus}}$
for the single clusters
\item[$f_{\rm{c}}^{\rm{m}}$]
Probability distribution of $q_{\rm{clus}}$
for the merging clusters
\item[$f_{\rm{c}}^{\rm{sci}}$]
Probability distribution of $q_{\rm{clus}}$
for the scintillation clusters
\item[$n_{\rm{s}}$]
Number of single clusters
after applying eID-Cut.
\item[$n_{\rm{m}}$]
Number of merging clusters
after applying eID-Cut.
\item[$n_{\rm{sci}}$]
Number of scintillation clusters
after applying eID-Cut.
\item[$\tilde{n}_{\rm{s}}$]
Number of single clusters
after applying eID-Cut and npe-Cut.
\item[$\tilde{n}_{\rm{m}}$]
Number of merging clusters
after applying eID-Cut and npe-Cut.
\item[$\tilde{n}_{\rm{sci}}$]
Number of scintillation clusters
after applying eID-Cut and npe-Cut.
\end{packed_descr}
The $q_{\rm{clus}}$ distribution of the
reconstructed electrons selected by the eID-Cut is fitted with a
superposition of the three probability distributions,
\begin{equation}
n_{\rm{s}}\times f_{\rm{c}}^{\rm{s}}(q_{\rm{clus}})
+ n_{\rm{m}}\times f_{\rm{c}}^{\rm{m}}(q_{\rm{clus}})
+ n_{\rm{sci}}\times f_{\rm{c}}^{\rm{sci}}(q_{\rm{clus}}) ,
\end{equation}
where the fitting parameters $n_{\rm{s}}$, $n_{\rm{m}}$, and $n_{\rm{sci}}$
represent the numbers of reconstructed electrons associated with
single clusters, merging clusters, and scintillation clusters,
respectively, after applying the eID-Cut.
The fractions of nonphotonic and photonic electrons differ across
$p_{T}$ regions of the reconstructed electron sample.
Therefore the fit was performed separately in each $p_{T}$ region,
determining $n_{\rm{s}}(p_{T})$, $n_{\rm{m}}(p_{T})$, and
$n_{\rm{sci}}(p_{T})$ for each region.
In the fit, the distribution functions,
$f_{\rm{c}}^{\rm{s}}(q_{\rm{clus}})$,
$f_{\rm{c}}^{\rm{m}}(q_{\rm{clus}})$, and
$f_{\rm{c}}^{\rm{sci}}(q_{\rm{clus}})$, are assumed to be $p_{T}$
independent, because in the $p_{T}$ region of interest the electron
velocity is close enough to the speed of light in vacuum that the
yield of $\rm{\check{C}}$erenkov light from the electron is
nearly independent of $p_{T}$.
We also compared the shapes of the distributions in different
$p_{T}$ regions to confirm that the effect of the track curvature is
small enough to be ignored, even at $p_{T}\sim0.5$~GeV/$c$.
On the other hand, $f_{\rm{c}}^{\rm{s}}(q_{\rm{clus}})$,
$f_{\rm{c}}^{\rm{m}}(q_{\rm{clus}})$, and
$f_{\rm{c}}^{\rm{sci}}(q_{\rm{clus}})$
vary slightly between HBD sectors.
Considering this difference, the fit is performed for each sector
individually.
The $q_{\rm{clus}}$ distribution for the reconstructed
electrons with transverse momentum $p_{T}$ ranging from $0.75$
GeV/$c$ to $1.00$ GeV/$c$ and the fitting result are shown in
Fig.~\ref{fig:HBDClusterChargeFitting} for one HBD sector.
The charge distribution of the reconstructed electrons is well
reproduced by the superposition of the three individual components.
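As an illustration of this decomposition, the following sketch performs a simple non-negative least-squares fit of the three templates to the measured histogram; the actual analysis fit need not coincide with this method, and all inputs are assumed.
\begin{verbatim}
# Sketch of the three-template decomposition of the q_clus
# histogram. h is the observed histogram; f_s, f_m, f_sci are
# unit-normalized templates on the same binning (assumed inputs).
import numpy as np
from scipy.optimize import nnls

def fit_templates(h, f_s, f_m, f_sci):
    A = np.column_stack([f_s, f_m, f_sci])
    n, _ = nnls(A, h)  # non-negative yields (n_s, n_m, n_sci)
    return n
\end{verbatim}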
\begin{figure}[tbh]
\includegraphics[width=1.0\linewidth,clip]{fitting_sect9sn0_ptbin3}
\caption{\label{fig:HBDClusterChargeFitting}(color online).
Charge
distribution of HBD clusters associated with reconstructed
electrons with a transverse momentum ranging from $0.75$~GeV/$c$ to
$1.00$~GeV/$c$ (solid black line),
and the charge distribution for each
component, i.e., single clusters
($n_{\rm{s}}f_{\rm{c}}^{\rm{s}}$,
blue circles), merging clusters
($n_{\rm{m}}f_{\rm{c}}^{\rm{m}}$,
red squares), and scintillation clusters
($n_{\rm{sci}}f_{\rm{c}}^{\rm{sci}}$,
green triangles). The superposition
of these components is also shown (purple inverted triangles).}
\end{figure}
\begin{figure}[tbh]
\includegraphics[width=1.0\linewidth,clip]{sep_merge_pt_dist_chargecut_sum_fin}
\caption{\label{fig:SepMergePtDistChargeCut}(color online).
Yield spectra of HBD clusters after applying eID-Cut and npe-Cut
estimated from the HBD cluster charge fitting.
The plot shows the spectrum for the single clusters
($\tilde{n}_{\rm{s}}$, blue circle),
the spectrum for the merging clusters
($\tilde{n}_{\rm{m}}$, red square) and
the spectrum for the scintillation clusters
($\tilde{n}_{\rm{sci}}$, green triangle).
The error bars represent fitting uncertainties.
}
\end{figure}
The total numbers of reconstructed
electrons after applying both the eID-Cut
and the npe-Cut for the three cluster types,
which are represented as $\tilde{n}_{\rm{s}}$,
$\tilde{n}_{\rm{m}}$, and $\tilde{n}_{\rm{sci}}$, are
calculated by applying the npe-Cut efficiencies of
$\int_{q_{{\rm min}}}^{q_{{\rm max}}}dqf^{\rm{s}}_{\rm{c}}(q)$,
$\int_{q_{{\rm min}}}^{q_{{\rm max}}}dqf^{\rm{m}}_{\rm{c}}(q)$ and
$\int_{q_{{\rm min}}}^{q_{{\rm max}}}dqf^{\rm{sci}}_{\rm{c}}(q)$
to the fit results, $n_{\rm{s}}$,
$n_{\rm{m}}$ and $n_{\rm{sci}}$, respectively.
In the integrals,
$q_{{\rm min}}$ and $q_{{\rm max}}$ represent the HBD charge
boundaries in the npe-Cut of $8$ p.e. and $28$ p.e. ($4$
p.e. and $17$ p.e. for the low-gain sector).
The variables, $\tilde{n}$, are also summarized above.
Figure~\ref{fig:SepMergePtDistChargeCut} shows the yield spectra from
the calculation as functions of $p_{T}$.
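For illustration, the npe-Cut efficiency applied above is simply the integral of the unit-normalized template between the charge boundaries; a minimal sketch (with assumed histogram inputs) is:
\begin{verbatim}
# Sketch of applying the npe-Cut efficiency to a fitted yield.
# f is a unit-normalized template histogram with bin edges
# `edges` (assumed inputs); returns e.g. n_s -> n~_s.
import numpy as np

def npe_cut_yield(n_fitted, f, edges, q_min=8.0, q_max=28.0):
    centers = 0.5 * (edges[:-1] + edges[1:])
    eff = f[(centers >= q_min) & (centers < q_max)].sum()
    return n_fitted * eff
\end{verbatim}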
\subsubsection{\label{subsubsec:HBDRingChargeFit} Yield estimation of
separated photonic electrons}
The estimated $\tilde{n}_{\rm{s}}$ is the sum of
nonphotonic electrons and photonic electrons which
create the separated clusters in the HBD.
In the following description, we denote the photonic electrons which
create merging clusters as {\it{merging photonic electrons}} (MPE)
and those which
create separated single clusters as {\it{separated photonic
electrons}} (SPE).
In this section,
the number of SPE is estimated to obtain the yield of the nonphotonic
electrons.
\begin{figure}[tbh]
\includegraphics[width=0.95\linewidth,clip]{HBDRing}
\caption{\label{fig:HBDRing}(color online).
A half of an annular region around the reconstructed electron
track on the HBD for the definition of {\bf{hbdringcharge}}.
The inner and outer radii of the annular region
are 7.0~cm and 8.0~cm respectively.
The direction of the half region is determined as the opposite
side to the edge of the HBD sector to avoid inefficiency around
the edge.
}
\end{figure}
\begin{figure}[tbh]
\includegraphics[width=1.0\linewidth,clip]{fit_result_sect9sn0_r2_ptbin3}
\caption{\label{fig:HBDRingChargeFitting}(color online). HBD charge
distribution in the annular region for the reconstructed electrons
with a transverse momentum ranging from 0.75~GeV/$c$ to 1.00~GeV/$c$
(solid black line), and the fitting
result of the charge
distribution for electrons with correlated
charges ($n_{\rm{spe}}f_{\rm{r}}^{\rm{spe}}$, green circle) and without
correlated charges
($n_{\rm{non\mathchar`-spe}}f_{\rm{r}}^{\rm{non\mathchar`-spe}}$,
blue square), and
the superposition of the fitting results (red inverted triangle).
}
\end{figure}
In the case where a reconstructed electron track is identified
as an SPE, the partner electron generates an additional signal
in the HBD, as illustrated in Fig.~\ref{fig:HBDResponse:3}.
This property is utilized to estimate the number of SPE. For
this estimation, we defined a new value, $q_{\rm{ring}}$, as
\begin{description}
\item[hbdringcharge: $q_{\rm{ring}}$]
The total charge in the HBD
pads centered on a half of an annular region with an inner
radius of $7.0$~cm and an outer radius of $8.0$~cm around the track
projection of HBD as shown in Fig.~\ref{fig:HBDRing}.
To avoid inefficient regions around the edges of the HBD
sectors, we use one half of an annular region oriented
away from the nearest sector edge (see Fig.~\ref{fig:HBDRing}).
The $q_{\rm{ring}}$ value is
normalized by the area of the half of
the annular region in the definition.
\end{description}
The choice of 7.0~cm to 8.0~cm is determined by three factors:
(1) the distribution of the distance between the separated clusters of
SPE has a maximum around 7.0~cm, (2) few HBD clusters have
radii larger than 7.0~cm, and (3) a larger area includes more
scintillation background and decreases the signal-to-background
ratio. Whereas the $q_{\rm{ring}}$ distributions for
nonphotonic electrons and MPE comprise signals only from
scintillation light, the distributions for SPE also include the
correlated signals around the tracks.
The variables used in the {\bf{hbdringcharge}} analysis are:
\begin{packed_descr}
\item[$f_{\rm{r}}^{\rm{spe}}$]
Probability distribution of $q_{\rm{ring}}$ for SPE
\item[$f_{\rm{r}}^{\rm{non\mathchar`-spe}}$]
Probability distribution of $q_{\rm{ring}}$
for nonphotonic electrons and MPE
\item[$n_{\rm{spe}}$]
Number of SPE
after applying the eID-Cut and npe-Cut.
\item[$n_{\rm{non\mathchar`-spe}}$]
Number of electrons other than SPE
after applying the eID-Cut and npe-Cut.
\end{packed_descr}
The $f_{\rm{r}}^{\rm{non\mathchar`-spe}}(q_{\rm{ring}})$ for
nonphotonic electrons and MPE can be estimated from hadron tracks
and from electron tracks with large $q_{\rm{clus}}$ values, which
consist almost entirely of MPE. Because hadron and MPE
clusters do not create any correlated signals around their
tracks, the $q_{\rm{ring}}$ distributions of these tracks are
created only by scintillation light.
The $f_{\rm{r}}^{\rm{spe}}(q_{\rm{ring}})$ was estimated using
simulations. The dominant photonic electrons come from
Dalitz decays of $\pi^{0}$ and $\eta$, and from $\gamma$s from
their decays that convert in material. We simulated the
detector response for the Dalitz decay and the $\gamma$
conversion events of the neutral mesons with a {\sc geant}3
simulation \cite{GEANT3:1994} configured for the PHENIX
detector system. The $\pi^{0}$ and $\eta$ spectra were
parametrized in the simulation by $m_{T}$-scaled Tsallis
distributions \cite{MTScaling:PRD2011}, together with their
known branching ratios to Dalitz decays and $\gamma$ decays. In
order to include the contribution from scintillation light,
$f_{\rm{r}}^{\rm{non\mathchar`-spe}}(q_{\rm{ring}})$, which is
identical to the $q_{\rm{ring}}$ distribution from
scintillation light alone, was convolved with the result to obtain
$f_{\rm{r}}^{\rm{spe}}(q_{\rm{ring}})$.
The $q_{\rm{ring}}$ distribution for the reconstructed
electrons selected by applying eID-Cut and npe-Cut
was fitted with the superposition
of the $q_{\rm{ring}}$ distributions,
$f_{\rm{r}}^{\rm{spe}}(q_{\rm{ring}})$
and $f_{\rm{r}}^{\rm{non\mathchar`-spe}}(q_{\rm{ring}})$, as
\begin{eqnarray}
n_{\rm{spe}}\times f_{\rm{r}}^{\rm{spe}}(q_{\rm{ring}})+
n_{\rm{non\mathchar`-spe}}\times f_{\rm{r}}^{\rm{non\mathchar`-spe}}(q_{\rm{ring}}) ,
\end{eqnarray}
where $n_{\rm{spe}}$ and $n_{\rm{non\mathchar`-spe}}$ are fitting
parameters and represent the numbers of SPE and other electrons
in the $q_{\rm{ring}}$ distribution, respectively,
as summarized above.
Similar to the $q_{\rm{clus}}$ distribution,
the fitting for the $q_{\rm{ring}}$ distribution was also performed
for each electron $p_{T}$ region and each HBD sector.
Figure~\ref{fig:HBDRingChargeFitting} shows a fitting result in one HBD
sector in the electron $p_{T}$ region from $0.75$ GeV/$c$ to $1.00$
GeV/$c$.
\subsubsection{\label{subsubsec:SingleE}Yield estimation of heavy flavor electrons}
Using the above fitting results of $\tilde{n}_{\rm{s}}$
and $n_{\rm{spe}}$,
the yield of nonphotonic electrons, $N^{\rm{npe}}$ was
estimated with the formula
\begin{eqnarray}
\label{eq:NonPhotonicEFormula}
N^{\rm{npe}}(p_{T}) & = &
\tilde{n}_{\rm{s}}(p_{T}) - n_{\rm{spe}}(p_{T}).
\end{eqnarray}
The remaining background to the heavy flavor electrons
in the nonphotonic electron sample comes from $K_{e3}$
decays and from $e^{+}e^{-}$ decays of light vector
mesons, namely $\rho$, $\omega$, and $\phi$.
Electrons from the Drell-Yan process also contribute to the
background; however, this contribution is known to be less
than 0.5\% of the total heavy flavor electrons in this $p_{T}$ range and
can be ignored.
We determined the yield of the
heavy flavor
electrons from
$N^{\rm{npe}}$ by subtracting the components of the $K_{e3}$
electrons, which are estimated by simulation using a measured $K$ cross
section \cite{MTScaling:PRD2011}, and the electrons from light vector
mesons, which are already estimated in previously published
result \cite{SingleE:PRL2006}, as
\begin{eqnarray}
\label{eq:SingleEFormula}
N^{{\rm{HFe}}}(p_{T}) = N^{\rm{npe}}(p_{T}) - N^{Ke3}(p_{T}) -
N^{\rm{LVM}}(p_{T}),
\end{eqnarray}
where $N^{Ke3}(p_{T})$ and $N^{\rm{LVM}}(p_{T})$ represent the
electrons from the $K_{e3}$ decays and the light vector meson decays
respectively.
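For illustration, the two subtractions above can be combined per $p_{T}$ bin as in the sketch below; the simple quadrature propagation of uncorrelated uncertainties is an assumption for the sketch only.
\begin{verbatim}
# Sketch of the yield subtraction per p_T bin (assumed inputs).
import numpy as np

def heavy_flavor_yield(n_s_tilde, n_spe, n_ke3, n_lvm,
                       dn_s_tilde, dn_spe, dn_ke3, dn_lvm):
    n_hfe = n_s_tilde - n_spe - n_ke3 - n_lvm
    dn_hfe = np.sqrt(dn_s_tilde**2 + dn_spe**2
                     + dn_ke3**2 + dn_lvm**2)
    return n_hfe, dn_hfe
\end{verbatim}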
\subsection{\label{subsec:SystematicErrorInMSE}Systematic Uncertainty}
\begin{table}
\caption{\label{tab:YieldSysError} Relative systematic uncertainties
given on the heavy flavor electron yield.}
\begin{ruledtabular}\begin{tabular}{c c l}
source & uncertainty & \ \ \ $p_{T}$ range (GeV/$c$) \\
\hline
{\bf{hbdringcharge}} fitting & $16\%$ & ( $0.50<p_{T}<0.75$ ) \\
& $ 6\% \sim 4\%$ & ( $0.75<p_{T}<1.75$ ) \\
& $ 2\%$ & ( $1.75<p_{T}$ ) \\
{\bf{hbdcharge}} fitting & $2\%$ & ( $0.50<p_{T}<0.75$ ) \\
& $< 1\%$ & ( $0.75<p_{T}$ ) \\
$K_{e3}$ & $4\%$ & ( $0.50<p_{T}<0.75$ ) \\
& $< 1\%$ & ( $0.75<p_{T}$ ) \\
hadron misID & $4\%$ & ( $0.50<p_{T}<0.75$ ) \\
& $< 1\%$ & ( $0.75<p_{T}$ ) \\
\end{tabular}\end{ruledtabular}
\end{table}
The systematic uncertainties for the heavy flavor electron yield come
from the fits for
the $q_{\rm{clus}}$ distribution and
the $q_{\rm{ring}}$ distribution,
and from estimations of $K_{e3}$
contribution and misidentified hadrons.
The most significant source in these contributions is the fitting
uncertainty for the $q_{\rm{ring}}$ distribution.
We varied the radius of the annular region to
an inner radius of $6.0$ cm and an
outer radius of $7.0$ cm and also to $8.0$ cm and $9.0$ cm from
the default radii of $7.0$ cm and $8.0$ cm.
The uncertainty from the fitting was set to the amount of
variation in $n_{\rm{spe}}$ after these changes.
The estimated uncertainties decrease from
about 16\% of the heavy flavor electron yield in
the momentum range of $0.50<p_{T}<1.00$~GeV/$c$ to about 2\% above
1.75~GeV/$c$.
The fitting uncertainty for the $q_{\rm{clus}}$ distribution
comes from the estimation of the bias in the charge distribution shape
due to the electron pair reconstruction.
The systematic uncertainty from this effect is estimated to be
less than 2\% by simulation.
In the low momentum region, $0.50<p_{T}<1.00$~GeV/$c$,
uncertainties from the $K_{e3}$ contribution
and the hadron misreconstruction are significant.
The uncertainty from the $K_{e3}$ contribution comes almost entirely
from the uncertainty on the
$K$ cross section used in the $K_{e3}$ simulation.
This uncertainty amounts to about 4\% of the total
heavy flavor electron yield in the low
momentum region and decreases to less than 1\% for $p_{T}>0.75$~GeV/$c$.
We also estimated the upper limits of the hadron contamination due to
misreconstruction by employing a hadron-enhanced event set.
As a result, we determined the upper limits as 4\% of the total
heavy flavor electron yield in the low
momentum region which decreases to less than 1\% over 1.5~GeV/$c$.
The upper limits are assigned as the systematic uncertainties from hadron
misreconstructions.
Table~\ref{tab:YieldSysError} summarizes the systematic uncertainties
on the heavy flavor electron yield.
\subsection{\label{subsec:SEResult}Results of Heavy Flavor Electron Yield and
Signal Purity}
From Eq.~\ref{eq:NonPhotonicEFormula} and
Eq.~\ref{eq:SingleEFormula} and the discussion in
Sec.~\ref{subsec:SystematicErrorInMSE}, the
heavy flavor electron yield spectrum with the systematic uncertainties was
determined.
The spectrum is shown in Fig.~\ref{fig:SingleE}.
We also show the yield of inclusive reconstructed electrons after
applying the eID-Cut and npe-Cut and the estimated $K_{e3}$
contribution.
The electrons from $e^{+}e^{-}$ decays of the light vector mesons are
not shown in Fig.~\ref{fig:SingleE}, but they are less than 5\%
of the heavy flavor electron yield in this $p_{T}$ range.
\begin{figure}[tbh]
\includegraphics[width=1.0\linewidth,clip]{single_e_sum_syserr_fin}
\caption{\label{fig:SingleE}(color online). Heavy flavor electron yield
spectrum. The black square points represent the total number of the
reconstructed electrons after applying the eID-Cut and npe-Cut.
The red circle points represent the
estimated yield of the heavy flavor electrons. The yellow bands
represent the systematic uncertainties for the heavy flavor
electron yield.
The green triangle points with dashed lines
represent the estimated $K_{e3}$
contribution with systematic uncertainties shown by light-blue bands.}
\end{figure}
The ratio of the nonphotonic electron yield
to the photonic electron
yield in this measurement,
\begin{eqnarray}
\label{eq:NPEPERatio}
R(p_{T})\equiv \frac{N^{\rm{npe}}(p_{T})}{N^{\rm{reco}}_{e}(p_{T})-N^{\rm{npe}}(p_{T})}
\end{eqnarray}
where $N^{\rm{reco}}_{e}$ denotes the total number of
reconstructed electrons after applying the eID-Cut and npe-Cut,
is shown in the top panel of Fig.~\ref{fig:SignalFraction}.
In Eq.~\ref{eq:NPEPERatio}, we assumed the fraction of misidentified
hadrons in the reconstructed electrons after the cuts is negligible as
shown in Fig.~\ref{fig:EPRatio},
and so the number of photonic electrons can be represented as
$N^{\rm{reco}}_{e}(p_{T})-N^{\rm{npe}}(p_{T})$.
The same ratio from a previous
measurement \cite{SingleE:PRL2006} is also shown in the figure.
The previous measurement employed two other methods for the background
estimation, namely a cocktail method and a converter method.
In the cocktail method,
a sum of electron spectra from various background sources was
calculated using a Monte Carlo hadron decay generator.
This sum was subtracted from the inclusive electron sample to isolate
the heavy flavor contribution.
With the converter method, a photon converter around the beam pipe was
introduced to increase the photon conversion probability by a
well-defined amount, and thus allow determination of the photonic
background.
The nonphotonic to photonic electron ratio
is improved by a factor of about 2 or more for
$p_{T}>1.0$~GeV/$c$ compared with the previously measured result
due to the rejection of photonic electrons by the HBD.
The signal purity is defined as the ratio of the yield of the
heavy flavor electrons to the reconstructed electrons after applying the
eID-Cut and npe-Cut,
\begin{eqnarray}
\label{eq:SignalPurity}
D(p_{T})\equiv \frac{N^{\rm{HFe}}(p_{T})}{N^{\rm{reco}}_{e}(p_{T})}.
\end{eqnarray}
The result is shown as the bottom plot in Fig.~\ref{fig:SignalFraction}.
We also show the result of the signal purity in the previous
measurement.
Comparing with the previously measured result,
the signal purity is improved by
a factor of about $1.5$ in a $p_{T}$ range from $0.75$~GeV/$c$ to
$2.00$~GeV/$c$.
\begin{figure}[tbh]
\includegraphics[width=1.0\linewidth,clip]{dilution_fact_nph_ph_ratio}
\caption{\label{fig:SignalFraction}(color online). (top) Ratio
between the yields of the nonphotonic electrons and the photonic
electrons in the reconstructed tracks. The red circles and the
blue squares and triangles
represent this analysis result and the previous result \cite{SingleE:PRL2006},
respectively. The error bars and bands
represent the statistical and the systematic uncertainties.
(bottom) Signal purity which is a ratio of the yield of the
heavy flavor electrons to the total reconstructed electrons.}
\end{figure}
\section{\label{sec:CrossSection}Heavy Flavor Electron Cross Section}
\begin{table}
\caption{\label{tab:XSectSysError}
Relative systematic uncertainties on
the cross section due to uncertainties in the total sampled luminosity,
trigger efficiencies, and detector acceptance.
These systematic uncertainties
are globally correlated in all $p_{T}$ regions
($p_{T}>1.25$~GeV/$c$ for
the uncertainties on $\epsilon_{{\rm trig}}^{e\rm{|MB}}$).}
\begin{ruledtabular}\begin{tabular}{c c l}
source & uncertainty & \\
\hline
MB trig. cross sect. & $9.6\%$ & \\
acceptance $A$ & $8\%$ & \\
reco. efficiency $\epsilon_{{\rm rec}}$ & $6\%$ & \\
MB trig. efficiency $\epsilon_{{\rm trig}}^{{\rm MB}}$ & $2.5\%$ & \\
$e$ trig. efficiency $\epsilon_{{\rm trig}}^{e\rm{|MB}}$ & $3.6\%$ & in $p_{T}>1.25$ GeV/$c$ \\
\end{tabular}\end{ruledtabular}
\end{table}
The invariant cross section is calculated from
\begin{equation}
E\frac{d^{3}\sigma}{dp^{3}} = \frac{1}{2\pi
p_{T}}\frac{1}{L}\frac{1}{A\epsilon_{{\rm rec}}\epsilon_{{\rm trig}}}\frac{N(\Delta
p_{T},\Delta y)}{\Delta
p_{T}\Delta y} ,
\end{equation}
where $L$ denotes the integrated luminosity, $A$ the acceptance,
$\epsilon_{{\rm rec}}$ the reconstruction efficiency,
$\epsilon_{{\rm trig}}$ the trigger efficiency, and $N$ the estimated
number of heavy flavor electrons.
The luminosity, $L$, was calculated from the number of MB events divided by the
cross section for the MB trigger.
For the latter, a value of $23.0$~mb with a systematic uncertainty of
$9.6$\% was estimated from van der Meer scan
results \cite{Pi0pp:PRL2003} corrected for the relative changes in the BBC
performance.
The combination of the acceptance and the reconstruction efficiency,
$A\epsilon_{{\rm rec}}(p_{T})$, was estimated by a {\sc geant}3 simulation.
We found that $A\epsilon_{{\rm rec}}(p_{T})$ has a value of
$4.7\%\times(1\pm 8\times10^{-2}(\rm{acc.})\pm
6\times10^{-2}(\rm{reco.}))$, with a slight $p_{T}$ dependence.
The efficiency of the MB trigger for the hard scattering processes,
including heavy flavor electron production, is
$\epsilon_{{\rm trig}}^{{\rm MB}} = 79.5\% \times (1 \pm 2.5\times10^{-2})$.
The efficiency of the electron trigger for the electrons under the
condition of the MB trigger firing,
$\epsilon_{{\rm trig}}^{e\rm{|MB}}(p_{T}) \equiv \epsilon_{{\rm trig}}(p_{T})/\epsilon_{{\rm trig}}^{{\rm MB}}$,
can be calculated by the ratio of the
number of the reconstructed electrons in the MB triggered sample in
coincidence with the electron trigger to the number of the reconstructed
electrons without the coincidence.
The efficiency $\epsilon_{{\rm trig}}^{e\rm{|MB}}$
is shown in Fig.~\ref{fig:TriggerEfficiency} as a function of
$p_{T}$.
Whereas we used the calculated efficiency values for the momentum
region of $p_{T}<1.25$ GeV/$c$,
we assumed a saturated efficiency for $p_{T}>1.25$ GeV/$c$ and estimated
the value with a fit as shown in Fig.~\ref{fig:TriggerEfficiency}.
The fitting result is $\epsilon_{\rm{plateau}}=56.5\%\times (1\pm
3.6\times10^{-2})$.
The total trigger efficiency $\epsilon_{{\rm trig}}(p_{T})$ can be
calculated with the above two efficiencies as
$\epsilon_{{\rm trig}}(p_{T}) = \epsilon_{{\rm trig}}^{{\rm MB}}\times
\epsilon_{{\rm trig}}^{e\rm{|MB}}(p_{T})$.
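As a minimal numerical sketch (not the analysis code), the efficiencies
quoted above can be combined into the invariant cross section as follows;
the MB event count, the per-bin yield, and the rapidity window
$\Delta y=0.7$ for $|y|<0.35$ are assumptions used only for illustration.
\begin{verbatim}
import math

sigma_mb    = 23.0      # MB trigger cross section [mb] (quoted above)
n_mb_events = 1.0e9     # number of sampled MB events (placeholder)
a_eps_rec   = 0.047     # acceptance x reconstruction efficiency
eps_mb      = 0.795     # MB trigger efficiency for hard processes
eps_e_mb    = 0.565     # electron trigger plateau (p_T > 1.25 GeV/c)

pt, dpt, dy = 1.617, 0.25, 0.7   # bin center/width [GeV/c]; Delta y
n_hfe       = 1.0e4              # heavy flavor electron yield (placeholder)

lumi = n_mb_events / sigma_mb    # integrated luminosity [mb^-1]
eps_trig = eps_mb * eps_e_mb     # total trigger efficiency
xsec = (n_hfe / (dpt * dy)) / (2.0 * math.pi * pt
                               * lumi * a_eps_rec * eps_trig)
print(f"E d3sigma/dp3 = {xsec:.3e} mb GeV^-2 c^3")
\end{verbatim}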
Table~\ref{tab:XSectSysError} summarizes the systematic uncertainties on
the cross section due to uncertainties in the total sampled luminosity,
trigger efficiencies, and detector acceptance.
All systematic uncertainties listed in Table~\ref{tab:XSectSysError}
are globally correlated over the whole $p_{T}$ region ($p_{T}>1.25$~GeV/$c$ for
the uncertainties on $\epsilon_{{\rm trig}}^{e\rm{|MB}}$).
\begin{figure}[tbh]
\includegraphics[width=1.0\linewidth,clip]{e_pt_pol0_syserr_h1}
\caption{\label{fig:TriggerEfficiency}(color online). Efficiency of the
electron trigger for reconstructed electrons under the
condition that the MB trigger was issued. The red line represents
the fit result with a constant function and the green band
represents the fitting uncertainty.}
\end{figure}
\begin{figure}[tbh]
\includegraphics[width=1.0\linewidth,clip]{cross_section_fin}
\caption{\label{fig:CrossSection}(color online). (top) Invariant
differential cross sections of electrons from heavy-flavor
decays. The red circles are
this analysis of 2009 data and the blue squares are the previous
2005 data~\protect\cite{SingleE:PRL2006} for
the nonphotonic electron cross sections.
The error bars and bands represent the statistical
and systematic uncertainties.
The scaling uncertainty from the Vernier scan is not included in
the error bands because the same uncertainty must be
considered for both the results of 2009 and 2005.
The purple dashed dotted line is electron cross section from
$J/\psi\rightarrow e^{+}+e^{-}$ decays estimated from the cocktail
method~\protect\cite{HQPhenix:PRC2011}.
The solid and dashed curves are the FONLL
calculations. (bottom) Deviation from unity of the ratio of
the data to the FONLL
calculation.
The upper and lower curves show the theoretical
upper and lower limits of the FONLL calculation.}
\end{figure}
The measured cross section of heavy flavor electrons is shown in
Fig.~\ref{fig:CrossSection} and tabulated in
Table~\ref{tab:CrossSection}.
A correction for bin width \cite{BinningCorr:NIM1995} is applied to the
$p_{T}$ value of each point.
The figure also shows the previously published
result \cite{SingleE:PRL2006}.
The new result agrees well with the previous result within the
uncertainties.
Note that in this paper we employed a new analysis method with the HBD whereas the
previous measurement employed different methods,
the cocktail method and the converter method.
The consistency between these measurements proves that additional
photonic backgrounds generated in the HBD material are removed, and that
this new analysis method with the HBD is robust.
The electron cross section from $J/\psi\rightarrow e^{+}+e^{-}$ decays
estimated by the cocktail method \cite{HQPhenix:PRC2011}
and a fixed order next-to-leading log (FONLL) pQCD calculation of the
heavy flavor contributions to the electron
spectrum \cite{FONLOXSect:PRL2005} are also shown in Fig.~\ref{fig:CrossSection}.
The $J/\psi$ contribution to the heavy flavor electrons is less than
2\% for $p_{T}<1.25$~GeV/$c$ and increases to $\sim$20\% at
$p_{T}=5.0$~GeV/$c$.
The FONLL pQCD calculation shows that
the heavy flavor electrons in the low momentum region are dominated by
charm quark decays, and the contribution from bottom quarks in
$p_{T}<1.25$~GeV/$c$ is less than 5\%.
\begin{table}[tbh]
\caption{\label{tab:CrossSection}
Data table for the cross section result corresponding
to Fig.~\ref{fig:CrossSection}.}
\begin{ruledtabular}\begin{tabular}{c c c c}
$p_{T}$ & $E\frac{d^{3}\sigma}{dp^{3}}$ & stat. error
& syst. error \\
{[GeV/$c$]} & \multicolumn{3}{c}{[mb$\times$GeV$^{-2}$c$^{3}$]} \\
\hline
0.612 & 2.12$\times10^{-3}$ & 0.04$\times10^{-3}$ & 0.47$\times10^{-3}$ \\
0.864 & 7.93$\times10^{-4}$ & 0.09$\times10^{-4}$ & 1.11$\times10^{-4}$ \\
1.115 & 2.78$\times10^{-4}$ & 0.03$\times10^{-4}$ & 0.37$\times10^{-4}$ \\
1.366 & 1.09$\times10^{-4}$ & 0.02$\times10^{-4}$ & 0.13$\times10^{-4}$ \\
1.617 & 4.77$\times10^{-5}$ & 0.08$\times10^{-5}$ & 0.58$\times10^{-5}$ \\
1.867 & 2.34$\times10^{-5}$ & 0.05$\times10^{-5}$ & 0.27$\times10^{-5}$ \\
2.118 & 1.15$\times10^{-5}$ & 0.04$\times10^{-5}$ & 0.13$\times10^{-5}$ \\
2.369 & 6.05$\times10^{-6}$ & 0.20$\times10^{-6}$ & 0.68$\times10^{-6}$ \\
2.619 & 3.28$\times10^{-6}$ & 0.19$\times10^{-6}$ & 0.37$\times10^{-6}$ \\
2.869 & 1.82$\times10^{-6}$ & 0.11$\times10^{-6}$ & 0.20$\times10^{-6}$ \\
3.120 & 1.08$\times10^{-6}$ & 0.07$\times10^{-6}$ & 0.12$\times10^{-6}$ \\
3.370 & 6.20$\times10^{-7}$ & 0.41$\times10^{-7}$ & 0.69$\times10^{-7}$ \\
3.620 & 4.07$\times10^{-7}$ & 0.26$\times10^{-7}$ & 0.45$\times10^{-7}$ \\
3.870 & 2.42$\times10^{-7}$ & 0.19$\times10^{-7}$ & 0.27$\times10^{-7}$ \\
4.121 & 1.59$\times10^{-7}$ & 0.15$\times10^{-7}$ & 0.18$\times10^{-7}$ \\
4.371 & 1.07$\times10^{-7}$ & 0.11$\times10^{-7}$ & 0.12$\times10^{-7}$ \\
4.621 & 8.02$\times10^{-8}$ & 1.11$\times10^{-8}$ & 0.89$\times10^{-8}$ \\
4.871 & 5.38$\times10^{-8}$ & 0.71$\times10^{-8}$ & 0.60$\times10^{-8}$ \\
\end{tabular}\end{ruledtabular}
\end{table}
\section{\label{sec:HelicityAsymmetry}Heavy Flavor Electron Spin Asymmetry}
\begin{table}
\caption{\label{tab:ALLSysError} Systematic uncertainties by type.
The scaling uncertainty denotes an uncertainty on
scaling of the raw asymmetry $A_{LL}^{\rm{S+BG}}$ and the offset
uncertainty denotes
an uncertainty on the absolute value of the asymmetry.
The ``global'' in this table means the uncertainties are globally
correlated in all $p_{T}$ regions.
The scaling uncertainty is represented as the ratio of the uncertainty
to the signal ($\Delta S/S$) given in percent
and the offset uncertainty is represented as the
absolute value of the uncertainty.}
\begin{ruledtabular}\begin{tabular}{c c c}
source & uncertainty & type \\
\hline
signal purity $D$ & $\sim6\%$ & scaling \\
polarization ( $\frac{\Delta(P_{B}P_{Y})}{P_{B}P_{Y}}$ ) & $8.8\%$ &
global scaling \\
relative luminosity $r$ & \ $0.14\times 10^{-2}$ \ & global offset \\
background asymmetry $A_{LL}^{BG}$ & \ $0.2\times
10^{-2}\times\frac{1-D}{D}$\ & offset \\
\end{tabular}\end{ruledtabular}
\end{table}
Since parity is conserved in QCD processes, finite single-spin
asymmetries are forbidden; using Eq.~\ref{eq:PolCrossSection} we express
the expected electron yields for each beam-helicity combination as
\begin{equation}
\label{eq:Yield}
\begin{array}{l}
N_{++}^{\rm{exp}}(N_{0},A_{LL}) = N_{0}(1 + |P_{\rm{B}}P_{\rm{Y}}|A_{LL}) \\
N_{--}^{\rm{exp}}(N_{0},A_{LL}) = N_{0}(1 + |P_{\rm{B}}P_{\rm{Y}}|A_{LL})/r_{--} \\
N_{+-}^{\rm{exp}}(N_{0},A_{LL}) = N_{0}(1 - |P_{\rm{B}}P_{\rm{Y}}|A_{LL})/r_{+-} \\
N_{-+}^{\rm{exp}}(N_{0},A_{LL}) = N_{0}(1 - |P_{\rm{B}}P_{\rm{Y}}|A_{LL})/r_{-+},
\end{array}
\end{equation}
where
$N_{\pm\pm}^{\rm{exp}}(N_{0},A_{LL})$
denote the expected yields for collisions between
the blue beam-helicity ($\pm$) and
the yellow beam-helicity ($\pm$) and $N_{0}$
is the expected yield in collisions of unpolarized beams under the same
integrated luminosity as the $++$ beam-helicity combination.
$N_{\pm\pm}^{\rm{exp}}(N_{0},A_{LL})$ are used as fitting functions
to estimate $A_{LL}$ as described below.
$P_{\rm{B}}$ and $P_{\rm{Y}}$ represent the polarizations of the beams.
The beam polarizations are measured with a carbon target
polarimeter \cite{pC:arXiv2004},
normalized by the absolute polarization measured with a separate
polarized atomic hydrogen jet polarimeter \cite{pH:PLB2006,pH:arXiv2007}
at another collision point in the RHIC ring.
The measured polarizations are about
$P=57\%$ with a relative uncertainty of $\Delta
P/P=4.7\times10^{-2}$ in the measurement.
The relative luminosities are defined as the ratios of the
luminosities in the beam-helicity combinations,
\begin{equation}
\begin{array}{l}
r_{--} \equiv \frac{L_{++}}{L_{--}} \\
r_{+-} \equiv \frac{L_{++}}{L_{+-}} \\
r_{-+} \equiv \frac{L_{++}}{L_{-+}} ,
\end{array}
\end{equation}
where $L_{\pm\pm}$ represent the integrated luminosities in the
beam-helicity combinations shown by the subscript.
The relative luminosities are determined by the ratios of MB trigger
counts in the four beam-helicity combinations.
The double-spin asymmetry for inclusive electrons after applying eID-Cut
and npe-Cut, which include not only
the heavy flavor electrons (S) but also the background electrons (BG),
is determined by simultaneously fitting the yields of electrons in each
of the four beam-helicity combinations with the expected values
$N_{\pm\pm}^{\rm{exp}}(N_{0},A_{LL})$ from Eq.~\ref{eq:Yield}, where
$A_{LL}$ and $N_{0}$ are free parameters.
To perform the fit, a log likelihood method assuming Poisson
distributions with expected values of $N_{\pm\pm}^{\rm{exp}}(N_{0},A_{LL})$
was employed.
The fit was performed
for electron yields in each fill
to obtain the fill-by-fill double-spin asymmetry.
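A minimal sketch of such a fit for a single fill is given below; the
observed counts, relative luminosities, and polarizations are hypothetical,
and the constant term of the Poisson likelihood is dropped.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Hypothetical single-fill inputs: counts per helicity combination,
# relative luminosities r = L_++/L_xx, and the polarization product.
n_obs = {'++': 10210, '--': 10090, '+-': 10180, '-+': 10050}
r     = {'++': 1.00, '--': 0.98, '+-': 1.02, '-+': 0.99}
sign  = {'++': +1, '--': +1, '+-': -1, '-+': -1}
pbpy  = 0.57 * 0.57

def nll(par):
    """Negative Poisson log-likelihood of Eq. (Yield), constants dropped."""
    n0, a_ll = par
    out = 0.0
    for k in n_obs:
        mu = n0 * (1.0 + sign[k] * pbpy * a_ll) / r[k]
        if mu <= 0.0:
            return np.inf
        out -= n_obs[k] * np.log(mu) - mu
    return out

res = minimize(nll, x0=[1.0e4, 0.0], method='Nelder-Mead')
print("N0 = %.1f, A_LL = %.4f" % tuple(res.x))
\end{verbatim}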
We confirmed that all asymmetries in different fills are consistent with
each other within their statistical uncertainties and, therefore,
the patterns of the crossing helicities in the fills do not affect the
asymmetry measurement.
The final double-spin asymmetry for inclusive electrons,
$A_{LL}^{\rm{S+BG}}(p_{T})$, was
calculated as the weighted mean of the fill-by-fill asymmetries.
The double-spin asymmetry in the heavy flavor electron production,
$A_{LL}^{\rm{HFe}}$, was determined from
\begin{equation}
\label{eq:ALL}
A_{LL}^{\rm{HFe}}(p_{T})=\frac{1}{D(p_{T})}A_{LL}^{\rm{S+BG}}(p_{T})-\frac{1-D(p_{T})}{D(p_{T})}A_{LL}^{\rm{BG}}(p_{T})
\end{equation}
where $A_{LL}^{\rm{BG}}$ represents the spin
asymmetries for the background electron production,
and $D$ represents the signal purity defined in Eq.~\ref{eq:SignalPurity}
and shown in Fig.~\ref{fig:SignalFraction}.
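Per $p_{T}$ bin, the unfolding of Eq.~\ref{eq:ALL} is simple arithmetic;
a sketch with hypothetical input values follows.
\begin{verbatim}
# Hypothetical one-bin example of Eq. (ALL).
a_sbg, da_sbg = 0.010, 0.012   # inclusive asymmetry, statistical error
d             = 0.55           # signal purity D(p_T)
a_bg, da_bg   = 0.001, 0.002   # background asymmetry, its uncertainty

a_hfe = a_sbg / d - (1.0 - d) / d * a_bg
da_stat   = da_sbg / d                # statistical error scales as 1/D
da_offset = (1.0 - d) / d * da_bg     # offset systematic from A_LL(BG)
print(f"A_LL(HFe) = {a_hfe:.4f} +/- {da_stat:.4f} (stat) "
      f"+/- {da_offset:.4f} (offset syst)")
\end{verbatim}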
As previously discussed, most of the background electrons come from
Dalitz decays of the $\pi^{0}$ and $\eta$, or from conversions of
photons from decays of those hadrons.
The fractional contributions at the partonic level, and therefore the
production mechanisms, for the $\pi^{0}$ and $\eta$ are expected to be very
similar up to
$\sim10$~GeV/$c$ \cite{PHENIXPi0ALL:PRD2007,PHENIXEtaALL:PRD2011}.
We therefore assumed identical
double-spin asymmetries for $\pi^{0}$ and $\eta$ production,
and estimated $A_{LL}^{\rm{BG}}$ from the $\pi^{0}$ double-spin
asymmetry alone using data from this PHENIX measurement.
The resulting $A_{LL}^{\rm{BG}}$ is
$-0.1\times10^{-2}<A_{LL}^{\rm{BG}}<0.1\times10^{-2}$
in $0.5<p_{T}<2.5$ GeV/$c$ and
$0.1\times10^{-2}<A_{LL}^{\rm{BG}}<0.2\times10^{-2}$ in
$2.5<p_{T}<3.0$ GeV/$c$, with uncertainties less than
$0.2\times10^{-2}$.
Systematic uncertainties on $A_{LL}^{\rm{HFe}}$ are separated into
scaling uncertainties and offset uncertainties.
The scaling uncertainties come from uncertainty in the beam
polarizations, $P_{B}$ and $P_{Y}$, and the signal purity, $D$.
The uncertainty from the beam polarization is estimated as
$\Delta(P_{B}P_{Y})/P_{B}P_{Y}=8.8$\% which is globally correlated over
the whole $p_{T}$ range.
The offset uncertainties come from uncertainties in the relative
luminosity, $r$, and the background asymmetry, $A_{LL}^{\rm{BG}}$.
The uncertainty from the relative luminosity, which is also globally
correlated over the whole $p_{T}$ range, is determined to be
$\Delta r=1.4\times10^{-3}$ from a comparison of the relative
luminosities measured with the MB trigger and the
ZDC trigger.
The systematic uncertainties are summarized in
Table~\ref{tab:ALLSysError}.
A transverse double-spin asymmetry $A_{TT}$, which is defined by the same
formula as Eq.~\ref{eq:ALLDef} for the transverse polarizations, can
contribute to $A_{LL}$ through the residual transverse components of the
beam polarizations.
The product of the transverse components of the beam polarization is
measured to be $\sim10^{-2}$ in this experiment.
For $\pi^{0}$ production, the $A_{TT}$ is expected to be
$\sim10^{-4}$ based on an NLO QCD
calculation \cite{Pi0ATT:PRD2005}.
If we assume the transverse asymmetries of $\pi^{0}$ and heavy flavor
electrons are comparable, we arrive at the value of $A_{LL}\sim10^{-6}$.
This value is negligible compared with the precision of the
$A_{LL}^{S+BG}$ measurement of $\sim10^{-3}$.
The result of the double-spin asymmetry of heavy flavor electrons
is shown in Fig.~\ref{fig:SingleEALL} and tabulated in
Table~\ref{tab:SingleEALL}.
We show systematic uncertainties for scaling and offset separately in the
figure.
The measured asymmetry is consistent with zero.
\begin{figure}[tbh]
\includegraphics[width=1.0\linewidth,clip]{a_ll_pt_fin}
\caption{\label{fig:SingleEALL}(color online). Double-spin asymmetry
of the heavy flavor electron production.
The red error bands represent scaling systematic uncertainties from the
dilution factor and the blue error bands represent offset systematic
uncertainties from relative luminosity and the background spin asymmetry.}
\end{figure}
\begin{table*}[tbh]
\caption{\label{tab:SingleEALL}Data table for the $A_{LL}^{\rm{HFe}}$ result corresponding to Fig.~\ref{fig:SingleEALL}.}
\begin{ruledtabular}\begin{tabular}{c c c c c c}
$p_{T}$[GeV/$c$]&$A_{LL}^{\rm{HFe}}$& &\ \ stat. uncertainty\ \ &\ \ syst. uncertainty (offset)\ \ &\ \ syst. uncertainty (scale)\ \ \\
\hline
0.612 & 2.83$\times10^{-2}$ & \ \ \ & 2.66$\times10^{-2}$ & 0.75$\times10^{-2}$ & 0.50$\times10^{-2}$ \\
0.864 & -1.20$\times10^{-2}$ & \ \ \ & 1.21$\times10^{-2}$ & 0.30$\times10^{-2}$ & 0.08$\times10^{-2}$ \\
1.115 & 0.76$\times10^{-2}$ & \ \ \ & 1.30$\times10^{-2}$ & 0.21$\times10^{-2}$ & 0.04$\times10^{-2}$ \\
1.366 & 2.08$\times10^{-2}$ & \ \ \ & 1.63$\times10^{-2}$ & 0.18$\times10^{-2}$ & 0.10$\times10^{-2}$ \\
1.617 & -0.69$\times10^{-2}$ & \ \ \ & 2.18$\times10^{-2}$ & 0.17$\times10^{-2}$ & 0.03$\times10^{-2}$ \\
1.867 & -1.39$\times10^{-2}$ & \ \ \ & 2.68$\times10^{-2}$ & 0.16$\times10^{-2}$ & 0.03$\times10^{-2}$ \\
2.118 & 4.82$\times10^{-2}$ & \ \ \ & 3.46$\times10^{-2}$ & 0.16$\times10^{-2}$ & 0.09$\times10^{-2}$ \\
2.369 & -5.91$\times10^{-2}$ & \ \ \ & 4.40$\times10^{-2}$ & 0.16$\times10^{-2}$ & 0.11$\times10^{-2}$ \\
2.619 & -6.97$\times10^{-2}$ & \ \ \ & 5.47$\times10^{-2}$ & 0.16$\times10^{-2}$ & 0.13$\times10^{-2}$ \\
2.869 & 6.43$\times10^{-2}$ & \ \ \ & 7.07$\times10^{-2}$ & 0.16$\times10^{-2}$ & 0.12$\times10^{-2}$ \\
\end{tabular}\end{ruledtabular}
\end{table*}
\section{\label{sec:Discussion}Discussion}
In this section, we discuss the constraint on $\Delta g$ obtained from the
measured double-spin asymmetry with an LO pQCD calculation.
In $p$$+$$p$ collisions at $\sqrt{s}=200$~GeV, heavy flavor electrons
with momenta in the range $0.50<p_{T}<1.25$~GeV/$c$ are mainly produced by
open charm events as described in Sec.~\ref{sec:CrossSection}.
Whereas the precise mechanism for $J/\psi$ production is unknown, the
unpolarized and polarized cross sections of open charm production can
be estimated with pQCD calculations.
In LO pQCD calculations, only $gg\rightarrow c\bar{c}$ and
$q\bar{q}\rightarrow c\bar{c}$ are allowed for the open charm
production.
The charm quarks are primarily created
by the $gg$ interaction in the unpolarized hard scattering.
In addition,
the anti-quark polarizations are known from semi-inclusive
DIS measurements to be small enough that both DSSV \cite{DSSV:PRD2009}
and GRSV \cite{GRSV:PRD2001} expect the
contribution of the polarized $q\bar{q}$ cross section to the double-spin
asymmetry of the heavy flavor electrons in $|\eta|<0.35$ and
$p_{T}<3.0$~GeV/$c$ to be
$\sim10^{-4}$ \cite{HQSpin:arXiv2009}, which is
much smaller than the accuracy of this measurement.
Therefore,
in this analysis of $\Delta g$, we ignore the $q\bar{q}$ interaction and
assume the asymmetries are due only to the $gg$ interaction.
Under this assumption,
the spin asymmetry of the heavy flavor electrons
is expected to be
approximately proportional to the
square of the polarized gluon distribution normalized by the unpolarized
one, $|\Delta g/g(x,\mu)|^{2}$.
We estimated the unpolarized and the polarized cross section
of charm production in $p$$+$$p$ collisions with
an LO pQCD calculation of $gg\rightarrow c\bar{c}$ \cite{LOHFXSect:PLB1994}.
For this calculation, CTEQ6M \cite{CTEQ6:JHEP2002}
was employed for the unpolarized
parton distribution functions (PDF).
For the polarized PDF, we assumed $|\Delta
g(x,\mu)|=Cg(x,\mu)$ where $C$ is a constant.
The charm quark mass was taken to be $m_{c}=$~1.4~GeV/$c^{2}$, and
the factorization scale in CTEQ6 and the renormalization scale
were both set to
$\mu=m_{T}^{c}\equiv\sqrt{{p_{T}^{c}}^{2}+{m_{c}}^{2}}$.
The fragmentation and decay processes were simulated with
{\sc pythia}8 \cite{PYTHIA8:ArXiv2007,PYTHIA6:ArXiv2006}.
We generated $pp\rightarrow c\bar{c} + X$ events and selected electrons
from the charmed hadrons, $D^{+}$, $D^{0}$, $D_{s}$, $\Lambda_{c}$ and
their antiparticles.
We scaled the charm quark yield in {\sc pythia} with respect to the pQCD
calculated unpolarized and polarized cross sections to obtain
unpolarized and polarized electron yields from charmed hadron decays
under these cross sections.
We also applied a pseudorapidity cut of $|\eta|<0.35$ for the
electrons
to match the acceptance of the PHENIX central arms.
The shape of the expected spin asymmetry $A_{LL}^{\rm{HFe}}(p_{T})$ is
then determined from the simulated electron yields.
\begin{figure}[tbh]
\includegraphics[width=1.0\linewidth,clip]{x_dist}
\caption{\label{fig:XDist}(color online). Bjorken $x$ distributions of
gluons contributing to the heavy flavor electron production with
momenta in the range $0.50<p_{T}<1.25$ GeV/$c$
obtained from {\sc pythia} simulation.
The distribution is normalized with respect to the number of total
generated charmed hadrons.
}
\end{figure}
Figure~\ref{fig:XDist} shows the distributions of the gluon Bjorken $x$
contributing to heavy flavor electron production
in the momentum range $0.50<p_{T}<1.25$~GeV/$c$,
from {\sc pythia}.
Using the mean and the RMS of the distribution for $0.50<p_{T}<1.25$ GeV/$c$, we
determine the mean $x$ for heavy flavor electron production to be
$\left\langle\log_{10}x\right\rangle=\xmean^{+\xrmsp}_{-\xrmsn}$.
\begin{figure}[tbh]
\subfigure[\label{fig:ALLFit:Result}Expected $A_{LL}^{\rm{HFe}}$ for
several $|\Delta g/g|$.]
{\includegraphics[width=1.0\linewidth,clip]{dg_prob_scale1_00_m1_40_gh_CalcDG}}
\subfigure[\label{fig:ALLFit:Chis}$\hat{\chi}^{2}$ curves.]{\includegraphics[width=1.0\linewidth,clip]{dep_logllh_sqr_fin}}
\caption{\label{fig:ALLFit}(color online). (a)
$A_{LL}^{\rm{HFe}}$ for $|\Delta g/g|=0.00$, $0.10$, $0.20$,
$0.30$, $0.40$, $0.50$ are shown as the solid line, the dashed line,
the dotted line, the dashed dotted line, the long-dashed dotted line,
the dashed triple-dotted line, respectively. They are plotted with the
measured data points, and the notation for the error bars is the same as in
Fig.~\ref{fig:SingleEALL}. (b) $\hat{\chi}^{2}$ curves calculated
from (a) as a function of $|\Delta g/g|^{2}$. The black solid line
is the default configuration. The blue curves are
after changing the charm mass to 1.3 GeV/$c^{2}$ (dashed line) and
to 1.5 GeV/$c^{2}$ (dotted line) and the red curves are after
changing the scale $\mu^{2}$ to $0.75{m_{T}^{c}}^{2}$ (dashed
dotted line) and
$1.5{m_{T}^{c}}^{2}$ (long-dashed dotted line).}
\end{figure}
We calculated the expected $A_{LL}^{\rm{HFe}}(p_{T})$
by varying $C=|\Delta g/g|$.
Figure~\ref{fig:ALLFit:Result} shows several of these curves, along with
the measured points.
$\chi^{2}$ values are calculated for each value of $C$, along with
related uncertainties.
By assuming that the systematic uncertainties on the points
are correlated and represent global shifts,
we defined the quantity $\hat{\chi}^{2}$ as
\begin{equation}
\label{eq:EffectiveChis}
\begin{array}{rl}
\hat{\chi}^{2}(C)&\equiv-2\log\left(\left(2\pi\right)^{\frac{n}{2}}\hat{P}(C)\right) \\
\hat{P}(C)&\equiv\int dp\,dq\,N(p)N(q)\times \\
&\prod_{i=1}^{n}N\left(\frac{y_i+p\epsilon_{\rm{syst}}^{i\ \rm{offset}}-\left(1+q\gamma_{\rm{syst}}^{i\ \rm{scale}}\right)f(x_i;C)}{\epsilon_{\rm{stat}}^{i}}\right) \\
\gamma_{\rm{syst}}^{i\ \rm{scale}}&=\sqrt{\left(\frac{\epsilon_{\rm{syst}}^{i\ \rm{scale}}}{y_i}\right)^{2}+\left(\frac{\Delta(P_BP_Y)}{P_BP_Y}\right)^{2}},
\end{array}
\end{equation}
where $N(X)$ denotes the standard normal probability density,
i.e., $N(X)=1/\sqrt{2\pi}\exp(-X^{2}/2)$, $n$ is the number
of data points, equal to three, and for the $i$-th data point,
$x_i$ is the $p_{T}$ value, $y_i$ is the $A_{LL}$ value, and
$\epsilon_{\rm{stat}}^{i}$,
$\epsilon_{\rm{syst}}^{i\ \rm{offset}}$ and $\epsilon_{\rm{syst}}^{i\
\rm{scale}}$ represent the statistical, offset systematic and
scaling systematic uncertainties, respectively.
$f(p_{T};C)$ denotes the expected $A_{LL}(p_{T})$ for the
parameter of $C=|\Delta g/g|$.
$\Delta(P_BP_Y)$ is an uncertainty for polarization mentioned in
Sec.~\ref{sec:HelicityAsymmetry}.
If we set the systematic uncertainties,
$\epsilon_{\rm{syst}}^{\rm{offset}}$ and $\gamma_{\rm{syst}}^{i\
\rm{scale}}$, to zero, the newly defined $\hat{\chi}^{2}$ reduces to
the conventional $\chi^{2}$.
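Numerically, $\hat{\chi}^{2}(C)$ can be evaluated by quadrature over the
two global nuisance parameters $p$ and $q$, as in the sketch below; the
data points are taken from Table~\ref{tab:SingleEALL}, while the signal
template $t_i$ (the expected $A_{LL}$ per bin at $|\Delta g/g|=1$) is a
hypothetical stand-in for the simulated shape.
\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# First three data points of Table (SingleEALL).
y     = np.array([ 0.0283, -0.0120, 0.0076])   # A_LL
e_st  = np.array([ 0.0266,  0.0121, 0.0130])   # statistical
e_off = np.array([ 0.0075,  0.0030, 0.0021])   # offset systematic
e_sc  = np.array([ 0.0050,  0.0008, 0.0004])   # scale systematic
gam   = np.sqrt((e_sc / np.abs(y))**2 + 0.088**2)

# Hypothetical template: expected A_LL per bin at |dg/g| = 1, so that
# f_i(C) = C^2 * t_i; the real shape comes from the LO pQCD simulation.
t = np.array([0.5, 0.6, 0.7])

def chi2_hat(c2, n_nodes=40):
    """hat{chi}^2 of Eq. (EffectiveChis) via Gauss-Hermite quadrature
    over the offset (p) and scale (q) nuisance parameters."""
    nodes, w = hermegauss(n_nodes)   # weight function exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)     # include the N(0,1) normalization
    p = nodes[:, None, None]
    q = nodes[None, :, None]
    f = (1.0 + q * gam) * (c2 * t)
    z = (y + p * e_off - f) / e_st
    like = np.prod(np.exp(-0.5 * z**2), axis=-1)
    return -2.0 * np.log(np.einsum('i,j,ij->', w, w, like))

for c2 in (0.0, 0.01, 0.03):
    print(f"|dg/g|^2 = {c2:.2f}: chi2_hat = {chi2_hat(c2):.3f}")
\end{verbatim}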
The resulting $\hat{\chi}^{2}$ curve is shown in
Fig.~\ref{fig:ALLFit:Chis}, plotted as a function of $C^{2}=|\Delta
g/g|^{2}$ because the curve is then almost parabolic.
The minimum of $\hat{\chi}^{2}$, $\hat{\chi}^{2}_{{\rm min}}$, is located
at $|\Delta g/g|^{2}=0.0$ which is the boundary of $|\Delta g/g|^{2}$.
$\Delta\hat{\chi}^{2}\equiv\hat{\chi}^{2}-\hat{\chi}^{2}_{{\rm min}}=1$
and $9$ were
utilized to determine $1\sigma$ and $3\sigma$ uncertainties.
With these criteria, we found the constraints on the gluon
polarization are
$|\Delta g/g(\left\langle\log_{10}x\right\rangle,\mu)|^{2}<
\dggsqrerrlim\times10^{-2}$($1\sigma$) and
$\dggsqrthreeerrlim\times10^{-2}$($3\sigma$).
The constraints are consistent with theoretical expectations for
$\Delta g/g(x,\mu)$ at
$\left\langle\log_{10}x\right\rangle=\xmean^{+\xrmsp}_{-\xrmsn}$ and
$\mu=1.4$~GeV which are
$\sim\dggdssv$ from DSSV,
$\sim\dgggrsvs$ from GRSV(std) and
$\sim\dgggrsvv$ from GRSV(val)
using CTEQ6 for the unpolarized PDF.
The effects of the charm quark mass and scale factor in the
cross section calculation were also checked by varying the charm mass from
$m_{c}=1.3$~GeV/$c^{2}$ to 1.5~GeV/$c^{2}$ and the scale
to $\mu^{2}=0.75{m_{T}^{c}}^{2}$ and $1.5{m_{T}^{c}}^{2}$.
Figure~\ref{fig:ALLFit:Chis} also shows the resulting $\hat{\chi}^{2}$ curves.
Considering the variation of the crossing position at
$\Delta\hat{\chi}^{2}=1$,
the constraint including the uncertainties from the charm mass and the
scale can be represented as $|\Delta
g/g|^{2}<(\dggsqrerrlim_{-0.3}^{+0.4}{\rm{(mass)}}_{-0.4}^{+0.7}{\rm{(scale)}})\times10^{-2}$($1\sigma$).
The integral of the CTEQ6 unpolarized PDF in the sensitive $x$
region of
$\left\langle\log_{10}x\right\rangle=\xmean^{+\xrmsp}_{-\xrmsn}$ and
$\mu=1.4$~GeV is $\int_{0.01}^{0.08}dxg(x,\mu)=4.9$.
Hence the constraint on the integral of the polarized PDF at $1\sigma$
corresponds to
$|\int_{0.01}^{0.08}dx\Delta g(x,\mu)|<0.85$.
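The translation from the limit on $|\Delta g/g|^{2}$ to this bound is
elementary; in the sketch below the published limit enters as a parameter
(its value is set by the macro above), and the example call simply inverts
the quoted bound of $0.85$ as a consistency check.
\begin{verbatim}
import math

def dg_integral_bound(dgg_sq_limit, g_integral=4.9):
    """Bound on |int dx dg(x,mu)| implied by a limit on |dg/g|^2,
    using int_{0.01}^{0.08} dx g(x,mu) = 4.9 from CTEQ6."""
    return math.sqrt(dgg_sq_limit) * g_integral

limit = (0.85 / 4.9) ** 2           # limit implied by |int dx dg| < 0.85
print(limit, dg_integral_bound(limit))   # -> ~3.0e-02, 0.85
\end{verbatim}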
This study also highlights the possibility of constraining $\Delta g$
in this Bjorken $x$ region more precisely in the future
with higher statistics and
higher beam polarizations.
\section{\label{sec:Summary}Summary}
We have presented a new analysis method for identifying heavy flavor electrons at
PHENIX.
With this new method, the signal purity is improved by a factor of
about 1.5 around $0.75\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} p_{T} \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 2.00$~GeV/$c$ due to the
rejection of photonic electrons by the HBD.
We have reported on the first measurement of the longitudinal
double-spin asymmetry of heavy flavor electrons, which is consistent
with zero.
Using this result, we estimate a
constraint of $|\Delta g/g
(\log_{10}x=\xmean^{+\xrmsp}_{-\xrmsn},\mu=m_{T}^{c})|^{2}<\dggsqrerrlim\times10^{-2}$($1\sigma$).
This value is consistent with the existing theoretical expectations with
GRSV and DSSV.
With improved statistics and polarization, the helicity
asymmetry of heavy flavor electron production can provide more
significant constraints on the gluon polarization, and complement other
measurements of $\Delta G$.
\section*{ACKNOWLEDGMENTS}
We thank the staff of the Collider-Accelerator and Physics
Departments at Brookhaven National Laboratory and the staff of
the other PHENIX participating institutions for their vital
contributions.
We thank Marco Stratmann for detailed discussions about constraining the
gluon polarization and for the preparation of codes used to calculate the
cross sections.
We acknowledge support from the
Office of Nuclear Physics in the
Office of Science of the Department of Energy, the
National Science Foundation, Abilene Christian University
Research Council, Research Foundation of SUNY, and Dean of the
College of Arts and Sciences, Vanderbilt University (U.S.A),
Ministry of Education, Culture, Sports, Science, and Technology
and the Japan Society for the Promotion of Science (Japan),
Conselho Nacional de Desenvolvimento Cient\'{\i}fico e
Tecnol{\'o}gico and Funda\c c{\~a}o de Amparo {\`a} Pesquisa do
Estado de S{\~a}o Paulo (Brazil),
Natural Science Foundation of China (P.~R.~China),
Ministry of Education, Youth and Sports (Czech Republic),
Centre National de la Recherche Scientifique, Commissariat
{\`a} l'{\'E}nergie Atomique, and Institut National de Physique
Nucl{\'e}aire et de Physique des Particules (France),
Bundesministerium f\"ur Bildung und Forschung, Deutscher
Akademischer Austausch Dienst, and Alexander von Humboldt Stiftung (Germany),
Hungarian National Science Fund, OTKA (Hungary),
Department of Atomic Energy and Department of Science and Technology (India),
Israel Science Foundation (Israel),
National Research Foundation and WCU program of the
Ministry Education Science and Technology (Korea),
Ministry of Education and Science, Russian Academy of Sciences,
Federal Agency of Atomic Energy (Russia),
VR and Wallenberg Foundation (Sweden),
the U.S. Civilian Research and Development Foundation for the
Independent States of the Former Soviet Union,
the Hungarian American Enterprise Scholarship Fund,
and the US-Israel Binational Science Foundation.
\part{Supplementary Materials}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
\setcounter{page}{1}
\makeatletter
\renewcommand{\theequation}{S\arabic{equation}}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\bibnumfmt}[1]{[S#1]}
\renewcommand{\citenumfont}[1]{S#1}
\pagestyle{empty}
Here we present the derivation of the general form of the scattering matrix for the geometry with 2 and 3 contact leads. We derive general formulas for the pumped heat and charge for a system consisting of both superconducting and normal leads and apply them to the device considered in the paper.
\section{General form of the scattering matrix for 2 and 3 leads}
We study first the case where we couple the superconducting island to 3 leads. The setup is described by the Hamiltonian:
\begin{eqnarray}
\nonumber
H_0&=&2i \gamma_0 \vec{\Delta}\cdot \vec{\gamma} \\
\label{H3}
H_T &=& t_L \left(c_{Lk}-c_{Lk}^\dag\right)\gamma_x+t_R \left(c_{Rk}-c_{Rk}^\dag\right)\gamma_y + t_U \left(c_{Uk}-c_{Uk}^\dag\right)\gamma_z\\
\nonumber
H_\alpha &=& \sum_k \xi_k c_{\alpha k}^\dag c_{\alpha k}
\end{eqnarray}
Analogously to the procedure in the paper, we calculate the unitary scattering matrix: $\Psi_{\rm out}(\epsilon) = S(\epsilon) \Psi_{\rm in}(\epsilon)$, where the scattering states $\Psi_{\rm in} = (\Phi_{e,L},\Phi_{h,L},\Phi_{e,R},\Phi_{h,R},\Phi_{e,U},\Phi_{h,U})$ are in the basis of electrons $(e)$ and holes $(h)$ in the left, right, and upper leads.
The scattering matrix can be expressed as:
\begin{eqnarray}
S(\epsilon) = \openone +2 \pi i W^\dag \left( H_0-\epsilon-i\pi WW^\dag\right)^{-1} W
\end{eqnarray}
where $W$ is the matrix that encodes the coupling to the leads. It is possible to perform a unitary transformation that decouples some of the lead degrees of freedom from the system. We work in this rotated basis defined by $\tilde{\Psi} = (\Phi_{L}^+,\Phi_{R}^+,\Phi_{U}^+,\Phi_{L}^-,\Phi_{R}^-,\Phi_{U}^-)$, where $\Phi_{\alpha}^\pm = \frac{1}{\sqrt{2}}\left[\Phi_{e,\alpha}\pm \Phi_{h,\alpha}\right]$. It follows directly from Eq.~\eqref{H3} that, in this basis, the scattering matrix takes a block-diagonal form with only three channels coupled to the dot. We therefore restrict ourselves to the three-dimensional basis of coupled channels, namely $\psi= (\Phi_{L}^-,\Phi_{R}^-,\Phi_{U}^-)$.
Denoting by $\Gamma_U$ the coupling to the third lead, the coupling matrix takes the form:
\begin{eqnarray}
W &=& \left(
\begin{array}{cccc}
\sqrt{\Gamma_L}&0&0\\
0&\sqrt{\Gamma_R}&0 \\
0&0&\sqrt{\Gamma_U} \\
0&0&0
\end{array}
\right) =\sqrt{\Gamma_L} |x\rangle \langle L^-| +\sqrt{\Gamma_R}|y\rangle \langle R^-| +\sqrt{\Gamma_U}|z\rangle \langle U^-|
\end{eqnarray}
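As a consistency check, the scattering formula above can be evaluated
numerically. In the minimal sketch below, a generic Hermitian matrix stands
in for $H_0$ (its detailed Majorana structure is not needed to verify
unitarity), and the coupling values are hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Generic Hermitian stand-in for H0 on the island's 4-dimensional space.
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
h0 = (a + a.conj().T) / 2.0

# Coupling matrix of the 3-lead geometry (one channel per lead).
gl, gr, gu = 1.0, 0.8, 0.3
w = np.zeros((4, 3))
w[0, 0] = np.sqrt(gl)
w[1, 1] = np.sqrt(gr)
w[2, 2] = np.sqrt(gu)

def s_matrix(eps):
    """S = 1 + 2 pi i W^dag (H0 - eps - i pi W W^dag)^{-1} W."""
    g = np.linalg.inv(h0 - eps * np.eye(4) - 1j * np.pi * (w @ w.conj().T))
    return np.eye(3) + 2j * np.pi * (w.conj().T @ g @ w)

s = s_matrix(0.1)
print("unitary:", np.allclose(s @ s.conj().T, np.eye(3)))
\end{verbatim}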
Projecting onto the ground-state manifold, $P_{\rm g} = |\theta\rangle \langle \theta| +|\phi\rangle \langle \phi|$, we can write:
\begin{eqnarray}
\nonumber
P_{\rm g} W &=&\sqrt{\Gamma_L} \left[ \langle \theta |x\rangle |\theta\rangle \langle L^-|+\langle \phi |x\rangle |\phi\rangle \langle L^-| \right]
+ \sqrt{\Gamma_R} \left[ \langle \theta |y\rangle |\theta\rangle \langle R^- |+\langle \phi |y\rangle |\phi\rangle \langle R^-| \right] + \sqrt{\Gamma_U} \left[ \langle \theta |z\rangle |\theta\rangle \langle U^-|+\langle \phi |z\rangle |\phi\rangle \langle U^-| \right]\\
&=&\sqrt{\Gamma_L}\left[ \cos\theta\cos\phi |\theta\rangle \langle L^-|-\sin\phi |\phi\rangle \langle L^-| \right]+ \sqrt{\Gamma_R} \left[ \cos\theta\sin\phi |\theta\rangle \langle R^- |+\cos\phi |\phi\rangle \langle R^-| \right] - \sqrt{\Gamma_U}\left[ \sin\theta |\theta\rangle \langle U^-| \right]
\end{eqnarray}
The non-trivial part of the scattering matrix follows:
\begin{eqnarray}
\label{eq:3contatti}
\nonumber
\tilde{S}(\epsilon) &=& \openone +2 i W^\dag \left( P_{\rm g}[H_0-\epsilon-iWW^\dag] P_{\rm g} \right)^{-1} W\\
\nonumber
&=& \openone -2 i W^\dag \left( \epsilon+iP_{\rm g}WW^\dag P_{\rm g} \right)^{-1} W\\
\nonumber
&=& |L^-\rangle \langle L^-|\left\{ 1- 2i \Gamma_L \left[\frac{\left(\epsilon+i \Gamma_R \right)\cos^2\theta+\left(\epsilon+i \Gamma_U \right)\sin^2\theta\sin^2\phi}{\left(\epsilon+i \Gamma_U \right)\sin^2\theta\left(\epsilon+i \Gamma_L\sin^2\phi+i \Gamma_R\cos^2\phi \right)+\left(\epsilon+i \Gamma_L \right)\left(\epsilon+i \Gamma_R \right)\cos^2\theta}\right]\right\}\\
\nonumber
&+& |R^-\rangle \langle R^-|\left\{ 1- 2i \Gamma_R\left[\frac{\left(\epsilon+i \Gamma_L \right)\cos^2\theta+\left(\epsilon+i \Gamma_U \right)\sin^2\theta\cos^2\phi}{\left(\epsilon+i \Gamma_U \right)\sin^2\theta\left(\epsilon+i \Gamma_L\sin^2\phi+i \Gamma_R\cos^2\phi \right)+\left(\epsilon+i \Gamma_L \right)\left(\epsilon+i \Gamma_R \right)\cos^2\theta}\right]\right\}\\
\nonumber
&+& |U^-\rangle \langle U^-|\left\{ 1- 2i \Gamma_U\sin^2\theta\left[\frac{\left(\epsilon+i \Gamma_L\sin^2\phi+i \Gamma_R\cos^2\phi \right)}{\left(\epsilon+i \Gamma_U \right)\sin^2\theta\left(\epsilon+i \Gamma_L\sin^2\phi+i \Gamma_R\cos^2\phi \right)+\left(\epsilon+i \Gamma_L \right)\left(\epsilon+i \Gamma_R \right)\cos^2\theta}\right]\right\}\\
\nonumber
&-& 2i\sqrt{\Gamma_L\Gamma_R} \left[|L^-\rangle \langle R^-|+|R^-\rangle \langle L^-|\right]\left\{\frac{\left(\epsilon+i \Gamma_U \right)\sin^2\theta\cos\phi\sin\phi}{\left(\epsilon+i \Gamma_U \right)\sin^2\theta\left(\epsilon+i \Gamma_L\sin^2\phi+i \Gamma_R\cos^2\phi \right)+\left(\epsilon+i \Gamma_L \right)\left(\epsilon+i \Gamma_R \right)\cos^2\theta}\right\}\\
\nonumber
&+& 2i\sqrt{\Gamma_U\Gamma_R}\left[ |U^-\rangle \langle R^-|+ |R^-\rangle \langle U^-|\right]\left\{\frac{\left(\epsilon+i \Gamma_L \right)\cos\phi\cos\theta\sin\theta}{\left(\epsilon+i \Gamma_U \right)\sin^2\theta\left(\epsilon+i \Gamma_L\sin^2\phi+i \Gamma_R\cos^2\phi \right)+\left(\epsilon+i \Gamma_L \right)\left(\epsilon+i \Gamma_R \right)\cos^2\theta}\right\}\\
&+& 2i\sqrt{\Gamma_U\Gamma_L}\left[ |U^-\rangle \langle L^-|+|L^-\rangle \langle U^-|\right]\left\{\frac{\left(\epsilon+i \Gamma_R \right)\sin\phi\cos\theta\sin\theta}{\left(\epsilon+i \Gamma_U \right)\sin^2\theta\left(\epsilon+i \Gamma_L\sin^2\phi+i \Gamma_R\cos^2\phi \right)+\left(\epsilon+i \Gamma_L \right)\left(\epsilon+i \Gamma_R \right)\cos^2\theta}\right\}\
\end{eqnarray}
\begin{figure}
\includegraphics[width=0.45\textwidth]{SMtx3LeadsGeneral}
\caption{The evolution of the $x$ and $y$ angles of the scattering matrix, Eq.~\eqref{Smtx3C1-1}, during the first segment of the pumping cycle. The solid line was calculated for $\Gamma_U/\Gamma_L= 0.3$ and $\epsilon/\Gamma_L= 10^{-4}$, while the dashed line shows the limit of uncoupling the upper lead, $\Gamma_L\gg \epsilon\gg\Gamma_U$, where $\epsilon/\Gamma_L= 10^{-2}$ and $\Gamma_U/\Gamma_L= 10^{-5}$.}
\label{fig:3LeadsGeneralCoupling}
\end{figure}
The expression in Eq. \eqref{eq:3contatti} simplifies considerably when the scattering matrix is evaluated along the path in parameter space of the pumping cycle. We obtain, for the three branches of the cycle:
\begin{eqnarray}\label{Smtx3C1-1}
\nonumber
S(\mathcal{C}_1) &=& e^{-ix_{\vec{C}_1}}\left[ \cos x_{\vec{C}_1} \frac{4 \mathbf{1} +3 \lambda_3 -\sqrt{3}\lambda_8}{6}
+i\sin x_{\vec{C}_1}\cos y_{\vec{C}_1}\frac{\sqrt{3}\lambda_8+\lambda_3}{2}
+i\sin x_{\vec{C}_1}\sin y_{\vec{C}_1}\lambda_4
\right]
+ e^{-2i\zeta_R} \frac{2 \mathbf{1} -3 \lambda_3 +\sqrt{3}\lambda_8}{6}
\\
\nonumber
S(\mathcal{C}_2) &=& e^{-ix_{\vec{C}_2}}\left[ \cos x_{\vec{C}_2} \frac{2 \mathbf{1} +\sqrt{3}\lambda_8}{3}
+i\sin x_{\vec{C}_2}\cos y_{\vec{C}_2}\lambda_3
+i\sin x_{\vec{C}_2}\sin y_{\vec{C}_2}\lambda_1
\right] + e^{-2i\zeta_U} \frac{\mathbf{1} -\sqrt{3}\lambda_8}{3}
\nonumber \\
S(\mathcal{C}_3) &=& e^{-ix_{\vec{C}_3}}\left[ \cos x_{\vec{C}_3} \frac{4 \mathbf{1} -3 \lambda_3 -\sqrt{3} \lambda_8}{6}
+i\sin x_{\vec{C}_3}\cos y_{\vec{C}_3}\frac{1}{2}\left[\sqrt{3}\lambda_8-\lambda_3\right]
+i\sin x_{\vec{C}_3}\sin y_{\vec{C}_3}\lambda_6
\right]+ e^{-2i\zeta_L} \frac{2 \mathbf{1} +3 \lambda_3 +\sqrt{3} \lambda_8}{6}
\nonumber
\end{eqnarray}
where
$\vec{C}_1= (L,U,\theta)$, $\vec{C}_2=(R,L,\phi)$, $\vec{C}_3=(U,R,\phi)$ and
$\tan x_{(\alpha,\beta,\psi)} = (\Gamma_\alpha\cos^2\psi+\Gamma_\beta\sin^2\psi)/\epsilon$,
$\tan y_{(\alpha,\beta,\psi)} =(2\sqrt{\Gamma_\alpha\Gamma_\beta}\cos\psi\sin\psi)/(\Gamma_\beta\sin^2\psi-\Gamma_\alpha\cos^2\psi)$
and $\tan\zeta_{\alpha} =\Gamma_\alpha/\epsilon$.
The scattering matrix dependence on the parameters is represented graphically in Fig. \ref{fig:3LeadsGeneralCoupling}, which presents the $x$ and $y$ components along the path in parameter space.
\section{General form of the Scattering matrix for 2 leads}
Upon decoupling one of the leads, by setting $\Gamma_U =0$ in Eq.~\eqref{eq:3contatti}, the scattering matrix reduces to that of the two-lead geometry, $\tilde{S}(\epsilon) \in SU(2)$, which can be parametrized in terms of Pauli matrices as
\begin{eqnarray}
\label{eq:s2conatti1}
\tilde{S}(\epsilon) = e^{-i\eta(\epsilon) } e^{-i\chi(\epsilon) \left(\cos\delta(\epsilon)\sigma_z+\sin\delta(\epsilon)\sigma_x\right)} = e^{-i\eta(\epsilon) } \left[\cos\chi(\epsilon) -i\sin\chi(\epsilon) \left( \cos\delta(\epsilon) \sigma_z+\sin\delta(\epsilon) \sigma_x\right)\right]
\end{eqnarray}
with
\begin{eqnarray}
\nonumber
\tan \delta(\epsilon)&=& \frac{2\sqrt{\Gamma_L\Gamma_R}\sin^2\theta\sin 2\phi}{(\Gamma_L-\Gamma_R)(1+\cos^2\theta)-(\Gamma_L+\Gamma_R)\sin^2\theta\cos2\phi}\xrightarrow[]{\Gamma_L\rightarrow \Gamma_R} \tan 2\phi\\
\nonumber
\tan\chi(\epsilon) &=&\frac{\epsilon\sqrt{\left[(\Gamma_L-\Gamma_R)(1+\cos^2\theta)-(\Gamma_L+\Gamma_R)\sin^2\theta\cos2\phi\right]^2+4\Gamma_L\Gamma_R\sin^4\theta\sin^2 2\phi}}{2\left[\epsilon^2+\Gamma_L\Gamma_R\cos^2\theta\right]}\xrightarrow[]{\Gamma_L\rightarrow \Gamma_R} \frac{\epsilon\Gamma\sin^2\theta}{\epsilon^2+\Gamma^2\cos^2\theta}\\
\tan\eta(\epsilon) &=&\frac{ \epsilon \left[\Gamma_L(\cos^2\theta+\sin^2\theta\sin^2\phi)+\Gamma_R(\cos^2\theta+\sin^2\theta\cos^2\phi)\right]}{\epsilon^2-\Gamma_L\Gamma_R\cos^2\theta}\xrightarrow[]{\Gamma_L\rightarrow \Gamma_R}\frac{\epsilon\Gamma(1+\cos^2\theta)}{\epsilon^2-\Gamma^2\cos^2\theta}
\end{eqnarray}
In the limit of equal coupling we recover the simplified expressions (6) in the paper.
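The equal-coupling limit $\tan\delta \rightarrow \tan 2\phi$ can be
verified directly against the projected scattering matrix with
$\Gamma_U=0$; the parameter point in the sketch below is hypothetical.
\begin{verbatim}
import numpy as np

gl = gr = 1.0
eps, theta, phi = 0.3, 0.7, 0.4

# M = P_g W in the (theta, phi) ground-state basis, with Gamma_U = 0.
m = np.array([[np.sqrt(gl)*np.cos(theta)*np.cos(phi),
               np.sqrt(gr)*np.cos(theta)*np.sin(phi)],
              [-np.sqrt(gl)*np.sin(phi), np.sqrt(gr)*np.cos(phi)]])
s = np.eye(2) - 2j*m.conj().T @ np.linalg.inv(
        eps*np.eye(2) + 1j*(m @ m.conj().T)) @ m

# tan(delta) is the ratio of the sigma_x and sigma_z components.
tan_delta = (s[0, 1] + s[1, 0]) / (s[0, 0] - s[1, 1])
print(tan_delta.real, np.tan(2*phi))   # the two numbers should agree
\end{verbatim}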
\section{Pumped charge and heat with normal and superconducting leads}
The main result of the paper establishes a relation between the geometric phases associated with the braiding of Majorana zero modes and transport across the system. Specifically, the pumped heat across the cycle encodes information on the geometric phase.
We derive here the general expression for the pumped charge and heat in setups consisting of both normal and superconducting leads, and use it for the Majorana-based device studied in the paper.
The starting point for our derivation is the set of results obtained for charge pumping \cite{Moskalets2002,Blaauboer2002,Taddei2004}. The system under consideration consists of $N$ normal leads contacted to a mesoscopic device in the presence of a grounded superconductor. The amplitude for electrons and holes injected from lead $\alpha$ to be reflected/transmitted as electrons or holes at lead $\beta$ is determined by the scattering matrix $S_{\beta,\alpha}^{(ij)}(t)$, where $i,j \in \{ e,h \}$, with the same notation used in the paper.
The outgoing charge and energy currents at lead $\alpha$ are given by
\begin{eqnarray}
I_{e,\alpha} & = &\frac{e}{h}\int_0^\infty dE \left[ f_{{\rm out},\alpha}(E) -f_{{\rm in},\alpha} (E) \right], \\
I_{\epsilon,\alpha} & = &\frac{1}{h} \int_0^\infty dE (E-\mu_\alpha) \left[ f_{{\rm out},\alpha}(E) -f_{{\rm in},\alpha} (E) \right],
\end{eqnarray}
Here the distribution of the ingoing particles at lead $\alpha$ is the equilibrium distribution set by the external temperature and chemical potential, $f_{{\rm in},\alpha} (E) = f_0(E-\mu_\alpha)$, and the outgoing one is determined by the scattering properties of the system.
In the following we simplify $\left[ f_{{\rm out},\alpha}(E) -f_{{\rm in},\alpha} (E) \right]$, and the result can be used for both the charge and heat current.
We assume that all the leads are kept at the same chemical potential, $\mu_\alpha=\mu$.
Moreover, the time dependence of the scattering matrix is due to the weak, slow, periodic driving of the external parameters, $X_j(t)=X_j+X_{j,\omega} e^{i(\omega t-\phi_j)} + X_{j,\omega} e^{-i(\omega t-\phi_j)}$. The time dependence of the scattering matrix is then given by
\begin{equation}
\hat{S}(t)= \hat{S} + \hat{S}_{\omega}e^{-i\omega t} +\hat{S}_{-\omega} e^{i \omega t},
\end{equation}
with
\begin{equation}
\label{eq:scatter-time}
\hat{S}_{\pm \omega} = \sum_j X_{j,\omega}e^{\mp i \phi_j} \frac{\partial \hat{S}}{\partial X_j}.
\end{equation}
This essentially corresponds to including the scattering process involving the nearest energy side-bands. The distribution of outgoing electrons at energy $E$ is therefore determined by the distribution of ingoing electrons and holes at energies $E$, $E+\omega$ and $E-\omega$.
With the observation that the ingoing equilibrium distribution of holes is given by $f^{(h)}_{{\rm in}, \beta}(E) = f_0(E +\mu)$,
we get
\begin{eqnarray}
\label{eq:distribuzioni}
f_{{\rm out}, \alpha} (E) = \sum_{\beta}& & \left[ |S_{\alpha,\beta}^{ee}|^2 f_0 (E-\mu) + |S_{-\omega,\alpha,\beta}^{ee}|^2 f_0 (E-\mu+\omega) + |S_{\omega,\alpha,\beta}^{ee}|^2 f_0 (E-\mu-\omega) \right. \nonumber \\
& & \left. +|S_{\alpha,\beta}^{eh}|^2 f_0 (E+\mu) + |S_{-\omega,\alpha,\beta}^{eh}|^2 f_0 (E+\mu+\omega) + |S_{\omega,\alpha,\beta}^{eh}|^2 f_0 (E+\mu-\omega)
\right].
\end{eqnarray}
Expanding Eq. \eqref{eq:distribuzioni} at leading order in $\omega \to 0$, and using the unitarity of the scattering matrix, $\left( \hat{S} + \hat{S}_{\omega}e^{-i\omega t} +\hat{S}_{-\omega} e^{i \omega t}\right) \cdot \left( \hat{S}^\dagger + \hat{S}^\dagger_{\omega}e^{+i\omega t} +\hat{S}^\dagger_{-\omega} e^{-i \omega t}\right)= 1 $, we get
\begin{eqnarray}
\label{eq:correnti}
I_{e,\alpha} &=& \frac{e}{h} \int_{-\infty}^{\infty} dE \left[ \left( f_0(E+\mu) -f_0(E-\mu ) \right) \left(1-|S_{\alpha,\beta}^{ee}|^2 -|S_{\omega,\alpha,\beta}^{ee}|^2 -|S_{-\omega,\alpha,\beta}^{ee}|^2 \right) \phantom{\frac{a}{b}} \right. \nonumber \\
& & \left. + \hbar \omega \frac{\partial f (E-\mu)}{\partial E} \left( |S_{-\omega, \alpha,\beta}^{ee}|^2 - |S_{\omega,\alpha,\beta}^{ee}|^2 \right)
+ \hbar \omega \frac{\partial f (E+\mu)}{\partial E} \left( |S_{-\omega, \alpha,\beta}^{eh}|^2 - |S_{\omega,\alpha,\beta}^{eh}|^2 \right), \right]
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:correnti2}
I_{\epsilon,\alpha} &=& \frac{1}{h} \int_{-\infty}^{\infty} dE (E-\mu) \left[ \left( f_0(E+\mu) -f_0(E-\mu ) \right) \left(1-|S_{\alpha,\beta}^{ee}|^2 -|S_{\omega,\alpha,\beta}^{ee}|^2 -|S_{-\omega,\alpha,\beta}^{ee}|^2 \right) \phantom{\frac{a}{b}} \right. \nonumber \\
& & \left. + \hbar \omega \frac{\partial f (E-\mu)}{\partial E} \left( |S_{-\omega, \alpha,\beta}^{ee}|^2 - |S_{\omega,\alpha,\beta}^{ee}|^2 \right)
+ \hbar \omega \frac{\partial f (E+\mu)}{\partial E} \left( |S_{-\omega, \alpha,\beta}^{eh}|^2 - |S_{\omega,\alpha,\beta}^{eh}|^2 \right) \right],
\end{eqnarray}
where we have shifted the energy to be measured from the superconductor chemical potential and we have neglected the effect of the finite bandwidth in extending the integral to all energies.
The expressions for $I_e$ and $I_\epsilon$ can be simplified using the symmetries of the scattering matrix. In fact, $S^{ee}_{\alpha,\beta}(-E)=\left(S^{ee}_{\alpha,\beta}(E)\right)^*$, and as a consequence:
\begin{eqnarray}
\label{eq:simmetria-dispari}
1-|S_{\alpha,\beta}^{ee}(-E)|^2 -|S_{\omega,\alpha,\beta}^{ee} (-E)|^2 -|S_{-\omega,\alpha,\beta}^{ee} (-E)|^2 = 1-|S_{\alpha,\beta}^{ee}(E)|^2 -|S_{\omega,\alpha,\beta}^{ee} (E)|^2 -|S_{-\omega,\alpha,\beta}^{ee}(E)|^2 \nonumber \\
T^{ee}_{\alpha,\beta}(-E) \equiv |S_{-\omega, \alpha,\beta}^{ee} (-E)|^2 - |S_{\omega,\alpha,\beta}^{ee} (-E)|^2 = |S_{\omega, \alpha,\beta}^{ee} (E)|^2 - |S_{-\omega,\alpha,\beta}^{ee} (E)|^2 = - T^{ee}_{\alpha,\beta}(E) \nonumber \\
T^{eh}_{\alpha,\beta}(-E) \equiv|S_{-\omega, \alpha,\beta}^{eh} (-E)|^2 - |S_{\omega,\alpha,\beta}^{eh} (-E)|^2 = |S_{\omega, \alpha,\beta}^{eh} (E)|^2 - |S_{-\omega,\alpha,\beta}^{eh} (E)|^2 = - T^{eh}_{\alpha,\beta}(E).
\end{eqnarray}
Using the parity properties of the Fermi function, we can therefore rewrite the heat and charge currents as integrals over positive energies only
\begin{eqnarray}
\label{eq:energia-positiva}
I_{e,\alpha} &=& 2 \int_0^\infty dE \, \left[ \left( f_0(E+\mu) -f_0(E-\mu ) \right) \left(1-|S_{\alpha,\beta}^{ee}(E)|^2 -|S_{\omega,\alpha,\beta}^{ee}(E)|^2 -|S_{-\omega,\alpha,\beta}^{ee}(E)|^2 \right) \right] \nonumber \\
& & + \frac{e \omega}{2\pi} \sum_{\beta} \int_0^\infty dE \, \left[ \frac{\partial f (E-\mu)}{\partial E}\left( T^{ee}_{\alpha,\beta}(E) - T^{eh}_{\alpha,\beta}(E) \right) + \frac{\partial f (E+\mu)}{\partial E}\left( T^{eh}_{\alpha,\beta}(E) - T^{ee}_{\alpha,\beta}(E) \right) \right],\\
I_{\epsilon,\alpha} & = & -\frac{\mu}{e} I_{e,\alpha} + \frac{\omega}{2\pi} \sum_{\beta} \int_0^\infty dE \, E \,\left[ \frac{\partial f (E-\mu)}{\partial E}\left( T^{ee}_{\alpha,\beta}(E) + T^{eh}_{\alpha,\beta}(E) \right) + \frac{\partial f (E+\mu)}{\partial E}\left( T^{eh}_{\alpha,\beta}(E) + T^{ee}_{\alpha,\beta}(E) \right) \right].
\end{eqnarray}
Note that at $\mu=0$, $I_{e,\alpha}=0$ identically. This is a consequence of the symmetry of the scattering matrix, and is in contrast with the usual results for which the scattering matrix is approximated as independent of energy. In that case the expressions in \eqref{eq:simmetria-dispari} are even under energy reversal and one has a vanishing heat current as opposed to a finite charge current.
The expressions can be further simplified by computing explicitly
\begin{eqnarray}
\sum_{\beta=L,R} T^{ee}_{L, \beta} = \frac{1}{4} \delta X_1 \delta X_2{\rm Im} \left[ \frac{\partial S_{L-,L-}}{\partial X_1}\frac{\partial S_{L-,L-}^*}{\partial X_2} + \frac{\partial S_{L-,R-}}{\partial X_1}\frac{\partial S_{L-,R-}^*}{\partial X_2}\right] = \sum_{\beta=L,R} T^{eh}_{L, \beta},
\end{eqnarray}
and similarly for $T^{ee}_{R,\beta}$.
This leads to the expressions
\begin{eqnarray}
I_{e,\alpha} &=& 2 \int_0^\infty dE \, \left[ \left( f_0(E+\mu) -f_0(E-\mu ) \right) \left(1-|S_{\alpha,\beta}^{ee}(E)|^2 -|S_{\omega,\alpha,\beta}^{ee}(E)|^2 -|S_{-\omega,\alpha,\beta}^{ee}(E)|^2 \right) \right] , \\
I_{\epsilon,\alpha} & = & -\frac{\mu}{e} I_{e,\alpha} + \frac{\omega}{\pi} \sum_{\beta} \int_0^\infty dE \, E \,\left[ \frac{\partial f (E-\mu)}{\partial E} T^{ee}_{\alpha,\beta}(E) + \frac{\partial f (E+\mu)}{\partial E} T^{ee}_{\alpha,\beta}(E) \right],
\end{eqnarray}
which appear in the manuscript.
Note that the only time dependence enters through $X_1$ and $X_2$, and one can perform the time integral over the pumping cycle inside the energy integral.
In particular we consider the combination $\mathcal{Q}_\alpha (\mu, T) = \int_0^\tau dt \, (I_{\epsilon,\alpha} + \mu I_{e,\alpha}/e ) $, so that
\begin{eqnarray}
\mathcal{Q}_\alpha (\mu, T) = \int_0^\tau dt \, (I_{\epsilon,\alpha} +\frac{\mu}{e} I_{e,\alpha} ) =
\int_0^\infty dE\, E \, \mathcal{Q}_{0,\alpha} (E) \left[ \frac{\partial f (E-\mu)}{\partial E} + \frac{\partial f (E+\mu)}{\partial E} \right].
\end{eqnarray}
with
\begin{eqnarray}
\mathcal{Q}_{0,\alpha} (E)=
\frac{1}{\pi}\int_A d X_1 dX_2\sum_{\beta =L,R,\nu=e,h} {\rm Im} \left[ \frac{\partial S^{e\nu}_{\alpha,\beta}}{\partial X_1}\frac{\partial {S^{e\nu}_{\alpha,\beta}}^*}{\partial X_2}\right]= \int_0^\tau dt \left( \frac{d n(\alpha)}{dX_1}\frac{dX_1}{dt} + \frac{d n(\alpha)}{dX_2}\frac{dX_2}{dt} \right)
\end{eqnarray}
and
\begin{eqnarray}
\frac{d n(\alpha)}{dX} = \frac{1}{2\pi}\sum_{\beta=L,R,\nu=e,h} {\rm Im} \frac{\partial S^{e\nu}_{\alpha,\beta}}{\partial X} {S^{e\nu}_{\alpha,\beta}}^*
\end{eqnarray}
Note that at $T=0$ $\mathcal{Q}_L (\mu,0)= \mathcal{Q}(\mu)$, which, for $\mu \to 0$ approaches the universal fraction of the solid angle discussed in the manuscript.
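A minimal numerical sketch of $\mathcal{Q}_{0,\alpha}(E)$ for the
two-lead geometry follows; it evaluates the curvature integrand by finite
differences of the projected scattering matrix over an illustrative
$(\theta,\phi)$ patch (the couplings, the energy, and the integration
domain are assumptions).
\begin{verbatim}
import numpy as np

gl = gr = 1.0
eps = 0.1

def s_matrix(theta, phi):
    """Projected two-lead scattering matrix at fixed energy eps."""
    m = np.array([[np.sqrt(gl)*np.cos(theta)*np.cos(phi),
                   np.sqrt(gr)*np.cos(theta)*np.sin(phi)],
                  [-np.sqrt(gl)*np.sin(phi), np.sqrt(gr)*np.cos(phi)]])
    return np.eye(2) - 2j*m.conj().T @ np.linalg.inv(
            eps*np.eye(2) + 1j*(m @ m.conj().T)) @ m

def q0_left(n=60, h=1e-5):
    """(1/pi) * integral over the patch of
    sum_beta Im[dS_{L,beta}/dX1 * conj(dS_{L,beta}/dX2)]."""
    thetas = np.linspace(0.0, np.pi/2, n)
    phis = np.linspace(0.0, np.pi/2, n)
    dth, dph = thetas[1] - thetas[0], phis[1] - phis[0]
    total = 0.0
    for th in thetas:
        for ph in phis:
            d1 = (s_matrix(th+h, ph) - s_matrix(th-h, ph)) / (2*h)
            d2 = (s_matrix(th, ph+h) - s_matrix(th, ph-h)) / (2*h)
            total += np.sum((d1[0, :]*d2[0, :].conj()).imag)*dth*dph
    return total / np.pi

print(q0_left())
\end{verbatim}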
\section{Introduction}
Pairwise comparisons (PCs) take place when we somehow compare two entities (objects or abstract concepts). According to \cite{HHPRD}, Raymond Llull is credited with the first documented use of pairwise comparisons in ``A system for the election of persons'' (\textit{Artifitium electionis personarum}) before 1283 and in ``An electoral system'' (\textit{De arte eleccionis}) in 1299. Both manuscripts were handwritten (there was no scientific publication process established yet) and were intended for deciding the winner of elections.
There are two variants of pairwise comparisons: multiplicative and additive. The multiplicative PCs variant reflects a relationship:
$$A~is~x~times~[comparison~operator]~than~B$$
The additive type expresses: \textit{``by how much (the percentage is often used) one entity is [comparison operator]
than another entity''}. The comparison operator could be: \textit{bigger, better, more important}, or similar comparisons (see \cite{LDX2019}).
The multiplicative pairwise comparison is determined by the ratio of two entities. For instance, the constant $\pi$, one of the most recognized ratios in mathematics, is defined as the ratio of a circle's circumference to its diameter.
Entities may be physical objects or abstract concepts (e.g., software dependability or software safety). In both cases, we can provide an estimate based on expert assessment. However, physical measurements (e.g., area or weight) should be considered if they are available.
In practice, multiplicative (or ratio) PCs are more popular than additive PCs. However, they are more mathematically challenging than additive comparisons. The additive PCs can be produced from the multiplicative form by a logarithmic mapping (introduced in \cite{RAND}). Values of additive pairwise comparisons could be both negative and positive real numbers.
The logarithmic mapping is used to provide the proof of
inconsistency convergence and for the interpretation of the limit of convergence in \cite{HK1996, KS2010, KS2016}. Fuzzy extensions of pairwise comparisons are presented in \cite{D1999, KLH2006}.
Pairwise comparisons are usually represented by a \textit{PC matrix}. In the case of multiplicative PCs, it is a matrix of ratios of entities with 1s on the main diagonal (for the entity being compared to itself) and reciprocal ($x$ and $1/x$) values in upper/lower triangles as it is also reasonable to assume that
the ratio of $B/A$ is the reciprocal value of $A/B$. By the definition, ratios are strictly positive real numbers.
For entities which are abstract concepts (e.g., software quality and software safety), the division operation is undefined but using the ratio still makes sense (e.g., by stating: ``three times more important''). For this reason, ratios are often given by expert assessments.
The main goal of PCs is to split 1 into $n$ real values assigned to $n$ entities $E_i$, $i=1, \ldots, n$. We call them {\em weights}.
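As a short illustration (a Python sketch with arbitrary ratio values; the variable names are ours), the multiplicative and additive variants are connected by the logarithmic mapping as follows:

\begin{verbatim}
import numpy as np

# upper-triangle ratios r_ij: "entity i is r_ij times [operator] entity j"
ratios = {(0, 1): 2.0, (0, 2): 5.0, (1, 2): 3.0}

n = 3
M = np.ones((n, n))               # multiplicative PC matrix, 1s on the diagonal
for (i, j), r in ratios.items():
    M[i, j] = r
    M[j, i] = 1.0 / r             # reciprocity: m_ji = 1/m_ij

A = np.log(M)                     # additive PC matrix (logarithmic mapping)
assert np.allclose(A, -A.T)       # additive PCs are skew-symmetric
assert np.allclose(np.exp(A), M)  # the mapping is invertible
\end{verbatim}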
\subsection{Problem outline}
\label{outline}
In \cite{KSS2020} (a follow-up to \cite{KO1997}), Smarzewski has observed that PC matrices form a group under the Hadamard product. In this paper, we show that it is, in fact, a Lie group. This is the first application of Lie theory to the pairwise comparisons method.
\subsection{Group theory applications in Computer Science}
In pairwise comparisons, abelian groups were first used in \cite{CavDap09}. More recently, abelian groups were used in \cite{Ram16,CB2018,KMRS2019}.
An intensive Internet search identified \cite{BK1972} as the first publication related to Lie groups.
Most computer science publications that use Lie theory are related to graphics and vision (see \cite{BT2019,EI2020,KCD2009,KF2020,ZZH2019}), machine learning (see \cite{DAO2018,KF2020,LG2006}), and networks (see \cite{ZCX2019}).
\subsection{Structure of this paper}
A brief introduction to the pairwise comparisons method is provided in Section~\ref{case}. Section~\ref{case} also includes a simple example of using the PC method for
a generic exam. Section~\ref{basic-def} provides reasoning for the necessity of theorems and propositions. Section~\ref{LieGroupIntro} shows the
construction of Lie groups and Lie algebra for PC matrices.
Section~\ref{exp-trans} introduces the exponential transformation and its properties. Section~\ref{dir-prod} presents the main theorem. Section~\ref{genSec} outlines the generalization of our results.
\section{Example of using pairwise comparisons in practice}
\label{case}
A Monte Carlo experiment for pairwise comparisons accuracy improvement was presented in \cite{K1996,K1998}. It provided statistical evidence that using pairwise comparisons yields a substantial accuracy gain. For simplicity, let us assume that
we have three entities to compare: $ A, B, $ and $ C $.
The three comparisons are: $ A $ to $B$, $ A $ to $ C $, and $ B $ to $ C $. We assume the reciprocity of PC matrix $M$: $m_{ji}=1/m_{ij}$ which is reasonable (when comparing $ B $ to $ A $, we expect to get the inverse of the comparison $ A $ to $ B $). The exam is hence represented by the following PC matrix $M$:
\begin{equation}
M=[m_{ij}]=
\begin{bmatrix}
1 & A/B & A/C \\
B/A & 1 & B/C \\
C/A & C/B & 1 \\
\end{bmatrix}
\end{equation}
\noindent $A/B$ reads ``the ratio between A and B'' and may not necessarily be a result of the division (in the case of the exam problem, the use of division operation makes no mathematical sense but using the ratio is still valid).
Ratios of three entities create a {\em triad} $(A/B, A/C,B/C)$. This triad is said to be \textit{consistent} provided $A/B*B/C=A/C$. It is illustrated by
Fig.~\ref{fig:iicycle}. Random numbers of dots are hard to count but can be compared in pairs as two random clouds. $[A/B]$ reflects the \textit{assessed} ratio of dots obtained by expert opinions.
A large enough number of dots represents the concept of \textit{numerosity}. The dots may, for example, represent votes of experts. In an emergency situation (e.g., a mine rescue), it is impossible to count votes in a short period of time. The exact number of votes exists, but all we need is to assess the numerosity of the votes.
Symbolically, in a PC matrix $M$, each triad (or cycle) is defined by $(m_{ik},m_{ij},m_{kj})$. Such a triad is {\em consistent} provided that $m_{ik}*m_{kj}=m_{ij}$. When all triads are consistent (known as the \textit{consistency condition} or \textit{transitivity condition}), the entire
PC matrix is considered \textit{consistent}.
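The consistency condition is easy to verify mechanically; a minimal Python sketch (the sample ratios follow Fig.~\ref{fig:iicycle}):

\begin{verbatim}
from itertools import combinations
import numpy as np

def is_consistent(M, tol=1e-9):
    """Check m_ik * m_kj == m_ij for every triad of indices i < k < j."""
    n = len(M)
    return all(abs(M[i][k] * M[k][j] - M[i][j]) < tol
               for i, k, j in combinations(range(n), 3))

M = np.array([[1, 2, 6], [1/2, 1, 3], [1/6, 1/3, 1]])
print(is_consistent(M))            # True: 2 * 3 = 6
M[0, 2], M[2, 0] = 5, 1/5          # the triad of Fig. 1
print(is_consistent(M))            # False: 2 * 3 != 5
\end{verbatim}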
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{Fig1.png}
\caption[cycle]{An inconsistency indicator cycle}
\label{fig:iicycle}
\end{figure}
Looking at the above exam grading case, we have discovered a pairwise
comparisons method which can be used to construct a PC matrix. The
\textit{solution} to the PC matrix is a vector of weights which are geometric means of rows (for more detail see, for example, \cite{KK2015}). We usually normalize it so that the weights sum to 1. The justification for the use of the vector $w=[v_i]$ of geometric means (GM) of rows is not trivial and it is the subject of this paper. The exact reconstruction of the PC matrix (say $M$) via $M=[v_i/v_j]$ is guaranteed only for consistent matrices.
In our example, the computed weights (as normalized geometric means of rows) are approximated to: $[0.58, 0.31, 0.11]$. By looking at the result, we can conclude that problem $A$ is the most difficult with the weight 0.58. The easiest problem is $C$ with the weight $0.11$.
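Assuming the assessed ratios of the inconsistency cycle in Fig.~\ref{fig:iicycle} ($A/B=2$, $A/C=5$, $B/C=3$), the quoted weights can be reproduced with a few lines of Python:

\begin{verbatim}
import numpy as np

M = np.array([[1,   2,   5],
              [1/2, 1,   3],
              [1/5, 1/3, 1]])

gm = np.prod(M, axis=1) ** (1/3)   # geometric mean of each row
w = gm / gm.sum()                  # normalize so the weights sum to 1
print(np.round(w, 2))              # [0.58 0.31 0.11]
\end{verbatim}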
One of the challenges of pairwise comparisons is the inconsistency of assessments. It is well demonstrated by Fig.~\ref{fig:iicycle}. It seems that a trivial mistake took place: 6 should be in place of 5 since 2*3 gives
this value. However, there are no grounds to assume that 2 and 3 are accurate assessments. No specific accuracy assumption is made of any assessment.
\section{The problem formulation}
\label{basic-def}
This paper examines the use of group theory to construct the geometry of pairwise comparisons matrices and improve the consistency-driven process in \cite{JK1996, BFK2011}. The Hadamard product (also known as the coordinatewise or elementwise product) is examined in the context of inconsistency and inaccuracy. To achieve this goal, we provide a proof that PC matrices are represented by a Lie group. Subsequently, a Lie algebra of
the Lie group of PC matrices is constructed. A decomposition method of PC matrices is introduced for the Hadamard product. One of the components is an approximation PC matrix and the other orthogonal component is interpreted as the approximation inaccuracy. The importance of selecting PC matrix components is also provided in this paper. Subgroups of the PC matrix Lie group have been identified and presented as an internal direct product.
\section{Lie groups and Lie algebras of PC matrices}
\label{LieGroupIntro}
The monograph \cite{Tu2010} stipulates that, ``Intuitively, a manifold is a generalization of curves and surfaces to higher dimensions. It is locally Euclidean in that every
point has a neighborhood, called a chart, homeomorphic to an open subset of $\mathbb{R}^n$''. We find the above justification sufficient for computer science researchers.
A group that is also a differentiable (or smooth) manifold is called a Lie group (after its proponent Sophus Lie). According to \cite{AA2008}, a Lie group is an abstract group $\mathcal{G}$ with a smooth structure, that is:
\begin{df} \
\begin{enumerate}
\item $ \mathcal{G} $ is a group,
\item $ \mathcal{G} $ is a smooth manifold,
\item the operation $\mathcal{G} \times \mathcal{G} \longrightarrow \mathcal{G},$ $(x,y) \longrightarrow
xy^{-1}$ is smooth.
\end{enumerate}
\end{df}
\noindent A matrix Lie group is a Lie group whose elements are matrices.
\begin{df}
The Lie algebra of a Lie group $ \mathcal{G} $ is the vector space $ \mathbb{T}_e\mathcal{G} $ equipped with the Lie bracket operation $ [~ ,~ ]$ of vector fields.
\end{df}
\noindent The bracket operation $ [~ , ~]$ is assumed to be bilinear, antisymmetric, and satisfies the Jacobi identity: Cyclic$([X, [Y, Z]]) = 0$ for all $X, Y, Z$
belonging to this algebra.
\noindent Lie group and Lie algebra have been analyzed in
\cite{YY2010,YY2013, YY2014,YY2016}.
\begin{prop}
For every dimension $n > 0$, the following group:
$$ \mathcal{G}=\{\ M=[m_{ij}]_{n \times n} | M \cdot M^T = I, m_{ij} = \frac{1}{m_{ji}}
> 0 \text{ for every } i,j = 1, 2,\ldots,n\}\ $$
is an abelian group of $n \times
n$ PC matrices with an operation
\begin{equation*}
\begin{split}
& \cdot : \mathcal{G} \times \mathcal{G} \longmapsto \mathcal{G} \\
& \mathit{defined \ by}\ \quad (M,N) \longrightarrow M \cdot N =[m_{ij} \cdot n_{ij}]
\end{split}
\end{equation*}
where "$\cdot$" is the Hadamard product.
\end{prop}
\begin{proof}
To begin, observe that $I \cdot I^T = I$, so $I \in \mathcal{G}$, where
$I=[\eta_{ij}]_{n \times n}$ with $\eta_{ij} = 1$ for every $i,j =1,\ldots,n$ is
the identity element of the group.
Now, observe that if $M \in \mathcal{G}$ then $M \cdot M^T = I$ and $M^T = M^{-1}$. Thus
$M^T(M^T)^T=I$ so $M^{-1} \in \mathcal{G}$.
Let $M$ and $N$ be arbitrary elements of $\mathcal{G}$. Observe that by the properties of $\mathcal{G}$:
$$NM(NM)^T=(NM)(N^T M^T) =N(MM^T) N^T=NIN^T=I.$$
$\mathcal{G}$ is closed and commutative under Hadamard product. Consequently, we see that $(\mathcal{G},\cdot)$ is an abelian group.
\end{proof}
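These axioms are easy to confirm numerically; a short Python sketch (the sample entries are arbitrary):

\begin{verbatim}
import numpy as np

def pc(u12, u13, u23):
    """Build a 3x3 multiplicative PC matrix from its upper triangle."""
    return np.array([[1, u12, u13],
                     [1/u12, 1, u23],
                     [1/u13, 1/u23, 1]])

M, N = pc(2, 5, 3), pc(1.5, 0.4, 7)
I = np.ones((3, 3))                 # identity of the Hadamard group

assert np.allclose(M * M.T, I)      # the inverse under "*" is the transpose
P = M * N                           # Hadamard (elementwise) product
assert np.allclose(P * P.T, I)      # closure: P is again a PC matrix
assert np.allclose(M * I, M)        # I is the neutral element
assert np.allclose(M * N, N * M)    # commutativity
\end{verbatim}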
\begin{df}
Let $\mathcal{G}$ be PC matrix Lie group and $M(t)$ be a path through $\mathcal{G}$. We say that $M(t)$ is \textit{smooth} if each entry in $M(t)$ is differentiable. The derivative of $M(t)$ at the point $t$ is denoted $M'(t)$ which is the matrix whose $ij^{th}$ element is the derivative of $ij^{th}$ element of $M(t)$.
\end{df}
\begin{cor}
The abelian group $\mathcal{G}$ is a PC matrix Lie group.
\end{cor}
\begin{proof}
We know that the Hadamard product "$\cdot$" and the operation $M\longrightarrow M^{-1}=M^T$ are differentiable for every PC matrix M.\\
Thus, $\mathcal{G}$ is a PC matrix Lie Group.
\end{proof}
We also know that the tangent space of any matrix Lie group at unity is a vector space.
The tangent space of any matrix group $\mathcal{G}$ at unity $I$ will be denoted by $T_I(\mathcal{G})=g$ where $I$ is the unit matrix of $\mathcal{G}$.
\begin{thm}
The tangent space of the PC matrix Lie Group $\mathcal{G}$ at unity $I$ consists of all $n \times n$ real matrices $X$ that satisfy $X + X^T = 0$.
\end{thm}
\begin{proof}
Recall that any matrix $A \in \mathcal{G}$ satisfies the condition $A\cdot A^T=I $. Let us consider a smooth path $A(t)$ such that $A(0)=I$.
\noindent We know that:
\begin{equation} \label{eq4-1}
A(t) \cdot A(t)^T = I
\end{equation}
for all parameters $t$.
\noindent By differentiating the equation \ref{eq4-1}, we get
\begin{equation} \label{4-2}
\frac{d}{dt}(A(t)) \cdot A(t)^T + A(t) \cdot \frac{d}{dt}(A(t)^T) = 0
\end{equation}
Considering that: $$ \frac{d}{dt}( A(t)^T) = (\frac{d}{dt}(A(t)))^T $$ the equation \ref{4-2}
can be rewritten as $$ A'(t) \cdot A(t)^T + A(t) \cdot
A'(t)^T = 0 $$
and at the point $t=0$, we obtain:
$$A'(0) + A'(0)^T = 0$$
Thus, any tangent vector $X = A'(0)$ satisfies $X+X^T=0$.\\
\end{proof}
\begin{cor}
The Lie algebra of $\mathcal{G}$, denoted by $T_I(\mathcal{G})$, is the space of the skew-symmetric $n \times n$ matrices. Observe that:
$$dim(\mathcal{G}) = dim T_I(\mathcal{G}) = \frac{n \cdot (n-1)}{2}.$$
\end{cor}
\section{Exponential map and PC matrices}
\label{exp-trans}
The exponential map is a map from the Lie algebra of a given Lie group to that group. In this Section, we will introduce the exponential transformation from $g$ (the tangent space to the identity element of the PC matrix Lie group $\mathcal{G}$) to $\mathcal{G}$.\\
Let $\mathcal{G}$ be PC matrix Lie group and $g$ be the Lie algebra of $\mathcal{G}$. Then, the exponential map:
\begin{align*}
exp:g & \longrightarrow \mathcal{G} \\
A=[a_{ij}]_{n \times n} & \longrightarrow exp[A]=[e^{a_{ij}}]
\end{align*}
\noindent has the following properties:
\begin{enumerate}
\item $G_1=\{ \delta(t)=e^{tA} ~|~ t \in \mathbb{R}, A \in g \}$ is a one-parameter subgroup of $\mathcal{G}$.
\item Let $A$ and $B$ be two elements of the Lie algebra $g$. Then, the following equality holds:
$$e^{A+B} = e^A \cdot e^B$$
\item Given any matrix $A \in g$, the tangent vector of the smooth path
$\gamma (t) = e^{tA}$ is equal to $A \cdot e^{tA}$, that is, $$\gamma '(t) =
\frac{d}{dt}e^{tA} = A \cdot e^{tA} $$
\item For any matrix $A \in g$,
$$(e^A)^{-1} = e^{-A} = (e^A)^T = e^{A^T}$$
and
$$ (e^A)(e^A)^T=e^A \cdot e^{A^T} = e^{A+A^T} = e^0 = I.$$
\item For any matrix $A \in g$, $\gamma (t) = e^{tA}$ is a geodesic curve
of the PC matrix Lie group $\mathcal{G}$ passing through the point $\gamma (0) = I$.
\item For any matrix $A \in g$, we would like to stipulate that
$$det(e^A) = e^{Tr A}$$
where $Tr(A)$ is the trace function of $A$.
However, this cannot always be achieved and a counterexample is presented in the Example~\ref{counterexample1}.
\end{enumerate}
\begin{ex}
\label{counterexample1}
Let us consider the following matrix
$$ A=\begin{pmatrix}
0 & -1 & 1 \\
1 & 0 & 0 \\
-1 & 0 & 0
\end{pmatrix}
$$
\noindent $A$ is the element of $g$, hence the
exponential map of $A$ is:
$$ e^A=\begin{pmatrix}
1 & \frac{1}{e} & e \\
e & 1 & 1 \\
\frac{1}{e} & 1 & 1
\end{pmatrix}
$$
\noindent The determinant of $e^A$ is: $$det(e^A) = e^2+e^{-2}-2$$ and the trace of $A$ is: $$Tr(A)= \Sigma^{3}_{i=1}a_{i,i} = 0$$
Consequently, $det(e^A)$ is not equal to $e^{Tr(A)}$ for the matrix $A$.
\end{ex}
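This counterexample is easily verified numerically (a Python sketch):

\begin{verbatim}
import numpy as np

A = np.array([[0, -1, 1],
              [1,  0, 0],
              [-1, 0, 0]])
expA = np.exp(A)              # elementwise (Hadamard) exponential map

print(np.linalg.det(expA))    # e^2 + e^-2 - 2 = 5.5244...
print(np.exp(np.trace(A)))    # e^0 = 1, so det(e^A) != e^{Tr(A)}
\end{verbatim}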
\section{Internal direct product of Lie group of $3 \times 3$ PC Matrices}
\label{dir-prod}
The aim of this section is to provide both geometric and algebraic perspectives on PC matrices. Our presentation is based on the techniques in \cite{A1995},\cite{YY2013}, and \cite{YY2016}. However, a modified approach is used in this Section. Let us recall that we consider only $3 \times 3$ PC matrices. Section~\ref{genSec} outlines generalization to $n>3$. Let us introduce the definition of the internal direct product.
\noindent We use the notation $I_n = \{1,\ldots n\}$.
\begin{df}
Let $\mathcal{G}$ be a group and let $\{N_i|i\in I_n\}$ be a family of normal subgroups of $\mathcal{G}$. Then $\mathcal{G}$ is called the {\em internal direct product} of $\{N_i | i \in I_n\}$ if $\mathcal{G}=N_1N_2\dotsm N_n$
and $N_i\cap(N_1\dotsm N_{i-1}N_{i+1}\dotsm N_n)=\{e\}$ for all $i\in \{1,2,\ldots, n\}$ (see \cite{F1933}).
\end{df}
\begin{thm}
Let $\mathcal{G}$ be a group and $\{N_i | i \in I_n\}$ be a family of normal subgroups of $ \mathcal{G} $. Then $ \mathcal{G} $ is an internal direct
product of $\{N_i | i \in I_n\}$ if and only if for all $A \in \mathcal{G}$, $A$ can be uniquely expressed as $$A=a_1 \cdot a_2 \dotsm a_n$$ where $a_i \in N_i, i \in \{1,\ldots,n\}$ (see \cite{F1933}).
\end{thm}
\begin{thm}
Let $\mathcal{G}$ be the internal direct product of a family of normal subgroups $\{N_i|i\in I_n\}$.
Then $$\mathcal{G} \simeq N_1 \times N_2 \times \dotsm \times N_n.$$
The collection $M_n =(M_n,\cdot)$ of all consistent PC matrices
$ M $ is a multiplicative Lie subgroup of the Lie group $ \mathcal{G} $.
Moreover, let us consider additive consistent matrices represented by the following set:
$$l = C_g = \{A = [a_{ij}]_{n\times n}\in g \mid a_{ik} + a_{kj} = a_{ij}
\text{ for every } (i,j,k) \in T_n\}$$
\noindent where $T_n = \{ (i,j,k) \in \{1,2,3, \dots ,n\}^3 : i<j<k \}$ was
considered in \cite{A1995}.
If $A, B \in C_g$ then $A+B \in C_g$ and $\alpha A \in C_g$ for every $\alpha \in \mathbb{R}$. Thus $C_g$ is a Lie
subalgebra of $(g,+)$. Let us observe that the following equality holds:
\[g = C_g \oplus (C_g)^{\bot}, \qquad (C_g)^{\bot} = h. \]
It follows that $C_{\mathcal{G} }$ is a Lie subgroup of $({\mathcal{G}},\cdot)$; therefore, the following equality also holds:
$$\mathcal{G} = C_{\mathcal{G}} (C_{\mathcal{G}})^{\perp}$$
\end{thm}
Considering the above results, we provide the new geometric and algebraic interpretation for $3 \times 3$ PC matrices.
\begin{prop}
Let
$$h=\Bigg\{\begin{pmatrix}
0 & x & -x \\
-x & 0 & x \\
x & -x & 0
\end{pmatrix} : x \in \mathbb{R}\Bigg\}$$
and
$$l=\Bigg\{\begin{pmatrix}
0 & y & y+z \\
-y & 0 & z \\
-y-z & -z & 0
\end{pmatrix} : y,z \in \mathbb{R}\Bigg\}$$
be two sets of $3 \times 3$ PC matrices. Then the following holds:
\begin{enumerate}[(i)]
\item $h$ and $l$ are 1- and 2-dimensional Lie subalgebras of the Lie algebra $g$,
\item the vector space $h$ is the
orthogonal complement of the vector space $l$ in $g$.
\end{enumerate}
\end{prop}
\begin{proof}
\begin{enumerate}[(i)]
\item
For the proof, let us use $sp$ for the linear span of a set of vectors.
$$ h = sp \Bigg\{\begin{pmatrix}
0 & 1 & -1 \\
-1 & 0 & 1 \\
1 & -1 & 0
\end{pmatrix}\Bigg\}$$
and
$$ l = sp \Bigg\{\begin{pmatrix}
0 & 1 & 1 \\
-1 & 0 & 0 \\
-1 & 0 & 0
\end{pmatrix}, \begin{pmatrix}
0 & 0 & 1 \\
0 & 0 & 1 \\
-1 & -1 & 0
\end{pmatrix} \Bigg\}$$
Since the space $h$ is spanned by one matrix, $dim(h) = 1$, and the space
$l$ is spanned by two linearly independent matrices, $dim(l) = 2$.
Moreover, $h$ and $l$ are Lie subalgebras of $g$.
\item It is implied by the basic properties of Lie algebras.
\end{enumerate}
We will show that every element of the Lie algebra $g$ can be written as the sum of one element of the Lie subalgebra $h$ and one element of the Lie subalgebra $l$, that is, for all $A \in g$, $A = A_h + A_l$, with $A_h \in h$ and $A_l \in l$.
$$
\begin{pmatrix}
0 & a & b \\
-a & 0 & c \\
-b & -c & 0
\end{pmatrix}
=
\begin{pmatrix}
0 & x & -x \\
-x & 0 & x \\
x & -x & 0
\end{pmatrix}
+
\begin{pmatrix}
0 & y & y+z \\
-y & 0 & z \\
-y-z & -z & 0
\end{pmatrix}
$$
where
$$x=\frac{1}{3}(a-b+c)$$
$$y=\frac{1}{3}(2a+b-c)$$
$$z=\frac{1}{3}(2c-a+b)$$
\end{proof}
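A numerical sketch of this decomposition, with the Frobenius inner product as the orthogonality check (the entries $a,b,c$ are arbitrary):

\begin{verbatim}
import numpy as np

def split(a, b, c):
    """Split the additive PC matrix with upper triangle (a, b, c)
    into its h-component and l-component."""
    x = (a - b + c) / 3
    y = (2*a + b - c) / 3
    z = (2*c - a + b) / 3
    Ah = np.array([[0, x, -x], [-x, 0, x], [x, -x, 0]])
    Al = np.array([[0, y, y + z], [-y, 0, z], [-y - z, -z, 0]])
    return Ah, Al

a, b, c = 0.7, 1.6, 1.1
A = np.array([[0, a, b], [-a, 0, c], [-b, -c, 0]])
Ah, Al = split(a, b, c)
assert np.allclose(Ah + Al, A)        # the decomposition is exact
assert abs(np.sum(Ah * Al)) < 1e-12   # Frobenius-orthogonal components
\end{verbatim}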
\begin{prop}
\label{6-4}
Let
$$H=\Bigg\{N|N=\begin{pmatrix}
1 & k & \frac{1}{k} \\
\frac{1}{k} & 1 & k \\
k & \frac{1}{k} & 1
\end{pmatrix},
\mathit{where}\ k \in \mathbb{R^+}\Bigg\}$$
and
$$L=\Bigg\{W|W=\begin{pmatrix}
1 & y & yz \\
\frac{1}{y} & 1 & z \\
\frac{1}{yz} & \frac{1}{z} & 1
\end{pmatrix}, \mathit{where}\ y,z \in \mathbb{R^+}\Bigg\}$$
\noindent be two sets of $3 \times 3$ matrices. Then:
\begin{enumerate}[(i)]
\item $H$ and $L$ are normal subgroups of the Lie group of $\mathcal{G}$
\item The Lie group $\mathcal{G}$ is the internal direct product of the normal Lie
subgroups $H$ and $L$. In particular:
$$\mathcal{G} \simeq H \times L.$$
\end{enumerate}
\end{prop}
\begin{proof}
\noindent We know that:
$$\mathcal{G}=\Bigg\{M|M=\begin{pmatrix}
1 & m_{12} & m_{13}\\
\frac{1}{m_{12}} & 1 & m_{23} \\
\frac{1}{m_{13}} & \frac{1}{m_{23}} & 1
\end{pmatrix} , m_{12},m_{13},m_{23} \in \mathbb{R^+}\Bigg\}$$
\ \\
Let us consider:\\
$$A=\begin{pmatrix}
1 & \xi & \eta \\
\frac{1}{\xi} & 1 & \gamma \\
\frac{1}{\eta} & \frac{1}{\gamma} & 1
\end{pmatrix} \in \mathcal{G}$$
\ \\
Using the logarithmic transformation we get:
$$\tilde{A}=\begin{pmatrix}
0 & ln(\xi) & ln(\eta) \\
-ln(\xi) & 0 & ln(\gamma) \\
-ln(\eta) & -ln(\gamma) & 0
\end{pmatrix}$$
The decomposition established in the previous proposition implies that:
$$\tilde{A}=\begin{pmatrix}
0 & x & -x \\
-x & 0 & x \\
x & -x & 0
\end{pmatrix}
+
\begin{pmatrix}
0 & y & y+z \\
-y & 0 & z \\
-y-z & -z & 0
\end{pmatrix} = A_h+A_l$$
\noindent Moreover, we know that:
$$x = \frac{ln(\xi)-ln(\eta)+ln(\gamma)}{3}=
ln(\frac{\xi\gamma}{\eta})^{1/3}$$
$$y = \frac{2ln(\xi)+ln(\eta)-ln(\gamma)}{3}=
ln(\frac{\xi^2\eta}{\gamma})^{1/3}$$
$$z = \frac{2ln(\gamma)-ln(\xi)+ln(\eta)}{3}=
ln(\frac{\gamma^2\eta}{\xi})^{1/3}$$
\noindent Using the exponential transformation for $ \tilde{A} = A_h+A_l$
we conclude that:
$$ exp(\tilde{A}) = exp(A_h)\cdot exp(A_l)$$
which implies:
$$A = A_H\cdot A_L$$
\end{proof}
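The whole construction is easy to trace numerically; a sketch combining the logarithmic transformation, the additive split, and the exponential map (the entries $\xi, \eta, \gamma$ are arbitrary sample values):

\begin{verbatim}
import numpy as np

xi, eta, gamma = 2.0, 5.0, 3.0
A = np.array([[1, xi, eta], [1/xi, 1, gamma], [1/eta, 1/gamma, 1]])

x = np.log(xi * gamma / eta) / 3
y = np.log(xi**2 * eta / gamma) / 3
z = np.log(gamma**2 * eta / xi) / 3

AH = np.exp([[0, x, -x], [-x, 0, x], [x, -x, 0]])    # ortho-consistent part
AL = np.exp([[0, y, y + z], [-y, 0, z], [-y - z, -z, 0]])

assert np.allclose(AH * AL, A)                    # Hadamard reconstruction
assert np.isclose(AL[0, 1] * AL[1, 2], AL[0, 2])  # A_L is consistent
\end{verbatim}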
Observe that the matrix $A_L$ is a consistent PC matrix. However, the following proposition needs to be considered for deciding on the classification of the matrix $A_H$.
\begin{prop}
\begin{enumerate}[(i)]
\item Lie algebra of the PC matrix Lie group of consistent $L$ is $l$.
\item Lie algebra of the PC matrix normal Lie subgroup $H$ is $h$.
\end{enumerate}
\end{prop}
From the above proposition for additive consistent matrices, we conclude that $h$ is the orthogonal complement space of $l$. Hence every element of $h$
can be classified as an ortho-additive consistent matrix and every element of $H$
is an ortho-consistent PC matrix.
\section{Generalization outline}
\label{genSec}
Following \cite{KU2018}, we would like to point out that PC matrices of the size $n>3$ can be viewed as consisting of $3 \times 3$ PC submatrices obtained by deleting rows and columns with the identical indices. That is, if we delete row
$j$, we need to also delete column $j$. Fig.~\ref{fig:m5tom3} demonstrates how to obtain one PC matrix $3 \times 3$ from PC matrix $5 \times 5$ by deleting
rows and columns numbered 3 and 4.
\begin{figure}[h]
\includegraphics[width=\linewidth]{M5toM3}
\caption[Generalization]{Generalization to $n>3$}
\label{fig:m5tom3}
\end{figure}
Definition 3.3 from \cite{KU2018} states this as follows:
\begin{quotation}
Assume $n<m$, $A$ and $B$ are square matrices of degree $n$ and $m$
respectively. We call matrix $A$ a submatrix of $B$ ($A \subset B$) if there
exists an injection $$\sigma : \{1,\ldots,n\} \rightarrow \{1,\ldots,m\} $$
such that for all
$i,j \in \{1,\ldots,n\}$, $a_{ij} = b_{\sigma(i)\sigma(j)}.$
\end{quotation}
The PC matrix reconstruction from its $3 \times 3$ submatrix approximation components is the PC matrix $S$ of geometric means of all corresponding elements in these components. The need for the geometric mean comes from the occurrence of the same off-diagonal matrix element in $n-2$ of the $3 \times 3$ submatrices.
It is important to observe that PC matrix $S$ does not need to be consistent even though all submatrices of $S$
are consistent. However, the reconstruction process converges to a consistent PC matrix as partly proven by \cite{HK1996} and completed in \cite{KS2010}. The above reconstruction process will be analyzed in the planned follow-up paper.
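A sketch of the submatrix extraction step (the reconstruction then takes geometric means over the several estimates obtained for each element):

\begin{verbatim}
from itertools import combinations
import numpy as np

def principal_submatrices(M, size=3):
    """All PC submatrices kept after deleting rows and columns
    with identical indices."""
    n = len(M)
    return {idx: M[np.ix_(idx, idx)]
            for idx in combinations(range(n), size)}

rng = np.random.default_rng(0)
M5 = np.ones((5, 5))
for i, j in combinations(range(5), 2):   # random 5x5 PC matrix
    M5[i, j] = np.exp(rng.uniform(-1, 1))
    M5[j, i] = 1 / M5[i, j]

subs = principal_submatrices(M5)
print(len(subs))    # C(5,3) = 10 submatrices of size 3x3
\end{verbatim}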
\section*{Conclusions}
Using fundamental theorems from \cite{Levi1942} and \cite{Levi1943}, the collaborative research effort \cite{collab2020} demonstrated that group generalization for pairwise comparisons matrices is a challenge. In particular, \cite{collab2020} provided the proof that elements of a multiplicative PC matrix must be selected from a torsion-free abelian group.
Our paper demonstrates that the multiplicative PC matrices (not the elements of a PC matrix) generate a Lie group for the Hadamard product. Lie algebras of these Lie groups are identified here. It has been shown that Lie algebras form spaces of skew-symmetric matrices. It has also been proven that
the Lie group of PC matrices can be represented as an internal direct product using the direct summability property of vector spaces.
In conclusion, a relatively simple concept of pairwise comparisons turns out to be related to the theory of Lie groups and Lie algebras (commonly regarded as very sophisticated mathematics). For the first time, the decomposition of a PC matrix into an approximation component and an orthogonal component (interpreted as the approximation error) was obtained. Without such a decomposition, the pairwise comparisons method has remained incomplete for 722 years since its first scholarly presentation.
\section*{Acknowledgments}
We thank Tu\v{g}\c{c}e \c{C}alci for the verification of mathematical formulas and the terminology associated with them. The authors recognize the efforts of Tiffany Armstrong in proofreading this text and Lillian (Yingli) Song with the technical editing. We also acknowledge that algebraic terminology and basic concepts are based on \cite{Algebra}.
\setcounter{equation}{0}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\begin{document}
\setcounter{page}{0}
\topmargin 0pt
\oddsidemargin 5mm
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\newpage
\setcounter{page}{0}
\begin{titlepage}
\begin{flushright}
QMW-PH-95-9\\
UMDEPP 95-130\\
{\bf hep-th/yymmnn}\\
May $5th$, $1995$
\end{flushright}
\vspace{0.5cm}
\begin{center}
{\Large {\bf On continuous conformal deformation of the $SL(2)_4/U(1)$
coset}} \\
\vspace{1.8cm}
\vspace{0.5cm}
{\large
S. James Gates Jr.$~^1$\footnote{e-mail: [email protected]} and Oleg A.
Soloviev$~^{1,2}$
\footnote{e-mail: [email protected]}}\\
\vspace{0.5cm}
$~^1${\em Physics Department, University of Maryland, \\
College Park, MD 20742, U.S.A.}\\
\vspace{0.5cm}
$~^2${\em Physics Department, Queen Mary and Westfield College, \\
Mile End Road, London E1 4NS, United Kingdom}\\
\vspace{0.5cm}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\begin{abstract}
{We describe a one-parameter family of $c=1$ CFT's as a continuous
conformal deformation of the $SL(2)_4/U(1)$ coset.}
\end{abstract}
\vspace{0.5cm}
\centerline{May 1995}
\end{center}
\end{titlepage}
\newpage
\section{Introduction}
The curious fact is that among all $c=1$ CFT's there exists the one-parameter
family of theories living on an orbifold \cite{Ginsparg}. These conformal
models are related to the Ashkin-Teller critical line \cite{Ashkin} and
correspondingly to the Abelian massless Thirring model. By now
many of the aspects of this $c=1$ family have been studied in detail
\cite{Ginsparg},\cite{Dijkraaf}. A new puzzle arose, when it was shown that
there is another one-parameter family of $c=1$ CFT's associated with the
affine-Virasoro construction of the affine algebra $SL(2)$ at level $k=4$
\cite{Morozov1}. According to our observation this family has to be described
as a non-Abelian Thirring model \cite{Soloviev1}. It is not obvious that the
non-Abelian Thirring model is equivalent to the Abelian Thirring model.
Therefore, one can raise the question of the equivalence of the two given
one-parameter families. As yet this question has not been answered, though some
evidence for this equivalence has been presented \cite{Morozov2}.
We believe that in order to clarify the relation between the two families, one
has to compare their partition functions on the torus. The problem is that
there still are some open questions about the Lagrangian formulation of the
second continuous $c=1$ family. The aim of the present paper is to describe the
non-Abelian one-parameter family as a continuous conformal deformation of the
CFT, whose partition function on the torus is known. We shall show that the
$SL(2)_k/U(1)$ coset, which is the $c=1$ CFT at $k=4$, admits a one-parameter
conformal deformation. We shall argue that the given continuous family
of $c=1$ CFT's shares all the properties of the $c=1$ one-parameter
affine-Virasoro construction and is in agreement with the nonperturbative
Lagrangian description in \cite{Soloviev1}. We hope that the model we
construct will turn out to be useful for answering the main question of
equivalence.
The paper is organized as follows. In section 2 we show that the spectrum of
the $SL(2)_k/U(1)$ coset at $k=4$ acquires an extra null-vector. In section 3
we make use of the given null-vector to perform the conformal one-parameter
deformation on the $SL(2)_4/U(1)$ coset. We conclude in section 4.
\section{The null-operator at $k=4$}
The key point of our discussion will be the fact that at $k=4$ the
$SL(2)_k/U(1)$ coset acquires a new null-operator which can be used to deform
the given CFT. Therefore, we start with the Lagrangian description of the coset
construction which will be a central element in our consideration.
It is well known that within the Lagrangian approach the $G/H$ coset can be
presented as a combination of ordinary conformal Wess-Zumino-Novikov-Witten
(WZNW) models and ghost-like action \cite{Karabali}, \cite{Hwang}:
\begin{equation}
S_{G/H}=S_{WZNW}(g,k)~+~S_{WZNW}(h,-k-2c_V(H))~+~S_{Gh}(b,c,\bar b,\bar c),
\end{equation}
where $g$ takes values on the group $G$, $h$ takes values on the group $H$,
$c_V(H)$ is defined according to
\begin{equation}
f^{il}_kf^{jk}_l=-c_V(H)\delta^{ij},~~~~~i,j,k=1,2,...,\dim H.\end{equation}
Whereas the last term in eq. (2.1) is the contribution from the ghost-like
fields,
\begin{equation}
S_{Gh}=\mbox{Tr}\int d^2z(b\bar\partial c+\bar b\partial\bar c).\end{equation}
There are first class constraints in the system. Therefore, the physical states
are defined as cohomology classes of the nilpotent BRST operator $Q$
\cite{Karabali},\cite{Hwang},
\begin{equation}
Q=\oint{dz\over2\pi i}\left[:c_i(\tilde
J^i+J_H^i):(z)~-~(1/2)f^{ij}_k:c_ic_jb^k:(z)\right],~~~~~Q^2=0,\end{equation}
where we have used the following notations
\begin{equation}
J_H=-{k\over2}g^{-1}\partial g|_H,~~~~~~\tilde
J={(k+2c_V(H))\over2}h^{-1}\partial h.\end{equation}
Here the current $J_H$ is a projection of the ${\cal G}$-valued current $J$ on
the subalgebra ${\cal H}$ of ${\cal G}$.
Let us turn to the case of the $SL(2)_k/U(1)$ coset. This is the simplest coset
construction. In particular, the BRST operator $Q$ takes the form
\begin{equation}
Q=\oint{dz\over2\pi i}:c(\tilde J^3+J^3):(z),\end{equation}
where we assume that the gauge subgroup $H=U(1)$ is associated with the
subalgebra generated by the $t^3$ generator of the $SL(2)$ algebra
\begin{equation}
[t^a,t^b]=f^{ab}_ct^c,~~~~a,b,c=1,2,3\end{equation}
thus, $c_V(U(1))=0$. Therefore, the action of the $SL(2)_k/U(1)$ coset is given
by
\begin{equation}
S_{SL(2)/U(1)}=S_{WZNW}(g,k)~+~S_{WZNW}(h,-k)~+~\int d^2z(b\bar\partial c+\bar
b\partial\bar c).\end{equation}
The physical operators and states are constructed in terms of the three given
CFT's under the condition of annihilation by $Q$. We are not interested in
finding all the physical operators. We want to focus attention on the following
one
\begin{equation}
O^L=L_{ab}:J^a\bar J^{\bar a}\phi^{b\bar a}:,\end{equation}
where $\phi^{a\bar a}$ is defined as follows
\begin{equation}
\phi^{a\bar a}=\mbox{Tr}:g^{-1}t^agt^{\bar a}:.\end{equation}
This is a highest weight vector of the affine algebra \cite{Knizhnik}. In
addition, $\phi^{a\bar a}$ is a primary operator of the Virasoro algebra
associated with the WZNW model on $SL(2)_k$. The corresponding conformal
dimensions are given by \cite{Knizhnik}
\begin{equation}
\Delta_\phi=\bar\Delta_\phi={2\over k+2}.\end{equation}
By virtue of the properties of $\phi^{a\bar a}$, the operator $O^L$ is also a
Virasoro highest weight vector, when the matrix $L_{ab}$ is symmetrical, i.e.
\begin{equation}
L_0|O^L\rangle=\Delta_O|O^L\rangle,~~~~~~L_{m>0}|O^L\rangle=0,\end{equation}
with
\begin{equation}
\Delta_O=\bar\Delta_O=1+{2\over k+2}.\end{equation}
Here $L_n$ are the Virasoro generators of the WZNW model on $SL(2)_k$. At the
same time, the operator $O^L$ is no longer a highest weight vector with respect
to the affine algebra but its descendant.
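As a quick numerical orientation for what follows: at the level of interest here, $k=4$, these dimensions evaluate to
$$\Delta_\phi=\bar\Delta_\phi={2\over4+2}={1\over3},~~~~~~\Delta_O=\bar\Delta_O=1+{1\over3}={4\over3},$$
which fixes the dimension of the perturbing operator used in section 3.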
In the case of $SL(2)$ the normal form of the matrix $L_{ab}$ is diagonal,
\begin{equation}
L_{ab}=\lambda_a\eta_{ab},\end{equation}
where $\lambda_1,~\lambda_2,~\lambda_3$ are arbitrary numbers,
$\eta_{ab}=\mbox{diag}(1,1,-1)$.
One can check that in general the operator $O^L$ does not commute with $Q$ and,
therefore, does not belong to the physical subspace of the theory in eq. (2.8).
Hence, some modifications of $O^L$ are required.
Let us consider the following modified operator
\begin{equation}
\hat O^L=O^L~+~N:\tilde J^3\bar
J^{\bar a}\phi^{3\bar a}:,
\end{equation}
where the constant $N$ is to be defined.
Clearly the operator $\hat O^L$ has
the same conformal dimensions as $O^L$. We demand
\begin{equation}
Q|\hat O^L\rangle=0.\end{equation}
This requirement leads us to two equations
\begin{equation}
\lambda_1=\lambda_2,~~~~~~~N=\lambda_3-{2\over
k}(\lambda_1+\lambda_2).\end{equation}
Under the given conditions, the operator $\hat O^L$ belongs to the cohomology
of the BRST operator Q. In general, $\hat O^L$ is built up of the two WZNW
models presented in eq. (2.8). Dramatic simplifications occur, when $k=4$.
Indeed, in this case there is a solution with
$\lambda_1=\lambda_2=\lambda_3=\lambda$ and $N=0$. Thus, at $k=4$,
\begin{equation}
\hat O^L=O^\lambda,\end{equation}
where
\begin{equation}
L_{ab}=\lambda\eta_{ab}.\end{equation}
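As a quick arithmetic check, substituting $\lambda_1=\lambda_2=\lambda_3=\lambda$ and $k=4$ into eq. (2.17) gives
$$N=\lambda-{2\over4}(\lambda+\lambda)=\lambda-\lambda=0,$$
so the modification term in eq. (2.15) indeed drops out.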
The operator $O^\lambda$ describes a one-parameter family of BRST invariant
operators. It turns out that $O^\lambda$ is a null-vector. Indeed, it is not
difficult to compute its norm with arbitrary $k$:
\begin{equation}
||O^\lambda||^2=(k-4)^2~M,\end{equation}
where $M$ is some number. Hence, the norm vanishes, when $k=4$. Also one can
show that
\begin{equation}
\langle O^\lambda O^\lambda O^\lambda\rangle=0,\end{equation}
which agrees with the consistency equation obtained in \cite{Soloviev2}.
Therefore, all correlation functions with $O^\lambda$ will vanish. In other
words, at $k=4$
the space of physical operators of the $SL(2)_k/U(1)$ coset acquires a
one-parameter family of null-vectors which are not ruled out by the BRST
symmetry and are not equal to BRST exact operators.
\section{The one-parameter deformation}
Let us consider the following theory
\begin{equation}
S(\lambda)=S_{SL(2)_4/U(1)}~-~\int d^2zO^\lambda(z,\bar z),\end{equation}
where $O^\lambda(z,\bar z)$ is the null-operator constructed in the previous
section. The $\lambda$ parameter in eq. (3.22) has dimension $-2\Delta_\phi$,
because the operator $O^\lambda$ has dimensions
$(1+\Delta_\phi,1+\Delta_\phi)$. Since $\Delta_\phi>0$, $O^\lambda$ is an
irrelevant operator. If we think of $\lambda$ as being small, then the action
in eq. (3.22) can be understood as the $SL(2)_4/U(1)$ coset perturbed by the
irrelevant operator. In general, such a perturbation would run into the
infrared
problem. However, in the case under consideration $O^\lambda$ is a null-vector.
Therefore, there should be no trace of this operator in any local physical
observable. Correspondingly, the conformal symmetry of the perturbed theory
remains manifest. Thus, $\lambda$ appears to play the role of a modular
variable from the
point of view of the target space geometry described by the given CFT. This
analogy can be made clearer in terms of interacting WZNW models \cite{Hull}.
Let us consider the system of two interacting identical $SL(2)_4$ WZNW models.
The action of the system is given by
\begin{equation}
S(g_1,g_2,S)=S_{WZNW}(g_1,4)+S_{WZNW}(g_2,4)-{16\over\pi}\int
d^2z\mbox{Tr}^2(g_1^{-1}\partial g_1S\bar\partial g_2g_2^{-1}).\end{equation}
Here $g_1,~g_2$ take values in $SL(2)$. The statement is that \cite{Hull}
\begin{equation}
Z(4,4,S)=Z_{WZNW}(4)\tilde Z(\lambda),\end{equation}
where
\begin{eqnarray}
Z(4,4,S)&=&\int{\cal D}g_1{\cal D}g_2\exp[-S(g_1,g_2,S)],\nonumber\\
Z_{WZNW}(4)&=&\int{\cal D}g\exp[-S_{WZNW}(g,4)],\\
\tilde Z(\lambda)&=&\int{\cal D}g\exp[-(S_{WZNW}(g,4)-\int d^2zO^\lambda)].
\nonumber\end{eqnarray}
The relation between the couplings $S$ and $\lambda$ is as follows
\begin{eqnarray}
S&=&\sigma\cdot I,\nonumber\\ & & \\
\lambda&=&-{16\sigma^2\over\pi}+...,\nonumber\end{eqnarray}
where $I$ is the identity from the direct product of two Lie algebras, ${\cal
G}\times{\cal G}$, $\sigma$ is a small parameter, whereas
dots stand for higher order corrections in $\sigma$. The last formula
tells us that we have to restrict $\lambda$ to negative values. Furthermore, at
$\sigma=1/8$, the system of two interacting level 4 WZNW models acquires the
gauge symmetry and, correspondingly, undergoes a phase transition. There is one
more phase transition at the Dashen-Frishman point \cite{Dashen} (see also
\cite{Hull}). Therefore, $\sigma$ is not exactly a continuous parameter, but
has two particular values at which the system of two interacting WZNW models
changes its properties drastically.
Now it becomes transparent how $\lambda$ can be related to a modulus of the
target space geometry. Indeed, the interaction term in eq. (3.23) is to
parametrize the metric on the intersection of the two group manifolds. Since we
have shown that the coupling $S$ may change continuously without changing the
underlying CFT, this parameter $\sigma$ is by definition called a module. This
fact must imply that the current-current interaction in eq. (3.23) becomes a
truly marginal operator at $k=4$.
The Virasoro central charge of the CFT described by eq. (3.23) is equal to 4.
Unfortunately,
it is not clear how the $c=1$ CFT is embedded into the given $c=4$
CFT. Based on formula (3.24) we may argue that the stress-energy tensor of the
$SL(2)_4$ WZNW model perturbed by the null-vector $O^\lambda$ has to have the
following form
\begin{equation}
T(\lambda)= T_{c=1}(\lambda)~+~K,\end{equation}
where $K$ is the stress tensor of the $c=1$ CFT which is gauged away by gauging the subgroup
$H=U(1)$. The hope is that there exist variables in which the stress-tensor
$T_{c=1}(\lambda)$ can be presented as the one-parameter affine-Virasoro
construction \cite{Morozov1}. The latter also has two points which might be
identified with the two phase transitions we have just mentioned above.
We leave this issue for further investigation.
The fact that the system of two interacting level $k=4$ WZNW models is
conformal with the continuous parameter $\sigma$ indicates that the interaction
term in eq. (3.23) is a truly marginal operator. For all other $k$'s this is
just a marginal operator which breaks the conformal symmetry. Truly marginal
operators are responsible for the existence of moduli in the target space
geometry. This observation supports our interpretation of $\sigma$ (or
$\lambda$) as a modulus.
\section{Conclusion}
We have found a continuous family of $c=1$ CFT's, which either can be
associated with the truly marginal perturbation of the system of two $SL(2)_4$
WZNW models or with the null-deformation of the $SL(2)_4$ WZNW model. Both
descriptions are equivalent to each other. If the given one-parameter family is
linked with the $c=1$ orbifold construction, then one may expect a duality
symmetry to be present in the theory we have constructed. Indeed, via the bosonization
procedure one can relate the parameter $\sigma$ in eqs. (3.26) with the radius
$R$ of a scalar field compactified on a circle:
\begin{equation}
R=\sqrt{{(1-8\sigma)\over2}}.\end{equation}
It is well known that this compactification possesses the duality symmetry
under $R\to1/(2R)$.
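As a quick check, the relation above gives $R=1/\sqrt2$ at $\sigma=0$, i.e. at the unperturbed $SL(2)_4/U(1)$ coset, and this is precisely the self-dual radius:
$$\left.{1\over2R}\right|_{R=1/\sqrt2}={\sqrt2\over2}={1\over\sqrt2}=R.$$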
Our consideration of the continuous deformation of the CFT
suggests some general arguments about perturbation theory in duality invariant
systems. Suppose $R$ is a parameter which goes to $1/(2R)$ under the duality
symmetry. (In the non-Abelian case one has some matrices instead of one
parameter.) Then there is a self-dual point $R_0=1/\sqrt2$. Near this point
there is a small parameter
\begin{equation}
r={R-R_0\over R_0},\end{equation}
which measures the deviation from $R_0$. Under the duality symmetry
\begin{equation}
r\to r'=-r~+~{\cal O}(r^2).\end{equation}
If $r$ is very tiny, one can drop the higher order corrections in $r$ in the
equation for $r'$. Correspondingly, the original duality symmetry amounts to
the change of sign of the parameter $r$. Therefore, the leading order in
$r$ has to be an even function of $r$. In other words, expansion in $r$ has to
have an effective perturbation parameter which is an even function of $r$.
We have exhibited how quantum
perturbation theory can be related to expansion around the self-dual point,
which in the case under consideration coincides with the $SL(2)_4/U(1)$ coset.
Indeed, eqs. (3.26) clearly display the effect we anticipate for duality
invariant systems.
The hope is that in the so-called $S$-dual theories, expansion around the
self-dual point can be connected with string quantum expansion in topology of
the world sheet. This would certainly be the case when the tree approximation
in string theory coincides with the $S$-self-dual point. Then one could check
whether the string toroidal correction gives rise to the even function of $r$.
{\bf Acknowledgements:}
S.J.G.'s research is supported by NSF grant \# PHY-93-41926 and in part by NATO
Grant CRG-93-0789.
O.S. would like to thank the British PPARC and the Physics Department of the
University of Maryland for financial support.
\section{Introduction}
Over the past decade, gravitational lensing \citep{Bartelmann01} has emerged as one of the most powerful tools to study gravity and the dark sectors of the Universe, dark matter and dark energy, through the impact of these components on the density fluctuations of matter and their evolution with cosmic time \citep{Peacock06}. In the limit of weak lensing, massive structures along the line of sight imprint a subtle shearing on the shapes of distant galaxies. This signal can be extracted by statistically analysing the ellipticity of galaxy images in large surveys \citep{Refregier03}. These cosmic shear surveys \citep{Kilbinger15} face the challenge that they must accurately reconstruct the galaxy redshift distribution in order to interpret the cosmological signal correctly. Even small biases in the first moment of the redshift distribution may introduce significant biases in the recovered cosmological parameters \citep[e.g.][]{Huterer06,Ma06}. There exists tension between constraints from cosmic shear and the cosmic microwave background, first seen between the Canada-France-Hawaii Telescope Lensing Survey \citep[CFHTLenS,][]{Heymans13,MacCrann15,Joudaki17} and Planck \citep{Planck14}, but for example also between the Kilo-Degree Survey \citep[KiDS,][]{Kuijken15} and Planck legacy \citep{Planck20}, recently for KiDS-1000 \citep{Asgari21}. The most recent cosmic shear results from the Dark Energy Survey \citep[DES,][]{Flaugher15} are very similar to KiDS-1000 \citep{Amon21,Secco21} albeit at lower statistical tension with Planck\footnote{Conclusions over the degree of tension differ primarily owing to the way tension is quantified and different prior choices for the neutrino mass in the Planck re-analysis.}. In the light of these repeatedly reported tensions, redshift calibration has come under scrutiny as one of the systematics for cosmic shear experiments \citep[e.g.][]{Joudaki20}.
Due to the statistical nature of the shear measurements, current generation (stage-III) cosmic shear surveys, such as KiDS, DES, and the Hyper Suprime-Cam Subaru Strategic Program \citep[HSC,][]{Aihara18}, rely on imaging of tens of millions of galaxies for which spectroscopic redshifts cannot be measured directly. Instead, galaxy redshifts are determined with secondary redshift estimates, the most notable ones are direct calibration with spectroscopic training samples \citep[e.g.][]{Lima08,Hildebrandt17,Hildebrandt20,Buchs19,Wright20a}, clustering redshifts (which infer redshift distributions by exploiting the gravitational clustering of galaxies at similar redshifts, e.g. \citealt{Newman08,Matthews10,Schmidt13,Menard13,vandenBusch20,Hildebrandt21,Gatti22}) and methods that make use of a combination of both these approaches \citep{Sanchez18,Alarcon20a,Myles21}.
The redshift calibration of the fourth data-release of KiDS \citep{Kuijken19,Hildebrandt21} relies on an implementation of the direct calibration that utilises a self-organising map \citep[SOM,][]{Kohonen82,Wright20a} based on work by \citet{Masters16}. The fundamental principle of this method is to re-weight a spectroscopic reference sample such that it is representative of a photometric dataset with an unknown redshift distribution. The weighted redshift distribution of the reference sample is then a direct estimate of the unknown distribution. Additionally, the SOM method allows for the removal of galaxies from the photometric dataset for which no similar galaxies exist in the reference sample. Their inclusion would otherwise bias the estimated redshift distribution. We call the subset of the remaining, well represented galaxies the `gold sample'. In this work we explore the redshift calibration of the KiDS data with a significantly enhanced reference sample that is composed of a variety of spectroscopic redshift campaigns and precision photometric redshifts. This allows us to expand the KiDS gold sample and calibrate redshifts in regions of the colour space that are difficult to access by direct spectroscopy. We then study how these additional calibration data influence our ability to calibrate the redshifts of the source sample and how selection effects and changes in the calibration propagate to cosmological constraints. We compare our results to the original KiDS-1000 cosmic shear analysis by \citet{Asgari21}.
This paper is structured as follows: in Sect.~\ref{sec:data} we describe the KiDS data and the redshift calibration sample (further details on this compilation in App.~\ref{app:compilation}), and in Sect.~\ref{sec:redshifts} and~\ref{sec:cosmo} we present the SOM redshift calibration and our cosmic shear analysis methods. We present and discuss the newly calibrated gold samples and cosmological constraints in Sect.~\ref{sec:results} and~Sect.~\ref{sec:discussion}. Finally we conclude and summarise in Sect.~\ref{sec:conclusion}.
\section{Discussion}
\label{sec:discussion}
\subsection{Gold sample selection and calibration}
\label{sec:discuss_gold}
A side-by-side comparison of the different KiDS gold samples presented in Sect.~\ref{sec:newgold} is non-trivial. On the one hand adding or removing galaxies from the calibration sample changes the redshift distribution of each of the SOM groupings that, according to Eq.~(\ref{eq:som_weight}), determine the sample's redshift distribution. On the other hand two distinct gold samples are comprised of different galaxies since a modification of the calibration sample will also apply an implicit selection on the set of representative SOM groupings. Both of these effects combined determine the overall calibrated redshift distribution.
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{./figures/nz_compare_Fid_plusPAUSCOS15.pdf}
\caption{
Comparison of the redshift distributions between the fiducial sample and the sample including the PAUS and COSMOS2015 photo-$z$. Top: The SOM redshifts derived from the subset of SOM groupings present in both samples. Bottom: The SOM redshift distributions of the groupings that are found only in the {\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15} sample (dashed yellow) and the underlying redshift distribution in the calibration sample of the same groupings, i.e. not applying the SOM weighting in Eq.~(\ref{eq:som_weight}), scaled to the amplitude of the former for comparison.}
\label{fig:nz_compare_groups}
\end{figure*}
This is exemplified by the fact that, as we expand the calibration sample from the {\sl \mbox{spec-$z$} \mbox{nQ $\geq 4$}} subset to {\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15}, we generally see that both the representation fractions and the mean redshifts increase across all bins. The galaxies added in each iteration are typically fainter (with the exception of PAUS) at the cost of lower redshift accuracy, which allows us to calibrate additional KiDS galaxies, preferentially at the tails of the redshift distributions. At the same time, we can observe in Fig.~\ref{fig:nz_compare} that the redshift distributions are skewed to lower redshifts, which can be explained by the fact that disproportionately more galaxies with $z < 0.5$ are added to the calibration sample in each iteration (Fig.~\ref{fig:nz_calib}). These in turn increase the representation fraction of low and intermediate redshift galaxies in the gold sample (see also Fig.~\ref{fig:representation}). This implies that we are changing the redshift calibration since the skewing applies to each individual SOM group. We can separate these two effects by splitting the {\sl \mbox{spec-$z$} fiducial} and {\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15} gold samples into two subsets, one containing those SOM groupings that are common to both samples (i.e. containing the same KiDS galaxies) and one containing groupings that can only be calibrated using the full redshift compilation. The subset of KiDS galaxies that is common to both gold samples shows the same redshift skewing as seen with all galaxies (top panels of Fig.~\ref{fig:nz_compare_groups}), whereas the additional COSMOS2015 galaxies contribute significantly at the low and high redshift tails of the tomographic bins (bottom panels of Fig.~\ref{fig:nz_compare_groups}).
The final ingredient to the redshift calibration is the quality control cut (Eq.~\ref{eq:qc}) that we apply to remove potentially miscalibrated parts of the colour space. This becomes most obvious when comparing the representation fractions of the first tomographic bin (Fig.~\ref{fig:representation}) which decrease whenever we add the COSMOS2015 data (compare {\sl \mbox{spec-$z$}+\allowbreak PAUS} to {\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15} and {\sl only-PAUS} to {\sl only-PAUS+\allowbreak COS15}). The low redshift of the KiDS galaxies in this bin makes the sample particularly susceptible to the addition of high redshift galaxies. These can significantly change the mean redshift of the calibration sample $\langle z_{\rm cal} \rangle$ compared to the KiDS photo-$z$ in the SOM groupings which then may fail to pass the quality control (see also App.~\ref{app:SOMqc}). The great depth of the COSMOS2015 data compared to the spectroscopic data also explains why the SOM needs to apply more weighting to match the {\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15} compilation to the KiDS colour-space (Fig~\ref{fig:nz_somweight_groups}).
\subsection{Cosmological constraints}
The gold sample selection effects are, due to their redshift dependence, directly propagated to the cosmological constraints (Sect.~\ref{sec:constraints}), causing shifts in $S_8$ of up to $0.5\sigma$ from sample to sample. One of the assumptions in our analysis is that we can adopt the same Gaussian priors for the $\delta z_i$ nuisance parameters (Sect.~\ref{sec:pipe}) that are used by \citetalias{Asgari21}. We therefore re-analyse the fiducial gold sample assuming no knowledge of the empirical redshift bias by centring the priors on $\mu_i = 0$. For this run we find that the value of \input{other/S8_Fid_nodz.tex} is in good agreement with the fiducial analysis.
On the other hand it may seem that our choice for the widths $\sigma_i$ of the $\delta z_i$ priors may be insufficient to accommodate for the apparent variance in the mean redshifts of the different gold samples (see Fig.~\ref{fig:nz_compare}). This variance, however, is not only determined by potential systematic biases in the redshift calibration between any of these samples, but also by changes in the gold sample selection itself, as discussed above. Therefore, the question of the correct redshift prior can only be answered with realistic simulated data sets that are currently not available for our extended redshift calibration sample. Nevertheless, a comparison of the $S_8$ values allows us to get an estimate of the variance induced by the selection effects in the calibration data and the resulting parameter constraints from the gold samples.
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{./figures/cosmo_S8_compare_sys.pdf}
\hfill
\includegraphics[width=\columnwidth]{./figures/cosmo_Sig8_compare_sys.pdf}
\caption{
Comparison between constraints (best fit and 68th-percentile PJ-HPD) on $S_8$ (left) and $\Sigma_8$ (right) for {\sl \mbox{spec-$z$} fiducial}, {\sl \mbox{spec-$z$} \mbox{nQ $\geq 4$}}, and the corresponding test scenario A and B. The grey arrows indicate shifts in $S_8$ introduced in the tests by modifying the redshift distributions.}
\label{fig:S8_sys}
\end{figure*}
\subsubsection{The fiducial gold sample}
Next to the {\sl \mbox{spec-$z$} fiducial} sample that is based on the full spectroscopic compilation we also define the {\sl \mbox{spec-$z$} \mbox{nQ $\geq 4$}} sample that relies only on the most secure redshifts. The estimated \input{other/S8_nQ4.tex} for the latter, arising from particularly low $\Omega_{\rm m}$ and high $\sigma_8$ values, is about $0.5\sigma$ larger than in the fiducial case and surpasses all other gold samples (Table~\ref{tab:params}). In the $\Sigma_8$ projection this difference reduces to $0.2\sigma$.
The shift in $S_8$ between gold samples that are both calibrated with spectroscopic data raises the question of which of these estimates is more reliable. The primary difference between the two calibration datasets is the selection using redshift quality flags. Selecting spectra based on the redshift confidence is a trade-off between constructing a sample that is confined to regions of the colour-redshift-space in which galaxies have distinct spectral features that allow secure redshift determination and a sample with an increasing fraction of galaxies with catastrophically misidentified redshifts. In the latter case the redshifts of the calibration sample themselves cause a biasing of the gold sample redshifts and in turn $S_8$. In case of selecting only the highest quality redshifts, the bias arises from a misrepresentation of the imaging data by the calibration sample, as shown by \citet{Hartley20}. The redshift distribution of the calibration sample in each SOM grouping depends on the quality flag and thus the relative representation of different galaxy populations in the gold sample may change. We investigate the magnitude of both these effects by assuming two worst-case scenarios which shift down the $S_8$ estimate obtained for {\sl \mbox{spec-$z$} fiducial} and shift up $S_8$ for {\sl \mbox{spec-$z$} \mbox{nQ $\geq 4$}}:
\begin{enumerate}[label=\Alph*)]
\item We assume in case of the {\sl \mbox{spec-$z$} fiducial} sample that \SI{5}{\percent} of truly low redshift galaxies with $3 \leq {\rm nQ} < 4$ (nominal \SI{95}{\percent} certainty), as well as \SI{1}{\percent} in case of ${\rm nQ} \geq 4$ (\SI{99}{\percent} certainty), are catastrophically misidentified as high redshift galaxies. Since both redshift flags are equally common in the fiducial sample we expect a combined spectroscopic failure rate of about \SI{3}{\percent}. We implement this worst-case scenario on {\sl \mbox{spec-$z$} fiducial} by truncating the top \SI{3}{\percent} of all redshift distributions which should increase the recovered $S_8$ value. We calculate the redshift $z_{97}$ corresponding to the 97-th percentile of the redshift distribution $n(z)$, set $n(z)=0$ at $z > z_{97}$ and re-normalise to reproduce the original gold sample number density. Both this truncation and the shift of scenario~B are illustrated in the code sketch after this list.
\item We speculate that the calibration of the {\sl \mbox{spec-$z$} \mbox{nQ $\geq 4$}} sample suffers from the same spectroscopic misrepresentation effects studied by \citet[from Fig.~6 therein]{Hartley20}, who found redshift biases $\langle z \rangle - \langle z_{\rm true} \rangle$ of $0.008$, $0.022$, $-0.003$, and $-0.058$ in the four tomographic redshifts bins of simulated DES and spectroscopic data, for the first time implementing a realistic, simulated ${\rm nQ} \geq 4$ sample selection. Since we currently do not have comparable spectroscopic mock data in KiDS, we assume in this scenario that the bias applies at the same magnitude to the {\sl \mbox{spec-$z$} \mbox{nQ $\geq 4$}} gold sample. We therefore correct the assumed bias by interpolating the values from the four DES bins to the five tomographic bins of KiDS and shift the {\sl \mbox{spec-$z$} \mbox{nQ $\geq 4$}} redshift distributions by $-0.008$, $-0.015$, $-0.014$, $0.003$, and $0.058$. We consider this to be an even more conservative assumption than scenario A, since the bias should be significantly smaller in case of KiDS thanks to the 9-band imaging and the improvements of the SOM calibration over the classical DIR approach (Sect.~\ref{sec:SOM}) that is used by \citet{Hartley20}.
\end{enumerate}
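The following sketch illustrates how both modifications act on a tabulated redshift distribution (all function and variable names here are ours and purely illustrative; the actual analysis pipeline may implement these operations differently):

\begin{verbatim}
import numpy as np

def truncate_top(z, nz, frac=0.03):
    """Scenario A: remove the top `frac` of n(z) and re-normalise."""
    cdf = np.cumsum(nz) / np.sum(nz)
    z97 = np.interp(1.0 - frac, cdf, z)      # 97th-percentile redshift
    nz_cut = np.where(z <= z97, nz, 0.0)     # set n(z) = 0 above z97
    return nz_cut * np.trapz(nz, z) / np.trapz(nz_cut, z)

def shift_nz(z, nz, dz):
    """Scenario B: displace the distribution, n(z) -> n(z - dz)."""
    return np.interp(z - dz, z, nz, left=0.0, right=0.0)

# per-bin shifts applied in scenario B (values quoted in the text)
kids_shifts = [-0.008, -0.015, -0.014, 0.003, 0.058]
\end{verbatim}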
With these modifications to the redshift distributions, scenario A ({\sl fiducial, truncated top \SI{3}{\percent}}) yields a higher and scenario B ({\sl\mbox{nQ $\geq 4$}, Hartley corrected}) a lower estimate for $S_8$ (Fig.~\ref{fig:S8_sys}, left). These results indicate that the combination of these two effects may explain the observed differences between $S_8$ in {\sl \mbox{spec-$z$} fiducial} and {\sl \mbox{spec-$z$} \mbox{nQ $\geq 4$}}. When comparing the projection $\Sigma_8$ instead (Fig.~\ref{fig:S8_sys}, right), which is less susceptible to shifts along the $\Omega_{\rm m}$-$\sigma_8$-degeneracy, the difference between {\sl \mbox{spec-$z$} fiducial} and {\sl \mbox{spec-$z$} \mbox{nQ $\geq 4$}} is much smaller. However, the shift, introduced when correcting the redshift distributions in scenario B, is about twice as large as in the $S_8$ case, indicating that the selection effects studied in \citet{Hartley20} can have a significant impact on cosmological constraints. Furthermore, the completeness, which determines the ability to correctly map out colour-redshift degeneracies in each SOM grouping, and the quality of the calibration sample redshifts should be balanced carefully to minimise biases in the redshift calibration and the cosmological analysis.
\subsubsection{Other gold samples}
Finally, we make a relative comparison of the $\Sigma_8$ constraints from the remaining gold samples, since $\Sigma_8$ typically exhibits a smaller scatter than the corresponding $S_8$ values. The gold samples that are calibrated using only photo-$z$ from PAUS, COSMOS2015, or a combination of both prefer smaller $\Sigma_8$ values compared to {\sl \mbox{spec-$z$} fiducial} (Fig.~\ref{fig:S8_tests}). While the cosmological constraints in a similar comparison between redshifts calibrated using spectroscopic data and COSMOS2015 \citep{Hildebrandt20} show a more pronounced shift in the opposite direction, this difference is caused by the gold sample selection \citep{Wright20b}. In our analysis the shifts in $\Sigma_8$ may be explained by the fact that COSMOS2015 tends to calibrate the KiDS galaxies to higher redshifts than the spectroscopic calibration data alone (see Fig.~\ref{fig:nz_compare_groups}), translating to lower $\Sigma_8$. The same reasoning does not explain why $\Sigma_8$ decreases further when the PAUS data is included; this behaviour is therefore most likely due to the quality control (Eq.~\ref{eq:qc}). When the photo-$z$ data is combined with the spectroscopic compilation, this may result in a significantly different distribution of $| \langle z_{\rm cal} \rangle - \langle z_{\rm B} \rangle |$ for the SOM groupings, which has non-trivial implications for the gold sample selection and the derived cosmological constraints.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./figures/cosmo_Sig8_compare_scatter.pdf}
\caption{
Constraints on $\Sigma_8$ (best fit and 68th-percentile PJ-HPD) for all primary and secondary gold samples. The coloured vertical line and outer bands indicate the constraints from the fiducial gold sample, the inner bands the expected variance of the sampler.}
\label{fig:S8_tests}
\end{figure}
\section{Redshift calibration with self-organising maps}
\label{sec:redshifts}
A self-organising map \citep[SOM,][]{Kohonen82} is an unsupervised neural network that projects a high-dimensional data space onto a two-dimensional grid of cells. It allows us to calibrate the redshift distribution of the KiDS-1000 lensing sample using the redshift compilations defined in the previous section. We adopt the SOM methodology of \citet{Wright20a}, which additionally provides a metric to select only those parts of the KiDS colour space in which we can reliably map out the colour-redshift relation.
\subsection{SOM redshift calibration methodology}
\label{sec:SOM}
The basic idea of the SOM methodology dates back to \citet{Lima08}, who introduced a redshift calibration strategy built on the assumption that two galaxy samples with the same colour-space distribution follow the same redshift distribution. They therefore suggest deriving the unknown redshift distribution $N(z)$ of a photometric galaxy sample from a calibration sample with accurate, preferentially spectroscopic redshifts $N^{\rm cal}(z)$ that is constructed such that it is representative of the photometric sample. This method is called `direct calibration' (DIR). In practice, however, such a calibration sample typically has a substantially different selection function. Therefore \citet{Lima08} propose a re-weighting scheme that matches the calibration to the photometric sample by computing the ratio of the local galaxy density of both samples in the high-dimensional colour-space spanned by the photometric observations. This can be achieved, for example, by counting neighbours in a fixed volume around a point in the colour-space or by computing the volume occupied by a fixed number of nearest neighbours. Provided that both samples initially cover the same volume of the colour-space, this method should recover the true redshift distribution, even in the presence of colour-redshift degeneracies.
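As an illustration, one possible nearest-neighbour implementation of this density-ratio re-weighting is sketched below, assuming scikit-learn; the choice of $k$ and the array conventions are hypothetical:
\begin{verbatim}
import numpy as np
from sklearn.neighbors import NearestNeighbors

def dir_weights(col_cal, col_phot, k=16):
    """DIR weight per calibration object: the local density ratio of
    photometric to calibration galaxies in colour-space, estimated from
    the ball enclosing the k nearest calibration neighbours."""
    nn_cal = NearestNeighbors(n_neighbors=k).fit(col_cal)
    dist, _ = nn_cal.kneighbors(col_cal)   # query point counts itself
    r_k = dist[:, -1]                      # radius enclosing k objects
    nn_phot = NearestNeighbors().fit(col_phot)
    n_phot = np.array([
        nn_phot.radius_neighbors(x[None, :], radius=r,
                                 return_distance=False)[0].size
        for x, r in zip(col_cal, r_k)])
    return n_phot / float(k)               # proportional density ratio

# N(z) then follows as the weighted histogram of calibration redshifts.
\end{verbatim}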
The DIR method remains susceptible, in particular, to selection biases and incompleteness introduced by spectroscopic targeting strategies and success rates \citep[e.g.][]{Gruen17,Hartley20}. Recent work by \citet{Wright20a} has shown that this can be alleviated by performing additional cleaning and selections (quality control, see Sect.~\ref{sec:SOMkids}) on the unknown sample, creating a `gold sample' containing only galaxies of the photometric sample that are sufficiently represented by the calibration sample. They implement this by training a SOM on the colour-space of the calibration sample and then parsing the photometric sample into the same cells. Cells that are not occupied by objects from both samples are rejected, effectively removing those critical parts of the colour-space. They improve the cleaning procedure by applying hierarchical clustering on the SOM to find groups of cells with similar photometric properties instead of filtering individual cells. This allows a more fine-grained trade-off between the number of photometric sources rejected due to partitioning of the high-dimensional colour-space and the bias introduced by misrepresentation of the gold sample.
Finally, they compute the DIR weight for each of the $n$ SOM groupings $\mathcal{G} = \{g_1, \dots, g_n\}$ which is the ratio of calibration-to-gold sample objects. They obtain the redshift distribution of the gold sample
\begin{equation}
N(z) = \sum_{g \,\in\, \mathcal{G}} N_g^{\rm cal}(z) \, \frac{N_{g,{\rm tot}}^{\rm gold}}{N_{g,{\rm tot}}^{\rm cal}}
\label{eq:som_weight}
\end{equation}
by calculating the DIR-weighted sum of the redshift distributions $N_g^{\rm cal}(z)$ of the calibration sample in each SOM grouping. $N_{g,{\rm tot}}^{\rm cal}$ and $N_{g,{\rm tot}}^{\rm gold}$ are the total number of calibration sample and gold sample objects of group $g$, respectively.
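In code, Eq.~(\ref{eq:som_weight}) reduces to a weighted sum of per-grouping histograms. A minimal sketch follows; the array names are hypothetical, and the substitution of lensing weights for the raw gold counts, described in Sect.~\ref{sec:SOMkids}, is a one-line change:
\begin{verbatim}
import numpy as np

def gold_nz(z_cal, grp_cal, grp_gold, z_edges):
    """Gold-sample N(z): calibration histograms per SOM grouping,
    weighted by the gold-to-calibration count ratio of the grouping."""
    nz = np.zeros(len(z_edges) - 1)
    for g in np.unique(grp_cal):
        n_cal = np.count_nonzero(grp_cal == g)
        n_gold = np.count_nonzero(grp_gold == g)
        if n_gold == 0:
            continue          # grouping not represented in gold sample
        hist, _ = np.histogram(z_cal[grp_cal == g], bins=z_edges)
        nz += hist * n_gold / n_cal
    return nz
\end{verbatim}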
\subsection{Application to KiDS-1000}
\label{sec:SOMkids}
For our analysis we largely follow \citet{Wright20a} and train a SOM with $101\times101$ hexagonal cells and periodic boundaries on the full calibration sample ({\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15}, see Sect.~\ref{sec:comp_primary}). The input features are the matched KiDS $r$-band magnitudes and all 36 possible KiDS-colours that can be formed from the $ugriZYJHK_{\rm s}$ imaging. Next, we divide the calibration and the KiDS-1000 source sample into the five tomographic bins and parse both samples into the SOM cells. We then run the hierarchical clustering for which we use the same number of clusters per bin (4000, 2200, 2800, 4200, and 2000) as \citet{Wright20a} since these numbers were calibrated using simulations\footnote{These simulations are tailored to fit the KiDS imaging data and are based on the MICE2 simulation \citep{Fosalba15a,Fosalba15b,Crocce15,Carretero15,Hoffmann15}.} \citep{vandenBusch20}. Even though each gold sample has a different optimal number of clusters, simulating the new redshift compilation and including realistic photo-$z$ errors is beyond the scope of this work.
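For illustration, a stripped-down version of this training step could be written with the {\tt minisom} package as follows. This is a sketch only: {\tt minisom} supports hexagonal cells but, to our knowledge, not the periodic boundaries used in the actual analysis, and the synthetic magnitudes stand in for the calibration photometry:
\begin{verbatim}
import numpy as np
from itertools import combinations
from minisom import MiniSom

def build_features(mags):
    """r-band magnitude plus all 36 colours from 9-band photometry;
    assumes columns ordered as ugriZYJHKs (r in column 2)."""
    colours = [mags[:, i] - mags[:, j]
               for i, j in combinations(range(9), 2)]
    return np.column_stack([mags[:, 2]] + colours)

mags_cal = np.random.default_rng(0).normal(22.0, 1.0, (1000, 9))
feat = build_features(mags_cal)
som = MiniSom(101, 101, feat.shape[1], sigma=1.5, learning_rate=0.5,
              topology='hexagonal', random_seed=1)
som.train_random(feat, num_iteration=100_000)
# parse sources into cells via their best-matching unit
cells = np.array([som.winner(x) for x in feat])
\end{verbatim}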
We use the same SOM for the remaining calibration samples defined in Sect.~\ref{sec:compilation} and simply parse the corresponding subset of the full calibration sample back into the SOM before running the hierarchical clustering. For each of these calibration samples we apply a final cleaning step to the SOM groupings by defining a quality cut
\begin{equation}
| \langle z_{\rm cal} \rangle - \langle z_{\rm B} \rangle | > 5 \sigma_{\rm mad} \, ,
\label{eq:qc}
\end{equation}
where $\sigma_{\rm mad} = \mathrm{nMAD}\left( \langle z_{\rm cal} \rangle - \langle z_{\rm B} \rangle \right)$ is the normalised median absolute deviation, with the normalisation chosen such that the nMAD reproduces the traditional standard deviation in the limit of Gaussian noise. This selection rejects clusters of SOM cells in which the mean calibration sample redshift $\langle z_{\rm cal} \rangle$ and the mean KiDS photometric redshift $\langle z_{\rm B} \rangle$ catastrophically disagree. \citet{Wright20a} found that this additional cleaning significantly reduces the SOM redshift bias while the impact on the number density is small and does not exceed a few percent. The rejection threshold of $\sigma_{\rm mad} \approx 0.12$ is calculated for the {\sl \mbox{spec-$z$} fiducial} case and is applied to all other samples. This choice is motivated by the fact that this value is very close to the one calibrated with mock data for KiDS-1000 by \citet{Hildebrandt21}, whereas $\sigma_{\rm mad}$ would be twice as large if we were calculating this threshold based on {\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15}. One reason for this difference in $\sigma_{\rm mad}$ is that the COSMOS2015 data allow the inclusion of additional populations of faint galaxies for which the calibration sample reference redshifts and the KiDS photo-$z$ are more likely discrepant, increasing the spread of the distribution of $\langle z_{\rm cal} \rangle - \langle z_{\rm B} \rangle$. We discuss this effect further in Sect.~\ref{sec:discuss_gold} and App.~\ref{app:SOMqc}.
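The quality cut itself is straightforward to express in code. A short sketch, assuming the usual Gaussian-consistency factor of 1.4826 for the nMAD and a synthetic stand-in for the per-grouping mean-redshift differences:
\begin{verbatim}
import numpy as np

def nmad(x):
    """Normalised median absolute deviation; the factor 1.4826 makes
    it reproduce the standard deviation for Gaussian noise."""
    return 1.4826 * np.median(np.abs(x - np.median(x)))

# stand-in for <z_cal> - <z_B>, one entry per SOM grouping
dz = np.random.default_rng(2).normal(0.0, 0.05, 4000)
sigma_mad = nmad(dz)                  # approx. 0.12 for spec-z fiducial
keep = np.abs(dz) <= 5.0 * sigma_mad  # Eq. (qc): reject beyond 5 sigma_mad
\end{verbatim}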
This final selection step defines our gold sample for which we compute the redshift distributions according to Eq.~(\ref{eq:som_weight}). Since we require weighted redshift distributions for our cosmological analysis we substitute $N_{g,{\rm tot}}^{\rm gold}$ by $W_{g,{\rm tot}}^{\rm gold} = \sum_{i \in g} w_i$ which is the sum over the individual galaxy weights $w_i$ from shape measurements in the SOM group $g$.
\subsection{Clustering redshifts}
There is one key difference to the calibration methodology of \citet{Hildebrandt21}: we choose to omit the clustering redshift analysis in this work. While this choice limits our ability to validate the redshift distributions of our new gold samples, the SOM method is our fiducial calibration method and is therefore the focus of this work. In addition, the newly included calibration data does not increase the spatial overlap with KiDS significantly and is, due to its inhomogeneity, difficult to employ in a cross-correlation analysis. We leave this validation and joint analysis with the clustering redshifts to future work.
\section{Updated spectroscopic compilation}
\label{app:compilation}
\begin{table*}
\centering
\caption{
Listing of the spectroscopic samples that enter the spectroscopic compilation summarising the sample sizes and mean redshifts. The values are calculated after removing duplicates between overlapping catalogues and matching with the KiDS imaging.}
\label{tab:specz_samples}
\input{./other/specz_samples}
\end{table*}
Here we present the details of our extended and revised spectroscopic compilation discussed in Sect.~\ref{sec:data:spec}. It includes redshifts from surveys covering a number of deep extragalactic fields for which we have 9-band VST+VISTA photometry of similar quality as the main KiDS+VIKING data (`KiDZ'). In this paper we use six such fields, shown in Fig.~\ref{fig:KiDZ}; three of them (COSMOS, CDF-S and VVDS-2h) are covered by a number of partly overlapping surveys which we merged into our redshift calibration sample. In doing this, we translate the input redshift quality flags or assessments into our flag $\mathtt{NQ}$, with $\mathtt{NQ} \geq 4$ indicating the most secure spectroscopic redshifts (confidence \SI{99}{\percent} or more), while $3 \leq \mathtt{NQ} < 4$ indicates secure, but lower-confidence redshifts. The particular surveys are listed in Table~\ref{tab:specz_samples}, in ascending order of their mean redshift, determined after removing duplicates and cross-matching with our imaging. The details of the selection and quality assignment for the particular samples are as follows:
\begin{itemize}
\item hCOSMOS \citep{Damjanov18}: we use all the galaxies from the published dataset and assign them redshift quality $\mathtt{NQ}=4$;
\item GAMA-G15Deep \citep{Kafle18,Driver22}: we select galaxies with input redshift quality $\mathtt{Z\_QUAL}\geq 3$ and with redshifts $z>0.001$ to avoid stellar contamination. We assign $\mathtt{NQ} = \mathtt{Z\_QUAL}$;
\item G10-COSMOS \citep{Davies15}: we use $\mathtt{Z\_BEST}$ as the redshift value and follow the recommended selection for galaxy spectroscopic redshifts, that is $\mathtt{Z\_BEST}>0.0001$, $\mathtt{Z\_USE}<3$ and $\mathtt{STAR\_GALAXY\_CLASS}=0$. As the redshifts provided in that compilation do not always have a quality flag that could be easily translated into our $\mathtt{NQ}$, we assign all these galaxies a specific flag $\mathtt{NQ}=3.5$;
\item ACES \citep{Cooper12}: we select redshifts with $\mathtt{Z\_QUALITY} \geq 3$ and with redshift errors below \SI{1}{\percent}. We assign $\mathtt{NQ} = \mathtt{Z\_QUALITY}$;
\item OzDES \citep{Lidman20}: we use two patches partly overlapping with KiDZ imaging (around CDF-S and around VVDS-2h). We select redshifts $z>0.002$ (stellar contamination removal) and require quality $\mathtt{qop} \in \lbrace3,4\rbrace$. We assign $\mathtt{NQ}=\mathtt{qop}$;
\item DEVILS \citep{Davies18}: we select spectroscopic redshifts only ($\mathtt{zBestType}=\mathtt{spec}$) and require the flags $\mathtt{starFlag}=0$, $\mathtt{mask}=0$, and $\mathtt{artefactFlag}=0$. As the DEVILS sample is a compilation including both its own redshifts and external ones, we assign $\mathtt{NQ}=4$ if $\mathtt{zBestSource}=\mathtt{DEVILS}$ (redshift obtained from DEVILS observations) and $\mathtt{NQ}=3$ otherwise (external redshifts);
\item VIPERS \citep{Scodeggio18}: we require sources with the flag $2\leq\mathtt{zflg}<10$ or $22\leq\mathtt{zflg}<30$. We assign $\mathtt{NQ}=4$ if $3\leq\mathtt{zflg}<5$ or $23\leq\mathtt{zflg}<25$, and $\mathtt{NQ}=3$ otherwise;
\item VVDS \citep{LeFevre05,LeFevre13}: we join the WIDE, DEEP and UDEEP sub-samples, and use only sources with $\mathtt{ZFLAGS} \in \lbrace3,4,23,24\rbrace$. We assign $\mathtt{NQ}=4$ to these redshifts;
\item LEGA-C \citep{vanderWel16}: we select sources with the flag $\mathtt{f\_use}=1$ and assign $\mathtt{NQ}=4$ to them;
\item DEEP2 \citep{Newman13}: as in the previous KiDS papers \citep{Hildebrandt17,Hildebrandt20,Hildebrandt21}, we use sources from two equatorial fields (0226 \& 2330), require sources with $\mathtt{ZQUALITY} \geq 3$, and with redshift errors smaller than \SI{1}{\percent}. We assign $\mathtt{NQ}=\mathtt{Z\_QUALITY}$;
\begin{figure}
\centering
\includegraphics[width=\columnwidth,trim=0mm 5mm 0mm 5mm]{./figures/KiDZ_flipped.pdf}
\caption{
On-sky distribution of spectroscopic sources matched against the six KiDZ fields.}
\label{fig:KiDZ}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth,trim=0mm 5mm 0mm 5mm]{./figures/sample_redshifts.pdf}
\caption{
Redshift distributions of the spectroscopic samples that enter the spectroscopic compilation, ordered by mean redshift (Table~\ref{tab:specz_samples}). The distributions are calculated after removing duplicates between overlapping catalogues and matching with the KiDS imaging.}
\label{fig:sample_nz}
\end{figure}
\item C3R2: we combine four public datasets, DR1 \citep{Masters17}, DR2 \citep{Masters19}, DR3 \citep{Stanford21}, and KMOS \citep{Euclid20} and exclude the Northern EGS field. We take sources with $\mathtt{QFLAG} \geq 3$ and assign $\mathtt{NQ}=\mathtt{QFLAG}$;
\item DEIMOS \citep{Hasinger18}: we require the quality flag $\mathtt{Q}=2$, while for assigning $\mathtt{NQ}$ we use the other flag provided, namely $\mathtt{NQ}=4$ for $\mathtt{Q_f} \in \lbrace 4,14 \rbrace$, $\mathtt{NQ}=3$ otherwise;
\item GOODS/CDF-S: we use a public ESO compilation of spectroscopy in the CDF-S field\footnote{Available from \url{https://www.eso.org/sci/activities/garching/projects/goods/MasterSpectroscopy.html}} \citep{Popesso09,Balestra10}. From the compilation, we choose secure redshifts (assigning $\mathtt{NQ}=4$ to them) and likely redshifts ($\mathtt{NQ}=3$) following the recommendations in the dataset description\footnote{\url{https://www.eso.org/sci/activities/garching/projects/goods/MASTERCAT_v3.0.dat}};
\item FMOS-COSMOS \citep{Silverman15}: we select sources with the quality flag $\mathtt{q\_z} \geq 2$ and assign $\mathtt{NQ}=4$ if $\mathtt{q\_z} =4$ or $\mathtt{NQ}=3$ otherwise;
\item zCOSMOS: we use this name for a proprietary compilation of various spectroscopic surveys in the COSMOS field, kindly provided to us by Mara Salvato, updated as of 01 September 2017. That dataset includes some of the surveys already included in our compilation, but it also provides redshifts from various other campaigns. We use the provided quality flag and select sources meeting the criteria $3 \leq \mathtt{Q\_f} \leq 5$ or $13 \leq \mathtt{Q\_f} \leq 15$ or $23 \leq \mathtt{Q\_f} \leq 25$ or $\mathtt{Q\_f} \in \lbrace 6, 10 \rbrace$, do not use sources from lower-confidence determinations (e.g. grism), and limit the redshifts to $\mathtt{z\_spec} > 0.002$ to avoid stellar contamination. We assign redshift quality as $\mathtt{NQ} = \min((\mathtt{Q\_f}\!\mod 10),4)$;
\item VUDS \citep{LeFevre15}: we use sources with redshift flag $\mathtt{zflags}$ ending with $\lbrace 3, 4, 9\rbrace$ (reliability $\geq \SI{80}{\percent}$) and assign $\mathtt{NQ}=4$ if $3\leq \mathtt{zflags} <5$ or $13 \leq \mathtt{zflags} < 25$, otherwise $\mathtt{NQ}=3$;
\item VANDELS \citep{Garilli21}: we select sources for which $(\mathtt{zflg}\!\mod 10) \in \lbrace 2, 3, 4\rbrace$ and assign $\mathtt{NQ}=4$ if $(\mathtt{zflg}\!\mod 10) \in \lbrace 3, 4 \rbrace$ or $\mathtt{NQ}=3$ otherwise. The reassignment of the quality flags is motivated by the reportedly high redshift confidence of objects with flag values of 2 and 3.
\end{itemize}
When joining the above samples into one dataset, we remove duplicates both from overlapping surveys as well as within the individual ones. In the former case, if for a given source there are redshifts from different surveys available, we assign the most reliable measurement based on a specific `hierarchy'. Namely, we join the catalogues by cross-matching objects within \SI{1}{\arcsec} radius and apply the following order of preference:
\begin{itemize}
\item COSMOS field: G10-COSMOS > DEIMOS > hCOSMOS > VVDS > Lega-C > FMOS > VUDS > C3R2 > DEVILS > zCOSMOS;
\item CDF-S field: ACES > VANDELS > VVDS > VUDS > GOODS/CDF-S > DEVILS > OzDES;
\item VVDS-2h field: VIPERS > VVDS > C3R2 > DEVILS > OzDES.
\end{itemize}
For objects with multiple spectroscopic measurements within a particular survey, we either take the redshift with the highest quality flag or, if various entries for the same source have the same quality flag and the reported redshifts differ by no more than 0.005, we calculate the average. If the reported redshifts have the same quality flag but differ by more than 0.005, we exclude such a source from the compilation.
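The intra-survey rule can be summarised in a few lines. The following sketch (our own helper, operating on the redshift and quality-flag entries of a single multiply observed source) returns the adopted redshift, or None if the source is to be excluded:
\begin{verbatim}
import numpy as np

def resolve_within_survey(z, nq):
    """Keep the highest-quality redshift of a multiply observed source;
    if several entries share the top flag, average them when they agree
    to within 0.005 and discard the source otherwise."""
    z, nq = np.asarray(z, float), np.asarray(nq, float)
    best = z[nq == nq.max()]
    if best.max() - best.min() <= 0.005:
        return best.mean()
    return None    # discordant same-quality redshifts: exclude
\end{verbatim}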
\section{SOM quality control}
\label{app:SOMqc}
The original quality control (Eq.~\ref{eq:qc}) was calibrated for the spectroscopic calibration data of \citet{Hildebrandt21} with $\sigma_{\rm mad} = \mathrm{nMAD}\left( \langle z_{\rm cal} \rangle - \langle z_{\rm B} \rangle \right) \approx 0.12$. Since this criterion may not be optimal for our new calibration dataset, we would in principle need to recalibrate the SOM cell rejection threshold to obtain the optimal trade-off between redshift bias and the size of the gold sample. It is, however, very challenging to simulate the photo-$z$ errors and success rates of our new calibration dataset and therefore we choose to keep the threshold fixed for all gold sets.
Nevertheless, the addition of COSMOS2015 to the calibration data has a significant impact on the distribution of $\langle z_{\rm cal} \rangle - \langle z_{\rm B} \rangle$ for the fully trained cells of the SOM (Fig.~\ref{fig:SOMcellbias}). While the distribution is almost symmetrical for the case of {\sl \mbox{spec-$z$} fiducial}, it is skewed to positive values for {\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15}. The effect is more pronounced for those cells that are occupied exclusively by the COSMOS2015 data than for those that exist in both gold samples (compare the dashed orange and black lines in Fig.~\ref{fig:SOMcellbias}). This is a clear indication that much of the additional calibration data are faint objects for which the KiDS 9-band photo-$z$ catastrophically underestimates the redshift obtained from the much deeper COSMOS2015 data. This effect is responsible for the reduced representation fraction in the first tomographic bin, as discussed in Sect.~\ref{sec:newgold_mix}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./figures/SOM_deltaz_distribution.pdf}
\caption{
Comparison of the mean calibration sample redshift and KiDS 9-band photo-$z$ in each SOM cell within the fiducial $\pm 5\sigma_{\rm mad}$ interval. The coloured solid lines represent the fiducial (purple) and the full (orange) gold sample. The black line is the subset of SOM cells that belong to both gold samples, whereas the dashed orange line represents those cells that are occupied only by data from COSMOS2015.}
\label{fig:SOMcellbias}
\end{figure}
\section{Differences from the original KiDS-1000 analysis}
\label{app:K1000diff}
\begin{table*}
\centering
\caption{
Summary of the main cosmological parameter constraints (best fit and 68th-percentile PJ-HPD) from COSEBIs for \citet{Asgari21} and our reanalysis with and without the corrected $m$-bias values.}
\renewcommand{\arraystretch}{1.33}
\label{tab:params_A21}
\input{./other/params_A21}
\renewcommand{\arraystretch}{1.0}
\tablefoot{
Additionally shown are the $\chi^2$ values (for $70.5$ effective degrees of freedom) and $\Sigma_8 = \sigma_8 (\Omega_{\rm m} / 0.3)^\alpha$ calculated for \protect\input{other/alpha_fid.tex}, determined from a fit to the posterior samples of the fiducial chain.}
\end{table*}
The major difference between the analysis of the original KiDS-1000 data and this work is that our redshift calibration strategy focuses exclusively on improving the SOM redshifts. We omit revisiting the KiDS clustering redshifts, which \citet{Hildebrandt21} used as a validation for the fiducial SOM redshift distributions. We note, however, that the additional calibration data is only beneficial for the SOM calibration, since such inhomogeneous data, in particular photometric redshifts, cannot easily be used in a clustering-redshift analysis.
Furthermore, there are small differences in the cosmological analysis compared to \citetalias{Asgari21}, mostly related to changes in the COSEBI data vector. We find that these differences originate from using a newer version of {\sc TreeCorr}; they have, however, no impact on our cosmological constraints, which agree within the expected statistical variance of the sampler. Differences in the measured COSEBIs also propagate to the covariance matrix, resulting in sub-percent differences between our covariance and the one used by \citetalias{Asgari21}. Finally, we assume \SI{99}{\percent} correlation between $m$-bias values per tomographic bin instead of \SI{100}{\percent} (perfect correlation) as in \citetalias{Asgari21}. The effect of this choice is negligible compared to those introduced by the variations in the COSEBI data vector.
We also test whether the correction of the $m$-bias (Table~\ref{tab:mbias}) has a significant impact on the cosmological constraints in \citetalias{Asgari21} by analysing the same data (`Asgari new $m$-bias') with otherwise identical parameters (except for the differences summarised above). Our results show that the recovered primary cosmological parameters, in particular \input{other/S8_ORIGmbias.tex}, are almost identical to those of our reanalysis of the KiDS-1000 data, performed with the old $m$-bias values (Table~\ref{tab:params_A21}). Given that the uncertainties of the $m$-biases are unchanged and the central value changes by less than $0.5\sigma$ in any of the tomographic bins, these results are as expected and have no further implication for the conclusions of \citetalias{Asgari21}.
\section{Marginal parameter constraints}
Table~\ref{tab:params_marginal} lists an alternative representation of the parameter constraints of Table~\ref{tab:params}, showing the maximum of the marginal distribution and the associated 68th-percentile instead of the best fit and the 68th-percentile PJ-HPD.
\begin{table*}
\centering
\caption{
Summary of the main cosmological parameter constraints (maximum of the marginal distribution and the associated 68th-percentile) from COSEBIs for all gold samples and the comparison to \citet{Asgari21} and Planck legacy (TT, TE, EE + lowE).}
\renewcommand{\arraystretch}{1.33}
\label{tab:params_marginal}
\input{./other/params_marginal}
\renewcommand{\arraystretch}{1.0}
\tablefoot{
Additionally shown is $\Sigma_8 = \sigma_8 (\Omega_{\rm m} / 0.3)^\alpha$ calculated for \protect\input{other/alpha_fid.tex}, determined from a fit to the posterior samples of the fiducial chain.}
\end{table*}
\section{Goodness-of-fit}
We find that the cosmological model fits to the new gold samples achieve a better goodness-of-fit ($\chi^2$ between 62 and 74, Table~\ref{tab:params}) than those to the original KiDS-1000 gold sample with $\chi^2 \approx 83$ for reference. Since our methodology is identical in both cases, these differences stem from changes of the data vector that propagate also into the covariance matrix. We compare in particular the COSEBI data vectors with their respective best fit models for {\sl \mbox{spec-$z$} fiducial} and Asgari new $m$-bias (App.~\ref{app:K1000diff}) in Fig.~\ref{fig:bestfit}. Although a comparison by eye of model and data is difficult due to the strong correlation between the different COSEBI modes, the most significant changes in the data vector occur between modes that involve bins 1 and 2. In contrast, the fitted models are very close in the first two bins and begin to deviate with increasing redshift.
These changes propagate to the $\chi^2$, which becomes apparent when summing the individual $\chi^2$-values in each panel of Fig.~\ref{fig:bestfit} for the two left-most columns. While the sum over the remaining three columns (dominated by bins 3, 4, and 5) is almost identical for {\sl \mbox{spec-$z$} fiducial} and Asgari new $m$-bias, the first two columns fully account for the reduction in $\chi^2$ of $\Delta \chi^2 \approx 20$. Since changing the $m$-bias has a rather small impact on the $\chi^2$ (see Table~\ref{tab:params_A21}), this behaviour is most likely linked to the changes in the gold sample selection and redshift distributions, which are most significant for the first two bins. \citetalias{Asgari21} report tensions between parameter constraints obtained from using only the second bin (including its cross-correlations) and the remainder of the data vector. A reduction of the $\chi^2$-value alone, however, does not allow any conclusions on whether the improved calibration sample has an impact on the internal consistency of the cosmological analysis.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{./figures/bestfit_Fid_ORIGmbias.pdf}
\caption{
Comparison of the COSEBI data vector and the best fit model for {\sl \mbox{spec-$z$} fiducial} and the reanalysis of \citet{Asgari21} with corrected $m$-bias. The triangular arrangement of all 15 combinations of tomographic bins are labelled with their individual $\chi^2$-values. The COSEBIs modes are significantly correlated, therefore their goodness-of-fit cannot be estimated by eye.}
\label{fig:bestfit}
\end{figure*}
\section{Cosmological analysis}
\label{sec:cosmo}
In this section we summarise our cosmic shear analysis pipeline which we adopt from \citet[\citetalias{Asgari21} hereafter]{Asgari21}.
\subsection{Cosmic shear}
The primary observables of cosmic shear are the shear two-point correlation functions \citep[2PCFs,][]{Kaiser92}
\begin{equation}
\xi_\pm(\theta) = \langle \gamma_{\rm t} \gamma_{\rm t} \rangle(\theta) \pm \langle \gamma_\times \gamma_\times \rangle(\theta) \, ,
\end{equation}
where $\gamma_{\rm t}$ and $\gamma_\times$ are the tangential and the cross component of the shear, defined with respect to the line connecting a pair of galaxies \citep[see e.g.][]{Bartelmann01}. We use a weighted estimator for the shear correlations $\xi_\pm$ as a function of the separation angle $\theta$ between two tomographic redshift bins $i$ and $j$:
\begin{equation}
\hat\xi_\pm^{(ij)}(\bar \theta) = \frac{\sum_{ab} w_a w_b \left[ \epsilon_{{\rm t},a}^{\rm obs} \epsilon_{{\rm t},b}^{\rm obs} \pm \epsilon_{\times,a}^{\rm obs} \epsilon_{\times,b}^{\rm obs} \right] \Delta_{ab}^{(ij)}(\bar \theta)}{\sum_{ab} w_a w_b (1 + \bar m_a) (1 + \bar m_b) \, \Delta_{ab}^{(ij)}(\bar \theta)}
\label{eq:shear_estimator}
\end{equation}
Here, $\Delta_{ab}^{(ij)}(\bar \theta)$ is a function that expresses whether a pair of galaxies, $a$ and $b$, falls into an angular bin labelled by $\bar\theta$. Each galaxy has a weight $w$ and measured ellipticities $\epsilon_{\rm t}^{\rm obs}$ and $\epsilon_\times^{\rm obs}$. The denominator applies the multiplicative shear bias $m$, which corrects the measured shear to match the true galaxy shear.\footnote{In practice we do not apply the $m$-bias per galaxy in Eq.~(\ref{eq:shear_estimator}) but instead take the average value in each tomographic bin to avoid effects such as galaxy detection biases \citep{Kannawadi19}.}
We extract the cosmological information from the shear correlation signal using COSEBIs \citep[complete orthogonal sets of E/B-integrals,][]{Schneider10}. These provide a method to cleanly decompose the shear 2PCFs into $E$- and $B$-modes by applying a set of oscillatory filter functions defined over a finite angular range between $\theta_{\rm min}$ and $\theta_{\rm max}$. The filter functions $T_{\pm n}(\theta)$ for the $n$-th COSEBI mode ($E_n$ and $B_n$) have exactly $n+1$ roots:
\begin{align}
E_n &= \frac{1}{2} \int_{\theta_{\rm min}}^{\theta_{\rm max}} {\rm d}{\theta} \, \theta \left[ T_{+n}(\theta) \, \xi_+(\theta) + T_{-n}(\theta) \, \xi_-(\theta) \right] \\
B_n &= \frac{1}{2} \int_{\theta_{\rm min}}^{\theta_{\rm max}} {\rm d}{\theta} \, \theta \left[ T_{+n}(\theta) \, \xi_+(\theta) - T_{-n}(\theta) \, \xi_-(\theta) \right]
\end{align}
One advantage of this formalism over the classical 2PCFs is that, with a suitably chosen subset of modes, COSEBIs are less sensitive to small scales, where the complex physics of baryon feedback plays an important role \citep{Asgari20}.
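Given tabulated 2PCFs and filter functions on a common angular grid, each COSEBI mode is a plain one-dimensional integral. A minimal quadrature sketch follows; in the actual analysis the filters are evaluated analytically and the integration is performed with more care:
\begin{verbatim}
import numpy as np

def cosebi_mode(theta, xip, xim, Tp, Tm):
    """E_n and B_n for one mode by direct quadrature of the filter
    integrals; all inputs sampled on the same theta grid (radians)."""
    En = 0.5 * np.trapz(theta * (Tp * xip + Tm * xim), theta)
    Bn = 0.5 * np.trapz(theta * (Tp * xip - Tm * xim), theta)
    return En, Bn
\end{verbatim}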
\subsection{Analysis pipeline}
\label{sec:pipe}
Our analysis pipeline is an upgraded version of {\tt CosmoPipe}\footnote{\url{https://github.com/AngusWright/CosmoPipe}} \citep{Wright20b} which is a wrapper for {\tt Cat\_to\_Obs}\footnote{\url{https://github.com/KiDS-WL/Cat\_to\_Obs\_K1000\_P1}} \citep{Giblin21} and the KiDS Cosmology Analysis Pipeline\footnote{\url{https://github.com/KiDS-WL/kcap}} (KCAP, \citealt{Joachimi21,Asgari21,Heymans21,Troester21}), both of which have been used previously to analyse the KiDS-1000 data. The pipeline measures the shear 2PCFs using {\sc TreeCorr} \citep{Jarvis04,Jarvis15} on angular scales between \SI{0.5}{\arcmin} and \SI{300}{\arcmin}, from which we compute the first five COSEBI modes using the logarithmic versions of the filter functions $T_{\pm n}(\theta)$, which achieve a better compression of the cosmological signal onto fewer modes.
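Measuring the 2PCFs for one tomographic bin pair with {\sc TreeCorr} takes only a few calls; the synthetic catalogue below is a stand-in for the KiDS shape data, and the number of angular bins is an arbitrary choice:
\begin{verbatim}
import numpy as np
import treecorr

rng = np.random.default_rng(0)         # synthetic stand-in catalogue
n = 10_000
ra, dec = rng.uniform(0, 10, n), rng.uniform(-5, 5, n)
e1, e2 = rng.normal(0.0, 0.28, (2, n))  # ellipticity components
w = rng.uniform(0.5, 1.0, n)            # lensfit weights

cat = treecorr.Catalog(ra=ra, dec=dec, g1=e1, g2=e2, w=w,
                       ra_units='deg', dec_units='deg')
gg = treecorr.GGCorrelation(min_sep=0.5, max_sep=300.0, nbins=1000,
                            sep_units='arcmin')
gg.process(cat)       # auto-correlation; pass two catalogues for i != j
theta, xip, xim = gg.meanr, gg.xip, gg.xim
\end{verbatim}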
We use the {\sc CosmoSIS} framework \citep{Zuntz15} to compute theoretical predictions with the KCAP COSEBI module \citep{Asgari12}. The linear matter power spectrum is modelled with CAMB \citep{Lewis00,Howlett12} and its non-linear evolution with {\sc HMCode} \citep{Mead15,Mead16}, whereas intrinsic alignments are calculated based on the model of \citet{Hirata04,Bridle07}. We then compare these predictions to the measured COSEBIs by sampling a Gaussian likelihood with {\sc MultiNest} \citep{Feroz09}, using the analytical covariance model and priors of \citet{Joachimi21}. From this we infer constraints on the cosmological parameters of a spatially flat $\Lambda$CDM model. We additionally marginalise over a set of sample-dependent nuisance parameters which capture uncertainties in the shear and redshift calibration. Since Monte-Carlo samplers like {\sc MultiNest} are not designed to find the best fitting model parameters, we subsequently run a Nelder-Mead minimiser \citep{Nelder65} starting from the maximum posterior point of all chains.
Based on this we quote parameter constraints as the best-fit value and the projected joint highest posterior density (PJ-HPD) credible interval that we obtain from the {\sc MultiNest} chains. We note that both the best-fit parameters and the PJ-HPD have statistical uncertainties of about $0.1\sigma$, or \SI{10}{\percent} on the $1\sigma$ constraints, due to the limited number of posterior samples \citep{Joachimi21}.
\subsubsection{Redshift uncertainty}
We propagate uncertainties in the redshift calibration to the cosmological constraints by allowing the redshift distribution of each tomographic bin $i$ to vary by a shift $\delta z_i$. We use a set of correlated Gaussian priors $\delta z_i \sim N(\mu_i, \sigma_i)$ which allows us to apply an empirical redshift bias correction by choosing offsets $\mu_i \neq 0$. These offsets and their correlations (Table~\ref{tab:mbias}) are calibrated from spectroscopic and KiDS-like mock data \citep{vandenBusch20} in \citet{Hildebrandt21}. Since our analysis uses different calibration samples with altered sample selections, we would in principle need to perform a similar mock data analysis to recalibrate the priors for $\delta z_i$. However, these new samples contain many new spectroscopic datasets and the inclusion of photometric redshifts presents an additional challenge when attempting to model realistic photo-$z$ errors. Therefore, we assume that the variance of the KiDS-1000 priors is conservative enough to absorb potential changes of the redshift biases from KiDS-1000 to the new gold samples.
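A draw from these priors and its effect on the tabulated distributions can be sketched as follows. The prior means and widths are taken from Table~\ref{tab:mbias}, but the diagonal covariance and the toy $n(z)$ are simplifications; the actual analysis uses the full correlated covariance:
\begin{verbatim}
import numpy as np

mu = np.array([0.000, 0.002, 0.013, 0.011, -0.006])   # prior means
sig = np.array([0.0096, 0.0114, 0.0116, 0.0084, 0.0097])
cov = np.diag(sig**2)        # simplification: correlations omitted

rng = np.random.default_rng(1)
delta_z = rng.multivariate_normal(mu, cov)  # one draw per tomographic bin

z = np.linspace(0.0, 2.0, 401)
nz = np.exp(-0.5 * ((z - 0.7) / 0.2) ** 2)  # toy n(z) for illustration
# shifted distribution for bin i: n'(z) = n(z - delta_z_i); the sign
# convention follows the definition of delta_z in the pipeline
nz_shifted = np.interp(z - delta_z[2], z, nz, left=0.0, right=0.0)
\end{verbatim}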
\subsubsection{Multiplicative shear uncertainty}
The second set of sample-dependent nuisance parameters is the average multiplicative shear bias ($m$-bias, see Eq.~\ref{eq:shear_estimator}) in each tomographic bin. The effect of the $m$-bias and its uncertainty on the COSEBIs is captured in the covariance matrix and is calibrated by comparing the true galaxy ellipticities to those measured from a suite of image simulations generated by \citet{Kannawadi19}. The $m$-bias values vary little from sample to sample but are up to $0.5\sigma$ larger than those of the KiDS-1000 sample \citep{Giblin21}. This led to the discovery of an issue with the way KiDS galaxies are assigned to galaxies in the COSMOS field, which provides us with accurate shape information. We recompute the $m$-bias (Table~\ref{tab:mbias}, Fig.~\ref{fig:mbias}) and find that the revised values are in good agreement with those of the new gold samples. The updated values are also well within the uncertainty on $m$ that was accounted for in \citetalias{Asgari21}.
\begin{table}
\centering
\caption{
The revised multiplicative shear bias $m_{\rm new}$ for the KiDS-1000 sample compared to the original values $m_{\rm old}$ of \citet{Asgari21}.}
\renewcommand{\arraystretch}{1.10}
\label{tab:mbias}
\resizebox{\columnwidth}{!}{
\begin{tabular}{cccccc}
\hline\hline
Bin & Photo-$z$ range & $m_{\rm old}$ & $m_{\rm new}$ & $\sigma_m$ & $\Delta z = z_{\rm est} - z_{\rm true}$ \\
\hline
1 & $0.1 < Z_{\rm B} \leq 0.3$ & $ -0.009$ & $ -0.010$ & $0.019$ & $\hphantom{-}0.000 \pm 0.0096$ \\
2 & $0.3 < Z_{\rm B} \leq 0.5$ & $ -0.011$ & $ -0.009$ & $0.020$ & $\hphantom{-}0.002 \pm 0.0114$ \\
3 & $0.5 < Z_{\rm B} \leq 0.7$ & $ -0.015$ & $ -0.011$ & $0.017$ & $\hphantom{-}0.013 \pm 0.0116$ \\
4 & $0.7 < Z_{\rm B} \leq 0.9$ & $\hphantom{-}0.002$ & $\hphantom{-}0.008$ & $0.012$ & $\hphantom{-}0.011 \pm 0.0084$ \\
5 & $0.9 < Z_{\rm B} \leq 1.2$ & $\hphantom{-}0.007$ & $\hphantom{-}0.012$ & $0.010$ & $-0.006 \pm 0.0097$ \\
\hline
\end{tabular}
}
\renewcommand{\arraystretch}{1.0}
\tablefoot{
We note that the uncertainties $\sigma_m$ remain unchanged. The last column summarises the parameters of the correlated Gaussian redshift priors ($\mu_i \pm \sigma_i$, where $\sigma_i$ is the square root of the diagonal element of the covariance matrix).}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./figures/mbias.pdf}
\caption{
The $m$-bias values calculated for each gold sample per tomographic bin. Additionally, the values used by \citet{Asgari21} are shown in black and their revised values in teal.}
\label{fig:mbias}
\end{figure}
\section{Data}
\label{sec:data}
This paper explores redshift calibration of the KiDS cosmic shear weak lensing sample (Sect.~\ref{sec:data:k1000}) with increasingly deep redshift calibration catalogues. Our fiducial analysis relies exclusively on spectroscopic data which is compiled from a variety of different spectroscopic surveys (Sect.~\ref{sec:data:spec} and App.~\ref{app:compilation}). We then add less accurate redshift estimates derived from narrowband photometry from the Physics of the Accelerated Universe Survey \citep[PAUS,][]{Padilla19} and finally from medium- and broadband photometry from COSMOS2015 (Sect.~\ref{sec:data:phot}). This approach is similar to the construction of the calibration sample for the DES Y3 redshift calibration \citep{Myles21}; however, the KiDS lensing and redshift calibration samples are both covered by the same nine bands, which avoids having to map different photometries via a transfer function.
\subsection{KiDS-1000 photometric data}
\label{sec:data:k1000}
The Kilo-Degree Survey \citep[KiDS,][]{Kuijken15,deJong15,deJong17,Kuijken19} is a public European Southern Observatory (ESO) survey that has been designed particularly with weak gravitational lensing applications in mind. The complete survey will deliver about \SI{1350}{deg^2} of $ugri$ imaging split into an equatorial and a southern field. Combined with $ZYJHK_{\rm s}$ imaging from its companion infra-red survey, the VISTA Kilo-Degree Infrared Galaxy Survey \citep[VIKING,][]{Edge13,Venemans15}, this constitutes a nine-band, matched-depth data-set with primary imaging in the $r$-band, observed at a mean seeing of \SI{0.7}{\arcsec}. This work is based on the fourth data release of KiDS which covers \SI{1006}{deg^2}. The weak lensing source catalogue \citep[KiDS-1000,][]{Giblin21} is divided into five tomographic redshift bins, based on nine-band photometric redshifts (four bins with $\Delta Z_{\rm B} = 0.2$, starting from $Z_{\rm B} = 0.1$, and a fifth bin at $0.9 < Z_{\rm B} \leq 1.2$) computed with \texttt{BPZ} \citep[Bayesian Photometric Redshift,][]{Benitez00}. It contains all objects with non-zero shear measurement weights obtained from {\it lens}fit \citep{Miller07,FenechConti17}, which effectively selects objects with $r$-band magnitudes in the range $20 \leq r \leq 25$.
In addition to the main survey imaging there are observations of six fields dedicated primarily to redshift calibration. These `KiDZ' fields cover approximately \SI{1}{deg^2} each and target areas of the sky also observed by different spectroscopic campaigns, which are summarised in Sect.~\ref{sec:data:spec}. Just like the main survey, KiDZ provides KiDS+VIKING 9-band imaging\footnote{In case of the COSMOS field we instead use existing data in the CFHT (Canada France Hawaii Telescope) $z$-band which is sufficiently similar to the VISTA InfraRed CAMera (VIRCAM) $Z$-band at the Visible and Infrared Survey Telescope for Astronomy (VISTA) given the photometric and redshift calibration uncertainties.} which reaches or exceeds the depth of the main survey. Where the depth exceeds that of the main survey, we homogenise it by adding Gaussian noise to obtain matched photometry.
\subsection{Spectroscopic data for calibration}
\label{sec:data:spec}
The most important spectroscopic campaigns that overlap with the six KiDZ fields are zCOSMOS \citep{Lilly09}, VVDS \citep[VIMOS VLT Deep Survey,][]{LeFevre05,LeFevre13}, DEEP2 \citep{Newman13}, the GAMA \citep[Galaxy And Mass Assembly,][]{Driver11} deep field G15deep \citep{Kafle18,Driver22}, and a compilation of spectra covering the Chandra Deep Field South (CDF-S, App.~\ref{app:compilation}). This data has been used in previous KiDS redshift calibration works \citep{Wright20a,Hildebrandt21}.
We extend this compilation by adding data from C3R2 \citep[Complete Calibration of the Colour-Redshift Relation,][]{Masters17,Masters19,Euclid20,Stanford21}, DEVILS \citep[Deep Extragalactic Visible Legacy Survey,][]{Davies18}, VIPERS \citep[VIMOS\footnote{VIsible Multi-Object Spectrograph} Public Extragalactic Redshift Survey,][]{Scodeggio18} and a variety of spectroscopic campaigns that target the CDF-S and COSMOS fields which are detailed in App.~\ref{app:compilation}. We also revise the selection of sources included for calibration by removing duplicates, both from spatial overlap as well as within the datasets, and by homogenising redshift quality flags based on the original information in the input samples. If, for a given source, there are redshifts from different surveys available, we assign the most reliable measurement based on a specific `hierarchy' of surveys (see App.~\ref{app:compilation} for details). For objects with multiple spectroscopic measurements within a particular survey, we either take the redshift with the highest quality flag or, if various entries for the same source have the same quality flag and the reported redshifts differ by no more than 0.005, we take the average. However, if the reported redshift differences exceed this threshold, we exclude such a source from the compilation. We restrict the selection to objects with high quality spectroscopic redshifts (approximately corresponding to $\geq \SI{95}{\percent}$ confidence or redshift quality code ${\rm nQ} \geq 3$). Fig.~\ref{fig:samples} compares the number of galaxies and their mean redshift for all samples that enter the spectroscopic compilation. These values apply after removing duplicates between overlapping catalogues and only for those objects with photometric coverage in KiDZ (Sect.~\ref{sec:data:k1000}).
\begin{figure}
\centering
\includegraphics[width=0.87\columnwidth]{./figures/sample_size.pdf}
\caption{
Sample size and mean redshift of the different surveys that are part of our spectroscopic compilation after matching to their counterparts in KiDS imaging. Objects with redshifts from multiple sources are assigned to the survey with the most reliable redshift estimate.}
\label{fig:samples}
\end{figure}
\subsection{Photometric data for calibration}
\label{sec:data:phot}
The selection function of spectroscopic redshift samples, driven by targeting choices and redshift success rates, is very different from that of (typically flux-limited) imaging data. Therefore, it is very difficult to obtain a spectroscopic calibration sample that is representative of photometric data in magnitude-\slash colour-space, especially at faint magnitudes. Instead, we additionally include galaxy samples with high quality photometric redshifts to achieve a greater overall coverage of the KiDS data by the calibration sample, which is beneficial for our redshift calibration technique of choice (Sect.~\ref{sec:SOM}).
\subsubsection{COSMOS2015}
The COSMOS2015 catalogue \citep{Laigle16} constitutes a sample of about half a million galaxies in the COSMOS field with precision photometric redshifts\footnote{We use the median of the photo-$z$ likelihood distribution (\texttt{PHOTOZ} column in the catalogue).} derived from up to 30 photometric bands, ranging from near ultra-violet to mid infrared, including 14 medium and narrow band filters. This sample extends to higher redshifts ($z_{\rm max} \approx 6$) and fainter magnitudes than our spectroscopic compilation, but at the cost of less secure redshift estimates with an outlier fraction ranging from \SI{0.5}{\percent} at low redshifts to \SI{13.2}{\percent} for $3 < z < 6$ \citep{Laigle16}.
\subsubsection{PAUS}
The PAUS photometric redshift sample\footnote{Available at \url{cosmohub.pic.es}.} \citep{Alarcon20b} is a combination of 26 optical and near-infrared bands from COSMOS2015 that are matched against observations of the COSMOS field in 40 narrow band filters by the PAU survey. These PAU filters sample the optical regime of \SIrange{450}{850}{nm} at a bandwidth of $\Delta \lambda = \SI{12.5}{nm}$ \citep{Padilla19}, and the combined photometric catalogue is limited to $i_{\rm AB} < 23$. Due to the relatively high spectral resolution of the dataset, a new Bayesian spectral energy distribution (SED) fitting technique is required that accounts for individual emission lines \citep{Alarcon20b}. This allows the PAUS photo-$z$ to achieve a $3\times$ ($1.7\times$) lower photo-$z$ scatter at the bright (faint) end of the magnitude distribution and marginally smaller outlier fractions compared to the original COSMOS2015 photo-$z$ at $i_{\rm AB} < 23$. The scaled photo-$z$ bias is very low, with $|\text{median}(\Delta_z)| < 0.001$ over the whole redshift range of the PAUS sample. This sample therefore positions itself right between the spectroscopic data and COSMOS2015 in terms of completeness and redshift precision.
\subsection{Combined calibration sample}
\label{sec:compilation}
In this work we select data from the full COSMOS2015 photometric catalogue and combine this data hierarchically with PAUS and the spectroscopic data. Finally we match this unified catalogue to the KiDZ imaging to form our redshift calibration sample.
We prepare the full COSMOS2015 photometric catalogue similar to \citet{Laigle16}, that is, we select only those sources which fall into the intersection of the footprint of the COSMOS field (${\tt flag\_cosmos} = 1$) and the UltraVISTA observations (${\tt flag\_hjmcc} = 0$), which provides essential IR spectral coverage. We exclude data from saturated areas ($\mathtt{flag\_peter} = 0$) and additionally remove objects that are classified to be most likely stars (${\tt type} \neq 1$) or have no photo-$z$ estimate. This selection yields about half a million objects.
About 40,000 sources of the PAUS sample are, by design, matched against COSMOS2015 and therefore require no further preparation. We can thus directly combine the spectroscopic compilation, PAUS, and the subset of COSMOS2015 by matching objects within \SI{1}{\arcsec}. We maintain a hierarchy to ensure that we always choose the most reliable redshift estimate available: spec-$z$ supersede PAUS photo-$z$, which supersede the COSMOS2015 photo-$z$. Finally, we assign 9-band KiDS magnitudes to this compilation by matching against the KiDZ data, again within \SI{1}{\arcsec}. This combination of spec-$z$ (in all KiDZ fields), photo-$z$ (only in COSMOS) and KiDS imaging represents our full redshift calibration sample.
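The positional matching can be done with astropy. A sketch of the \SI{1}{\arcsec} nearest-neighbour step (the function and column names are our own):
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord

def match_within(ra1, dec1, ra2, dec2, radius_arcsec=1.0):
    """Nearest-neighbour sky match of catalogue 1 against catalogue 2;
    returns matched indices into catalogue 2 and a success mask."""
    c1 = SkyCoord(ra=ra1 * u.deg, dec=dec1 * u.deg)
    c2 = SkyCoord(ra=ra2 * u.deg, dec=dec2 * u.deg)
    idx, d2d, _ = c1.match_to_catalog_sky(c2)
    return idx, d2d < radius_arcsec * u.arcsec

# hierarchy: spec-z supersede PAUS photo-z, which supersede COSMOS2015
\end{verbatim}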
The method to combine the two photometric redshift samples with our spectroscopic compilation in an hierarchical manner is very similar to the approach taken for the redshift calibration of the DES Y3 data \citep{Myles21}. There are, however, two key differences to their approach. First, our compilation of spectroscopic redshifts covers a much wider range of the colour-redshift space than their selection of spectra. This allows us to construct more representative calibration samples that consist purely of spectroscopic redshifts, photometric redshifts, or a combination thereof. Second, the primary KiDS imaging data and the calibration data are observed in all nine photometric bands at a comparable depth, which simplifies the mapping from galaxy colour to redshift significantly (see Sect.~\ref{sec:redshifts}).
\subsubsection{Primary compilations}
\label{sec:comp_primary}
From this heterogeneous sample with redshift estimates from very different sources we select three nested subsets, each with a higher redshift precision but lower completeness than the previous one:
\begin{itemize}
\item the full compilation (to which we refer as {\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15}),
\item objects with either spectroscopic redshifts and/or PAUS photo-$z$ ({\sl \mbox{spec-$z$}+\allowbreak PAUS}), and
\item our fiducial sample containing only those objects that have spectroscopic redshifts ({\sl \mbox{spec-$z$} fiducial}).
\end{itemize}
The main properties and redshift distributions of these three primary compilations are summarised in Table~\ref{tab:samples} and Fig.~\ref{fig:nz_calib}.
The fiducial sample is already about twice as large as the calibration sample used previously by \citet{Hildebrandt21} to calibrate the KiDS-1000 redshifts. Among the additional spectra, we consider DEVILS and C3R2 to be the most important contributions; the latter is specifically designed to target regions of the galaxy colour-space with currently little spectroscopic coverage. Similarly, {\sl \mbox{spec-$z$} fiducial} already contains about \SI{66}{\percent} of the matched PAUS sources, most of which have redshifts $z < 1$. Due to its limited depth, the {\sl \mbox{spec-$z$}+\allowbreak PAUS} sample therefore presents only a small improvement over the fiducial case. The COSMOS2015 data, on the contrary, nearly doubles the compilation to its final size of about $112,000$ objects. Due to the significantly higher depth of the COSMOS2015 photo-$z$, the fraction of sources with $z > 1$ nearly triples, pushing the mean redshift to $\langle z \rangle \approx 1.0$. Nevertheless, this comes at the cost of a lower redshift accuracy compared to the rest of the sample.
\begin{table}
\centering
\caption{
Number counts and mean redshifts of the original KiDS-1000 redshift calibration sample and different subsets of the new redshift compilation.}
\label{tab:samples}
\input{./other/samples}
\tablefoot{
The full compilation is represented as {\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15}. Values apply after matching with the KiDZ imaging and removing duplicates in the spectroscopic data.}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./figures/nz_calib.pdf}
\caption{
Redshift distribution of different calibration sample subsets (coloured, solid lines) and the one used by \citet{Hildebrandt21} for KiDS-1000 (dashed black line). The bottom panel shows the excess of sources with photometric redshifts contributed by PAUS and PAUS + COSMOS2015 compared to the fiducial spectroscopic sample.}
\label{fig:nz_calib}
\end{figure}
\subsubsection{Secondary compilations}
\label{sec:comp_secondary}
In addition to the three primary compilations we also consider a subset that is restricted to only the most secure spectroscopic redshifts. This {\sl \mbox{spec-$z$} \mbox{nQ $\geq 4$}} sample is, due to the large fraction of shared spectra, closest to the one of \citet{Hildebrandt21}, except that it lacks some low redshift sources (Fig.~\ref{fig:nz_calib}).
Finally, we create three subsets of the full redshift compilation that rely purely on photometric redshift estimates. We achieve this by recompiling the redshift compilation according to Sect.~\ref{sec:compilation} but omitting all spectroscopic redshifts, therefore maintaining the usual hierarchy of PAUS and COSMOS2015 photo-$z$. These are
\begin{itemize}
\item only objects from the PAU survey ({\sl only-PAUS}),
\item all objects with photo-$z$ ({\sl only-PAUS+\allowbreak COS15}), and
\item the pure COSMOS2015 subset ({\sl only-COS15}, also discarding PAUS photo-$z$ from the stack).
\end{itemize}
Since the PAUS sample is essentially a subset of the COSMOS2015 catalogue, the latter two samples are almost identical, except that for \SI{30}{\percent} of the sources the COSMOS2015 photo-$z$ are superseded by the PAUS measurements (see Table~\ref{tab:samples}). The PAUS sample is about half the size of the fiducial spectroscopic compilation and, while achieving a higher completeness at $i_{\rm AB} < 23$, lacks many important faint, high redshift objects.
\section{Conclusion}
\label{sec:conclusion}
We apply the SOM redshift calibration technique \citep{Wright20a} to define and calibrate the redshift distributions of a new set of KiDS-1000 gold samples by adopting a new spectroscopic calibration sample. Compared to previous work by \citet{Hildebrandt21} we double the size of this calibration sample by adding more than ten additional spectroscopic campaigns such as C3R2 and DEVILS, which allows us to calibrate an additional \SI{9}{\percent} of the KiDS galaxies.
We take this one step further by enhancing the calibration sample with precision photometric redshifts from the PAU survey and COSMOS2015, maintaining a hierarchy that prefers spectroscopic over PAUS and COSMOS2015 redshifts to resolve duplicates in the three catalogues. The resulting KiDS gold sample increases by an additional \SI{6}{\percent} and covers nearly \SI{98}{\percent} of all KiDS galaxies in the fifth tomographic bin.
When comparing these gold samples we find changes in the mean redshifts of up to $|\Delta z| = 0.026$ which originate from selection effects in the calibration sample. First, there are residual modifications to the redshift calibration of those KiDS sources that are found in both gold samples. These modifications are a direct consequence of changing the redshift distribution of the calibration sample when adding the photo-$z$. Second, and most importantly, the selection of KiDS sources itself changes, since the faint COSMOS2015 data allows us to calibrate additional galaxies at both low and high redshifts. These results highlight the importance of quantifying and calibrating potential method dependent redshift biases arising from selection effects, as has been shown in previous work. This requires sophisticated galaxy mock data with sufficient redshift coverage, realistic galaxy colours and accurate modelling of photometric and spectroscopic galaxy samples, in particular if one aims to study the impact of photo-$z$ outliers on the calibration sample.
In the second part of this study we perform a cosmic shear analysis using COSEBIs and find \input{other/S8_Fid.tex} for our fiducial and \input{other/S8_plusPAUSCOS15.tex} for the photo-$z$-enhanced gold sample, which is slightly lower than, but still in excellent agreement with, previous work on KiDS-1000 by \citet{Asgari21}. As part of our additional systematic testing we create a third gold sample that we calibrate using only the most secure spectra of our spectroscopic compilation (nQ $\geq 4$). We measure \input{other/S8_nQ4.tex}, which corresponds to an increment of $0.5\sigma$ compared to the fiducial case. We speculate that this sample underestimates the true redshifts and overestimates $S_8$ due to an implicit selection introduced when limiting the calibration to the most secure spectroscopic redshifts \citep{Hartley20}. The completeness and quality of the calibration sample can therefore be balanced against each other; quantifying the impact of this trade-off on the recovered redshift distributions, however, requires realistic simulations.
Finally, we analyse four further KiDS gold samples which are all calibrated from different subsets of our extended calibration sample or using only photo-$z$ from PAUS and COSMOS2015. No matter how we calibrate the KiDS source galaxies, we find that all seven gold samples studied in this work scatter in the range $S_8 = 0.743 \dots 0.766$ around our fiducial analysis. This further confirms previously reported tensions of KiDS with measurements of the CMB by \citet{Planck20} at $3.0\sigma$ to $3.6\sigma$ for a flat $\Lambda$CDM model.
In summary, there seems to be little benefit in using precision photometric redshifts for the SOM redshift calibration, given the excellent coverage of the KiDS data by our spectroscopic compilation alone. The {\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15} gold sample achieves, compared to our fiducial sample, a \SI{6}{\percent} improvement in terms of number density, yet the improvements on the cosmological constraints are marginal. Nevertheless, if the spectroscopic coverage is significantly lower or one wishes to target higher redshifts, photo-$z$ samples are a valuable source of complementary calibration data. The greater the dependence on photometric redshifts, however, the more attention must be paid to redshift outliers to guarantee a good balance between statistical uncertainties and systematic biases in the redshift calibration. Given their challenging calibration requirements, this is particularly true for the next generation of stage IV surveys such as Euclid \citep{Laureijs11} or the Vera C. Rubin Observatory Legacy Survey of Space and Time \citep[LSST,][]{Ivezic19}. Despite photo-$z$ outliers, photometric redshift samples have one significant advantage over spectroscopically selected samples: they achieve a much higher completeness, which can mitigate selection effects in the calibration sample and may improve the ability of the SOM to map out the full extent of colour-redshift-degeneracies.
\section{Results}
\label{sec:results}
In this section we present the new KiDS gold samples and cosmological constraints from an analysis of COSEBIs.
\subsection{New KiDS gold samples}
\label{sec:newgold}
Similar to the calibration samples (Sect.~\ref{sec:compilation}) we divide the gold samples in two categories: primary, which are based on the full compilation of spectroscopic data plus optionally photo-$z$ ({\sl \mbox{spec-$z$} fiducial}, {\sl \mbox{spec-$z$}+\allowbreak PAUS}, and {\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15}, see Sect.~\ref{sec:comp_primary}), and secondary samples, which are calibrated with subsets of the spectroscopic calibration sample ({\sl \mbox{spec-$z$} \mbox{nQ $\geq 4$}}) or by using exclusively photo-$z$ ({\sl only-PAUS}, {\sl only-COS15}, and {\sl only-PAUS+\allowbreak COS15}, see Sect.~\ref{sec:comp_secondary}).
\subsubsection{Primary gold samples}
\label{sec:newgold_mix}
We make a quantitative comparison of the selection of the three primary KiDS gold samples based on the representation fraction of each tomographic bin, i.e. the effective number density of the gold sample compared to the density of the full KiDS-1000 source sample. These are summarised for all gold samples in Fig.~\ref{fig:representation}. The numbers show that our new spectroscopic redshift compilation provides a much greater coverage of the KiDS source sample: our fiducial gold sample has a \SI{9}{\percent} higher accumulated number density than the previous data set calibrated by \citet{Hildebrandt21}, increasing the total representation fraction from \SI{80}{\percent} to \SI{89}{\percent}. In comparison to the former, our {\sl \mbox{spec-$z$} fiducial} gold sample and those constructed by the addition of the PAUS and COSMOS2015 photo-$z$ steadily increase the coverage fraction of KiDS galaxies across all tomographic bins, rising from \SI{73}{\percent}, \SI{81}{\percent}, and \SI{82}{\percent} to \SI{88}{\percent}, \SI{90}{\percent}, and \SI{95}{\percent} in bins 2, 3, and 4, respectively. The fifth tomographic bin, which contributes most of the cosmological signal in the cosmic shear analysis, shows the least change in its representation fraction due to the already very high coverage of \SI{95}{\percent} reported by \citet{Hildebrandt21}.
The only exception to the steadily increasing representation fractions is the first bin of {\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15}, where the number density is about \SI{2.5}{\percent} lower compared to the fiducial case. This is the result of the quality control (Eq.~\ref{eq:qc}) removing some SOM groupings due to discrepancies in $\langle z_{\rm cal} \rangle$ and $\langle z_{\rm B} \rangle$, which arise when adding calibration sources and\slash or changing the gold selection. Expanding the calibration sample may shift $\langle z_{\rm cal} \rangle$ significantly, in particular in sparsely occupied SOM groupings, such that $\langle z_{\rm cal} \rangle - \langle z_{\rm B} \rangle$ exceeds the quality control threshold $5\sigma_{\rm mad}$, which will flag and exclude the corresponding KiDS galaxies from the gold sample.
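For readers wishing to reproduce such numbers, the short sketch below illustrates how a representation fraction can be computed from lensing weights. We stress that it is a sketch assuming the standard weight-based effective number density estimator and synthetic toy weights; the values in Fig.~\ref{fig:representation} are computed with Eq.~(C.12) of \citet{Joachimi21}, which may differ in detail.
\begin{verbatim}
import numpy as np

def n_eff(weights, area_arcmin2):
    # Weight-based effective number density per arcmin^2
    # (a Heymans et al. 2012-style estimator is assumed here;
    # the paper uses Eq. C.12 of Joachimi et al. 2021).
    w = np.asarray(weights, dtype=float)
    return (w.sum()**2 / (w**2).sum()) / area_arcmin2

def representation_fraction(w_gold, w_full, area_arcmin2):
    # Ratio of gold-sample to full-sample effective density.
    return n_eff(w_gold, area_arcmin2) / n_eff(w_full, area_arcmin2)

# Toy example with synthetic lensing weights:
rng = np.random.default_rng(1)
w_full = rng.uniform(0.1, 1.0, size=100_000)
keep = rng.uniform(size=w_full.size) < 0.9   # a gold cut keeping ~90%
print(representation_fraction(w_full[keep], w_full, 1000.0))  # ~0.9
\end{verbatim}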
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{./figures/representations.pdf}
\caption{
Representation fractions, the effective number density of the different gold samples relative to the full KiDS-1000 source sample, per tomographic bin. The effective number density factors in the lensing weight of each object and is calculated according to Eq.~(C.12) in \citet{Joachimi21}.}
\label{fig:representation}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{./figures/nz_compare_abs.pdf}
\caption{
Comparison of the gold sample redshift distributions and their mean redshifts for all tomographic bins obtained from the different subsets of the calibration sample. We note that they all describe different subsets of the full KiDS-1000 source sample.}
\label{fig:nz_compare}
\end{figure*}
The SOM redshift distributions of the gold samples are shown in Fig.~\ref{fig:nz_compare}; note that these samples do not represent the same galaxies. A comparison reveals two effects when adding photo-$z$ to the calibration sample: First, the bulk of the redshift distributions is skewed to lower redshifts as we add more data to the calibration sample, which is most evident in the third tomographic bin. Second, COSMOS2015 adds a significant portion of high-redshift objects to the compilation that extends the coverage of KiDS galaxies to higher redshifts, enhancing the tails of the redshift distributions and significantly increasing the mean redshifts, in particular of bin 5.
Finally we compare the calibration sample redshift distributions in each tomographic bin to the resulting gold sample redshift distributions (Fig.~\ref{fig:nz_somweight_groups}). For most of the samples these distributions have very similar shapes, with mean redshifts agreeing within $\pm 0.02$, which indicates that the re-weighting (Eq.~\ref{eq:som_weight}) of the SOM groupings is very small on average. This changes once the COSMOS2015 redshifts are added to the calibration sample. Due to their significantly higher depth and mean redshifts, the redshift tails must be down-weighted significantly (up-weighted in bins 1 and 2) to match the density of the KiDS source sample. The down-weighting of the low-redshift tails in the upper three bins can be explained by the fact that COSMOS2015 adds faint galaxies at these redshifts. The corresponding KiDS galaxies have a low lensing weight, which must be compensated by the SOM cell weights.
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{./figures/nz_SOMweight_Fid_plusPAUSCOS15.pdf}
\caption{
Comparison of the tomographically binned calibration sample (black lines) to the gold sample redshift distributions (coloured lines). The greater the difference between the black and the coloured lines, the more weighting is applied by the SOM to match the calibration sample to the KiDS data.}
\label{fig:nz_somweight_groups}
\end{figure*}
\subsubsection{Secondary gold samples}
\label{sec:newgold_phot}
We calibrate the first of our secondary gold samples using only those galaxies of the spectroscopic compilation that have the most secure spectroscopic redshifts, with at least \SI{99}{\percent} confidence. Due to its similarity to the SOM calibration sample used by \citet[see Sect.~\ref{sec:compilation}]{Hildebrandt21}, this {\sl \mbox{spec-$z$} \mbox{nQ $\geq 4$}} gold sample falls between the latter and the {\sl \mbox{spec-$z$} fiducial} sample in terms of representation fractions. In tomographic bins 4 and 5, however, it lacks some of the high-redshift sources due to the more conservative spectroscopic flagging, reducing both the mean redshifts (Fig.~\ref{fig:nz_compare}) and the representation fractions (Fig.~\ref{fig:representation}).
The remaining three gold samples rely exclusively on photometric redshifts from PAUS and COSMOS2015 (Sect.~\ref{sec:comp_secondary}). Due to the large overlap of sources, both samples that contain the COSMOS2015 photo-$z$ ({\sl only-COS15} and {\sl only-PAUS+\allowbreak COS15}) achieve representation fractions that are just \SI{2}{\percent} smaller than those of the full redshift compilation, again due to the quality control (Eq.~\ref{eq:qc}). As seen for {\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15}, the effective density in the first bin is significantly reduced. The {\sl only-PAUS} gold sample exhibits significantly suppressed tails in bins 4 and 5 due to a lack of high-redshift sources that only the much deeper COSMOS2015 provides. This also imprints on the representation fractions, which are similar to the fiducial gold sample in bins 1 and 2 but are about \SI{10}{\percent} lower in bins 3 and 4 and \SI{18}{\percent} lower in the fifth tomographic bin. This example highlights the ability of the SOM to flag and remove sources that cannot be calibrated by the DIR approach with the particular calibration sample.
\subsection{Cosmological constraints}
\label{sec:constraints}
We present cosmological results for our primary KiDS gold samples, focusing on a relative comparison to \citetalias{Asgari21} and other literature values. We summarise the numerical values of the most relevant cosmological parameters in Table~\ref{tab:params}. In Figs.~\ref{fig:S8_Om} and~\ref{fig:S8} we highlight comparisons of the derived parameter $S_8 = \sigma_8 (\Omega_{\rm m} / 0.3)^{0.5}$, the primary observable of weak lensing given the degeneracy between $\Omega_{\rm m}$, the dimensionless matter density parameter, and $\sigma_8$, which parameterises the amplitude of the linear matter power spectrum.
\begin{table*}
\centering
\caption{
Summary of the main cosmological parameter constraints (best fit and 68th-percentile PJ-HPD) from COSEBIs for all gold samples and their comparison to \citet{Asgari21} and Planck legacy (TT, TE, EE + lowE).}
\renewcommand{\arraystretch}{1.33}
\label{tab:params}
\input{./other/params}
\renewcommand{\arraystretch}{1.0}
\tablefoot{
Additionally shown are the $\chi^2$ values (for $70.5$ effective degrees of freedom) and $\Sigma_8 = \sigma_8 (\Omega_{\rm m} / 0.3)^\alpha$ calculated for \protect\input{other/alpha_fid.tex}, determined from a fit to the posterior samples of the fiducial chain.}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./figures/cosmo_Om_S8.pdf}
\caption{
Marginalised constraints for the joint distributions of $S_8$ and $\Omega_{\rm m}$ (\SI{68}{\percent} and \SI{95}{\percent} credible regions) obtained for different gold samples and Planck legacy (TT, TE, EE + lowE). Since the contours represent different galaxy samples, some deviation is expected.}
\label{fig:S8_Om}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./figures/cosmo_S8_compare_main.pdf}
\caption{
Comparison of the $S_8$ constraints from our gold samples to other studies. We show the best fit (where available) and 68th-percentile PJ-HPD (circles, opaque data points) and the maximum of the marginal distribution and the associated 68th-percentile (diamonds, semi-transparent). We compare to \citet{Asgari21}, HSC-Y1 \citep{Hikage19}, DES-Y3 \citep{Amon21}, and Planck legacy. The coloured vertical line and outer bands indicate the constraints from the fiducial gold sample, the inner bands the expected variance of the sampler.}
\label{fig:S8}
\end{figure}
First we ensure that our wrapper for the cosmological pipeline delivers results that are consistent with those of \citetalias{Asgari21}. We re-analyse the original KiDS-1000 data, running our pipeline with the same input parameters (`Asgari reanalysis'). We find that the constraints on all cosmological parameters agree within the expected variance of the Monte-Carlo sampler. The best-fit solution has a slightly larger $\chi^2$ of $83.8$, compared to $82.2$ for KiDS-1000, which is driven primarily by the slight change in the data vector (see App.~\ref{app:K1000diff}). Similarly, the correction of the $m$-bias values (Table~\ref{tab:mbias}) has no significant impact on the cosmological constraints (Table~\ref{tab:params_A21}).
\subsubsection{Primary gold samples}
For our KiDS gold samples we find a tendency towards lower $S_8$ values, albeit at low significance (Figs.~\ref{fig:S8_Om}, \ref{fig:S8} and Table~\ref{tab:params}). In particular we obtain \input{other/S8_Fid.tex} for the {\sl \mbox{spec-$z$} fiducial} and \input{other/S8_plusPAUS.tex} for the {\sl \mbox{spec-$z$}+\allowbreak PAUS} gold samples, both of which are about 0.4σ lower than \citetalias{Asgari21} and exceed the expected statistical variance of the sampler (about 0.1σ). Some deviation between these gold samples is expected, since they all represent different galaxy selections. The goodness of fit improves for all gold samples, in particular for {\sl \mbox{spec-$z$} fiducial}, where it reduces to $\chi^2 \approx 63$ compared to 82 in KiDS-1000. The {\sl \mbox{spec-$z$}+\allowbreak PAUS+\allowbreak COS15} sample, on the other hand, is in very good agreement with the original KiDS-1000 analysis.
In addition to $S_8$ we consider the more general $\Omega_{\rm m}$-$\sigma_8$-degeneracy case of $\Sigma_8 = \sigma_8 (\Omega_{\rm m} / 0.3)^\alpha$, where $\alpha$ is a free parameter. We estimate the reference value for $\alpha$ by fitting the posterior samples of the fiducial chain and obtain \input{other/alpha_fid.tex} which is close to $\alpha=0.54$ for COSEBIs in \citetalias{Asgari21}. This projection optimises the signal-to-noise-ratio compared to $S_8$ and we find $\Sigma_8 = 0.744_{-0.021}^{+0.017}$ for {\sl \mbox{spec-$z$} fiducial} compared to $\Sigma_8 = 0.751_{-0.016}^{+0.024}$ which we obtain for \citetalias{Asgari21} with $\alpha=\alpha_{\rm fid}$. Furthermore, the scatter of $\Sigma_8$ for the different gold samples is significantly smaller than the scatter in $S_8$ (see Table~\ref{tab:params}).
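As an illustration of such a fit, the sketch below recovers $\alpha$ from posterior samples by a linear regression of $\ln\sigma_8$ against $\ln(\Omega_{\rm m}/0.3)$. This is one plausible realisation of the fit, not necessarily the exact procedure used for $\alpha_{\rm fid}$, and the mock samples are synthetic.
\begin{verbatim}
import numpy as np

def fit_alpha(omega_m, sigma_8):
    # ln(Sigma_8) = ln(sigma_8) + alpha * ln(Omega_m / 0.3), so alpha
    # is minus the slope of ln(sigma_8) against ln(Omega_m / 0.3).
    x = np.log(np.asarray(omega_m) / 0.3)
    y = np.log(np.asarray(sigma_8))
    return -np.polyfit(x, y, 1)[0]

# Mock posterior scattered along a degeneracy with alpha_true = 0.54:
rng = np.random.default_rng(2)
om = rng.uniform(0.2, 0.45, size=20_000)
s8 = 0.75 * (om / 0.3)**(-0.54) * np.exp(rng.normal(0.0, 0.01, om.size))
print(f"alpha = {fit_alpha(om, s8):.2f}")   # ~0.54
\end{verbatim}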
If we compare the marginal errors of $S_8$ for the different gold samples (Table~\ref{tab:params_marginal}), which have a smaller statistical variance than the PJ-HPD, we find that the constraints improve by \SI{5}{\percent} when including PAUS and another \SI{4}{\percent} when including COSMOS2015. Since the constraints on $\Sigma_8$ are almost constant, these changes in $S_8$ are most probably related to small changes in the $\Omega_{\rm m}$-$\sigma_8$-degeneracy. We also compare $A_{\rm IA}$, which is the dimensionless amplitude of the intrinsic alignment galaxy power spectrum, and find that its value is stable within the uncertainties in all our analyses.
\section{Introduction}\label{sec.intro}
Following the first detection of an exoplanetary atmosphere around the hot-Jupiter HD209458 b \citep{Charbonneau2001}, there has been a plethora of studies aimed at better understanding the atmospheres of exoplanets. Whether a planet can retain an atmosphere for a significant time period is essential for its habitability, as atmospheres regulate the planetary surface temperature, as well as absorb harmful solar or cosmic rays \citep[e.g.,][]{1993Icar..101..108K, 2009A&ARv..17..181L}. Consequently, studies regarding the stability of planetary atmospheres are critical in the ongoing searches for habitable planets.
A planet that is too close to its host star, however, receives intense high-energy (i.e., X-rays and ultraviolet, UV) irradiation, which can cause atmospheres to heat, expand and `evaporate'. For many years now, the Hubble Space Telescope (HST) has been a key instrument for observing escaping planetary atmospheres in the UV \citep[e.g.][]{2003Natur.422..143V}. Major advancements in the field can be expected with new missions soon to be launched, such as the Colorado Ultraviolet Transit Experiment \citep{2017SPIE10397E..1AF}, and the James Webb Space Telescope. Predicting prime candidates for future observations of escaping atmospheres is therefore important both to guide these observations as well as for interpreting their findings.
Strong atmospheric escape has been detected in several close-in exoplanets so far using transmission spectroscopy in the UV \citep[e.g.,][]{2003Natur.422..143V,2010ApJ...720..872F, 2014ApJ...786..132K, 2015Natur.522..459E, 2016A&A...591A.121B}. In particular, as the composition of gas giants consists predominantly of hydrogen, their atmospheres show strong absorption in the Ly$\alpha$ line. Unfortunately, the centre of this line is contaminated by emission from our geocorona and can be significantly absorbed by neutral hydrogen in the interstellar medium \citep{2002ApJ...574..412W, Ben-Jaffel2008}. This renders the line core unusable, and the signatures of escaping atmospheres can only be observed in the wings of the Ly$\alpha$ line.
Another promising line for conducting transmission spectroscopy is H$\alpha$, as this line can be observed with ground-based spectrographs and it does not suffer contamination at the line centre, thus allowing us to probe the accelerating material at the bottom of the escaping atmosphere. However, as the number of hydrogen atoms initially in the first excited state ($n=2$) can be much lower than the population at ground level ($n=1$), transit depths in H$\alpha$ are expected to be smaller than in Ly$\alpha$ transits \citep{2012ApJ...751...86J, 2015ApJ...810...13C}. Another complication is that an active host star shows variability in H$\alpha$, making it harder to disentangle signatures from the star and from the planetary atmosphere \citep{2017AJ....153..217C}. Recently, escaping atmospheres were detected using the metastable helium triplet at $\sim 10830$\AA\ \citep{2018Sci...362.1388N,2019A&A...623A..58A, 2019arXiv190713425A}, opening up new possibilities for probing planetary atmospheres with ground-based instrumentation in the infrared, such as with CARMENES \citep{2014SPIE.9147E..1FQ} and SpIRou \citep{2018AJ....155...93C}.
The reason why atmospheres of close-in giants are more susceptible to strong evaporation relies on two factors: 1) the intense high-energy irradiation they receive, due to their close proximity to the host star, and 2) their relatively low gravitational potential, which determines how well a planet can `hold on' to its atmosphere. Young exoplanets are more susceptible to escape, because young stars have higher X-ray and UV fluxes \citep{2005ApJ...622..680R, 2015ApJ...815L..12J} and because planets at young ages are more `puffed up', and thus have a lower gravitational potential. As the system evolves and the high-energy flux decreases with time, escape rates are expected to decrease.
In this paper, we model the evolution of atmospheric escape on close-in giant planets. In particular, we focus on two different types of gas giants: a Jupiter-mass planet and a Saturn-mass planet (1 and 0.3 $M_{\rm jup}$, respectively). In both cases, we consider these planets to be at an orbital distance of 0.045 au or, equivalently, at a $3.5$~day orbit around a solar mass star. This distance is just inside the short-period `Neptunian desert', which shows a lack of planets with masses between 0.03 and 0.3 $M_{\rm jup}$ at orbital periods $\lesssim 5$--$10$ days \citep{2016A&A...589A..75M}. The paucity of detected planets within this desert is suggestive of significant mass loss in planets with masses $\lesssim 0.3 ~M_{\rm jup}$. This will indeed be demonstrated in our results.
Our paper is presented as follows. Section \ref{fEUVandrads} presents the two main inputs required in our study of the evolution of atmospheric escape: a prescription for how the high-energy flux of the host star evolves during the main-sequence phase and a prescription describing the contraction of the planet after its formation. These two factors are then considered in our atmospheric escape models, which are described in Section \ref{comp}, along with a description of the ray tracing technique used to simulate the spectroscopic transits. Our results are shown in Section \ref{evol_results_1}: for each age ranging from 10 to 5000~Myr, we calculate the physical properties of the escaping atmospheres and predict their Ly$\alpha$ and H$\alpha$ transit profiles, which can be compared to observations. Section \ref{CONCLUSIONS} presents a discussion of our results and our conclusions.
\section{Evolution of planetary radii and stellar EUV Flux} \label{fEUVandrads}
The evolution of atmospheric escape of a close-in planet depends on two important factors:
\begin{enumerate}
\item as the host star evolves, its activity declines due to spin down, resulting in declining fluxes in the extreme ultraviolet (EUV) and
\item as the planet evolves, cooling causes it to contract with time.
\end{enumerate}
Therefore, to simulate the evolution of a close-in giant, we couple evolving predictions of EUV flux \citep{Johnstone2015a} and planetary radius \citep{Fortney2010} with our hydrodynamic escape model, which will be described in Section \ref{comp}. The evolution of these input parameters are shown in Figure \ref{fig.fEUVandrad} and described next.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figs/fig1.png}
\caption{Top: EUV flux received at an orbital distance of 0.045 au from a solar-like star with respect to planetary age. Line colour indicates stellar rotation: slow (red), intermediate (green), fast (blue). From \citet{Johnstone2015a}. Bottom: Planetary radius with respect to age for both a 1-$M_{\rm jup}$ (magenta) and a 0.3-$M_{\rm jup}$ (orange) close-in giant orbiting a solar-like star at 0.045 au. From \citet{Fortney2010}. The black diamonds show our sampled increments. }
\label{fig.fEUVandrad}
\end{figure}
The EUV emitted from the host star originates from its magnetically heated, chromospheric and coronal plasma \citep[e.g.,][]{jardine}. Given that magnetic activity is linked to rotation, the EUV flux depends on the host star's rotational evolution. In rotational evolution studies, it is common to separate stars in different rotational tracks: those representing stars born as slow, intermediate or fast rotators \citep{2013A&A...556A..36G,2015A&A...577A..28J}. Due to the rotation-activity relations, stars that are born as slow rotators have overall lower EUV flux throughout their lives than those that were born as fast rotators \citep{2015A&A...577L...3T}. As the host star evolves, its magnetic activity declines due to spin down \citep{2014MNRAS.441.2361V}. At $\sim$ 1 Gyr, the rotational evolution paths for solar-mass stars converge to a unique solution \citep{1972ApJ...171..565S, 2013A&A...556A..36G}. This convergence is thus also predicted to occur in the evolution of the EUV flux ($F_{\rm EUV}$).
Here, similarly to \citet{Johnstone2015a}, we consider three host star rotational evolutions, resulting in three differing EUV flux evolutions for fast, intermediate and slow rotators (Figure \ref{fig.fEUVandrad}, top). This allows us to study the influence of the initial conditions of the star's rotation on atmospheric escape of an orbiting close-in giant.\footnote{The values predicted by \citet{Johnstone2015a} result in $F_{\rm EUV}=$ 6.7 erg s$^{-1} {\rm cm}^{-2}$ at 1 au assuming a solar-like star at the approximate solar age. To match the observed flux of 1 erg s$^{-1}{\rm cm}^{-2}$ from \citet{2005ApJ...622..680R}, we normalise the curves in \citet{Johnstone2015a} by dividing them by 6.7. The flux of 1 erg s$^{-1}{\rm cm}^{-2}$ is also the one adopted in the fiducial case of \citet{Murray-Clay2009}, on which our hydrodynamic model is based.}
A planetary evolution model is required to calculate the change in planetary radius $R_{\rm pl}$ with time. Here, we use the results from \citet{Fortney2010} to describe the evolution of a 1-$M_{\rm jup}$ planet (magenta line in the bottom panel of Figure \ref{fig.fEUVandrad}) and of a 0.3-$M_{\rm jup}$ planet (orange line). These curves represent the calculations with no core in \citet{Fortney2010}, for planets orbiting a solar-like star at 0.045 au, and they do not account for mass loss. Ideally, as the atmosphere is removed from the planet, one would also need to take into account the reduction in the total mass of the planet, and consequently the corresponding change in radius. For example, this could be done by coupling atmospheric losses to planetary evolution models in a self-consistent way. This is left for a future work. Although mass loss is not self-consistently considered in the adopted radius evolution curves, it is unlikely that this would affect the 1-$M_{\rm jup}$ planet, as the total mass lost represents a small fraction of the total planetary mass, as we will see further on. However, this correction is more relevant for the 0.3-$M_{\rm jup}$ planet, as lower-mass planets have stronger evaporation. We will discuss this further in Section \ref{evol_results_1}.
\section{Atmospheric escape and transit calculations}\label{comp}
\subsection{Hydrodynamic escape model} \label{WindmodelSection}
Because of the strong mass-loss rates inferred in observations ($> 10^9$g/s, \citealt{2011A&A...529A.136E}), atmospheric escape in close-in planets can be described using a fluid approximation. For escape to be considered in this regime, the atmosphere has to be collisional, as indicated by the Knudsen number, $K_n={\lambda_{\rm mfp}}/{h}$, the ratio between the mean free path of an atmospheric particle $\lambda_{\rm mfp}$ and the atmospheric scale height $h$. The motion of a planetary atmosphere for which $ K_n \ll 1 $ is best described by a fluid, in an escape process known as hydrodynamic escape. This escape process occurs when heating in the collisional region of an atmosphere causes an upward pressure gradient force, which drives a bulk, radial outflow \citep{catling_kasting_2017}.
Our numerical model is based on the model presented in \citet{Murray-Clay2009}. We treat the escaping atmosphere as a fluid, solving the equations of fluid dynamics in a co-rotating frame. These coupled differential equations are ionisation balance as well as the conservation of mass, energy and momentum. To achieve convergence of the wind solution, we use a shooting method based on the model of \citet{2006ApJ...639..416V}.
In steady state, the momentum equation in spherical symmetry can be written as
\begin{equation}\label{eq:con_mom}
{u}\frac{d {u}}{d r } = -\frac{1}{\rho}\frac{d P}{d r} - \frac{G M_{ \rm pl}}{r^2}+\frac{3GM_*r}{a^3} ,
\end{equation}
where $r$ is the radial coordinate from the centre of the planet, $\rho$ and $u$ represent the atmospheric mass density and velocity, respectively, and $P$ is the thermal pressure. The first term on the right side of Equation (\ref{eq:con_mom}) is the thermal pressure gradient while the second term represents attraction due to gravity. This equation is analogous to the momentum equation for a stellar wind \citep{1958ApJ...128..664P} with an additional term on the right side due to tidal effects. The tidal term is the sum of the centrifugal force and differential stellar gravity along the ray between the planet and star \citep{muno}. In this paper, we use the terms `hydrodynamic escape' and `planetary wind' interchangeably.
Conservation of energy requires that
\begin{equation}
\rho {u} \frac{d}{d r} \left[\frac{k_BT}{(\gamma - 1)m} \right] = \frac{k_BT}{m}{u} \frac{d\rho}{d r }+Q-C ,
\end{equation}
where $k_B$ is the Boltzmann constant, $T$ the temperature, $\gamma = {5}/{3}$ is the ratio of specific heats for a monatomic ideal gas and $m$ is the mean particle mass, which ranges from $0.5m_H$ to $1m_H$ for a fully ionised to fully neutral atomic hydrogen plasma, respectively. Here, $m_H$ is the mass of atomic hydrogen. The term on the left indicates the change in the internal energy of the fluid. The first term on the right represents cooling due to gas expansion, while the second ($Q$) and third ($C$) are heating and cooling terms. This equation is, again, typically used in stellar wind models, although with different heating and cooling terms \citep{2006ApJ...639..416V}.
Our planetary wind model assumes that the EUV flux is concentrated at one photon energy $e_{\rm in}=20$ eV, thus the volumetric heating rate [erg/s/cm$^3$] can be written as $Q=\epsilon F_{\rm EUV}e^{-\tau} \sigma_{\nu_0}n_{n}$, where $F_{\rm EUV}$ is the EUV flux received by the planet, $\tau$ the optical depth to ionising photons, $n_n$ the number density of neutral hydrogen and $\sigma_{\nu_0}$ is the cross section for the photoionisation of hydrogen given by
$\sigma_{\nu_0}=6\times 10^{-18} \left( {e_{\rm in}}/{13.6 {\rm eV}}\right)^{-3}=1.89 \times 10^{-18}{\rm cm}^2$
\citep{spit}. Of the total received energy flux $F_{\rm EUV}$, a fraction $\epsilon$ is converted to thermal energy to heat the atmosphere. In our simulations, we follow the approach by \citet{Murray-Clay2009} and assume $\epsilon= 1- {13.6 ~\rm{eV}}/{e_{\rm in}}=0.32$.
We assume that the escaping atmospheric gas cools by radiative losses resulting from collisional excitation, also known as Ly$\alpha$ cooling. The volumetric cooling rate is $ C= 7.5 \times 10^{-19} n_+ n_n \exp[{-1.183\times 10^5}/{T}]$, where $n_+$ is the number density of protons, which is equivalent to the number density of electrons in a purely hydrogen plasma.
The ionisation balance is given by
\begin{equation}\label{eq: ion_bal}
\frac{n_n F_{\rm EUV}e^{-\tau}\sigma_{\nu_0}}{e_{\rm in}}= n_+^2\alpha_{\rm rec}+\frac{1}{r^2}\frac{d}{d r}(r^2n_+{u}) ,
\end{equation}
where the rate of photoionisations (first term) balances the sum of the rates of radiative recombinations (second term) and of the advection of ions (third term). Here, $\alpha_{\rm rec} = 2.7 \times 10^{-13} (T/10^4)^{0.9}$ is the case B radiative recombination coefficient for hydrogen ions \citep{95, 2006agna.book.....O}. The ionised fraction is given as $F_{\rm ion}={n_+}/{n_H}$, where $n_H=n_+ + n_n$ is the total number density of hydrogen.
Finally, the conservation of mass requires that
\begin{equation}\label{eq: cons_mass_1st}
\frac{d (r^2 \rho {u}) }{d r } = 0 .
\end{equation}
Our simulated planetary outflow originates from the sub-stellar point of the planet, which is the point closest to the star. We then apply our calculated solution over 4$\pi$ steradians rendering it an upper limit to atmospheric escape \citep{Murray-Clay2009, 2015ApJ...815L..12J}. Therefore, from Equation (\ref{eq: cons_mass_1st}), we have that the mass-loss rate of the escaping atmosphere is $\dot{M} = 4\pi r^2 \rho {u} $.
Similarly to stellar wind theory, the initially subsonic flow is accelerated transonically until it reaches an asymptotic speed, known as the terminal velocity $u_{\rm term}$. Convergence is reached once the relative changes in the mass-loss rate and terminal velocity between two subsequent iterations are below 1\%. We then calculate the Knudsen number $K_n$ for each run of our simulations, so as to ensure that the atmosphere remains collisional throughout all our simulations.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{figs/fig2.png}
\caption{Radial profiles of various atmospheric properties for two simulated close-in planets with masses $1~M_{\rm jup}$ (magenta, solid line) and $0.3~M_{\rm jup}$ (orange, dashed line) and radius $1.1~R_{\rm jup}$. The planets orbit a solar-like star at 0.045 au and receive an incident flux $F_{\rm EUV} = 480$ erg cm$^{-2}$ s$^{-1}$, consistent with a 4.5 Gyr-old star. Left panels, from top to bottom: wind velocity $u$, temperature $T$, total mass density $\rho$. Right panels, from top to bottom: fraction of ionised hydrogen $F_{\rm ion}$, optical depth to ionising photons $\tau$ and thermal pressure $P$. Optical depth $\tau=1$ is reached at $\sim 1.1 R_{\rm pl}$ for the cases presented above. The circle and cross indicate the sonic point and Roche lobe boundary, respectively.
\label{fig:fiducial}}
\end{figure}
Figure \ref{fig:fiducial} shows typical solutions of our simulations, where we show radial profiles of various atmospheric parameters for $1~M_{\rm jup}$ (magenta) and $0.3~M_{\rm jup}$ (orange) planets orbiting a solar mass star at 0.045 au. The adopted radii for these illustrative simulations are $1.1~ R_{\rm jup}$. For both these planets, we assume an $F_{\rm EUV} = 480$ erg cm$^{-2}$ s$^{-1}$. In addition to these parameters, which determine the physical properties of the planetary systems, our hydrodynamic models require values of the temperature $T_0$ and density $\rho_0$ at the base of the escaping atmosphere (here, assumed at 1 $R_{\rm pl}$). For all of our simulations, we assume these parameters are $T_0=1000$ K and $\rho_0=4 \times 10^{-13}$ g cm$^{-3}$, similar to values adopted in \citet{Murray-Clay2009}. Note that, although $T_0$ and $\rho_0$ are free parameters of the model, \citet{Murray-Clay2009} demonstrated that large variations in these values had negligible effect on the resulting simulated escape. We confirmed their results with our models as well.
The left panels of Figure \ref{fig:fiducial} show profiles of planetary wind velocity $u$, temperature $T$, and total mass density $\rho$, while the right panels show fraction of ionised hydrogen $F_{\rm ion}$, optical depth to ionising photons $\tau$ and thermal pressure $P$. In each of the panels, the cross represents the Roche lobe boundary
\begin{equation}
R_{\rm Roche} = \frac{a}{R_{\rm pl}}\left(\frac{M_{\rm pl}}{3M_{*}}\right)^{1/3} .
\end{equation}
Atmospheric particles are no longer gravitationally bound to the planet for $r>R_{\rm Roche}$, with stellar tidal gravity dominating over planetary gravity. Note that each of our calculations extend out to 10 $R_{\rm pl}$, which is above $R_{\rm Roche}$. The circle indicates the sonic point $R_{\rm sonic}$, which is the distance at which the planetary wind reaches the sound speed $c_s = \sqrt{{5 k_BT}/{ 3 m }}$. It is evident from the velocity profile that our simulated wind becomes transonic within the Roche lobe boundary. This is also the case for our simulated planets featured in the subsequent sections during all evolutionary stages.
Figure \ref{fig:fiducial} shows a steep drop in density and pressure with simultaneous sharp rises in $T$ and $F_{\rm ion}$, occurring below $1.5 R_{\rm pl}$. These strong variations in the atmospheric structure are the result of strong absorption of stellar EUV radiation at this distance (region with high optical depth $\tau$), which leads to the photoionisation of atmospheric hydrogen, as seen in the dramatic increase of $F_{\rm ion}$ from $\simeq 0$\% to $75\%$ ($30\%$) at $1.5 R_{\rm pl}$ for a 1-$M_{\rm jup}$ (0.3-$M_{\rm jup}$) planet. Ejected free electrons then heat the planetary atmosphere through collisions, causing the sharp temperature increase. The high temperatures cause the atmosphere to expand and escape hydrodynamically.
At large distances, the temperature drops due to the expansion of the atmosphere (adiabatic cooling). This energy sink dominates over heating once the atmosphere becomes sufficiently ionised. Ly$\alpha$ cooling also acts against the heating and is at its strongest at higher atmospheric temperatures due to increased collisional excitation \citep{Salz2016b}.
Comparing the curves for the 1-$M_{\rm jup}$ and 0.3-$M_{\rm jup}$ planets in Figure \ref{fig:fiducial}, we see that the wind of the more massive planet has lower velocities due to its higher gravitational potential. Its atmosphere is also more rarefied, as seen in the density and optical depth curves. Its temperature, nevertheless, reaches higher values, leading to a more ionised atmosphere. The mass-loss rate of the 1-$M_{\rm jup}$ planet is $1.7 \times 10^{9}$ g s$^{-1}$, 26 times lower than the mass-loss rate of $4.4 \times 10^{10}$ g s$^{-1}$ for the 0.3-$M_{\rm jup}$ planet. Altogether, the lower gravitational potential of the less massive planet results in stronger planetary winds, with higher velocities and mass-loss rates.
\subsection{Comparison to energy-limited escape}
The energy limited approximation \citep{Wats} is often adopted for calculating mass-loss rates of atmospheric escape in place of more computationally heavy hydrodynamic models.
This approximation can be derived as follows. We first assume energy balance between the final kinetic energy of the planetary wind and the input energy due to irradiation
\begin{equation}
E_{\rm kinetic, output} = E_{\rm irradiation, input}
\end{equation}
For the input energy, a fraction $\epsilon$ of the stellar high-energy flux $F_{\rm EUV}$ intercepted by a planet with a cross section of $\pi R_{\rm pl}^2$ is assumed to drive a flow that reaches a kinetic energy $ \dot{M} u_{\rm term}^2/2$, where $u_{\rm term}$ is the terminal velocity:
\begin{equation} \label{eq.el}
\frac{\dot{M} u_{\rm term}^2}{2} = \epsilon F_{\rm EUV} (\pi R_{\rm pl}^2) \, .
\end{equation}
where $\dot{M}$ is the mass-loss rate of the planetary wind. Further assuming that the terminal velocity $u_{\rm term}$ is of the order of the surface escape velocity $v_{\rm esc} = (2 G M_{\rm pl}/R _{\rm pl})^{1/2}$, the mass-loss rate derived in the energy-limited approximation is
\begin{equation} \label{eq: E_lim_simple}
\dot{M}_E = {\epsilon} \frac{ F_{\rm EUV} (\pi R_{\rm pl}^2) }{ GM_{\rm pl}/R _{\rm pl}} .
\end{equation}
Using an efficiency of $\epsilon=0.32$, for the 1-$M_{\rm jup}$ and 0.3-$M_{\rm jup}$ planets presented in Figure 1, we find energy limited mass-loss rates of $\dot{M}_E = 1.9 \times 10^9$ g s$^{-1}$ and $6.2 \times 10^{9}$ g s$^{-1}$, respectively.
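These numbers can be verified directly from Equation (\ref{eq: E_lim_simple}); the short sketch below (cgs constants assumed) reproduces them:
\begin{verbatim}
import math

# Energy-limited rate: Mdot_E = epsilon * F_EUV * pi * R^3 / (G M).
G = 6.674e-8                   # [cm^3 g^-1 s^-2]
F_euv = 480.0                  # [erg cm^-2 s^-1]
epsilon = 0.32
R_pl = 1.1 * 7.149e9           # [cm]
M_jup = 1.898e30               # [g]
for m in (1.0, 0.3):
    mdot_E = epsilon * F_euv * math.pi * R_pl**3 / (G * m * M_jup)
    print(f"{m:.1f} M_jup: Mdot_E = {mdot_E:.1e} g/s")
# -> ~1.9e9 g/s and ~6.2e9 g/s
\end{verbatim}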
The energy-limited approximation overestimates the mass-loss rate computed from hydrodynamical models for the 1-$M_{\rm jup}$ planet by only a factor of 1.1. However, it underestimates the mass-loss rate of the 0.3-$M_{\rm jup}$ planet by a factor of $\sim 7$ (i.e., $\dot{M}_E/\dot{M} \approx 1/7$). Examining Equations (\ref{eq.el}) and (\ref{eq: E_lim_simple}), we expect that $\dot{M}_E > \dot{M}$ when the terminal velocities of the wind are $u_{\rm term} < v_{\rm esc}$. Indeed, for the 1-$M_{\rm jup}$ planet, terminal velocities are $\sim 20$~km/s, compared to an escape velocity of $\sim 60$~km/s. On the other hand, we expect that $\dot{M}_E < \dot{M}$ when the terminal velocities of the wind are $u_{\rm term} > v_{\rm esc}$. This is the case for the 0.3-$M_{\rm jup}$ planet, where the terminal velocities are above 32~km/s and the escape velocity is $\sim 30$~km/s.
Recent studies showed that the energy-limited approximation can be a poor indicator of the mass lost in atmospheric escape processes \citep[e.g.][]{Salz2016,kub2} and can underestimate the mass-loss rate of low-density, highly irradiated exoplanets in the boil-off regime \citep{kub2}. To better reconcile the analytical expression with hydrodynamic models, a correction factor $K$ is often considered in the denominator of Equation (\ref{eq: E_lim_simple}) to account for stellar tidal forces, which reduce the effective planetary gravitational potential \citep{Erkaev2007}
\begin{equation}
K=1-\frac{3R_{\rm pl}}{2R_{ \rm Roche}}+\frac{R_{\rm pl}^3}{2R_{ \rm Roche}^3} .
\end{equation}
Additionally, the cross section of the planet is assumed to be $\pi R_{\tau=1}^2$, where $R_{\tau=1}$ refers to the distance where optical depth unity to ionising photons is reached. These two additional factors can reduce the discrepancy between the hydrodynamic calculations and the analytical limit, hence
\begin{equation} \label{eq: E_lim}
\dot{M}_{E}^{\rm corr} = {\epsilon} \frac{ F_{\rm EUV} (\pi R_{\tau=1}^2) }{K GM_{\rm pl}/R _{\rm pl}} .
\end{equation}
However, to calculate $R_{\tau=1}$, one needs to compute the density and optical depth profiles through hydrodynamic simulations, or from a prescription derived from such simulations. For our examples presented in Figure \ref{fig:fiducial}, using $R_{\tau=1}$ from our simulations and the tidal correction factor $K$, we find $\dot{M}_{E}^{\rm corr}$ values that are factors of 1.6 and 0.25 of those from our hydrodynamical models, respectively. In these cases, the `correction' made the discrepancy smaller for the 0.3-$M_{\rm jup}$ planet (from 1/7 to 1/4), but increased the discrepancy for the 1-$M_{\rm jup}$ planet, when we adopted Equation (\ref{eq: E_lim}) instead of (\ref{eq: E_lim_simple}). In the comparisons done further on in this paper, we compute energy-limited mass loss from Equation (\ref{eq: E_lim}).
\subsection{Transit calculations} \label{Raymodel}
After solving the hydrodynamic equations for the planetary outflow, we use a ray tracing model to simulate planetary transits as observed in both Ly$\alpha$ and H$\alpha$. This model is described in \citet{2018MNRAS.481.5296V}, here modified for the Ly$\alpha$ and H$\alpha$ transitions. In the ray tracing model presented here, we upgrade from a Doppler line profile to a Voigt line profile. As the ray tracing model from \citet{2018MNRAS.481.5296V} is three dimensional, we create a three-dimensional Cartesian grid with 201 evenly spaced cells in each direction. This grid is centred on the planet and each cell is `filled'\footnote{Similar to a solid of revolution.} with the one-dimensional hydrodynamic calculations of $u$, $T$, and $\rho$, taking into account that Doppler shifts result from the projection of the wind velocities along the line of sight. One of the axes of the grid is aligned along the observer-star line, so that the grid seen in the plane of the sky is a square of $201 \times 201$ cells.
To calculate a frequency-dependent transit, we use 51 velocity channels from $u_{\rm channel}=-500$~km/s to $+500$ km/s, using increments of 50 km/s for the outer velocities ($|u_{\rm channel}|=100$ to $500$ km/s), while we improve the resolution at line centre (35 channel increments within $\pm$ 100 km/s).
With this, the frequency-dependent optical depth along a single ray in the direction connecting the observer to the star-planet system is
%
\begin{equation}
\tau_\nu = \int n_{n} \sigma \phi_\nu dz ,
\end{equation}
%
where $\sigma$ is the absorption cross-section at the line centre, $\phi_\nu$ the Voigt line profile. The subscript `$\nu$' indicates that the line profile and the optical depth are dependent on the frequency (or velocity channel) of the observation. We calculate $\sigma$ for both Ly$\alpha$ and H$\alpha$ transitions using
\begin{equation}
\sigma = \frac{\pi e^2 f}{m_e c }
\end{equation}
where $f$ is the oscillator strength of the transition, $m_e$ the electron mass, $e$ the electron charge and $c$ the speed of light. Our values of $f$ for each of these transitions were obtained from the NIST Catalogue\footnote{ \url{https://www.nist.gov/pml/atomic-spectra-database}}, which gives $f=0.416410$ for Ly$\alpha$ and $f=0.64108$ for H$\alpha$. The Voigt line profile $\phi_\nu$ is a convolution of a Gaussian and Lorentzian line profile accounting for both Doppler and natural broadening
%
\begin{equation}
\phi_\nu =\frac{\lambda_0}{\sqrt{\pi}u_{\rm th}} \frac{\chi}{\pi}\int_{-{\infty}}^{{\infty}}\frac{e^{-w^2}}{\chi^2 +(\Delta {u}/u_{\rm th}-w)^2 }dw ,
\end{equation}
where $\lambda_0 $ is the wavelength at line centre ($\lambda_0 = 1215.67 $\AA\ for Ly$\alpha$; $ \lambda_0 = 6562.79 $\AA\ for H$\alpha$), $u_{\rm th} = (2 k_B T / m_H)^{1/2}$ and $m_H$ is the mass of atomic hydrogen. $\chi={A_{ji} \lambda_0}/({4 \pi u_{\rm th}})$ is the damping parameter, where $A_{ji}$ is the transition rate ($A_{ji}=4.69860 \times 10^8$ s$^{-1}$ for the Ly$\alpha$ transition; $A_{ji}= 4.4101 \times 10^7$ s$^{-1}$ for the H$\alpha$ one; values from NIST). The velocity offset from the line centre is $\Delta {u} = {u_{\rm channel}}-{u}_{\rm LOS}$, where ${u}_{\rm LOS}$ is the line of sight flow velocity of the escaping wind. We make use of IDL's inbuilt \texttt{voigt} function to calculate $\phi_\nu$.
In computing the transmitted spectrum, we neglect centre-to-limb variations in the stellar disc and assume that, for a given frequency, the stellar disc emits a uniform specific intensity $I_\star$. The fraction of `transmitted' specific intensity at a given frequency during transit is hence
\begin{equation}
\frac{I_\nu}{I_\star} = e^{-\tau_\nu} .
\end{equation}
$1 - I_\nu/{I_\star}$, therefore, represents the fraction of absorbed specific intensity as a result of the absorption from both the planet disc and its atmosphere. To calculate the total absorption, we then integrate over all rays. Given our three-dimensional grid construction, the grid projected in the plane of the sky has $201 \times 201$ cells. We therefore shoot 51 frequency-dependent rays through each of the $201^2$ grid elements. Since the grid is larger than the projected area of the stellar disc, rays emitted from pixels outside the stellar disc are assigned a zero specific intensity.
Therefore, integrating over all these rays and dividing by the `flux' of star ($\pi R_\star^2$) allows us to calculate the transit depth
\begin{equation}\label{eq.deltaF}
\Delta F_\nu = \frac{\int\int (1-e^{-\tau_\nu})dxdy}{\pi R_\star ^2} ,
\end{equation}
where $dx$ and $dy$ are the sizes of each cell. By construction, $dx=dy=2 \times 10R_{\rm pl}/201 \simeq 0.1R_{\rm pl}$ (the factor of 2 comes from the `solid of revolution' method used and $10R_{\rm pl}$ is the extension of the hydrodynamical computations). In practice, $\Delta F_\nu$ represents the ratio of a frequency-dependent effective area of the planet $\pi(R^{\rm eff}_{{\rm pl}, \nu})^2$, i.e., planet disc plus its absorbing atmosphere, by the stellar area
\begin{equation}
\Delta F_\nu = \frac{\pi (R^{\rm eff}_{{\rm pl}, \nu})^2}{\pi R_*^2}.
\end{equation}
Because of the nature of our spherically symmetric model, the effective area of the planet is that of a circle, but in 3D models, this area could take a different form \citep[e.g.][]{2018MNRAS.481.5296V}. Here, we assume a transit along the centre of the stellar disc resulting in an impact parameter $b=0$. Assuming circular orbit, this gives a transit duration of
\begin{equation}\label{eq: tran_dur}
t_{\rm dur} \approx \frac{P_{\rm orb}R_\star }{\pi a} = \frac{2 R_\star}{\sqrt{G M_*/a}} = 2.75 ~ \rm h ,
\end{equation}
where $P_{\rm orb}$ is the orbital period.
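A quick numerical check of this duration (cgs constants assumed) is:
\begin{verbatim}
import math

G = 6.674e-8
M_star, R_star = 1.989e33, 6.957e10   # 1 M_sun, 1 R_sun
a = 0.045 * 1.496e13                  # orbital distance [cm]
t_dur = 2.0 * R_star / math.sqrt(G * M_star / a)
print(f"t_dur = {t_dur / 3600.0:.2f} h")   # -> 2.75 h
\end{verbatim}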
\section{Evolution of Atmospheric Escape in close-in giants} \label{evol_results_1}
To probe the evolution of the atmospheric escape, we perform escape simulations for two planet masses (0.3 and 1 $M_{\rm jup}$) orbiting stars that were born as slow, intermediate and fast rotators. This results in six different evolutionary tracks. For each track, we have more than 50 models sampling ages from 10~Myr to 5~Gyr. The ages sampled are shown as diamonds in Figure \ref{fig.fEUVandrad}. In total, we ran more than 300 models to compute the evolution of atmospheric escape. For each of these models, we also predict Ly$\alpha$ and H$\alpha$ transit signals.
The stellar mass ($M_*= 1 M_\odot$), radius ($R_* = 1 R_\odot$) and orbital distance ($a=0.045$~au) are fixed in our study.
In Figure \ref{fig.evol}, we present the evolution of three atmospheric properties predicted using our models: the mass-loss rate (top panels), terminal velocities (middle) and the position of the sonic point (bottom). The results for our 1-$M_{\rm jup}$ and 0.3-$M_{\rm jup}$ planets are shown on the left and right panels, respectively.
\begin{figure}
\begin{center}
\hspace*{-0.5cm}
\includegraphics[width=0.47\textwidth]{figs/fig3.png}
\caption{Evolution of atmospheric properties for the 1-$M_{\rm jup}$ (left) and 0.3-$M_{\rm jup}$ (right) planets orbiting a 1-$M_\odot$ star at 0.045 au, of slow (red), intermediate (green) or fast (blue) rotation. The panels are: mass-loss rate $\dot{M}$ (upper), terminal velocity u$_{\rm term}$ (centre) and radial distance to the sonic point $R_{\rm sonic}$ (lower). Dashed lines show $\dot{M}$ predicted by the energy-limited approximation (Equation \ref{eq: E_lim}). We present analytical fits to these results in Appendix \ref{evol_analt}.}
\label{fig.evol}
\end{center}
\end{figure}
The evolution of $\dot{M}$ for the three considered host star rotations are shown as solid lines in the top panels of Figure \ref{fig.evol}. We see that close-in giants experience greater levels of mass loss during younger evolutionary stages and that their mass-loss rates decrease with time. This is the result of the lowered EUV flux incident on the planetary atmosphere as well as the contracting planetary radius with evolution (Figure \ref{fig.fEUVandrad}). The lowered EUV flux leads to a reduced level of heating in the atmosphere due to a lower number of photoionisations. This ultimately weakens the hydrodynamic wind as the planet evolves, resulting in the declining curves of $\dot{M}$.
The contracting planetary radius with evolution results in a strengthening planetary gravitational force on atmospheric particles. This acts to further reduce the atmospheric escape experienced with evolution by improving the planet's ability to retain its atmospheric particles. A weaker gravitational potential is directly responsible for the stronger escape experienced by the lower-mass 0.3-$M_{\rm jup}$ planet, as compared to the 1-$M_{\rm jup}$ planet. Both its lower mass and larger radius are responsible for this.
Although not considered in our models, the decreasing planetary mass resulting from atmospheric escape would soften the effect of the growing planetary gravity with age. Because we do not include this decrease in mass in our simulations, our mass-loss rates increasingly underestimate the true mass-loss rate as the planet ages, due to a growing overestimation of planetary gravity. Fortunately, the resulting inaccuracy is of minor importance, as the effect of the neglected mass variation on the planetary gravity is negligible in comparison to that resulting from the radius variation during evolution (Section \ref{evol_tot_mass}).
We used the evolutionary curves from \citet{Fortney2010}, which did not consider the presence of a core. By considering a 25-$M_\oplus$ core, \citet{Fortney2010} predicted planets with smaller radii than those without a core. In our models, this smaller radius would lead to stronger planetary gravity and consequently to a somewhat weaker mass loss. However, this would not change the trends we present in Figure \ref{fig.evol}.
The significantly steeper $\dot{M}$ curves for the close-in giant orbiting a fast-rotating star (blue) further demonstrate the significance of stellar activity for atmospheric escape. Stellar rotation determines the EUV emission, with faster rotation producing higher emission, until a saturation plateau is reached for a fast rotating star (Figure \ref{fig.fEUVandrad}). The $\dot{M}$ curves for the three rotations become indistinguishable after $\sim 1$ Gyr, owing to a convergence of the received EUV flux at this age.
The dashed lines in the upper panels of Figure \ref{fig.evol} show the mass-loss rates estimated using the energy-limited approximation (Equation \ref{eq: E_lim}). For the 1-$M_{\rm jup}$ planet, the energy-limited approximation overestimates the mass loss when orbiting a fast rotating host star (i.e., in the case of high $F_{\rm EUV}$), while it underestimates mass-loss rates in the case of planets orbiting slowly rotating hosts. The results for the hydrodynamic case and the energy-limited case agree quite well after $\sim 500$~Myr. For the 0.3-$M_{\rm jup}$ planet, the situation is quite different: the estimates of $\dot{M}$ provided by the energy-limited approximation deviate strongly from the hydrodynamical calculation.
The middle panels of Figure \ref{fig.evol} show the terminal velocity ${u}_{\rm term}$ of the planetary outflow, where we see that at younger ages, larger terminal velocities are reached. We define ${u}_{\rm term}$ here as the velocity of atmospheric particles leaving the planetary atmosphere at our maximum simulated distance of 10 $R_{\rm pl}$. As with the curves of $\dot{M}$, we see a decline of the ${u}_{\rm term}$ as the planet ages. However, while mass-loss rates vary by more than 3 orders of magnitude from 10 to 5000 Myr, the terminal velocities only vary by a factor of $\sim 2$.
Both $\dot{M}$ and ${u}_{\rm term}$ decline with age due to a combination of an increasingly weaker atmospheric heating and an improved ability of the planet to retain its atmospheric particles causing weaker, slower hydrodynamic winds at later evolutionary stages. The velocity of escaping neutral hydrogen atoms is of particular importance for Ly$\alpha$ observations as only absorption in line wings are observable (Section \ref{lya_sec}).
The bottom panels of Figure \ref{fig.evol} show the evolution of the sonic point $R_{\rm sonic}$, which climbs to higher positions of the atmosphere with evolution. The sonic point occurs when the wind velocity reaches the speed of sound, $u = c_s$. With evolution, there is a slower rise in the wind velocity with distance (i.e., lower acceleration). As a result, the wind must travel a greater distance before it reaches the sound speed $c_s$. With age, $c_s$ will also be slightly lowered due to overall lower atmospheric temperatures, but this only partially counteracts the effects of the lower wind acceleration. As a result, the variations in wind velocity profiles dominate in setting the growth of $R_{\rm sonic}$ with evolution.
We draw attention to the striking resemblance of each of the curves of Figure \ref{fig.evol} to the curves of the received $F_{\rm EUV}$ included in Figure \ref{fig.fEUVandrad}, demonstrating a strong dependence of $\dot{M}$ and ${u}_{\rm term}$ on the irradiation level of the planetary atmosphere. We present analytical fits to these results in Appendix \ref{evol_analt}.
\subsection{Total mass lost during evolution} \label{evol_tot_mass}
Figure \ref{fig: totcurve} shows the total mass lost calculated over the simulated life of the 1- (top panel) and 0.3-$M_{\rm jup}$ (bottom) planets. We calculate the total mass lost by integrating the mass-loss rates given in Figure \ref{fig.evol} with respect to time. Figure \ref{fig: totcurve} shows the largely differing mass losses resulting from the three considered rotations of the host star. The total mass lost is clearly dependent on stellar rotation, and mass loss is greatest for planets orbiting faster rotating hosts. The mass loss experienced by each planet levels off with age due to the declining mass-loss rates with evolution shown in Figure \ref{fig.evol}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.49]{figs/fig4.png}
\caption{Total planetary mass lost after the age of 10 Myr for the 1-$M_{\rm jup}$ (top) and 0.3-$M_{\rm jup}$ (bottom) planets orbiting a slow (red), intermediate (green), and fast (blue) rotating host. Solid and dashed lines show the mass lost calculated using the hydrodynamic model and the energy-limited approximation, respectively. }
\label{fig: totcurve}
\end{center}
\end{figure}
Past studies predicted that atmospheric escape does not play an influential role in the overall evolution of hot Jupiters \citep{Murray-Clay2009,2013ApJ...775..105O,2015Natur.522..459E}. In spite of the large mass-loss rates observed, the planetary mass is still very large, constituting essentially an unending atmospheric reservoir.
Indeed, over its entire evolution, we predict that the 1-$M_{\rm jup}$ planet would lose at most $1\%$ of its initial mass. We therefore expect hydrodynamic escape to have a negligible effect on the mass of highly irradiated Jupiter-mass planets, owing to their large mass reservoir.
However, for the 0.3-$M_{\rm jup}$ planet, our results indicate the opposite -- mass lost due to hydrodynamic escape is considerably large, and could potentially shape the planet's evolution. In this case, the total mass lost can reach more than 20\%. For orbital distances even shorter than the 0.045~au considered in this work, mass-loss rates are expected to increase even more, giving support to the idea that close-in rocky planets might have begun their evolution as Neptune-like planets, but eventually evaporated the entirety of their atmospheres \citep{2015Natur.522..459E}.
Regardless of the planetary mass, orbiting a faster rotating host star leads to greater losses, with a slow rotator host producing $\sim$ 4 to 5 times less lost mass than a fast rotating host. The dashed lines in Figure \ref{fig: totcurve} show the total mass lost calculated using the energy-limit approach. This approximation overestimates the total mass lost when orbiting a fast rotating host star, while it underestimates the loss for planet orbiting a slow host.
\section{Spectroscopic transits in Hydrogen lines}\label{lya_and_ha}
We use the solutions of the hydrodynamic calculations presented in the previous section to calculate spectroscopic transits in Ly$\alpha$ and H$\alpha$. To model these transits, we need to know the density of neutral hydrogen in the ground state $n=1$ (hereafter $N_1$) and in the first excited state $n=2$ (hereafter $N_2$), respectively.
For that, we solve the statistical equilibrium equation in the coronal-model approximation \citep{2018LRSP...15....5D}. In this approximation, the spontaneous radiative decay balances the excitation process, and collisional de-excitation is neglected. Only direct excitations from the ground state $n=1$ are included. To perform this calculation, we use the CHIANTI software \citep[v 9.0.1, ][]{2019ApJS..241...22D}\footnote{We use \texttt{sun\_photospheric\_2015\_scott.abund} for the photospheric abundance file and ion equilibrium file \texttt{chianti.ioneq}.} written in IDL (more specifically, we use the \texttt{ch\_pops} procedure). In practice, the population calculation requires the temperature and electron density, which are outputs from our hydrodynamic calculations. Note that for all our simulations, the ratio $N_2/N_1$ is quite low, with values not exceeding $1$ -- $2\times10^{-5}$ for the young planets, and maximum values well below that for the older planets.\footnote{We also computed the ratio $N_2/N_1$ using the Boltzmann equilibrium equation, and found ratios larger than the ones found by solving the statistical equilibrium equation. The Boltzmann equation predicted larger H$\alpha$ transits, therefore representing an upper limit for the transit calculations. In general, though, these transit depths were larger by no more than a factor of 2 compared to the ones shown in the present paper.} However, as hydrogen is so abundant, we still predict a reasonably large transit in H$\alpha$, as we will show in Section \ref{ha_sec}.
\subsection{Ly$\alpha$ transits}\label{lya_sec}
Figure \ref{fig:red_lya} shows our predicted transit signatures in the Ly$\alpha$ line for the 1-$M_{\rm jup}$ planet. For each block of three panels in Figure \ref{fig:red_lya}, colour indicates the planetary age as given by the relevant colour bar. In the three upper panels (a-c), we show the transit signatures of hot Jupiters orbiting a slowly rotating host; the central panels (d-f) consider an intermediate rotator, while the lower panels (g-i) correspond to a fast rotating host star. The lightcurves resulting from the geometric transit of the planet itself are shown in panels a, d and g. These plots therefore simply reveal the transit effect of the evolving planetary radius, where the transit depths can be converted into $(R_{\rm pl}/R_*)^2$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.47]{figs/fig5a.png}\\ \medskip
\includegraphics[scale=0.47]{figs/fig5b.png}\\ \medskip
\includegraphics[scale=0.47]{figs/fig5c.png}
\caption{Ly$\alpha$ transit simulations for the 1-$M_{\rm jup}$ planet. Results are separated into three blocks related to the host star's rotation: slow (a-c), intermediate (d-f) and fast (g-i) rotators. Panels a, d, and g show the lightcurves of the geometric transit. Panels b, e and h show the lightcurves as observed at the Ly$\alpha$ line centre. Panels c, f and i are the mid-transit obscured flux ($\Delta F_\nu$, Equation \ref{eq.deltaF}) of the Ly$\alpha$ line profile, as a function of Doppler velocity and wavelength. Note that $\Delta F_\nu$ does not reach zero and is offset by a couple of per cent due to the absorption of the opaque disc of the planet. For each plot, colour represents age, as indicated in the colour bars.}
\label{fig:red_lya}
\end{center}
\end{figure}
Panels b, e and h of Figure \ref{fig:red_lya} display predicted transit lightcurves as observed at the Ly$\alpha$ line centre. These lightcurves feature considerably deeper transits than those resulting from the geometric transit, as neutral hydrogen in the extended atmospheres absorbs strongly in Ly$\alpha$. In addition, the typical Ly$\alpha$ transit duration is approximately 2 hours longer than the geometric transit, with young planets producing even longer durations. These plots show the evolution of the atmospheric size: as the atmospheres of younger close-in giants are inflated further than those of older planets, they occult more of the stellar disc, thus resulting in a greater transit duration and depth in Ly$\alpha$.
As Ly$\alpha$ observations are not possible at the line centre, owing to interstellar absorption and geocoronal contamination, in panels c, f and i of Figure \ref{fig:red_lya}, we show the predicted Ly$\alpha$ line profile at mid-transit. In these panels, we show the occulted flux $\Delta F_\nu$ (Equation \ref{eq.deltaF}) as a function of wavelength and Doppler velocity. These profiles show saturation of Ly$\alpha$ absorption at line centre and larger absorption in the line wings for younger planets. This is a direct result of the stronger atmospheric escape experienced by these planets, resulting in higher velocities of the escaping neutral hydrogen atoms. The symmetric line profiles are a result of our hydrodynamical models being spherically symmetric. A more realistic simulation, including for example interaction with the stellar wind and acceleration due to radiation pressure, would present asymmetries in the blue and red wings of the line profile \citep[e.g.,][]{2014A&A...562A.116K,2016A&A...591A.121B,2018MNRAS.479.3115V,2018MNRAS.481.5296V,2019ApJ...873...89M, 2019MNRAS.487.5788E}.
Across the three considered host-star rotations, a faster rotating host leads to longer and deeper Ly$\alpha$ transits (line centre) for a given planetary age below $\sim 800$~Myr. These younger planets also show wider line profiles at mid-transit, when orbiting fast-rotating stars. This is because planets orbiting more active stars receive more $F_{\rm EUV}$, which then creates stronger evaporation.
In Figure \ref{fig:red_lya__03MJUP}, we present the same analysis as that shown in Figure \ref{fig:red_lya}, only now we consider the 0.3-$M_{\rm jup}$ planet. It is clear that each of the Ly$\alpha$ transit profiles exhibits more extreme characteristics for this less massive planet: longer durations, deeper transits and wider line profiles. These are due to both a greater inflation of the atmosphere and faster moving neutral hydrogen atoms.
\begin{figure}
\begin{center}
\includegraphics[scale=0.47]{figs/fig6a.png}\\ \medskip
\includegraphics[scale=0.47]{figs/fig6b.png}\\ \medskip
\includegraphics[scale=0.47]{figs/fig6c.png}
\caption{Same as in Figure \ref{fig:red_lya}, but for the 0.3-$M_{\rm jup}$ planet.}
\label{fig:red_lya__03MJUP}
\end{center}
\end{figure}
\subsection{H$\alpha$ transits} \label{ha_sec}
Similar to our predictions of Ly$\alpha$ transits, here we present our estimates of H$\alpha$ transits. An advantage of observing in H$\alpha$ is that the line centre is not contaminated as it is in the Ly$\alpha$ line and, perhaps most importantly, observations can be conducted with ground-based spectrographs.
Figure \ref{fig:red_ha} shows the H$\alpha$ transits for the 1-$M_{\rm jup}$ and 0.3-$M_{\rm jup}$ planets, in the left and right panels respectively, in a similar design to Figures \ref{fig:red_lya} and \ref{fig:red_lya__03MJUP}. We omit here the intermediate rotator for brevity and only show results for the slow (top panels) and fast (bottom) host-star rotations. The trends in the H$\alpha$ transits are {\it qualitatively} similar to those seen in Ly$\alpha$: the H$\alpha$ transits exhibit greater depths and durations than the geometric lightcurves due to absorption of H$\alpha$ photons by neutral hydrogen in the first excited state in the evaporating atmosphere. Likewise, we find that younger planets produce deeper, longer and broader H$\alpha$ transits compared to their older counterparts. For a given age below $\sim 800$~Myr ($\log({\rm age}/{\rm Myr})=2.9$), the obscuration is larger for planets orbiting fast rotators, due to the higher $F_{\rm EUV}$ of these stars. However, Ly$\alpha$ and H$\alpha$ transits are {\it quantitatively} different.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.47]{figs/fig7a.png} \hspace{0.3cm} \includegraphics[scale=0.47]{figs/fig7b.png}\\ \medskip
\includegraphics[scale=0.47]{figs/fig7c.png} \hspace{0.3cm} \includegraphics[scale=0.47]{figs/fig7d.png}
\caption{Same as in Figure \ref{fig:red_lya}, but for the H$\alpha$ transits of the 1-$M_{\rm jup}$ planet (left) and 0.3-$M_{\rm jup}$ planet (right). The host stars are assumed to start their lives as slow (top), and fast (bottom) rotators. }
\label{fig:red_ha}
\end{center}
\end{figure*}
Contrary to the saturation often seen in Ly$\alpha$ for several ages and host star rotations, H$\alpha$ absorption is considerably weaker. This is due to the significantly lower density of hydrogen in the first excited state $n=2$ compared to the ground state $n=1$. As a result, the transit depths we predict in our models reach, at line centre, at most $\Delta F_\nu \sim$5 -- 8\% (a $\sim$ 3 -- 4\% excess from the geometric transit) at very young ages reducing, at older ages, to fractions of a percent above the geometric transit, for the 1-$M_{\rm jup}$ planet, or no excess absorption at all for the 0.3-$M_{\rm jup}$ planet. The left panels in Figure \ref{fig:Halpha_transit_flux} show the evolution of the H$\alpha$ excess flux, i.e., above the geometric transit, for the planets simulated here.
\begin{figure}
\begin{center}
\includegraphics[width=0.47\textwidth]{figs/fig8.png}
\caption{Left: Evolution of transit depths, in excess of the geometric transit, in the H$\alpha$ line at line centre and at mid-transit. Right: Evolution of the mid-transit equivalent width (Equation \ref{eq.width}), integrated over $\pm 500$~km/s. Top panels are for the 1-$M_{\rm jup}$ planet simulations while the bottom panels are for the 0.3-$M_{\rm jup}$ case. Red and blue curves represent the rotation of the host star, slow and fast, respectively.}
\label{fig:Halpha_transit_flux}
\end{center}
\end{figure}
Our results, therefore, indicate that H$\alpha$ transits could be more easily detected in planets orbiting young and/or active stars. An extended H$\alpha$ transit has indeed been observed for HD189733b, which orbits a relatively active star \citep{2012ApJ...751...86J}. These authors report a planet size in H$\alpha$ that is $\sim 1.4 R_{\rm pl}$, or $\Delta F_\nu \simeq 4.7\%$ at line centre at mid-transit. These values are actually not too dissimilar to our models of the 1-$M_{\rm jup}$ planet at younger ages. H$\alpha$ was also detected in the atmosphere of KELT-9b \citep{2018NatAs...2..714Y, 2019AJ....157...69C}. Although the host is not an active star, it has a strong EUV flux due to its early spectral type. As a result, KELT-9b shows a wide H$\alpha$ line profile ($\sim$ 50 km/s) and a transit depth of 1.8\% (1.15\% in excess of the geometric transit of 0.68\%, \citealt{2018NatAs...2..714Y}).
Conversely, an H$\alpha$ extended transit was not detected in the less active KELT-3 system \citep{2017AJ....153..217C}.
Our models predict that lower gravity giants, like our 0.3-$M_{\rm jup}$ case, would show a larger H$\alpha$ obscuration than the more massive planets at young ages (compare top and bottom panels in Figure \ref{fig:Halpha_transit_flux}). However, at older ages, the situation is reversed, with the 0.3-$M_{\rm jup}$ planet showing no excess H$\alpha$ flux above $\sim$1.2 Gyr. This is because the extended atmosphere of neutral hydrogen becomes almost entirely in the ground state at older ages. Our result agrees with the lack of absorption observed in the H$\alpha$ transits of GJ436b \citep{2017AJ....153..217C}, in spite of its impressively long and extended transit in Ly$\alpha$ \citep{2014ApJ...786..132K, 2015Natur.522..459E,2017A&A...605L...7L}. This situation can occur when temperatures are not high enough to excite hydrogen. Indeed, GJ436b has a relatively small EUV flux ($F_{\rm EUV} = 10^3$ erg/cm$^2$/s, \citealt{2015Natur.522..459E}). In our simulations, this flux is reached at an age of $\sim$3 Gyr, when we already see a lack of H$\alpha$ absorption.
We also calculate the equivalent width of our H$\alpha$ transmission spectra as
\begin{equation}\label{eq.width}
W_{{\rm H}\alpha}=\int_{v_i}^{v_f} (\Delta F_{\nu} -\Delta F_{\rm geom}) d{v} ,
\end{equation}
where ${v_i}$ and ${v_f}$ represent the velocity range over which the integration is performed and $\Delta F_{\rm geom}$ is the geometric transit depth ($R_{\rm pl}^2/R_*^2$), which is wavelength-independent. Equation (\ref{eq.width}) is thus the integral of the `excess' transit over a velocity (or wavelength) range. We integrate over our entire simulated velocity range of $\pm 500$ km/s or, equivalently, $\pm 10.9$\AA. However, we note that decreasing the range of integration to $\pm 50$ km/s ($\pm 1$\AA) gives the same equivalent width -- this is because the bulk of our predicted absorption occurs within $\pm 50$ km/s.
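Equation (\ref{eq.width}) is straightforward to evaluate numerically. The sketch below performs the integral on a velocity grid and converts the result to m\AA\ through $\Delta\lambda=\lambda_0 v/c$; the Gaussian excess profile is a toy input, not one of our model spectra.
\begin{verbatim}
import numpy as np

c_kms  = 2.998e5   # speed of light [km/s]
lam0_A = 6562.8    # H-alpha rest wavelength [Angstrom]

def equivalent_width(v, dF, dF_geom, v_lim=500.0):
    """Integral of the excess transit depth over velocity,
    returned as a wavelength equivalent width in milli-Angstrom."""
    mask = np.abs(v) <= v_lim
    W_kms = np.trapz(dF[mask] - dF_geom, v[mask])   # [km/s]
    return W_kms * lam0_A / c_kms * 1e3             # [mA]

# Toy spectrum: a 1% Gaussian excess of 30 km/s width on top of a
# 1% geometric transit depth.
v  = np.linspace(-500.0, 500.0, 2001)
dF = 0.01 + 0.01 * np.exp(-0.5 * (v / 30.0)**2)
print(f"W_Halpha = {equivalent_width(v, dF, 0.01):.1f} mA")
\end{verbatim}
The toy profile yields $\sim$17~m\AA, of the same order as the values in Figure \ref{fig:Halpha_transit_flux}.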
The right panels of Figure \ref{fig:Halpha_transit_flux} show the evolution of $W_{{\rm H}\alpha}$ for the 1- (top) and 0.3-$M_{\rm jup}$ (bottom) planets. Equivalent widths are larger at younger ages for both planet masses, reaching 20 -- 40 m\AA\ for planets orbiting active, young stars. Our values are lower than, but still similar to, the $\sim 65$ m\AA\ observed for the hot Jupiter HD189733b \citep{2018AJ....156..154J}.
Both the excess transit and the equivalent width of the H$\alpha$ line (Figure \ref{fig:Halpha_transit_flux}) show trends that are similar to the $F_{\rm EUV}$ evolution with rotation of the host (Figure \ref{fig.fEUVandrad} top): the curve for the fast rotator lies above the curve for the slow rotator. However, contrary to the saturation of flux seen at early ages, which is also reflected in the curves of $\dot{M}$ (Figure \ref{fig.evol}), the H$\alpha$ line shows neither a plateau-like saturation in transit depth nor in equivalent width (Figure \ref{fig:Halpha_transit_flux}). This can be seen in Figure \ref{fig.excess_mdot}, where we plot the excess transit versus the escape rates. At younger ages, from 10 to 400 Myr (between crosses and squares), we see relatively small variations in $\dot{M}$ for planets around fast rotating stars (in the saturated regime), but a significant drop in excess absorption.
\begin{figure}
\begin{center}
\includegraphics[width=0.47\textwidth]{figs/fig9.png}
\caption{Transit depths, in excess of the geometric transit, in the H$\alpha$ line at line centre and at mid-transit versus the predicted escape rate. Symbols mark different evolutionary stages, with the youngest systems at the top right of each curve and systems evolving towards the bottom left. The left (right) most curves are for the 1-$M_{\rm jup}$ (0.3-$M_{\rm jup}$) planet simulations.}
\label{fig.excess_mdot}
\end{center}
\end{figure}
\section{Discussion and conclusions}\label{CONCLUSIONS}
In this work, we studied the evolution of atmospheric escape in close-in giants. For that, we modelled escape of a higher gravity planet (1.0 $M_{\rm jup}$) and a lower gravity one (0.3 $M_{\rm jup}$). Both planets were assumed to be at 0.045~au, orbiting stars whose EUV fluxes evolve from early ages (10 Myr) to about 5 Gyr. Given that the host star's fluxes evolve differently depending on their initial rotation, we considered three evolutionary paths for the EUV flux, by following host stars that initiated their lives as slow, intermediate or fast rotators \citep{Johnstone2015a}. In our models, we also took into account contraction of the planet during its evolution \citep{Fortney2010}.
To probe the entire evolution, for the two different planet masses and three different stellar rotations, we ran more than 300 simulations of atmospheric escape, using a hydrodynamic escape model. We found that younger planets experience higher levels of atmospheric escape, owing to a favourable combination of higher irradiation levels and weaker planetary gravity (due to their larger radii). Lower-mass planets experience even higher escape rates than the more massive ones. Altogether, the lower gravitational potential of the less massive planet results in stronger planetary winds, with higher velocities and mass-loss rates.
In spite of the relatively large escape rates ($10^9$ -- $10^{12}$ g/s), we estimated that, over 5 Gyr, the 1-$M_{\rm jup}$ planet would lose at most 1\% of its initial mass due to escape. However, for the 0.3-$M_{\rm jup}$ planet ($10^{10}$ -- $10^{13}$ g/s), up to 20\% of its mass could evaporate through hydrodynamic escape. This strong escape in the lower mass planet is in line with observations of the short-period `Neptunian desert', which shows a lack of planets with orbital periods $\lesssim 5$--$10$ days and masses between 0.03 and 0.3 $M_{\rm jup}$ \citep{2016A&A...589A..75M}. The desert is suggestive of significant mass loss in planets with masses $\lesssim 0.3 ~M_{\rm jup}$, which is indeed consistent with our models.
Using the results of our hydrodynamic models, we then computed spectroscopic transits in both the Ly$\alpha$ and H$\alpha$ lines. The Ly$\alpha$ line is saturated at line centre and at mid-transit (we assume a zero impact parameter for simplicity), while H$\alpha$ transits produce transit depths of at most 3 -- 4\% in excess of their geometric transit at younger ages. Planets older than 200 Myr (or 600 Myr, if orbiting around a fast rotating star) would have H$\alpha$ absorption $\lesssim$1\% in excess of the geometric transit.
While at older ages ($\gtrsim$1.2 Gyr), Ly$\alpha$ absorption is still significant (and the line could even be saturated, as in the case of the lower mass planet), the H$\alpha$ absorption nearly disappears. This is because the extended atmosphere of neutral hydrogen becomes almost entirely in the ground state at such ages.
Our models, however, predict line profiles that are symmetric, in contrast to the asymmetric profiles derived from observations (e.g., HD189733b (\citealt{Etangs2012a}); HD209458b (\citealt{2003Natur.422..143V, Ben-Jaffel2008}); GJ436b \citep{2015Natur.522..459E}). This is because our escape models are spherically symmetric. To overcome this limitation, we would need to consider additional physical ingredients in our models, such as charge exchange between stellar wind protons and atmospheric neutrals, acceleration of neutral atoms due to radiation pressure, and the interaction of planetary flows with a stellar wind. Several multi-dimensional simulation studies have moved in this direction \citep[e.g.,][]{2014A&A...562A.116K,2015ApJ...813...50K,2016A&A...591A.121B,2018MNRAS.479.3115V,2018MNRAS.481.5296V,2019ApJ...873...89M, 2019MNRAS.487.5788E,2019MNRAS.483.2600D,2019arXiv190600075D, carolan2019}. These physical mechanisms are also required to reproduce the large velocities seen in Ly$\alpha$. While these models provide more physical insight into the details of atmospheric escape, they can be computationally expensive.
The advantage of our models, versus more complex ones, is thus the ability to compute a large number of simulations at low computational cost. As a result, we can make general predictions that could help guide observations. We showed here that younger systems show deeper transits as well as larger widths in both the H$\alpha$ and Ly$\alpha$ lines. In theory, this indicates that atmospheric escape is easier to detect at early ages. However, in practice, at younger ages, stars are more active, generating variability that could mimic the signatures of atmospheric transits \citep{2016MNRAS.462.1012B,2018AJ....156..154J}. This effect makes escape observations at young ages and/or around active stars more difficult to interpret and, thus, models like ours can assist in the interpretation of transit signatures.
\section*{Acknowledgements}
We acknowledge funding from the Irish Research Council Consolidator Laureate Award 2018. CHIANTI is a collaborative project involving George Mason University, the University of Michigan (USA), University of Cambridge (UK) and NASA Goddard Space Flight Center (USA). The authors thank C.~Villarreal D'Angelo and A.~Esquivel for their insightful comments and discussion.
The development of radioactive-ion beams in the mid-80s has permitted the exploration of the nuclear chart far from stability.
Various unexpected features have been uncovered thanks to this technical breakthrough.
Halo nuclei are of particular interest here as they exhibit a strongly clusterised structure that explains their very large radius compared to their isobars.
Indeed these neutron-rich nuclei have a small binding energy for one or two neutrons.
Thanks to this loose binding, the valence neutron(s) can tunnel far into the classically forbidden region to form a dilute halo around a more compact core \cite{Tan96}.
\ex{11}Be and \ex{15}C are examples of one-neutron halo nuclei,
while \ex{6}He and \ex{11}Li are archetypical two-neutron halo nuclei.
Though less probable, proton halos are also possible.
For example, \ex{8}B is thought to exhibit a one-proton halo.
Being located close to the dripline, halo nuclei are very short-lived and therefore cannot be studied through usual spectroscopic techniques.
To study their structure, one must resort to indirect techniques such as reactions \cite{Tan96}.
The breakup reaction is probably among the best tools to study exotic clusterised structures such as halo nuclei.
During this reaction, the nucleus of interest is broken up into its constituents---the halo nucleon(s) and the core in this case---through its interaction with a target \cite{Fuk04}.
To extract valuable information about the cluster structure of the projectile from breakup observables, one needs a good understanding of the reaction mechanism and a clear knowledge of the influence of the projectile structure upon the measured cross sections.
In this contribution, I will present the nonperturbative reaction models mostly used to analyse breakup data: the Continuum Discretised Coupled Channel model (CDCC), the Time-Dependent approach (TD) and the Dynamical Eikonal Approximation (DEA) (see also Ref.~\cite{BC12} for a recent review).
I will then discuss recent results that show the influence of the projectile description upon breakup data.
An outlook concludes these proceedings.
\section{Breakup modelling}\label{models}
\subsection{Theoretical framework}
In most breakup models, the projectile $P$ is described as a two-body system: an inert core $c$ to which the halo nucleon $f$ is loosely bound.
The $c$-$f$ interaction is described by a phenomenological potential $V_{cf}$ that is adjusted to reproduce the energy and the quantum numbers of the bound states and maybe some resonant states of the nucleus.
The Hamiltonian that describes this cluster structure hence reads
\beq
H_0=T_r+V_{cf}(\ve{r}),
\eeqn{e1}
where $\ve{r}$ is the $c$-$f$ relative coordinate.
The eigenstates of $H_0$ are the wave functions describing the projectile internal structure
\beq
H_0 \Phi_{lm}(\epsilon,\ve r)=\epsilon\Phi_{lm}(\epsilon,\ve r),
\eeqn{e2}
where $l$ is the $c$-$f$ relative angular momentum and $m$ is its projection (the spins of the clusters are neglected here for simplicity).
The negative-energy eigenstates correspond to the bound states of the system.
In the following the ground state of energy $\epsilon_0$ is denoted by $\Phi_0$.
The positive-energy states describe the $c$-$f$ continuum, i.e.\ the broken up projectile.
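To make \Eq{e2} concrete, the following Python sketch computes an $\ell=0$ bound state of a Woods-Saxon $V_{cf}$ by outward Numerov integration and bisection on the energy. The potential depth, radius and diffuseness, and the \ex{10}Be-n reduced mass, are illustrative choices rather than parameters taken from any published fit, and the energy bracket is assumed to contain a single eigenvalue.
\begin{verbatim}
import numpy as np

hbarc = 197.327                    # [MeV fm]
mu    = 931.5 * 10.0 / 11.0        # 10Be-n reduced mass [MeV/c^2]

def V_ws(r, V0=-61.0, R0=2.585, a=0.6):
    """Illustrative Woods-Saxon c-f potential [MeV]."""
    return V0 / (1.0 + np.exp((r - R0) / a))

def u_at_rmax(E, rmax=80.0, n=4000):
    """Numerov integration of u'' = k2(r) u for l = 0; the sign of
    u(rmax) flips when E crosses an eigenvalue."""
    r, h = np.linspace(1e-6, rmax, n, retstep=True)
    k2 = 2.0 * mu / hbarc**2 * (V_ws(r) - E)
    f = 1.0 - h**2 * k2 / 12.0
    u = np.zeros(n)
    u[1] = h                       # regular solution, u ~ r near 0
    for i in range(1, n - 1):
        u[i+1] = ((12.0 - 10.0*f[i])*u[i] - f[i-1]*u[i-1]) / f[i+1]
    return u[-1]

lo, hi = -5.0, -0.01               # bracket assumed to hold one state
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if u_at_rmax(lo) * u_at_rmax(mid) < 0.0:
        hi = mid
    else:
        lo = mid
print(f"epsilon_0 ~ {0.5*(lo + hi):.3f} MeV")
\end{verbatim}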
The target $T$ is usually described as a structureless particle and its interaction with the projectile constituents $c$ and $f$ is simulated by the optical potentials $V_{cT}$ and $V_{fT}$, respectively.
Within this framework, studying the $P$-$T$ collision reduces to solving the following three-body Schr\"odinger equation
\beq
\left[T_R+H_0+V_{cT}(\ve{R}_{cT})+V_{fT}(\ve{R}_{fT})\right]\Psi(\ve{r},\ve{R})=E_T\Psi(\ve{r},\ve{R}),
\eeqn{e3}
where $\ve R$ is the coordinate of the projectile centre of mass relative to the target,
while $\ve{R}_{cT}$ and $\ve{R}_{fT}$ are the $c$-$T$ and $f$-$T$ relative coordinates, respectively.
\Eq{e3} is solved with the condition that the projectile, initially in its ground state, is impinging on the target:
\beq
\Psi(\ve{r},\ve{R})\flim{Z}{-\infty}e^{iKZ+\cdots}\Phi_0(\ve{r}),
\eeqn{e5}
where $K$ is the initial $P$-$T$ wave number, which is related to the total energy $E_T=\hbar^2K^2/2\mu_{PT}+\epsilon_0$, with $\mu_{PT}$ the projectile-target reduced mass.
\subsection{CDCC}
In the Continuum Discretised Coupled Channel method (CDCC), \Eq{e3} is solved by expanding the three-body wave function $\Psi$ upon the projectile eigenstates $\Phi_{lm}(\epsilon)$ \cite{Kam86,TNT01}
\beq
\Psi(\ve{r},\ve{R})=\sum_i\chi_i(\ve{R})\Phi_i(\ve{r}),
\eeqn{e6}
where $i$ stands for $l$, $m$ and $\epsilon$.
This leads to a set of coupled equations
\beq
\left[T_R+\epsilon+V_{ii}\right]\chi_i+\sum_{j\ne i}V_{ij}\chi_j=E_T\chi_i,
\eeqn{e7}
where $V_{ij}=\langle\Phi_i|V_{cT}+V_{fT}|\Phi_j\rangle$ couple the various channels.
Since the model aims at describing the breakup of the projectile, the expansion \eq{e6} must include a reliable description of the continuum of the projectile.
To be tractable, this description is obtained by discretisation of the continuum \cite{Yah86}.
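The discretisation step itself can be illustrated with free waves standing in for the true $c$-$f$ scattering states: averaging $\delta$-normalised continuum states over a momentum bin produces a square-integrable state of unit norm, which is what enters the expansion \eq{e6} in practice. The grid, the bin and the use of free (rather than distorted) waves in the sketch below are simplifications for illustration only.
\begin{verbatim}
import numpy as np

r = np.linspace(1e-4, 400.0, 20000)       # radial grid [fm]

def u_free(k):
    """Delta-normalised free s-wave: <u_k|u_k'> = delta(k - k')."""
    return np.sqrt(2.0 / np.pi) * np.sin(k * r)

def bin_state(k1, k2, nk=100):
    """Average continuum states over the bin [k1, k2]; the result
    decays like 1/r and has unit norm (a CDCC 'bin' state)."""
    ks = np.linspace(k1, k2, nk)
    phi = np.trapz([u_free(k) for k in ks], ks, axis=0)
    return phi / np.sqrt(k2 - k1)

phi = bin_state(0.2, 0.3)                 # bin edges in fm^-1
print("norm =", np.trapz(phi**2, r))      # ~1, unlike a single u_k
\end{verbatim}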
\subsection{Time-dependent approach}
In the Time-Dependent approach (TD), the three-body Schr\"odinger equation \eq{e3} is simplified using a semiclassical approximation, viz.\ approximating the $P$-$T$ relative motion by a classical trajectory \cite{KYS94,CBM03c}.
Along that trajectory, the projectile experiences a time-dependent potential that simulates its interaction with the target.
This leads to the resolution of a time-dependent Schr\"odinger equation
\beq
i\hbar \frac{\partial}{\partial t}\Psi(\ve{r},\ve{b},t)=[H_0 + V_{cT}(t)+V_{fT}(t)] \Psi(\ve{r},\ve{b},t),
\eeqn{e8}
which is solved with the initial condition $\Psi(\ve{r},\ve{b},t)\flim{t}{-\infty}\Phi_0(\ve r)$ in agreement with \Eq{e5}.
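As an illustration of how \Eq{e8} is solved in practice, the one-dimensional toy below propagates an initial bound state through a transient, time-dependent potential with a split-operator (Strang) scheme and fast Fourier transforms. The potentials, the straight-line trajectory and the units ($\hbar=m=1$) are all illustrative and far simpler than in the quoted calculations.
\begin{verbatim}
import numpy as np

N, L = 1024, 200.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
V0 = -1.0 / np.cosh(x / 2.0)**2      # toy "projectile" potential

def V1(t, b=10.0, v=1.0, q=0.2):
    """Toy target potential seen along a straight-line trajectory
    of impact parameter b and velocity v."""
    return -q / np.sqrt((x - v * t)**2 + b**2)

# Ground state of H0 by imaginary-time relaxation
psi, dt = np.exp(-x**2), 0.02
for _ in range(2000):
    psi = psi * np.exp(-V0 * dt)
    psi = np.fft.ifft(np.exp(-0.5 * k**2 * dt) * np.fft.fft(psi))
    psi = psi / np.sqrt(np.trapz(np.abs(psi)**2, x))
phi0 = psi.copy()

# Real-time Strang steps exp(-iVdt/2) exp(-iTdt) exp(-iVdt/2);
# V1 is frozen over each step, which is adequate for a sketch.
for t in np.arange(-40.0, 40.0, dt):
    half = np.exp(-0.5j * (V0 + V1(t)) * dt)
    psi = half * psi
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))
    psi = half * psi

P_bu = 1.0 - abs(np.trapz(np.conj(phi0) * psi, x))**2
print(f"non-elastic (breakup-like) probability ~ {P_bu:.3f}")
\end{verbatim}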
\subsection{Dynamical Eikonal Approximation}
The Dynamical Eikonal Approximation (DEA) is based on the eikonal approximation \cite{Glauber}, which assumes that at sufficiently high beam energy the $P$-$T$ relative motion does not deviate much from the incoming plane wave \eq{e5}.
The idea is thus to factorise that plane wave out of $\Psi$
\beq
\Psi(\ve r,\ve R)=e^{iKZ}\widehat\Psi(\ve r,\ve R)
\eeqn{e9}
to obtain a function $\widehat \Psi$ that varies smoothly with $\ve R$.
Neglecting its second-order derivatives with respect to its first-order derivatives leads to the DEA equation \cite{BCG05}
\beq
i\hbar v \frac{\partial}{\partial Z}\widehat{\Psi}(\ve{r},\ve{b},Z)=[H_0-\epsilon_0+V_{cT}+V_{fT}]
\widehat{\Psi}(\ve{r},\ve{b},Z),
\eeqn{e10}
which is solved with the boundary condition $\widehat\Psi(\ve{r},\ve{R})\flim{Z}{-\infty}\Phi_0(\ve r)$ according to Eqs.~\eq{e5} and~\eq{e9}.
\begin{figure}
\center
\includegraphics[trim=1.5cm 18cm 7cm 1.7cm, clip=true, width=8.3cm]{comp_f2.pdf}
\includegraphics[trim=2.5cm 18cm 7cm 1.7cm, clip=true, width=7.7cm]{comp_f3.pdf}
\caption{Comparison of CDCC, DEA and TD, three nonperturbative breakup models used to analyse experiments involving halo nuclei \cite{CEN12}.
Energy distribution (left) and angular distribution (right) for \ex{15}C impinging on Pb at $68A$MeV, data from Ref.~\cite{Nak09}.}\label{f2}
\end{figure}
In Ref.~\cite{CEN12}, these three models have been compared with one another for \ex{15}C impinging on Pb at $68A$MeV.
All three models agree very well with each other in the energy distribution (see \fig{f2} left), indicating that they describe the same reaction process.
They also agree very well with the data of \Ref{Nak09}, confirming the halo structure of \ex{15}C.
In the angular distribution (see \fig{f2} right), CDCC and DEA agree very well with each other.
TD, however, due to its underlying semiclassical approximation, does not exhibit the oscillatory pattern that corresponds to quantal interferences.
Nevertheless, it gives the general trend of the distribution, which explains why it correctly reproduces observables integrated over the angles, such as the energy distribution.
This example illustrates the reliability of reaction modelling.
However, it is mostly confined to a simple description of the projectile.
Various groups have tried to go beyond this simple two-body description, e.g.\ by extending the aforementioned models to three-body projectiles.
A 4-body CDCC reaction model, i.e.\ a three-body projectile plus a target, has been developed by two groups \cite{Mat04,Rod08}.
Unfortunately, due to the heavy computational requirement of CDCC, this extension was at first limited to the description of the elastic scattering of two-neutron halo nuclei, in which a breakup channel was included in a crude way.
This was nevertheless interesting as it showed that both the three-body structure of the projectile and the inclusion of the breakup channel are necessary to reproduce experimental data.
Another group suggested instead to use a simpler reaction model within which a full description of the three-body continuum could be included \cite{BCD09}.
They used the Coulomb-Corrected Eikonal (CCE) model, which corresponds to an approximate solution of the DEA equation \eq{e10}.
First applied to \ex{6}He (see \Ref{BCD09} and \fig{f6}), this model showed excellent agreement with experiment for the Coulomb breakup of \ex{11}Li on Pb at $70A$MeV \cite{PDB12}, which confirmed the two-neutron halo in \ex{11}Li and enabled the study of the reaction mechanism for this kind of nucleus.
This study suggests in particular that double-differential cross sections like Dalitz plots provide information about correlations between both halo neutrons (see also \Sec{C0}).
\section{Sensitivity of breakup reactions to the projectile structure}\label{structure}
\subsection{Peripherality of breakup reactions}
Breakup measurements are often used to infer a spectroscopic factor (SF) for one particular configuration of the projectile \cite{Fuk04}.
However, this is valid only if the reaction probes the whole range of the projectile wave function.
To check whether or not breakup reactions are sensitive to the internal part of the projectile,
calculations have been performed with descriptions of the projectile that differ significantly in the interior but exhibit identical tails \cite{CN07}.
\fig{f4} illustrates the results obtained in the case of \ex{8}B on Ni at 25~MeV.
The left panel shows the two bound-state wave functions considered in the test, while the right panel shows the corresponding angular distributions obtained within CDCC.
As clearly shown, both curves are superimposed, indicating that changes in the interior of the projectile wave function, albeit significant, do not affect breakup calculations.
Similar results have been obtained at other energies, for different projectiles and targets, indicating the generality of this conclusion \cite{CN07}.
Breakup is thus a peripheral reaction in the sense that it probes only the tail of the projectile wave function, i.e. its Asymptotic Normalisation Coefficient (ANC).
\begin{figure}[h]
\center
\includegraphics[trim=4cm 15.5cm 4cm 4cm, clip=true, width=8cm]{B8deepsusyfo.pdf}
\includegraphics[trim=2cm 1.8cm 3cm 12.5cm, clip=true, width=6.5cm]{B8Nideepsusy.pdf}
\caption{Peripherality of breakup reactions: bound-state wave functions that differ significantly in the interior but have identical asymptotics (left) lead to identical breakup cross sections (right).
CDCC calculation performed for the breakup of \ex{8}B on Ni at 25~MeV \cite{CN07}.}\label{f4}
\end{figure}
\subsection{Sensitivity to resonant continuum}\label{res}
Alongside the Coulomb breakup of \ex{11}Be, Fukuda \emph{et al.} have measured its dissociation on a light target (C) (see \fig{f5}) \cite{Fuk04}.
This energy distribution exhibits a very different behaviour from Coulomb-breakup ones (compare to \fig{f2} left): it shows a peak at the energy of a known $5/2^+$ resonance.
This was interpreted as the signature of the \ex{10}Be-n resonant state in nuclear-dominated breakup \cite{Fuk04}.
This was confirmed by a TD calculation in which the two-body description of \ex{11}Be included this resonance within the $d_{5/2}$ partial wave \cite{CGB04}.
The calculated cross section exhibits a narrow peak (dashed line in \fig{f5}), which, once folded with the experimental energy resolution, agrees very well with the measurements (solid line).
This peak is observed only in the $d_{5/2}$ partial wave, confirming that nuclear breakup can be used reliably to study the presence of cluster resonant states in nuclei far from stability.
\begin{figure}
\center
\includegraphics[trim=4cm 15.5cm 4cm 4cm, clip=true, width=10cm]{myHouches_fig5.pdf}
\caption{Nuclear breakup of \ex{11}Be on C at $67A$MeV.
The calculation confirms the significant influence of the $d_{5/2}$ resonance upon the energy distribution \cite{CGB04}.
Data from Ref.~\cite{Fuk04}.}\label{f5}
\end{figure}
The influence of resonant states upon breakup has also been shown for three-body projectiles like \ex{6}He (see below and Refs.~\cite{BCD09,KMM13}).
This influence thus appears in more complicated structures than the simple two-body model presented here, and one can expect that interesting information about the projectile continuum can be inferred from such measurements (see also \Ref{ML12}).
\subsection{Influence of the non-resonant continuum}\label{C0}
In the previous section, the influence of resonant states on breakup observables has been shown.
It is therefore necessary to properly include these states in reaction modelling.
The question addressed in this section is whether the description of the non-resonant continuum of the projectile influences breakup calculations or not.
A study has shown that for two-body projectiles, breakup cross sections can be affected by up to 40\% depending on the choice of $V_{cf}$ in the continuum \cite{CN06}.
This is significant, in particular if one is interested in extracting spectroscopic information such as ANCs from breakup data.
\begin{figure}[h]
\center
\includegraphics[trim=0cm 10cm 12cm 0cm, clip=true, width=9cm]{He6CCEPbsdE.pdf}
\caption{Influence of the non-resonant continuum upon breakup calculations.
The Coulomb-breakup cross section of \ex{6}He on Pb at $240A$MeV is shown to exhibit a significant $E1$ contribution (lower solid line) when final-state interactions are considered.
This broad peak disappears when the continuum is described by plane waves (dash-dotted line) \cite{BCD09}.}\label{f6}
\end{figure}
The problem worsens for three-body projectiles.
This is illustrated in \fig{f6}, which shows the energy distribution for the breakup of \ex{6}He on Pb at $240A$MeV, i.e. the breakup cross section as a function of the total energy between the $\alpha$ and the two halo neutrons after dissociation.
The calculation is made within the CCE \cite{BCD09} with a complete description of the continuum of \ex{6}He.
The total cross section is the top solid line, which includes the contribution of the non-resonant continuum and that of the narrow $2^+$ resonance at about 1~MeV above the $\alpha$-n-n threshold.
As already discussed in \Sec{res}, the latter is responsible for the narrow peak at that energy.
The non-resonant contribution is mostly the $E1$ strength shown by the lower solid line.
This non-resonant continuum is obtained assuming proper interactions between the clusters \cite{BCD09}.
When these interactions are switched off, i.e.\ when the continuum is described by mere plane waves, the cross section exhibits a much different behaviour (see the dash-dotted line in \fig{f6}).
The analysis of these results indicates that these final-state interactions lead to a breakup that populates a slightly dominant $\alpha$-dineutron structure in the \ex{6}He continuum.
Other theoretical works have confirmed this hypothesis of a significant effect of the final-state interactions in breakup calculations \cite{KKM10}.
In all cases, complex structures are observed in the double-differential cross sections, which suggests that interesting information about the three-body continuum can be inferred from such observables.
In particular one might hope to study the correlation between both halo neutrons through such cross sections.
Experimental data will help us confirm these hopes.
In any case, they would enable us to discriminate between the different models of \ex{6}He that have been used in these calculations.
\section{Outlook}\label{conclusion}
Breakup reactions are a powerful tool to study the structure of nuclei far from stability and in particular exotic cluster structures like halos \cite{Tan96}.
However, the reaction mechanism is more complicated than initially thought, and an accurate reaction model is needed to analyse experimental data reliably.
Various such models exist; CDCC \cite{Kam86,TNT01}, TD \cite{KYS94,CBM03c} and DEA \cite{BCG05} have been reviewed in this contribution (see \Ref{BC12} for a more complete review).
They compare well to each other at intermediate energy \cite{CEN12}.
They also reproduce experimental data, which suggests that the reaction mechanism is now rather well understood, at least for two-body projectiles.
The extension of reaction models to more complicated structures, like two-neutron halo nuclei, is computationally challenging \cite{Mat06,Rod09,BCD09} but recent developments have shown it possible.
This has opened the path to an extensive study of breakup observables for three-body projectiles.
Within this contribution, the analyses of the sensitivity of breakup calculations to the projectile description have also been reviewed.
First, it has been recalled that breakup reactions are peripheral in the sense that they are sensitive only to the tail of the projectile wave function and not to its whole range, i.e.\ they are better suited to extract ANCs than SFs \cite{CN07}.
Second, they are not only sensitive to the projectile ground state, but they also probe its continuum.
In particular core-halo resonances can be studied through nuclear-dominated reactions \cite{CGB04,KMM13,Fuk04}.
Interestingly, the non-resonant continuum also influences the calculations \cite{CN06}.
This should be kept in mind when analysing experimental data, as it can affect values such as ANCs inferred from measurements.
This sensitivity is an opportunity for three-body projectiles as recent calculations have shown that their continuum has a significant influence upon breakup calculations \cite{BCD09,KKM10}.
In particular, double-differential cross sections (Dalitz plots) could be used to study the correlation between both halo neutrons in projectiles like \ex{11}Li and \ex{6}He.
Breakup reactions therefore exhibit great qualities to analyse exotic cluster structures.
Experimental and theoretical efforts should be made to refine the information that can be obtained through breakup to expand our understanding of nuclear structure far from stability.
\section*{Acknowledgments}
This research was supported in part by the Fonds de la Recherche Fondamentale
Collective (grant number 2.4604.07F).
This text presents research results of the Belgian Research Initiative on eXotic nuclei
(BRIX), Program No. P7/12, on interuniversity attraction poles of the
Belgian Federal Science Policy Office.
\section*{References}
Although their origin is unclear, ultra-high energy cosmic rays
(UHECRs) have often been posited to come from radio galaxies, the subclass of
active galactic nuclei (AGN) that are luminous at radio frequencies.
One fundamental reason for this
is that radio galaxies allow the \cite{hillas_origin_1984} criterion
to be met; they are large, produce fast, energetic outflows,
and have reasonably high magnetic fields.
They are also known to produce high-energy electrons \citep[e.g.][]{hargrave_observations_1974,croston_high-energy_2009},
thought to be mostly accelerated
by diffusive shock acceleration \citep[DSA;][]{axford_acceleration_1977,krymskii_regular_1977,blandford_particle_1978,bell_acceleration_1978} at termination shocks, which create the
hotspots seen in \cite{fanaroff_morphology_1974} (FR) type II sources.
Beyond this general physical reasoning, observational results from CR observatories have also hinted at an association between AGN and UHECRs. Initial results from the Pierre Auger Observatory (PAO) indicated a tantalising correlation between UHECR arrival directions and AGN source catalogues \citep{pierre_auger_collaboration_correlation_2007}. However, this correlation declined in significance as more data were obtained \citep{abreu_update_2010} and subsequent studies found only low-significance departures from isotropy \citep[e.g.][]{pierre_auger_collaboration_search_2012}.
Recently, results from the PAO
indicated a large-scale anisotropy in the arrival directions
of UHECRs \citep{pierre_auger_collaboration_observation_2017}.
Departures from anisotropy were also confirmed
on intermediate angular scales \citep[][hereafter A18]{the_pierre_auger_collaboration_indication_2018}, leading to a 4$\sigma$ association with starburst galaxies (SBGs) and a slightly weaker association with $\gamma$-ray AGN from the 2nd catalogue of hard {\it Fermi}-LAT sources \citep[2FHL;][]{ackermann_2fhl:_2016}. The radio galaxy
Fornax A does not appear in the 2FHL catalogue.
In this Letter, we show that including Fornax A in the analysis could explain the observed excess at southern Galactic latitudes, which could
increase the significance of the
$\gamma$-AGN association. We also outline the physical reasoning behind this and discuss parallels with Centaurus A and other sources. In addition, we show that the minimum power needed to accelerate protons up to 10~EeV can be supplied by jet outbursts in radio galaxies but not by starburst winds.
\vspace{-2em}
\section{UHECR Arrival Directions}
A18 analysed the PAO dataset
consisting of 5514 events above 20 EeV, finding that
a number of models can provide a better fit than isotropy. In particular,
a model involving SBGs is favoured over
isotropy at the 4$\sigma$ level, while their
alternative models involving
AGN attain lower significance (2.7$\sigma$-3.2$\sigma$).
The threshold energy above which the correlation
is evaluated is scanned by A18
to find the best value and the relevant statistical penalty is taken into account when evaluating
the above significance levels.
A18 find threshold energies of
$39$~EeV for SBGs and $60$~EeV for $\gamma$-AGN.
\begin{figure*}
\centering
\includegraphics[width=0.75\linewidth]{fig1.png}
\caption{The positions of the 16 brightest
radio galaxies in Galactic coordinates,
with the area of the points proportional
to 1.1 GHz radio flux and colour corresponding
to distance from Earth. The radio flux is calculated
from table 2 of vV12. The orange circle around Fornax A
illustrates a deflection angle of $22.5^\circ$, while the
green shaded regions mark the approximate PAO excesses above 60 EeV
(HS1 and HS2) from A18 as described in the text. The
blue dashed line marks the area of the sky inaccessible to
PAO. The projection is the same as that of fig.~7 of A18, with
image coordinates $(x,y)$ mapped to Galactic coordinates in
degrees $(l,b)$ by $x=\lambda \cos \theta$, $y=b$ where
$\sin \theta= b / 90^\circ$ and $\lambda = -l$ (for $l\leq180$), $\lambda = 360^\circ - l$ (for $l\geq180$).
}
\label{fig1}
\end{figure*}
The observed excess map above 60 EeV from A18 has two
fairly clear hotspots (see their fig.~7).
We do not have access to the A18 dataset, but
we can estimate the approximate positions of
the hotspot centroids in Galactic coordinates as
$(l=308^\circ,b=26^\circ)$ (HS1) and
$(l=275^\circ,b=-75^\circ)$ (HS2).
We show the two hotspots in Fig.~\ref{fig1} using
the same projection as used by A18, together with the 16 brightest
radio galaxies from the \citet[][hereafter vV12]{van_velzen_radio_2012}
radio catalogue. In the A18 SBG fit, HS1 can be
attributed to combined UHECR emission from M83 and NGC4945,
while HS2 can be explained by NGC 253. In their
$\gamma$AGN fit, Centaurus A
dominates the map with a small contribution from
M87 (Virgo A); HS1 is associated with Centaurus A,
while HS2 is unaccounted for.
\subsection{Fornax A}
Fornax A (NGC 1316) is one of the brightest radio galaxies
in the sky at 1.4 GHz \citep{schweizer_optical_1980},
with a flux density of 150 Jy \citep{brown_ubiquitous_2011}
and at a distance of $20.9$Mpc (vV12). It has giant lobes
$\sim 300$~kpc across, which are bright in radio
\citep{ekers_large-scale_1983,geldzahler_radio_1984},
as well as being one of the two objects whose lobes are
high-energy $\gamma$-ray sources
\citep{ackermann_fermi_2016} -- the other is Centaurus A \citep{abdo_fermi_2010,abdo_fermi_2010-1}.
However, Fornax A does not appear in the 2FHL catalogue
as it is an extended source with a
$0.15^\circ$ offset between the radio
and gamma-ray position \citep{ackermann_fermi_2016},
although it is present in 3FHL \citep{ajello_3fhl:_2017}.
The absence from 2FHL meant that it was
not included in the A18 analysis.
Fornax A lies at a southern Galactic latitude, with the position
of its radio core at $(l=240.16^\circ,b=-56.69^\circ)$
\citep{geldzahler_radio_1984}.
It can be seen to the lower right of Fig.~\ref{fig1}.
The angular separation between Fornax A and our estimated
HS2 position is $22.5^\circ$.
\subsection{Magnetic Deflection}
The magnetic deflection of UHECRs depends on the magnetic field
encountered -- its strength and topology -- and the
{\em magnetic rigidity} of the UHECR, given by
${\cal R} = E / Ze$, where $E$ is the
CR energy and $Ze$ is the charge on the nucleus.
The deflection magnitude can then be written as
$\theta_d = K / {\cal R}$, where $K$ is a constant depending on the magnetic field
between the source and Earth.
Using the \cite{jansson_new_2012}
model for the Galactic magnetic field, \cite{smida_ultra-high_2015}
find $K=242^\circ$EV (degree exa-Volts) for Fornax A.
For a nucleus of ${\cal R}=10$~EV,
this corresponds to a deflection of
$24.2^\circ$, very close to the
offset between the PAO excess and Fornax A.
The deflection angle in an extragalactic turbulent field
can also be estimated assuming some coherence
length for the magnetic field, typically 1 Mpc.
\cite{sigl_ultrahigh_2003,sigl_ultrahigh_2004} and
\cite{eichmann_ultra-high-energy_2018} find deflections of
12-24$^\circ$ are reasonable for a nucleus of ${\cal R}=10$~EV
travelling $20.9$~Mpc in a 1-2~nG magnetic field.
The fact that Centaurus A,
at a distance of only $3.7$~Mpc \citep{tully_two_2015}
is offset from HS1 by $7^\circ$ again implies
that large deflections are feasible for Fornax A at
the greater distance of $20.9$~Mpc.
Further detailed modelling work is possible,
using tools such as CRPropa \citep{alves_batista_crpropa_2016};
however, for the purposes of this
Letter we note that a deflection of $\approx20-25^\circ$
is highly plausible for a source at $20.9$Mpc,
as shown by \cite{sigl_ultrahigh_2004} and
\cite{smida_ultra-high_2015}.
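The rigidity scaling can be made quantitative with a one-line function; the snippet below uses the $K=242^\circ$EV value quoted above and shows that an intermediate-mass composition (e.g. nitrogen) at 60 EeV naturally produces deflections of the required $\sim20$--$25^\circ$, whereas protons would be deflected by only a few degrees.
\begin{verbatim}
def deflection_deg(E_EeV, Z, K_degEV=242.0):
    """theta_d = K / rigidity, with rigidity in EV = EeV / Z."""
    return K_degEV / (E_EeV / Z)

for Z, name in [(1, "p"), (2, "He"), (7, "N")]:
    print(f"{name:>2}: {deflection_deg(60.0, Z):5.1f} deg at 60 EeV")
\end{verbatim}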
\begin{table}
\centering
\begin{tabular}{l c c}
\hline
Source & 2FHL & 3FHL \\
& \multicolumn{2}{ c }{($10^{-12}$~erg~cm$^{-2}$~s$^{-1}$)} \\
\hline
Cen A core & $3.90\pm2.29$ & $7.40\pm1.90$\\
M87 & $5.12\pm3.47$ & $9.55\pm3.26$\\
Fornax A & -- & $2.59\pm1.27$ \\
\hline
\end{tabular}
\caption{$\gamma$-ray fluxes for the three sources discussed in section~\ref{sec:fluxes}.}
\label{tab:fluxes}
\end{table}
\subsection{Attenuation and Fluxes}
\label{sec:fluxes}
The $\gamma$-ray fluxes of Centaurus A, M87 and Fornax A from the 2FHL and 3FHL catalogues
are given in Table~\ref{tab:fluxes}, obtained from \citet{ackermann_2fhl:_2016} and \citet{ajello_3fhl:_2017}.
A18 use the 2FHL $\gamma$-ray luminosity as a proxy for UHECR luminosity, with a choice of three scenarios for UHECR attenuation during propagation. This attenuation is due to the
Greisen--Zatsepin--Kuzmin \citep[GZK;][]{greisen_end_1966,zatsepin_upper_1966} effect and
photodisintegration \citep{stecker_photodisintegration_1999}.
The starburst galaxies in their sample are nearby and the choice of attenuation scenario makes little difference. Strong attenuation (scenario A) is favoured in their AGN analysis since (i) it accounts for the negligible UHECR signal from the direction of M87, which is at five times the distance of Centaurus A, and (ii) Fornax A, at a distance of $20.9$~Mpc, is not included and might otherwise account for the HS2 hotspot if attenuation were weaker.
Given that M87 and Fornax A are at similar distances and that there is an Auger hotspot close to Fornax A but not M87, a successful model for the observed PAO anisotropy requires the attenuation to be less severe than scenario A of A18 and that M87 is intrinsically less luminous in UHECRs than Fornax A as we argue in section~\ref{sec:lobes}.
Less severe attenuation would be consistent with results from the CRPropa code as given in fig.~1 of \cite{alves_batista_effects_2015}, as well
as canonical values of the GZK length of 50-100~Mpc \citep[e.g.][]{dermer_ultra-high-energy_2009,de_domenico_influence_2013}.
Sensitivity to composition and source energy spectrum makes the adoption of a single
attenuation length difficult; for example, protons at $10$~EeV and $100$~EeV have
approximate GZK lengths of $1000$~Mpc and $100$~Mpc, respectively
\citep{dermer_ultra-high-energy_2009}.
Approximate attenuation lengths for N$^{14}$ and Fe$^{56}$
nuclei at $100$~EeV are $6$~Mpc and $300$~Mpc, respectively
\citep{alves_batista_effects_2015}.
The correlation with AGN in A18 would also be improved by including the contribution from the lobes of Centaurus A, which are estimated to be at least as bright as the core in $\gamma$-rays \citep{abdo_fermi_2010}.
Furthermore, although there may be a direct relation between the observed $\gamma$-rays and UHECRs
\citep{sahu_hadronic-origin_2012,yang_deep_2012,joshi_very_2018}, $\gamma$-ray luminosity may not be the best proxy for UHECR luminosity.
\section{Fading Radio lobes as UHECR reservoirs}
\label{sec:lobes}
As shown by \cite{waxman_cosmological_1995,waxman_high-energy_2001}
and \cite{blandford_acceleration_2000},
there is a minimum power requirement for particle
acceleration to high energy at shocks.
This can be derived just from considering the magnetic
energy density, $U_{\mathrm{mag}} = B^2/(2 \mu_0)$,
and the Hillas energy $E_H=uBLZe$, where $B$ is the magnetic field
strength, $u$ is the shock velocity and $L$ is a characteristic size.
The maximum magnetic power delivered through a shock is then
roughly $u L^2 U_{\mathrm{mag}}$,
meaning we can write an equation for the minimum power
needed to accelerate a nucleus to a given
rigidity, ${\cal R}$:
\begin{equation}
P_{\mathrm{min}} = \frac{{\cal R}^2}{2 \mu_0 u},
\end{equation}
which is equivalent to
\begin{equation}
P_{\mathrm{min}} \sim
10^{43}~\mathrm{erg~s}^{-1}
~\left( \frac{u}{0.1c} \right)^{-1}
~\left( \frac{{\cal R}}{10 \,\rm EV} \right)^2.
\label{eq:power}
\end{equation}
Here we conservatively assume maximum efficiency and adopt $u=0.1c$
due to the difficulties with accelerating UHECRs at highly relativistic
shocks \citep{bell_cosmic-ray_2018}.
This equation is quite general and
places a fundamental constraint on UHECR sources.
We note that starburst
winds struggle to meet this constraint as they have powers on the
order of $10^{42}$~erg~s$^{-1}$ and low
shock velocities
\citep[$\sim1000~$km~s$^{-1}$;][]{heckman_nature_1990,anchordoqui_unmasking_2017,romero_particle_2018}.
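The minimum-power condition (equation~\ref{eq:power}) is easy to check numerically: with ${\cal R}$ expressed in volts, ${\cal R}^2/(\mu_0 u)$ already carries units of watts.
\begin{verbatim}
import numpy as np

mu0 = 4.0e-7 * np.pi    # vacuum permeability [SI]
c   = 2.998e8           # speed of light [m/s]

def P_min_erg_s(R_EV, u_over_c=0.1):
    """Minimum power to reach rigidity R [EV] at a shock of speed u."""
    P_W = (R_EV * 1e18)**2 / (2.0 * mu0 * u_over_c * c)
    return P_W * 1e7    # watts -> erg/s

for R in (1.0, 10.0):
    print(f"R = {R:4.1f} EV : P_min = {P_min_erg_s(R):.1e} erg/s")
\end{verbatim}
For ${\cal R}=10$~EV and $u=0.1c$ this reproduces the $\sim10^{43}$~erg~s$^{-1}$ of equation~\ref{eq:power}.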
To examine which nearby radio galaxies meet the $P_\mathrm{min}$ requirement,
we estimate a `cavity power',
$\bar{P}_\mathrm{cav}$, using the mean empirical
relationship of \cite{cavagnolo_relationship_2010}.
This is quoted in their section~5 and given by
\begin{equation}
\bar{P}_{\mathrm{cav}} \approx 5.8 \times 10^{43}
\left( \frac{{P}_\mathrm{radio}}{10^{40}~\mathrm{erg~s}^{-1}} \right)^{0.7}~\mathrm{erg~s}^{-1}~,
\end{equation}
where we take the 1.1 GHz luminosity from vV12 as our
${P}_\mathrm{radio}$. This estimate should be thought
of as a rough proxy for average kinetic power,
since we make use of the current radio
luminosity but \cite{cavagnolo_relationship_2010} relate
this to kinetic power using work done excavating a cavity.
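For orientation, the snippet below evaluates this scaling for two illustrative radio luminosities (order-of-magnitude values, not the vV12 catalogue entries), bracketing the range plotted in Fig.~\ref{fig2}.
\begin{verbatim}
def P_cav(P_radio):
    """Cavagnolo et al. (2010) mean scaling [erg/s] for a
    1.1 GHz radio luminosity P_radio [erg/s]."""
    return 5.8e43 * (P_radio / 1e40)**0.7

for name, L in [("low-power", 1e39), ("high-power", 1e41)]:
    print(f"{name:>10}: P_cav ~ {P_cav(L):.1e} erg/s")
\end{verbatim}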
Fig.~\ref{fig2} shows $\bar{P}_\mathrm{cav}$
plotted against distance,
with the power requirement from equation~\ref{eq:power} marked
for two rigidities. Cen A, Fornax A and M87 are three of only a
handful of sources within a characteristic GZK radius
of $50-100$~Mpc capable
of accelerating UHECRs to ${\cal R}=10$~EV and above.
However, the actual current jet power in these sources
is likely lower, with approximate
estimates in the literature of
$10^{42}$~erg~s$^{-1}$
\citep[Fornax A;][]{russell_radiative_2013},
$6-8\times10^{42}$~erg~s$^{-1}$
\citep[M87;][]{rafferty_feedback-regulated_2006,russell_radiative_2013} and
$10^{43}$~erg~s$^{-1}$ \citep[Cen A;][]{russell_radiative_2013,wykes_mass_2013}.
These estimates are uncertain and rely on the enthalpy
($4PV$) calculated from thermal pressure
acting as a reliable estimate of energy content,
when in actual fact the CR and magnetic energy densities
may dominate \citep[e.g.][]{mathews_creation_2008}.
Based on the UHECRs arriving at Earth with energies
above $55$~EeV and directions
clustered around Centaurus A,
\cite{joshi_very_2018} estimate an UHECR luminosity of $\sim10^{39}$~erg~s$^{-1}$. The jet powers
in Centaurus A, Fornax A and M87 exceed this value by orders of magnitude.
However, it seems that the current jet powers in
these sources struggle, similarly to starburst winds, to meet
the power requirements (equation~\ref{eq:power})
for particle acceleration to high
energy.
Despite this, the {\em average} powers in radio galaxies
can be greater than $P_{\mathrm{min}}$, and the peak powers still greater,
suggesting that past jet activity is important.
\begin{figure}
\centering
\includegraphics[width=1.1\linewidth]{fig2.png}
\caption{The logarithm of estimated
cavity power for local radio galaxies
plotted against distance, calculated as described in section~\ref{sec:lobes}.
The filled circles represent AGN observed to have jets and
the coloured circles are the subset of these that are shown in
Fig.~\ref{fig1}, also with matching colours to Fig.~\ref{fig1}.
The two horizontal lines show $P_{\mathrm{min}}$ for two different
rigidities.}
\label{fig2}
\end{figure}
\subsection{Enhanced activity in the past?}
Based on the current energetics and distances
alone, we might expect M87 to contribute a similar
UHECR flux to Fornax A, but there is not a clear hotspot
close to M87 in the observed UHECR data.
However, the jet powers in local radio galaxies
could feasibly have been different in the past.
Acceleration during a more
luminous phase aids with power requirements and allows DSA to
operate at fast shocks, which is important since
the shocks associated with the currently
active Centaurus A jet struggle to accelerate the highest energy CRs
\citep{croston_high-energy_2009}.
There is evidence in Centaurus A
and Fornax A for enhanced activity within the last
$\sim100$~Myr. Both show giant lobes with
linear sizes greater than $250$~kpc,
whose energy contents are large compared to what the current
jet power could have supplied; $\sim5\times10^{58}$~erg in Fornax A
\citep{lanz_constraining_2010} and as high as
$10^{59-60}$~erg in Centaurus A \citep{wykes_mass_2013,eilek_dynamic_2014}.
The energy content of the lobes in M87 is lower,
approximately $8\times10^{57}$~erg
\citep{mathews_creation_2008} and the
lobes only extend $\sim80$~kpc across
\citep{owen_m87_2000}. The M87 lobes are generally
consistent with being inflated
by the current jet \citep{owen_m87_2000},
whereas Fornax A and Centaurus A hint at a more violent past.
In Centaurus A, \cite{wykes_mass_2013} estimate the
buoyancy time at $\tau_{\mathrm{buoy}} = 560$~Myr, which
places a lower limit
on the jet power to inflate the giant lobes of
$5\times10^{43}$~erg~s$^{-1}$.
This jet power could feasibly have been much higher.
Fornax A shows direct evidence of declining AGN activity
\citep{iyomoto_declined_1998,lanz_constraining_2010},
and {\em both} sources are thought to have undergone
mergers \citep{mackie_evolution_1998,horellou_atomic_2001},
with Fornax A showing evidence for merger activity
within 3~Gyr \citep{goudfrooij_kinematics_2001,goudfrooij_star_2001},
and potentially as recently as $0.1$~Gyr \citep{mackie_evolution_1998}.
Mergers can trigger AGN activity
as they provide fuel that can subsequently accrete onto
a central black hole
\citep[e.g.][]{blundell_inevitable_1999,hopkins_cosmological_2008,silverman_impact_2011}.
Both Centaurus A and Fornax A seem promising candidates for a
scenario in which a merger-triggered
AGN outburst produced more powerful jets in the past,
accelerating UHECRs that are still escaping from the giant
lobe reservoirs. The Larmor radius of an UHECR proton is
\begin{equation}
r_g \approx
\frac{E}{10~\mathrm{EeV}}
\left( \frac{B}{10\mu\mathrm{G}} \right)^{-1}~\mathrm{kpc},
\end{equation}
indicating that long-term containment in
the much-larger $100$ kpc-scale lobes is likely.
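For reference, the scaling above can be evaluated directly; the lobe field strength is an assumed fiducial value.
\begin{verbatim}
def r_g_kpc(E_EeV, Z=1, B_muG=10.0):
    """Larmor radius: ~1 kpc for a 10 EeV proton in 10 microgauss."""
    return (E_EeV / 10.0) / Z / (B_muG / 10.0)

print(f"{r_g_kpc(10.0):.1f} kpc")          # 10 EeV proton
print(f"{r_g_kpc(100.0, Z=26):.2f} kpc")   # 100 EeV iron nucleus
\end{verbatim}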
The magnetic field lines advected with the jet
material that ultimately produces the
lobes do not connect to the ambient medium.
UHECRs are confined to local magnetic field lines,
so UHECR escape requires the crossing of field lines, which is a slow process
\citep[e.g.][]{ozturk_trajectories_2012,zweibel_microphysics_2013}.
It is therefore not safe to assume
that the UHECRs are accelerated in the present source state;
it is the history of the source over the shorter of the GZK time and the UHECR escape time that matters. This also means that the energy content of the lobes could provide a good proxy for the UHECR luminosity, since it is an integrated measure of past activity.
The sound-crossing time -- the timescale
for adiabatic losses -- in Centaurus A is on the order of
$\tau_{\mathrm{buoy}}$ \citep{wykes_mass_2013}.
This is longer than the GZK time of
$r_{\mathrm{GZK}}/c\approx300$~Myr,
which implies that adiabatic losses are unimportant.
GZK and hadronic $\gamma$-ray losses might still matter
for $\gamma$-ray emission.
In fact, UHECRs in the Centaurus A lobes are thought to produce
some of the observed $\gamma$-ray flux
\citep{sahu_hadronic-origin_2012,joshi_very_2018}.
Overall, it seems reasonable that the UHECRs are
escaping on a timescale roughly comparable to the time since
the outburst ended, but shorter than a GZK time.
It has been suggested that the UHECRs can also be accelerated by an
in-situ, ongoing process in the lobes, such as second-order Fermi acceleration
\citep{fraschetti_ultra-high-energy_2008,hardcastle_high-energy_2009}.
However, \cite{hardcastle_which_2010} notes that
this would require relativistic turbulence to reach the
required energies.
\section{Conclusions}
We have shown that the observed excesses in the UHECR arrival
directions measured with the
PAO are more naturally explained by association
with radio galaxies than with SBGs.
Although SBGs are favoured in the A18 analysis,
we argue that the increased significance they report compared with
their $\gamma$AGN sample is largely driven by one source
near the south Galactic pole (NGC 253) and an increased flux estimate in the
vicinity of Centaurus A due to the nearby SBGs M83 and NGC4945.
If Centaurus A were more luminous, as indeed it
is in the 3FHL catalogue, or if the contribution of the lobes is accounted for,
and Fornax A were included, then
this could increase the significance of
the $\gamma$AGN result, provided that we allow for
reasonable magnetic deflection of around $20^\circ$ and
decreased attenuation compared to A18 scenario A.
We suggest that radio galaxies are likely
candidates for UHECR production.
Building on previous work
\citep[e.g.][]{norman_origin_1995,romero_centaurus_1996,rachen_ultra-high_2008,fraschetti_ultra-high-energy_2008,eichmann_ultra-high-energy_2018},
we have introduced a physical scenario to account for
UHECR production in fading giant radio lobes from a recent
jet ``outburst''.
This scenario could be further developed to
apply to SBGs with past AGN activity; radio galaxies and SBGs
need not be unrelated populations. Further work from PAO
coupled with more detailed modelling work should help
to discriminate further.
\section*{Acknowledgements}
We are grateful to the anonymous referee for a constructive
and helpful report. We would like to
thank Jonathan Biteau and Alan Watson
for helpful correspondence
regarding magnetic deflection and the PAO analysis.
We would also like to thank
Garrett Cotter, Robert Laing and Justin Bray for
useful discussions.
This work is supported by the Science and Technology
Facilities Council under grants ST/K00106X/1 and ST/N000919/1. A.T.A. thanks the Czech Science Foundation (ref. 14-37086G) - ``Albert
Einstein Center for Gravitation and Astrophysics'' in Prague, and the
EU COST Action (ref. CA16104)
``Gravitational waves, black holes and fundamental physics''.
We acknowledge the use of matplotlib \citep{matplotlib}
and astropy \citep{the_astropy_collaboration_astropy_2018}.
\vspace{-1em}
\bibliographystyle{mnras}
\section{Introduction}
Observations of anisotropies in the Cosmic Microwave Background (CMB) have been fundamental to the development of our understanding of the formation and evolution of the Universe. Starting with the COBE satellite (Boggess et al., 1992), which provided the first detection of the intrinsic CMB temperature fluctuations, there has been a series of ground- and space-based experiments that have measured these fluctuations to ever increasing precision and on an ever-widening range of angular scales (e.g. Jaffe et al., 2001; Bennett et al., 2003). The most recent CMB experiment was the {\em Planck} mission, which has produced a map of the CMB temperature fluctuations over the entire sky down to angular scales of $\sim$ 7 arcminutes (Fig. 1, Planck Collaboration, 2014a).
\begin{figure}
\includegraphics[width=15cm]{dx9_smica_cmb_bw.pdf}
\caption{The all sky CMB anisotropy map from {\em Planck} (Planck Collaboration, 2014a, ESA).}
\label{fig:cmb}
\end{figure}
CMB temperature fluctuations are the result of acoustic oscillations in the plasma of the early universe that were frozen into the background radiation when matter and radiation decoupled at the time of recombination. The statistics of the fluctuations are Gaussian with a typical relative amplitude of $\Delta T/T \sim 10^{-5}$ superimposed on the $T = 2.73$ K black body radiation.
The strength of fluctuations at different angular scales is given by their power spectrum (Fig. 2; Planck Collaboration, 2014a), which shows a clear peak on scales of $\sim$ 1 degree, corresponding to the sound horizon scale at the time of recombination. The shape of the CMB power spectrum is a key tool in constraining cosmological models.
\begin{figure}
\includegraphics[width=15cm]{cmbtot_loglin_bw.pdf}
\caption{The CMB power spectrum derived from the {\em Planck} CMB map (Planck Collaboration, 2014a, ESA).}
\end{figure}
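To reproduce a spectrum of this kind from a released map, the standard route is the \verb!healpy! Python package (see Section 2 for the underlying HEALPix format); the minimal sketch below assumes a CMB map has been downloaded from the {\em Planck} Legacy Archive, with the filename a placeholder rather than a specific archive product.
\begin{verbatim}
import numpy as np
import healpy as hp

# Placeholder filename for a downloaded Planck CMB map (HEALPix FITS)
cmb_map = hp.read_map("planck_cmb_map.fits")

cl = hp.anafast(cmb_map, lmax=2000)      # angular power spectrum C_l
ell = np.arange(len(cl))
dl = ell * (ell + 1) * cl / (2 * np.pi)  # D_l = l(l+1)C_l/2pi convention
\end{verbatim}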
The traditional way that CMB fluctuations are visualised is to represent the CMB temperature range with a colour table - e.g. red for the hottest scaling to blue for the coldest - and then to display these colours on a sheet of paper using one of the standard spherical projections, most often a Mollweide projection (see Fig. 1). Some attempts have been made to produce interactive computer graphics visualisations of the CMB as a sphere, which can then be rotated at random by a user, but all of the representations to date have relied upon vision as the key tool for representing the varying temperature of the CMB. In this paper we describe how an alternative approach to representing the CMB temperature fluctuations can be achieved by using a 3D printer to render the anisotropies as bumps and dips on the surface of a sphere. Presenting the CMB in a truly 3D form, which can be held in the hand and felt rather than viewed, has many potential benefits for teaching and outreach work, and is especially relevant for those with a visual disability. While 3D printers have previously been used to visualise mathematical functions (Aboufadel et al., 2013; Knill \& Slavkovsky, 2013) and the results of models of complex systems (Reiss et al., 2013), their use to represent astronomical observational data is new, and is potentially a novel way for researchers to represent and interrogate their data as well as to present it to students. 3D printing is also of use for outreach in that it allows those with impaired vision to better understand and appreciate the results of observational cosmology. It also permits everyone to use a different sense, that of touch, to appreciate the properties of the CMB.
This paper describes the use of 3D printers to represent cosmological observations in this way, but a similar approach can be applied to other astrophysical data sets, including, but not limited to, topographic maps of planets, observations and models of the surface and interior structures of stars, the distribution of stars in galaxies, and the distribution of galaxies and intergalactic gas within the large scale structure of the universe. This work can be regarded as a proof-of-concept demonstration for all of these other possibilities. Since it was the result of a final year undergraduate project at Imperial College London (S. Sato and A. Portela Fonseca were the students), this work also serves as a demonstration that this kind of project can play a strong pedagogical role, introducing students to real astronomical or cosmological datasets and to the techniques of 3D printing.
The rest of this paper is structured as follows. In Section 2 we describe the Planck dataset that was used as the starting point for our 3D printing. In Section 3 we discuss the capabilities and limitations of different 3D printing technologies for this work, while in Section 4 we describe how the Planck data were processed to make it appropriate for 3D printing. In Section 5 we describe and present the final products from this work. We draw our conclusions in Section 6.
\section{The {\em Planck} Data}
The {\em Planck} mission observed the entire sky at nine different frequencies, from 30GHz to 857GHz (Planck Collaboration, 2014a). This allowed various foregrounds from our own and other galaxies - including dust, synchrotron radiation and CO emission lines - that would otherwise contaminate the 2.73 K CMB emission to be determined and removed (Planck Collaboration, 2014b). The CMB dipole anisotropy (Smoot et al., 1977), which results from our motion relative to the CMB frame, was also removed, allowing the primary CMB anisotropies to be determined. These anisotropies are of order $\Delta T/T \sim 10^{-5}$ in size.
The all-sky {\em Planck} CMB maps, as well as the individual frequency maps, component separation maps and more, are available online at \verb$http://pla.esac.esa.int/pla/$, where the all-sky maps are stored in HEALPix format\footnote{http://healpix.sourceforge.net/} (Gorski et al., 2005). This is a hierarchical equal area pixellation scheme for a sphere and is the main data format used for astrophysical all-sky mapping projects like {\em Planck} and {\em WMAP}. HEALPix divides the sphere into equal area pixels, starting with twelve equal area curvilinear quadrilaterals and then, for higher resolutions, dividing each quadrilateral into sets of four smaller quadrilaterals (see Fig. 3 of Gorski et al. (2005)). A given HEALPix resolution is defined by the number of pixels each of the original quadrilaterals has on a side, going up by a factor of 2 at each resolution increase. The native HEALPix scale for the {\em Planck} maps is Nside=2048, corresponding to an angular resolution of each HEALPix pixel of $\sim$ 1.7 arcminutes, and a total number of pixels over the whole sky of 50,331,648.
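These pixel counts follow from the HEALPix relation $N_{\rm pix} = 12 N_{\rm side}^2$, and can be verified with the \verb!healpy! Python bindings to the HEALPix library:
\begin{verbatim}
import healpy as hp

print(hp.nside2npix(2048))                # 50331648 pixels at Nside=2048
print(hp.nside2npix(128))                 # 196608 pixels at Nside=128
print(hp.nside2resol(2048, arcmin=True))  # ~1.72 arcmin per pixel
\end{verbatim}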
\section{3D Printing}
3D printing is the term usually applied to a wide range of additive manufacturing techniques whereby a solid object is produced through the computer-controlled addition of material that is typically built up layer by layer. A wide range of materials can be used for 3D printing, ranging from sugar to titanium.
Two materials, and two 3D printing technologies, were used for this project. The first used polylactic acid (PLA), a biodegradable thermosoftening plastic. This is printed layer by layer by the {\em Ultimaker} 3D printer used here. PLA is a cheap printing material and as such was ideal for prototyping and testing the computer aided design (CAD) files developed for this project. The second material, used for the final coloured objects, was plaster, printed by a ZPrinter 650 machine. This printer works in a somewhat different way to the PLA printing machine. While the {\em Ultimaker} only lays down material that will be part of the final object, the ZPrinter operates by printing a binding fluid, which may include colouration, onto a preexisting layer of powder. The entire printing bed of the machine is covered in powder, one layer at a time, but only the material bound together by the printing head is retained in the final product. This means that the object being printed, no matter its shape, is supported throughout the printing process by the powder layers. In contrast, material printed by the {\em Ultimaker} must be self-supporting. Since we aim to print a representation of the all-sky CMB as bumps on a sphere, a support structure must be added to the {\em Ultimaker} print to ensure that the bottom part of the sphere, where there is a large overhang, is not in danger of slumping.
Each printer requires an input file in its own specific format. For the {\em Ultimaker} a stereolithography (STL) file is the starting point. This is then processed through a slicing programme called \verb|Cura| which produces the GCode file used by the printer. The ZPrinter, in contrast, uses a Virtual Reality Modelling Language (VRML) file as input. This provides an alternative way of describing the object to be printed and adds the possibility of colouring the final product. In principle any colour scale can be produced, but for this project a 23-colour scale based on that used for all-sky CMB images in publications by the {\em Planck} consortium was used.
\section{From {\em Planck} Data to CAD Files}
A number of processing steps are needed to go from the initial nside=2048 HEALPix maps to the CAD files needed by each of the 3D printer types used. The first step involves using the HEALPix data to produce a representation of the CMB anisotropies as bumps on the surface of a sphere with the temperature anisotropies scaled by an appropriate amount so that they are large enough to be felt and seen. This spherical representation then has to be processed to produce appropriate input files for the relevant printers.
\subsection{HEALPix to a Sphere}
The original {\em Planck} maps are stored as HEALPix Nside = 2048 objects, corresponding to a resolution of $\sim$1.7 arcminutes and 50,331,648 pixels. The 3D printer file produced from this would be unmanageably large, and would include detail at a resolution finer than can easily be perceived even for a large diameter printed sphere - for a 10 cm diameter printed sphere a 1.7 arcminute pixel would span only $\sim$25 $\mu$m, comparable to the width of a fine human hair. Furthermore, most of the power in the CMB anisotropies is on scales of $\sim$ 1 degree (see Fig. 2). The CMB HEALPix maps were therefore degraded by a factor of sixteen to HEALPix Nside = 128 maps, corresponding to a pixel size of $\sim$ 0.46 degrees, which is thus well matched to the scale at the peak of the CMB anisotropy power spectrum. The total number of HEALPix pixels at this resolution is a much more manageable 196,608.
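In \verb!healpy! this degradation is a single operation; the sketch below assumes the native-resolution map has been read from a downloaded file (the filename is again a placeholder).
\begin{verbatim}
import healpy as hp

cmb_2048 = hp.read_map("planck_cmb_map.fits")  # placeholder filename
# Degrading Nside 2048 -> 128 averages the 256 child pixels that
# fall under each output pixel
cmb_128 = hp.ud_grade(cmb_2048, nside_out=128)
\end{verbatim}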
The input CAD files for the 3D printers used in this project consist of a mesh of triangles that make up the surface of the object being printed. Such a mesh can be generated from a list of Cartesian coordinates that lie on the surface of the object by software such as the \verb!MeshLab! system\footnote{http://meshlab.sourceforge.net/}. The next step in producing our 3D representation of the CMB is thus to use the Nside = 128 HEALPix map to produce a list of Cartesian coordinates lying on the surface of the object to be printed.
To do this, the $(\theta, \phi)$ angular coordinates of each HEALPix pixel were calculated. The radial coordinate $r$ was then set on the basis of the CMB temperature anisotropy, scaled by a variable factor so that the structures on the surface representing the anisotropies would be large enough to be discernible by touch or vision. The initial test printings were done using a 3 cm radius sphere, so the radius $r$ was defined as follows:
\begin{equation}
r(\theta, \phi) = 3 + \left( n \times 1000 \times \Delta T(\theta, \phi) \right)
\end{equation}
where the eventual radial coordinate is in cm, $\Delta T(\theta, \phi)$ is the CMB anisotropy at the angular position ($\theta$, $\phi$) given by the HEALPix map, and $n$ is a scale factor that was tuned to produce the best effect. Values of 1, 2 and 3 were tested for $n$, with 2 being found to produce the most effective results. The list of $(r, \theta, \phi)$ coordinates generated in this way for the surface of the sphere were then converted to $(x,y,z)$ Cartesian coordinates to provide the input to the next stages of the process which convert this list of points into the meshes appropriate for each type of 3D printer.
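A minimal sketch of this step, assuming the degraded map \verb!cmb_128! from above (in K, RING-ordered as returned by \verb!healpy!) and the adopted scale factor $n=2$:
\begin{verbatim}
import numpy as np
import healpy as hp

nside, n = 128, 2
theta, phi = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))

# Radial scaling of the text: 3 cm base sphere plus amplified anisotropy
r = 3.0 + n * 1000.0 * cmb_128

# Convert spherical polar to Cartesian coordinates (cm)
x = r * np.sin(theta) * np.cos(phi)
y = r * np.sin(theta) * np.sin(phi)
z = r * np.cos(theta)
\end{verbatim}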
\subsection{Making the Mesh}
The mesh, linking the points in the $(x,y,z)$ coordinate list and which is later used to produce the CAD files that will be printed, was generated using the \verb!MeshLab!\footnote{http://meshlab.sourceforge.net/} package. This takes the list of points, finds neighbouring points, and then generates a mesh made up of triangles that connect these points using a Poisson surface reconstruction algorithm (Kazhdan et al., 2006). The end result is the surface of the object that will be printed.
To produce the mesh from the input $(x,y,z)$ file,~\verb!MeshLab! uses a tree searching algorithm in which the tree depth specifies the level of detail in the final product. A variety of depths was tried, with fourteen finally selected for the $n=2$ model that was eventually used for printing. The object descriptions produced by \verb!MeshLab! are not guaranteed to be free of holes or degeneracies. Such flaws were identified and corrected using the \verb!NetFabb!\footnote{http://www.netfabb.com/basic.php} package. The final resulting object description was output to both STL and VRML files, which are appropriate for the {\em Ultimaker} and ZPrinter devices respectively.
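While the Poisson reconstruction itself was run inside \verb!MeshLab!, the point cloud it consumes can be written out as a plain ASCII PLY file; the sketch below uses crude radial directions as the per-point normals required by the algorithm, an assumption that is reasonable only because the surface is nearly spherical.
\begin{verbatim}
import numpy as np

pts = np.column_stack([x, y, z])                        # from the step above
nrm = pts / np.linalg.norm(pts, axis=1, keepdims=True)  # radial normals

with open("cmb_points.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\n")
    f.write(f"element vertex {len(pts)}\n")
    f.write("property float x\nproperty float y\nproperty float z\n")
    f.write("property float nx\nproperty float ny\nproperty float nz\n")
    f.write("end_header\n")
    for p, v in zip(pts, nrm):
        f.write(f"{p[0]:.5f} {p[1]:.5f} {p[2]:.5f} "
                f"{v[0]:.5f} {v[1]:.5f} {v[2]:.5f}\n")
\end{verbatim}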
\subsection{STL File Generation for the {\em Ultimaker}}
The {\em Ultimaker} printer uses the plastic PLA as its printing material, and prints by extruding this plastic onto the printing surface. An overhanging structure, such as the base of a sphere, will not be able to support itself when printed in this manner. The software \verb!Cura!\footnote{https://ultimaker.com/en/products/cura-software} is used to further process the input STL files, slicing the object into the thin layers that the machine will print. \verb!Cura! can also generate support struts to prevent drooping, and was additionally used to give the object a 20\% filled internal grid, making the final plastic printed models less breakable. The eventual result of this process is a GCode file which can be input to the {\em Ultimaker}. Figure \ref{fig:ultimaker} shows an {\em Ultimaker} printer in the process of printing our 3D representation of the CMB, complete with \verb!Cura!-generated support structures and a base.
\begin{figure}
\includegraphics[width=15cm]{printing_bw2.pdf}
\caption{3D CMB model being printed by an {\em Ultimaker}. The brown material is the printed plastic, with the print head working above it. The spherical CMB model is supported by a number of vertical supports which can be removed after printing.}
\label{fig:ultimaker}
\end{figure}
\subsection{VRML File Generation for the ZPrinter}
The printing method for the ZPrinter, in which layers of plaster powder are put down and the print head sprays a fluid that binds together the powder that will form the final object, has several advantages for the current work. Firstly, the powder layers provide support during the printing process, so no additional support struts are required. Secondly, ink-jet printer ink can be mixed with the fixing fluid so that the final object can be coloured. There are several disadvantages though. Firstly, the feedstock for the ZPrinter is more expensive, so the final model is over 10 times the price of the plastic {\em Ultimaker} version; secondly, the powder is quite dense, so the final model needs to be hollow and to have a hole in the surface to allow the interior powder to escape. For these reasons the ZPrinter was only used for our final printed model, with a somewhat larger size than the original 3 cm radius plastic models.
The \verb!MeshLab!-produced mesh was rescaled by a factor of 5/3 using \verb!NetFabb!, so that the final model would have a 5 cm radius, and then hollowed out to leave a shell at least 5 mm thick. The \verb!Blender!\footnote{https://www.blender.org/} tool was then used to produce a cylindrical hole of diameter 10 mm at the south galactic pole of the model, through which the unbound interior powder could escape. Before the final mesh was exported from \verb!NetFabb! as VRML, the file format used by the ZPrinter, a colour value was defined for each vertex using a 23-colour scale modelled on the \verb!Commander! colour scheme used by the {\em Planck} consortium, where blue represents the coldest points and red the hottest. Figure \ref{fig:vrml} shows the full colour CMB model printed by the ZPrinter.
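The per-vertex colouring can be sketched as a simple binning of the anisotropy values into 23 levels; the matplotlib diverging colormap below stands in for the {\em Planck} \verb!Commander!-style palette, which we do not reproduce exactly.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

nbins = 23
edges = np.linspace(cmb_128.min(), cmb_128.max(), nbins + 1)
idx = np.clip(np.digitize(cmb_128, edges) - 1, 0, nbins - 1)

# 23 discrete RGB triples from a blue-to-red diverging colormap
palette = plt.get_cmap("RdBu_r")(np.linspace(0.0, 1.0, nbins))[:, :3]
vertex_rgb = palette[idx]  # one RGB colour per mesh vertex
\end{verbatim}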
\begin{figure}
\includegraphics[width=15cm]{monochrome_bw2.pdf}
\caption{The final {\em Ultimaker} printed CMB model.}
\label{fig:ultimaker_cmb}
\end{figure}
\begin{figure}
\includegraphics[width=15cm]{colour_print2.pdf}
\caption{The final ZPrinter printed CMB model with colour.}
\label{fig:vrml}
\end{figure}
\section{Printed Universes}
Figures \ref{fig:ultimaker_cmb} and \ref{fig:vrml} show the final printed versions of the CMB models from the {\em Ultimaker} and ZPrinter respectively. As can be seen, the structure of the CMB fluctuations on scales matching the first acoustic peak is rendered as visible bumps and dips in the surface of the resulting sphere, while the colours of the ZPrinter version are similar to those of the {\em Planck} CMB map in Figure \ref{fig:cmb}. What is less easy to discern from an image is that these structures also produce a highly tactile surface, the features of which are readily appreciated by touch alone. The highest peaks and, even more easily, the deepest holes are easily found. The famous `CMB cold spot' (Cruz et al., 2007), an unusually low temperature region in the CMB, first detected by {\em WMAP} and since confirmed by {\em Planck}, can be felt as a small but isolated depression.
These models are not perfect, but they do represent a successful proof of concept for 3D printed representations of astronomical or, in this case, cosmological systems. Since we have restricted ourselves to an Nside=128 HEALPix resolution, much of the improved resolution from the {\em Planck} mission is lost. Nevertheless, we retain the bulk of the CMB anisotropy power spectrum around the one degree scale. Detail is also lost through a number of approximations made during the construction of the mesh, necessitated by computing power and memory constraints on the machines used. Alternative algorithms, or a deeper tree search in the selection of neighbouring points, for example, might lead to improvements in the future.
\subsection{Availability of CAD Files}
The CAD files used to produce both the plastic (STL) and plaster (VRML) versions of the CMB are made available with this paper. They can be downloaded from \verb!http://dx.doi.org/10.5281/zenodo.60215! and are made available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
\section{Conclusions}
We have produced a solid 3D representation of the CMB temperature anisotropies using the {\em Planck} mission data and 3D printers. The CAD files that describe the printed objects were produced using a variety of tools, including \verb!MeshLab!, \verb!Cura!, \verb!Blender! and \verb!NetFabb!, which produced files in STL format, for printing on an {\em Ultimaker} machine, and in VRML format, for printing on a ZPrinter machine. The final 3D printed objects capture the essence of the CMB anisotropies on angular scales of about 1 degree, matching the peak in the CMB anisotropy power spectrum. They can provide a non-visual appreciation of the CMB for both the sighted and unsighted. The CAD files used to produce these models are freely available for non-commercial use from \verb!http://dx.doi.org/10.5281/zenodo.60215!.
\ack
This paper is based on observations obtained with {\em Planck} (http://www.esa.int/{\em Planck}), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada. This work used \verb!MeshLab!, a tool developed with the support of the 3D-CoForm project. The authors would like to thank the Imperial College Advanced Hackspace, the Imperial College Department of Design Engineering, Sam McKenney, Larissa Kunstel-Tabet, Meilin Sancho, Andrew Jaffe and Tim Evans for their help with this work. It was funded in part by STFC Grant Reference ST/K001051/1.
\References
\item Aboufadel, E, Krawczyk, S V, Sherman-Bennett, M 2013, arXiv:1308.3420
\item Bennett, C L, et al. 2003 {\it ApJS} {\bf 148} 1
\item Boggess, N W, et al., 1992 {\it ApJ} {\bf 397} 420
\item Cruz, M, et al., 2007 {\it ApJ} {\bf 655} 11
\item Gorski, K M, et al., 2005 {\it ApJ} {\bf 622} 759
\item Jaffe, A H, et al., 2001 {\it PhRvL} {\bf 86}, 3475
\item Kazhdan, M, Bolitho, M \& Hoppe, H, 2006, in Proc. Fourth Eurographics Symposium on Geometry Processing, eds Polthier, K \& Sheffer, A (Eurographics Association)
\item Knill, O, Slavkovsky, E, 2013, arXiv:1306.5599
\item Planck Collaboration, 2014a {\it A\&A} {\bf 571} 1
\item Planck Collaboration, 2014b {\it A\&A} {\bf 571} 12
\item Reiss, D S, Price, J J, Evans, T, 2013, {\it EPL} {\bf 104}, 48001
\item Smoot, G F, Gorenstein, M V, Muller, R A, 1977 {\it PhRvL} {\bf 39} 898
\endrefs
\end{document}